How Copiosis Solves Corruption

Photo by Markus Spiske on Unsplash

A Case Study.

Someone recently asked a great, but long, question:

How do you avoid people lobbying the citizen’s jury, and gaming the resulting algorithm? 

Every time people’s success is measured by some metric, we see that they game this metric to get the biggest reward for the least effort. So while the metric you’re creating here with this algorithm may seem to work well, why should I expect it to work well when people’s lives are graded according to it?

And even when the algorithm is changed to avoid such gaming, why would people who have benefited so far not just lobby those in the jury to rig the algorithm in their favor?

Like let’s say that we have two people, Alice and Bob. Alice has been chosen for the jury to (potentially) fix a loophole in the algorithm that lets people like Bob earn lots of NBR without really doing much that benefits society, or maybe even harms society. Bob then decides to invite Alice to a fancy dinner at an expensive place that costs a lot of NBR. Maybe he even provides Alice with a new, expensive, fancy car for free (which he can do because he controls the gateway and decides who to give it to). And so on. Bob will essentially bribe Alice to not fix the loophole. Of course, if the loophole does get fixed, this will not give Bob a lot of (or any) NBR. But if he is successful in making sure the loophole doesn’t get fixed, wouldn’t this give him lots of NBR?

Because, according to the algorithm, exploiting this loophole is very beneficial to society. Therefore someone who is able to ensure the loophole stays open generates a lot of benefit, by ensuring people will keep being encouraged to exploit the loophole.

In fact it seems like the algorithm should punish people who attempt to change the algorithm? Because if you were to successfully change the algorithm, then you would make it so that people would no longer do what the current algorithm thinks is optimal.

Dealing with corruption/hackers

We often get questions like this. They’re good because they allow complete, thoughtful answers. Here’s how Copiosis handles such a situation.

Copiosis is always changing with the state of the art, and with our awareness of the technologies and know-how available at any given moment. The citizen jury process described above has changed a lot.

Instead of people being selected randomly, everyone planet-wide now participates in the jury process to the degree they desire, making the process much more dynamic and a much better reflection of humanity’s perspective.

That said, let’s answer the question based on the jury version described on our website, which is the old version.

The short answer is, gaming the system will happen, but it’s not as big a deal as you might think. Several processes keep gaming in check. And the way Copiosis works makes such gaming not all that likely, and not that impactful either, to the gamer or to society. More on that in a moment.

Benefit to all reduces bad behavior

The algorithm is designed to MAXIMIZE rewards for virtually any human act SO LONG AS the resources consumed, and the processes involved, are as sustainable as possible, and the results are as net beneficial as possible, as defined by the algorithm.

So not only are Bob’s rewards (and Alice’s) already maximized, both also have unlimited potential to receive NBR, as you’ll shortly see. This alone reduces instances of manipulation or corruption.
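To make the maximizing idea concrete, here’s a minimal sketch in Python, assuming a toy reward formula (the function name, weights, and formula are our illustrative assumptions, not the actual Copiosis algorithm): an act must produce a positive net result before any NBR issues, and more sustainable processes earn more for the same result.

```python
# Toy sketch of a reward calculation that maximizes NBR subject to
# sustainability and net-benefit measures. Names, weights, and the
# formula itself are illustrative assumptions, not the real algorithm.

def nbr_reward(benefit: float, cost: float, sustainability: float) -> float:
    """Return an NBR reward for one act.

    benefit        -- measured positive results of the act
    cost           -- resources consumed producing those results
    sustainability -- 0.0 (wasteful) .. 1.0 (fully sustainable)
    """
    net_benefit = benefit - cost          # results must exceed resources used
    if net_benefit <= 0:
        return 0.0                        # no net benefit, no reward
    return net_benefit * sustainability   # more sustainable acts earn more

# The same act rewards more when done more sustainably:
print(nbr_reward(benefit=100, cost=40, sustainability=0.9))  # 54.0
print(nbr_reward(benefit=100, cost=40, sustainability=0.5))  # 30.0
```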

Generally, though, people will try to game any system or metric. But in this system that’s not a problem, for several reasons. One, the gaming will be discovered, and that’s good, because then we know how to close that loophole. Two, the only thing “gaming” produces is POTENTIALLY an artificial (i.e. fraudulent) increase in NBR for the gamer.

That’s not really a big deal either because 1) there’s an unlimited supply of NBR since they are created from nothing, and 2) they can only be used for luxuries.

IOW, they can’t be used like money today to buy votes or legislation, to prevent people from doing things, or to compel people to do things such as ruining others’ lives or killing them, because NBR is non-transferable.
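To see why such a bribe can’t be paid in NBR at all, here’s a minimal sketch of non-transferability (the class and method names are hypothetical, not an actual implementation): NBR enters an account only from the algorithm and leaves only when consumed, so there is no operation Bob could use to move NBR to Alice.

```python
# Hypothetical sketch of a non-transferable NBR account. The point is
# structural: there is no transfer operation at all.

class NBRAccount:
    def __init__(self, owner: str):
        self.owner = owner
        self.balance = 0.0

    def credit_from_algorithm(self, amount: float) -> None:
        """Only the algorithm creates NBR, from nothing."""
        self.balance += amount

    def spend_on_luxury(self, price: float) -> None:
        """NBR is consumed on luxuries; it doesn't move to another account."""
        if price > self.balance:
            raise ValueError("insufficient NBR")
        self.balance -= price  # spent NBR simply disappears

    # Note what's missing: no transfer(to, amount) method exists, so
    # Bob cannot wire NBR to Alice, and NBR can't buy votes or people.
```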

Three, no one goes hungry or loses their house, livelihood, or healthcare when another gets fraudulent NBR, because these things are necessities and are provided to all at no cost. So not much harm can come from such an act.

Everyone is rich

What’s more, unlike today (and this is really important), nearly everything a person does generates Net Benefit Value (NBV). Remember what we wrote above about maximizing rewards?

So in a given period, each person sees MULTIPLE streams of NBR filling their accounts as their acts create NBV. For example, all of the following generate NBR income in Copiosis (this is not an exhaustive list):

  • Helping a friend move
  • Advising a friend on a new computer to buy
  • Offering directions that help someone particularly well
  • Writing a well researched review on Amazon
  • Identifying someone trying to game the system and reporting it
  • Recommending a fix to the gaming attempt and having the fix adopted
  • Helping your dad fix something at his home
  • Spending time with your aging mother
  • Putting your garbage cans out for collection
  • Reading to your child
  • Spending time with your grandchildren
  • Playing tennis with another
  • Hosting a dinner party
  • Feeding someone
  • Aiding someone who has suffered an accident (i.e. providing first aid)

And, so long as the beneficial results of these acts persist, you keep getting NBR.
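Here’s a minimal sketch of how those ongoing streams might add up, assuming a toy persistence model (the acts, numbers, and decay rates are illustrative assumptions only, not Copiosis policy): every past act keeps paying NBR each period for as long as its benefits last.

```python
# Toy model of recurring NBR income: each act pays out every period,
# scaled by how strongly its beneficial results persist. All numbers
# here are made up for illustration.

acts = [
    # (description,               initial NBV, persistence per period)
    ("helped a friend move",             5.0, 0.50),
    ("well-researched review",           8.0, 0.90),
    ("reported a gaming attempt",       20.0, 0.95),
]

for period in range(1, 4):
    income = sum(nbv * persistence ** period for _, nbv, persistence in acts)
    print(f"period {period}: {income:.2f} NBR from {len(acts)} ongoing acts")
```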

Can you see, then, how people ALREADY are getting rewards for “little effort” in Copiosis? And those rewards keep coming in.

Given that, why would someone feel the need to game the system to get more? Especially when, in addition to all this income, they’re getting income from what they do “full time” (their main passion) AND all their necessities are provided to them at no cost?

Our answer: because some people just want to game systems. And that’s ok!

Results, nothing else

You wrote “So while the metric you’re creating here with this algorithm may seem to work well, why should I expect it to work well when people’s lives are graded according to it?”

The algorithm doesn’t function in a way that “people’s lives are graded according to it”. The algorithm only measures one thing: results.

It doesn’t grade intent, motivation, meaning, or the people themselves. It just looks at what happened when a person acted, measures those results, then rewards the actor commensurate with what it measured.

That’s a big difference from grading people’s lives, and it matters for the problem you’re pointing to. Since the algorithm measures results, you can bet large rewards must come with an equally large amount of results, and results that large must involve, and have impacted, a lot of people. This is important to understand as we start addressing your Alice and Bob scenario.

So it’s impossible to get large amounts of NBR without having generated physical-world results consistent with the reward amount. Think about how software works. We have an algorithm, embedded in software. Presumably, your Bob has figured out a way to jigger the algorithm and the software so that it not only rewards him with “lots of NBR” but somehow also manages to falsify real-world evidence! That would be exceedingly difficult given how we’re designing the software.

For one, the blockchain record registering results in our ledger must include confirmations from those who benefitted, as well as from those who participated in producing the results. Third-party verifiers (acting from their passions) document the results and are locked in too, as are any watchdog and whistleblower reports.
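Here’s a minimal sketch of the kind of ledger record we mean (the schema and field names are our assumptions, not the actual Copiosis design): a result pays NBR only after everyone involved has confirmed it and at least one independent verifier has documented it.

```python
# Hypothetical result record: NBR issues only when producers,
# beneficiaries, and an independent verifier all corroborate it,
# so a lone actor can't fabricate a payable result.

from dataclasses import dataclass, field

@dataclass
class ResultRecord:
    description: str
    producers: set                  # who produced the result
    beneficiaries: set              # who the result helped
    confirmations: set = field(default_factory=set)
    verifier_reports: list = field(default_factory=list)

    def confirm(self, participant: str) -> None:
        if participant in self.producers | self.beneficiaries:
            self.confirmations.add(participant)

    def is_payable(self) -> bool:
        """Payable only when everyone involved signed off and at least
        one third-party verifier documented the result."""
        everyone = self.producers | self.beneficiaries
        return everyone <= self.confirmations and len(self.verifier_reports) > 0

record = ResultRecord("repaired the bridge", producers={"bob"},
                      beneficiaries={"alice", "carol"})
record.confirm("bob")
print(record.is_payable())  # False: no beneficiary sign-off, no verifier
```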

Are you saying Bob would somehow be able to access and change the software, the blockchain, AND the algorithm WITHOUT ANYBODY ELSE KNOWING IT HAPPENED? Even though both the software and the algorithm are totally transparent and open source? How would he be able to do that? And even if he did, how would it not be corrected almost immediately?

A secure process

So, given all the above, including the fact that the jury process has improved to allow more participation and to reflect humanity’s input far more accurately, let’s look at your scenario directly, from the standpoint of the jury process as described on the website.

Citizen juries are selected randomly by device number. So unless Alice tells Bob, it would be hard for Bob to know Alice was selected. Bob can’t know everyone in his community. Communities these days are too big.

We didn’t say on the website how big a given jury is. Since these sessions don’t need to be in person, they can be rather large. That means Alice can’t make the decision by herself. She can only voice her position; like juries today, she must reach agreement with all the other jury members. IOW, bribing Alice doesn’t do anything other than potentially get Bob in trouble.
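Here’s a minimal sketch of those two mechanics, with hypothetical numbers (the website specifies neither jury size nor how agreement is reached, so the size and threshold below are purely assumptions): selection is a random draw over device numbers, and a fix passes only on broad agreement, so one bribed vote can’t sway the outcome.

```python
# Hypothetical jury mechanics: random selection plus a broad-agreement
# decision rule. Jury size and the 90% threshold are assumptions.

import random

def select_jury(device_numbers: list, size: int = 200) -> list:
    """Randomly draw jurors by device number; no one, including Bob,
    can predict or target who gets picked."""
    return random.sample(device_numbers, size)

def jury_decides(votes: dict) -> bool:
    """Adopt the fix on broad agreement. With hundreds of jurors,
    Alice's single bribed vote is statistical noise."""
    return sum(votes.values()) / len(votes) >= 0.9
```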

What if Alice decides to report Bob? After all, the car is nice, and so might the meal be. But given all the NBR she’s getting from acts like the ones listed above, she likely has all the NBR she needs for a nice car and a nice meal. She doesn’t need Bob’s bribe. That puts Bob at risk and Alice at an advantage. There’s every reason for Alice to report Bob: doing so not only halts Bob’s plans in their tracks, it also earns Alice NBR.

Alice alone can’t sway the jury. She can only give her input. So bribing her is a waste of time.

You write: “Because, according to the algorithm, exploiting this loophole is very beneficial to society. Therefore someone who is able to ensure the loophole stays open generates a lot of benefit, by ensuring people will keep being encouraged to exploit the loophole. In fact it seems like the algorithm should punish people who attempt to change the algorithm? Because if you were to successfully change the algorithm, then you would make it so that people would no longer do what the current algorithm thinks is optimal.”

We’re not sure how you came to these conclusions. The algorithm rewards people who discover holes in the system, because once discovered they can be closed. That’s a net benefit to the system. The same goes for problems. But once a hole is closed, reopening it doesn’t create benefit; it degrades the system. There’s no benefit in that, so reopening such holes makes no sense.

There are no punishments in Copiosis, and none built into the algorithm or the software. Punishing people is not net beneficial at all. The theory behind the algorithm is to reward people for benefiting people and the planet in net-beneficial ways.

We’re not sure how someone benefitting people under one algorithm version would suddenly be unable to do so under a later version that, presumably, is improved only in the sense that a loophole is closed.

We think we’ve answered your questions and addressed your concerns. As we say to everyone who contacts us, if we didn’t, PLEASE let us know, so we can try again.

Thanks for writing!
