Trust is an incredibly difficult concept to quantify. In my last post, on cryptography and anonymity, I mentioned that trust can be measured on a relative scale. For example, I might trust someone to house-sit, but I’d never trust them to babysit.
First, why are we trying to quantify trust and reputation? What good is it? Cryptographic trust and reputation systems can be used to:
- Squash propaganda machines.
- Ensure that the person you’re talking to is who you expect, not an impostor.
- Keep secrets among only the people you choose to tell.
- Keep nazi groups and other despicable filth from infecting our networks. More on this later.
So how the hell do you quantify such a nebulous, messy, emotional human concept as trust? I’ll get to reputation later. First, we have to establish what trust means.
Trust means the probability that someone is going to do what you expect. It’s a poor definition for actual human relationships, but I think that’s a reasonable assertion for our purposes, for now.
So how do you measure the probability that someone will do what you expect? Measuring past promises and outcomes is a good method.
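One toy way to operationalize this (my own illustration, not an established metric) is to treat trust as the fraction of past promises someone kept, smoothed with a prior so that a stranger with no history starts at a neutral 0.5 rather than 0 or 1:

```python
def trust_score(kept: int, broken: int) -> float:
    """Toy trust estimate: kept-promise rate with a Beta(1, 1) prior.

    With no history, this returns 0.5 (neutral); as evidence
    accumulates, it converges to the observed keep rate.
    """
    return (kept + 1) / (kept + broken + 2)

print(trust_score(0, 0))   # no history: neutral 0.5
print(trust_score(9, 1))   # mostly kept promises: high trust
```

The smoothing matters: someone with one kept promise shouldn’t score a perfect 1.0 over someone with ninety kept and ten broken.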
How does this factor into a cryptographic system? Well, you could have a split key (based on Shamir’s secret sharing scheme), like the Coca-Cola recipe: you need, say, 2 out of 3 participants present to “unlock” the secret. If you give a key to someone and tell them not to use it, that’s a decent test of trust. Checking whether they accessed it digitally is trivial. If they tried to access it by themselves, that’s a breach of trust. If they try to collude with others to access it, that’s a major breach.
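For the curious, here’s a minimal sketch of that 2-of-3 arrangement: Shamir’s scheme hides the secret as the constant term of a random polynomial over a prime field, hands each participant one point on the curve, and recovers the secret by Lagrange interpolation once enough points come together. (This is a bare-bones illustration, not production crypto: it uses Python’s non-cryptographic `random` module and omits share authentication.)

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime; the field must exceed the secret

def make_shares(secret: int, k: int = 2, n: int = 3):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(42)
print(reconstruct(shares[:2]))  # any two shares suffice
```

Fewer than k shares reveal nothing about the secret, which is exactly why a lone key-holder trying to peek, or two colluding, maps so cleanly onto “breach” and “major breach.”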
What about reputation? How do we define it?
Reputation is the sum total of your actions on a network and the way they’re perceived by everyone else. Again, this is a poor definition for everyday life, but for a cryptographic system, it may be sufficient. Part of the purpose of these essays is to explore whether these concepts have legs.
How do you quantify this definition of reputation? People involved in “influencer marketing” have attempted it in the past, but it’s a shallow measure, as far as I’ve seen: you’re essentially looking at the effects of a post on social networks, using publicly available information. In our cryptographic system, reputation can be a concept built into the network itself, one that uses the actions of secret-keepers to get a real feel for how trustworthy someone is. (I’m mixing the concepts of trust and reputation here, but they’re closely tied.)
There’s an immediate issue I see with this simple definition: a group of people with an agenda against a given individual could smear them, ruining their reputation. How do we prevent “warfare” within the network? And the question from my last essay still stands: how do we expose bad actors and remove them from the network?
This comes back to the example of nazis on the network. For example, how do you prevent nazis from smearing and ruining the reputation of a gay, or Jewish, or black activist they don’t like? How do you prevent censorship on the network?
I realize that at this point, I’m talking about censoring nazis. Good. Fuck 'em. They deserve it.
What I don’t want is the censorship of vulnerable populations, such as the gay community, or the Jewish community, or the black community, just to give a few examples. Does this mean I’m okay with censoring white supremacists? In my network, you bet your sweet ass I’m more than willing. In other networks, others could be blocked if the majority of influential members decides it’s appropriate.
In other words, go start your own network, you punk-ass nazi cowards.
Now that I’ve addressed the elephant in the room, regarding “censorship”, let’s get back to the matter at hand: how do we keep nazi scum from invading our network and generally making it unbearable to be there?
Ultimately, it comes down to who’s present in the network. If the majority of people in the network with some threshold level of reputation and trust decide to kick you to the curb…bye! This is perfectly reasonable. We already do this as a society, as Americans, when we condemn nazis (well, not our president, but you know what I mean). To give a more universal example, we gladly and rightfully condemn pedophiles. Or cannibals. Or incestuous relationships.
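That “threshold level of reputation and trust” gate can be sketched in a few lines. (This is a hypothetical mechanism of my own, assuming the reputation scores above exist; the 0.7 threshold is an arbitrary placeholder.)

```python
def should_remove(votes: dict, reputations: dict,
                  threshold: float = 0.7) -> bool:
    """Expel a member if a majority of high-reputation voters say so.

    votes:       peer -> True (expel) / False (keep)
    reputations: peer -> that peer's reputation score in [0, 1]
    Only peers at or above `threshold` get a say.
    """
    eligible = [p for p in votes if reputations.get(p, 0.0) >= threshold]
    if not eligible:
        return False
    yes = sum(1 for p in eligible if votes[p])
    return yes > len(eligible) / 2

reps = {"alice": 0.9, "bob": 0.8, "mallory": 0.2}
print(should_remove({"alice": True, "bob": True, "mallory": False}, reps))
```

The reputation floor is the interesting design choice: it means a swarm of fresh, zero-history accounts can’t vote anyone off the network, which is the first line of defense against the brigading problem raised above.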
Exposing bad actors, exposing nazis within the network, could be done as well. If we get back to the example of Shamir’s secret sharing and the Coca-Cola recipe, you could give the keys to your identity, so to speak, to a few close friends, or to designated arbiters, or to whoever you like. This could be a requirement of the network. If you’re being an asshole and spewing garbage, those people could expose your actual identity on the network. The Elixxir project is taking a similar tack.
This, of course, means that nazis could start their own network, find support, find an echo chamber, and generally sow discord and chaos in the minds of impressionable young people around the world, just as ISIS has done in recent years.
How do we prevent these people from starting their own network?
This, I’ll have to think on. For now, this is Mister Fahrenheit, signing off, wishing you a safe, happy, and nazi-free Saturday night.