In my last post, I threw around some theories on how you could keep bad actors from overtaking a social network. In this post, let’s tie those threads together and bring these scattered theories down to earth. In case you missed them, these past posts are important for understanding the context of this one:
- Privacy is dead. Long live privacy
- Cryptography, Anonymity, and their Roles in a Free Society
- Measuring Trust and Reputation in Social Networks
- Preventing Bad Actors from Overtaking Social Networks
We’ve discussed, at a high level, how a decentralized social network might be built. We’ve talked about the paramount role of anonymity and privacy in a free society. We’ve talked about trust, reputation, and how they may be measured. We’ve talked about how to keep nazis off your network.
What does it all mean?
It means we have the building blocks for a fundamentally different social network, one in which your information is not open for the world to see.
It means we can ingrain privacy, anonymity, trust, and reputation into the very networks we frequent.
It means we can be free from the bonds of social networks before us.
It means we can keep nazi scum out of our news feeds.
How the fuck do we do this, though? Not in theory – in real life.
My previous posts outline some of the pieces necessary for such a network, but I haven’t said anything that’s particularly concrete. I haven’t provided any building blocks. Here, I’d like to outline how someone might actually code a social network that follows these rules and guidelines I’ve written about.
Let’s start with the concept of “circles” of friends, circles of family, circles of acquaintances. This is easy to implement poorly – just follow the example of Google+. Make people manually put others into circles, and make those circles public, and bingo bango, you’ve fucked it up completely.
If you’d like to implement it well, you need some sort of notion of trust and reputation. You need to automate the process of placing people into certain circles. To do that, you need to look at:
- How people handle secrets the user has given them, and whether they try to access them.
- How often the user communicates with those around them.
- How intimate or casual those communications are.
Point #1 is easy. Point #2 is doable. Point #3 seems to require a full-fledged AI. However, it could be solved by a simpler system. Assume you have a button that lets you “Share a Secret”. That secret is guaranteed to be encrypted (like all of your communications), and, at least ideally, only the person reading it has access. Like Snapchat, we can tell if they take a screenshot. We can tell if they copy-paste the info. How often you share secrets can serve as a proxy for how intimate the information is (assuming, of course, that people actually click the button – I have my doubts).
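To make that concrete, here’s a minimal sketch of how automated circle placement might work. Everything in it is hypothetical: the counters (secrets_shared, secrets_leaked, messages_per_week) and the thresholds are stand-ins for whatever the real client would actually track.

```python
from dataclasses import dataclass

@dataclass
class ContactSignals:
    """Hypothetical per-contact counters the client could track."""
    secrets_shared: int       # "Share a Secret" messages sent to this contact
    secrets_leaked: int       # screenshots or copy-pastes detected on those secrets
    messages_per_week: float  # how often the user talks to this contact

def circle_for(signals: ContactSignals) -> str:
    """Place a contact into a circle based on observed behavior, not manual sorting."""
    # Point #1: mishandling secrets is disqualifying, no matter how chatty they are.
    if signals.secrets_leaked > 0:
        return "acquaintances"
    # Point #3: frequent secret-sharing is our proxy for intimacy.
    if signals.secrets_shared >= 5:
        return "close friends"
    # Point #2: plain communication frequency fills in the rest.
    if signals.messages_per_week >= 3:
        return "friends"
    return "acquaintances"

print(circle_for(ContactSignals(secrets_shared=7, secrets_leaked=0, messages_per_week=10)))
# -> close friends
```

The ordering is the point: leaking a secret should trump any amount of friendly chatter.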
Let’s assume this system gives you a notion of trust, and that you can infer reputation from how people use (or misuse) that trust. We can also determine reputation from how others perceive you. How do we measure perception? Upvotes and downvotes are a simple proxy. If someone doesn’t like what you’re posting, they downvote, and their reputation gives weight to that vote. Of course, this creates a circular definition: if reputation is based on votes, and votes are weighted by reputation, how do you set the initial values? Maybe everyone starts with zero reputation. Or you could ask people directly how much they trust a given user. I’m not sure which is worse, but either will do until you have enough data to make a determination.
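Here’s a tiny sketch of the vote-weighting idea. The baseline and damping factor are my own placeholders; the post above floats starting everyone at zero, but I give brand-new users a small nonzero baseline so that early votes carry any weight at all.

```python
def updated_reputation(current: float, votes: list[tuple[float, int]]) -> float:
    """Fold a batch of (voter_reputation, +1 or -1) votes into a user's reputation.

    Votes from higher-reputation users count for more; everyone starts at a small
    baseline so the system isn't dead on arrival.
    """
    BASELINE = 1.0  # assumed starting reputation for brand-new users
    for voter_rep, direction in votes:
        weight = max(voter_rep, BASELINE)
        current += direction * weight * 0.1  # 0.1 is an arbitrary damping factor
    return max(current, 0.0)  # never drop below zero

# A new user gets two upvotes from established users and one drive-by downvote.
print(updated_reputation(1.0, [(5.0, +1), (3.0, +1), (0.5, -1)]))  # roughly 1.7
```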
How do anonymity and privacy play into all of this? So far, we’ve just described a smart social network, not a private or anonymous one.
To me, anonymity and privacy are baked into this network from the beginning. You can sign up as an anonymous user, or under a pseudonym, and that’s just as valid as a “true name”. Privacy is guaranteed whenever you talk to someone 1-on-1, or even in a group! It should take effort to post something that’s visible to the entire network. There’s a ton of cryptographic foundation for these ideas, such as Diffie-Hellman key exchange, the RSA algorithm, and the dining cryptographers protocol. I won’t belabor those points here, but I strongly encourage you to read the papers I linked to, if you’ve read this far.
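To illustrate the 1-on-1 case, here’s a toy Diffie-Hellman exchange. The prime and generator are illustration-sized only; a real deployment would use vetted parameters (RFC 3526 groups, say) or elliptic curves.

```python
import secrets

# Toy Diffie-Hellman key exchange. The parameters below are for illustration
# only and are NOT secure.
P = 0xFFFFFFFB  # small published prime
G = 5           # public generator

def dh_keypair():
    private = secrets.randbelow(P - 2) + 1   # secret exponent, kept local
    public = pow(G, private, P)              # g^private mod p, safe to publish
    return private, public

alice_priv, alice_pub = dh_keypair()
bob_priv, bob_pub = dh_keypair()

# Each side combines its own secret with the other's public value...
alice_shared = pow(bob_pub, alice_priv, P)
bob_shared = pow(alice_pub, bob_priv, P)

# ...and both arrive at the same shared secret without ever sending it.
assert alice_shared == bob_shared
```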
This brings us to the ultimate question of life, the universe, and everything – cryptography:
How do we keep people from abusing the right to privacy and anonymity? How do we keep nazis, pedophiles, and other despicable groups from exploiting the network? Can we?
I believe we can.
With a combination of the techniques outlined in my last post, I think we can kill the problem at its root. We could cut problematic people off from essential services provided by the larger networks. We could “out” these people to their friends, family, and coworkers as the scum they are. Better yet, we could ban them outright from the network, using the same techniques. Using Shamir’s secret sharing scheme, you could require each participant in the network to split the key that unlocks their identity into shares held by elected representatives, and those representatives would have to act in unison (or reach an agreed-upon threshold) to unlock it. If that person acts reprehensibly, it would be the duty of the representatives to expose these cretins to the world, or ban them from the network.
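Here’s a toy version of that Shamir split, assuming a hypothetical 3-of-5 arrangement: five representatives hold shares, and any three acting together can reconstruct the identity key. The field prime and the numbers are illustrative, not a spec.

```python
import secrets

# Toy Shamir secret sharing over a prime field: split an identity-unlocking key
# among n representatives so that any k of them, acting together, can recover it.
PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short secret

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Create n shares such that any k of them recover the secret."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):  # evaluate the random degree-(k-1) polynomial at x
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 reconstructs the secret from k shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

identity_key = 123456789  # stand-in for the key that de-anonymizes one user
shares = split(identity_key, k=3, n=5)      # five representatives, any three suffice
assert recover(shares[:3]) == identity_key  # three acting together can unlock it
assert recover(shares[1:4]) == identity_key
```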
These techniques are not perfect. I am not perfect. I merely present these ideas for discussion, to give them air and watch them flourish (or perish), as they’ve churned in my head over the past 10 years. This is getting long-winded, but hopefully, I’ve provided some solid ground to build on. Until we meet again, this is Mister Fahrenheit, signing off and wishing you luck in the coming shitstorm.