Welcome to the Desert of the Real

In my last post, I threw around some theories on how you could potentially keep bad actors from overtaking a social network. In this post, let’s tie those things together and bring these scattered theories down to earth. In case you missed them, these past posts are important for understanding the context of this one:

We’ve discussed how a decentralized social network may be built, at a high level. We’ve talked about the paramount role of anonymity and privacy in a free society. We’ve talked about trust, reputation, and how they may be measured. We’ve talked about how to keep nazis off of your network.

What does it all mean?

It means we have the building blocks for a fundamentally different social network, one in which your information is not open for the world to see.

It means we can ingrain privacy, anonymity, trust, and reputation into the very networks we frequent.

It means we can be free from the bonds of social networks before us.

It means we can keep nazi scum out of our news feeds.

How the fuck do we do this, though? Not in theory – in real life.

My previous posts outline some of the pieces necessary for such a network, but I haven’t said anything that’s particularly concrete. I haven’t provided any building blocks. Here, I’d like to outline how someone might actually code a social network that follows these rules and guidelines I’ve written about.

Let’s start with the concept of “circles” of friends, circles of family, circles of acquaintances. This is easy to implement poorly – just follow the example of Google+. Make people manually put others into circles, and make those circles public, and bingo bango, you’ve fucked it up completely.

If you’d like to implement it well, you need some sort of notion of trust and reputation. You need to automate the process of placing people into certain circles. To do that, you need to look at:

  1. How people handle secrets the user has given them, and whether they try to access them.
  2. How often the user communicates with those around them.
  3. How intimate or casual those communications are.

Point #1 is easy. Point #2 is doable. Point #3 seems to require a full-fledged AI. However, it could be solved by a simpler system. Assume you have a button that lets you “Share a Secret”. This secret is guaranteed to be encrypted (like all of your communications), and ideally only the person reading it has access. Like Snapchat, we can tell if they take a screenshot. We can tell if they copy-paste the info. How often you share secrets with someone can serve as a proxy for how intimate your communications with them are (assuming, of course, that people click the button – I have my doubts).
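To make that concrete, here’s a rough sketch of how those three signals might fold into a single “closeness” score that decides which circle someone lands in. Every name, weight, and threshold below is mine, purely for illustration – a real network would tune these against actual behavior:

```python
from dataclasses import dataclass

@dataclass
class RelationshipSignals:
    """Per-contact signals; all field names and units here are hypothetical."""
    secrets_shared: int       # point 3 proxy: how many "Share a Secret" messages you sent them
    secrets_violated: int     # point 1: screenshots or copy-pastes detected on those secrets
    messages_per_week: float  # point 2: how often you talk to them

def closeness_score(s: RelationshipSignals) -> float:
    """Fold the three signals into one number between 0 and 1. Weights are guesses."""
    trust = 1.0 / (1 + s.secrets_violated)            # punish mishandled secrets hard
    frequency = min(s.messages_per_week / 20.0, 1.0)  # cap so chatterboxes don't dominate
    intimacy = min(s.secrets_shared / 10.0, 1.0)      # secrets shared as an intimacy proxy
    return trust * (0.5 * frequency + 0.5 * intimacy)

def assign_circle(s: RelationshipSignals) -> str:
    """Thresholds are arbitrary; a real network would learn them from data."""
    score = closeness_score(s)
    if score > 0.6:
        return "family"
    if score > 0.3:
        return "friends"
    return "acquaintances"

# Someone you message daily and trust with secrets ends up in the inner circle.
print(assign_circle(RelationshipSignals(secrets_shared=8, secrets_violated=0, messages_per_week=15)))
```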

Let’s assume you have a notion of trust, from this system, and you can infer reputation from how people use (or misuse) your trust. We can also determine reputation from how others perceive you. How do we determine perception? Upvotes and downvotes are a simple proxy. If someone doesn’t like what you’re posting, they downvote. Their reputation gives weight to that vote. Of course, this creates a circular definition: if reputation is based on votes, and votes are based on reputation, how do you set the initial values? Maybe everyone starts with zero reputation. Or, you could ask people directly how much they trust a given user. I’m not sure which is worse, but either will do until you get enough data to make a determination.
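In code, the reputation-weighted voting might look something like the sketch below. The bootstrap value, the step size, and the clamping are all arbitrary choices on my part – the point is simply that a vote counts in proportion to the voter’s own reputation:

```python
BOOTSTRAP_REPUTATION = 0.1  # everyone starts just above zero so early votes count a little

class User:
    def __init__(self, name: str):
        self.name = name
        self.reputation = BOOTSTRAP_REPUTATION

def apply_vote(target: User, voter: User, upvote: bool) -> None:
    """A vote shifts the target's reputation in proportion to the voter's own reputation."""
    delta = voter.reputation if upvote else -voter.reputation
    # Clamp to [0, 1] and scale the step so no single vote can swing things wildly.
    target.reputation = max(0.0, min(1.0, target.reputation + 0.1 * delta))

alice, bob = User("alice"), User("bob")
alice.reputation = 0.8          # pretend alice has already earned some standing
apply_vote(bob, alice, upvote=True)
print(bob.reputation)           # roughly 0.18: bob rises, weighted by alice's reputation
```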

How do anonymity and privacy play into all of this? So far, we’ve just described a smart social network, not a private or anonymous one.

To me, anonymity and privacy are baked into this network from the beginning. You can sign up as an anonymous user, or a pseudonym, and that’s just as valid as a “true name”. Privacy is guaranteed whenever you talk to someone 1-on-1, or even in a group! It should take effort to post something that’s visible to the entire network. There is a ton of cryptographic foundation for these ideas, such as Diffie-Hellman key exchange, the RSA algorithm, and the dining cryptographers protocol. I won’t belabor those points here, but I strongly encourage you to read the papers I linked to, if you’ve read this far.
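For a taste of that foundation, here’s a toy Diffie-Hellman exchange. The prime is small and hard-coded purely for readability – a real system would use a vetted group and an existing library – but the shape of the idea is this: two people derive a shared secret without ever sending it over the wire.

```python
import secrets

# Toy Diffie-Hellman key exchange, for illustration only.
p = 0xFFFFFFFB   # a 32-bit prime (2**32 - 5); far too small for real use
g = 5            # public base

alice_private = secrets.randbelow(p - 2) + 1   # each party keeps its exponent secret
bob_private = secrets.randbelow(p - 2) + 1
alice_public = pow(g, alice_private, p)        # only these values cross the wire
bob_public = pow(g, bob_private, p)

# Each side combines its own private exponent with the other's public value...
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)

# ...and both arrive at the same secret without ever transmitting it.
assert alice_shared == bob_shared
```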

This brings us to the ultimate question of life, the universe, and everything in cryptography:

How do we keep people from abusing the right to privacy and anonymity? How do we keep nazis, pedophiles, and other despicable groups from exploiting the network? Can we?

I believe we can.

With a combination of techniques outlined in my last post, I think we can kill the problem at its root. We could cut problematic people off from essential services provided by the larger networks. We could “out” these people to their friends, family, and coworkers as the scum they are. Better yet, we could ban them outright from the network, using the same techniques. Using Shamir’s secret sharing scheme, you could require each participant in the network to split the key that unlocks their identity into shares held by elected representatives, and those representatives would have to act in unison to reconstruct the secret. If a person acts reprehensibly, it would be the duty of the representatives to expose these cretins to the world, or ban them from the network.
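Since I keep invoking Shamir, here’s a minimal sketch of how the splitting and reconstruction work over a prime field. The parameters (five representatives, any three of whom can unlock an identity) and the stand-in “identity key” are just illustrative:

```python
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, comfortably larger than the secrets we split

def split_secret(secret: int, n: int, k: int):
    """Split `secret` into n shares such that any k of them can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def evaluate(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, evaluate(x)) for x in range(1, n + 1)]

def reconstruct(shares) -> int:
    """Lagrange interpolation at x = 0 recovers the original secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

identity_key = 123456789012345                  # stand-in for the key that unmasks a user
shares = split_secret(identity_key, n=5, k=3)   # five representatives, any three suffice
assert reconstruct(shares[:3]) == identity_key  # three acting in unison unlock it
assert reconstruct(shares[2:]) == identity_key  # ...any three at all
```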

These techniques are not perfect. I am not perfect. I merely present these ideas for discussion, to give them air and watch them flourish (or perish), as they’ve churned in my head over the past 10 years. This is getting long-winded, but hopefully, I’ve provided some solid ground to build on. Until we meet again, this is Mister Fahrenheit, signing off and wishing you luck in the coming shitstorm.


It’s too late for cryptographic methods; they’ll soon be dead with quantum computing. I prefer total transparency, since no real privacy exists anyway. We need to watch the watchers, as David Brin’s book “The Transparent Society” suggests. I’m adding ARk.io to messaging for now, to allow for a consensus using your own AIs.


If cryptographic methods can be broken by quantum systems, e.g. if huge composite numbers can be factored into their prime components in order to break RSA, then the same computers could be used to find even larger primes, no? I suppose it depends on how fast composite numbers can be factored vs. how fast prime numbers can be found, but I wouldn’t quite say cryptography is dead yet.

Watching the watchers is an excellent step, but I refuse to accept the notion that we should give up on privacy and prefer absolute transparency. As long as evil exists in the world, as long as human nature is what it is, we can’t just forsake privacy. Take the example of a journalist who is trying to expose corruption; they must protect their identity as their very life depends on it (see: the case of Jamal Khashoggi). To give a more widely-relatable example, I wouldn’t give out details about my location, or my sex life, to complete strangers (and neither would any reasonable person). In an ideal, fully-transparent world, there wouldn’t be any corruption or evil, but we don’t live in an ideal world. The idea that the most powerful people in the world, let alone the general public, would open up all of their activity to public scrutiny is a pipe dream.

Ark looks interesting, at a glance, but I’m wary of any blockchain-based system, as the ledger is entirely open by default, by design. I am curious, though: what do you mean by “consensus using your own AIs”?


By consensus, I mean we use our own blockchain (Proof of Stake) with a carbon sequestration system. All sensors need to agree offline in our systems before our AI edge computing will act on sensor data; if they do not agree, it shuts down. This provides us with the triple-ledger system needed for third-party audits and to sync online with a regular blockchain to transfer carbon credits. On Twitter: @HagoCO2

On quantum computing, I was referring to the MIT article about IBM’s current system: it could take 8 hours to break a 2048-bit RSA key. Lots of governments have QC now, so it’s only a matter of time before another Mt. Gox evaporates $500 million. The IBM QC is available for developers. We have no more privacy; governments can all watch if they want to. Most treat encrypted files as suspect, so more attention is given to those who encrypt. #5eyes needs no warrants to wiretap us, and the US has Palantir without Congressional oversight now. It would benefit the first QC hackers not to let on for 15 years while they hack elections and banks at their leisure. It could also be why the NSA does not seem too worried about its old tools being stolen by Snowden. Expect to see large data breaches from credit bureaus while they sell our data on the Dark Web and charge us to protect it. https://www.technologyreview.com/s/613596/how-a-quantum-computer-could-break-2048-bit-rsa-encryption-in-8-hours/


Regarding QC, this only means that more advanced cryptographic systems must be built. I’m well aware of the Five Eyes, Palantir, and so on, but I’m not willing to just throw my hands up and say “it’s all over, I’m done, fuck it”. Cryptography has always been a cat-and-mouse game. For details, see The Codebreakers by David Kahn, or Crypto by Steven Levy. The Vigenère cipher was broken, too, and that was “le chiffre indéchiffrable”, the undecipherable cipher, for hundreds of years!

I understand where you’re coming from. I gave up on this 10 years ago, when no one would listen to what I had to say about the NSA’s spying programs. Only recently has my fire been ignited again. I realized something:

To give up is a disgrace to my ancestors. To give up is to say this is OK. To give up is to be a slave. I refuse to give up. I’m ready to fight this to the death, because if I don’t, my ancestors gave their lives in vain.

I realize I probably won’t succeed in my efforts, that this is an uphill battle, but that doesn’t matter. What matters is effort and force of will. What matters is that I try. If I don’t try…what kind of life is that?

The cost is too great, outside my individual IT budget, to try to out-compute governments. I think whoever runs a QC first would also not want to tell us it’s running, or things would lose value very fast; they’d just take over governments and rig elections. I’m not giving up, there are just plenty of other ways to secure and earn. We do most of our AI with fog computing, so no online “Internet weather” affects us.

That’s the key phrase, isn’t it? We created one-way functions (like those employed in RSA) so we wouldn’t have to out-compute governments. Of course that’s impractical, impossible. What we need are functions that are very easy to compute one way, but not the other. Like breaking a plate: it’s easy to drop a plate and break it, but extremely difficult to put the pieces back together into the original plate.
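You can see the plate-breaking asymmetry in a few lines: modular exponentiation (the dropping-the-plate direction) is instant, while naively recovering the exponent – the discrete logarithm – means grinding through candidates one by one. The numbers below are toy-sized, chosen only to make the point:

```python
# The asymmetry in miniature: computing g**x mod p is instant, but recovering x from the
# result (the discrete logarithm) takes brute force. All numbers here are toy-sized.
p = 2_147_483_647   # 2**31 - 1, a Mersenne prime
g = 7               # a primitive root modulo p

x = 1_234_567       # dropping the plate: the easy direction
y = pow(g, x, p)    # fast even for huge exponents, thanks to modular exponentiation

def brute_force_log(y, g, p):
    """Gluing the plate back together: try every exponent until one matches."""
    acc = 1
    for candidate in range(1, p):
        acc = (acc * g) % p
        if acc == y:
            return candidate
    return None

print(brute_force_log(y, g, p))   # eventually prints 1234567, after about a million multiplications
```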

This is very true. Given this, maybe we should operate under the assumption that prime-based systems are dead and we need a new system. I’ve been re-reading David Kahn’s The Codebreakers in anticipation of this.

Secure from whom? From casual attackers, sure, there are a thousand ways one can secure the network from attack. But from a government with a QC? Dissidents don’t stand a chance.

Does this protect you against a determined attacker or eavesdropper? Honest question – I’d never heard the term “fog computing” before today.