Mister Fahrenheit is with you once again, riding the waves of the aether into your mindstream. Last time, I discussed how you might actually implement some of my batshit crazy ideas.

I left the last post on a bit of a cliffhanger, though. I raised a question without answering it: how does one actually implement a decentralized, trusted network?

Decentralization is fucking hard.

Peer-to-peer networks are the ideal scenario: you connect directly with the people you want to communicate with, full stop. In practice, this is much harder than it sounds. See, in between you and the internet sits your router, and by design it keeps outside traffic from reaching your computer directly. The mechanism is NAT, or network address translation. Essentially, your router holds the IP address that’s visible to the wider internet, while your PC, laptop, or phone gets an IP that’s only meaningful on the local network (you may have seen these addresses in the form 192.168.x.x, 10.x.x.x, or 172.16.x.x).
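Those "private" ranges are carved out by RFC 1918, and you can check for them straight from the standard library. A tiny sketch in Python (the function name is mine, purely for illustration):

```python
import ipaddress

def is_nat_side(addr: str) -> bool:
    """Return True if addr is a private, local-network address,
    i.e. one that hides behind a NAT rather than facing the internet."""
    return ipaddress.ip_address(addr).is_private

print(is_nat_side("192.168.1.10"))  # True  -- typical home-network address
print(is_nat_side("172.16.0.5"))    # True  -- another RFC 1918 range
print(is_nat_side("8.8.8.8"))       # False -- globally routable
```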

How do you get past these local addresses? In other words, how do you give a peer, a friend, a colleague, your external IP, and translate that into something the router understands? There are a few techniques, the one I’m most familiar with being ICE, or Interactive Connectivity Establishment. There are many, many layers to this, too many to go into in this meager post, but through this convoluted protocol, you may be able to convince the NATs on both sides to allow traffic through, to punch a hole through them. Maybe.
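To see why hole punching can work at all, here's a toy model in Python. This is an assumption-laden sketch, not real networking: the NAT is reduced to a set of "holes" opened by outbound traffic, which is roughly how the friendlier (cone-style) NATs behave; symmetric NATs defeat this trick entirely.

```python
class ToyNat:
    """A drastically simplified NAT: it lets an inbound packet through
    only if the inside host has already sent one out to that remote."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.holes = set()  # remote addresses we've sent traffic to

    def send_out(self, remote_addr):
        # Outbound traffic punches a hole: remember the remote peer.
        self.holes.add(remote_addr)

    def allow_in(self, remote_addr):
        # Inbound traffic passes only through an existing hole.
        return remote_addr in self.holes

nat_a = ToyNat("203.0.113.1")
nat_b = ToyNat("198.51.100.2")

# Before anyone sends: B's packet to A is dropped at A's NAT.
print(nat_a.allow_in(nat_b.public_ip))  # False

# The ICE-style trick: both sides learn each other's public address
# (via a STUN-like server, assumed here) and send simultaneously.
nat_a.send_out(nat_b.public_ip)
nat_b.send_out(nat_a.public_ip)

# Now each NAT has a hole for the other, and traffic flows both ways.
print(nat_a.allow_in(nat_b.public_ip))  # True
print(nat_b.allow_in(nat_a.public_ip))  # True
```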

If all else fails, you can fall back on servers to proxy traffic, to pass it between peers as though they were communicating directly. This is roughly the approach the Signal app takes, if I’m not mistaken. It’s flawed, in that it hands power to a decidedly central authority, but we can work around this. We can mitigate the effect.
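Here's what that proxying looks like, boiled down to a toy in-memory sketch (the Relay class and its methods are my own invention, for illustration). The thing to notice is how much the relay gets to see:

```python
class Relay:
    """A toy message relay: peers register, then exchange messages
    through the relay instead of connecting directly."""

    def __init__(self):
        self.inboxes = {}  # peer name -> list of (sender, message)

    def register(self, peer):
        self.inboxes[peer] = []

    def forward(self, sender, recipient, message):
        # The relay sees sender, recipient, and (unless the payload is
        # end-to-end encrypted) the message itself -- this visibility
        # is exactly the power a central server holds.
        self.inboxes[recipient].append((sender, message))

    def deliver(self, peer):
        # Hand over and clear the peer's queued messages.
        msgs, self.inboxes[peer] = self.inboxes[peer], []
        return msgs

relay = Relay()
relay.register("alice")
relay.register("bob")
relay.forward("alice", "bob", "meet at midnight")
print(relay.deliver("bob"))  # [('alice', 'meet at midnight')]
```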

What if the peers who proxy traffic were spread out? What if it weren’t one central server, but a collective of peers?

This raises all sorts of ugly questions around control of the network, but it’s the best we have outside of a full-on mesh network (and that’s a pipe dream, for now).

Where do we go from here?

From here, we can start to build. We can start to create a decentralized network of peers that communicates with one another.

Anonymity is a topic I’ve sort of glossed over. Wanna cover that? Good, me too.

There’s an absolutely gorgeous protocol known as the “dining cryptographers”, a hypothetical scenario in which coin flips (i.e. random number generators) can determine whether someone at the table paid for dinner, or the NSA paid, while preserving the anonymity of the buyer at the table, if there was one. This can be used to transmit messages anonymously with a simple modification to the protocol.
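Here's a toy round of the protocol in Python, transmitting a single bit; an actual message is just this, repeated bit by bit. The function and its structure are my own sketch of Chaum's scheme, not a hardened implementation:

```python
import secrets

def dc_net_round(n_participants, transmitter=None, message=0):
    """One round of a toy dining-cryptographers net over single bits.

    Each adjacent pair at the table shares a secret coin flip. Every
    participant announces the XOR of their two shared coins, and the
    (optional) transmitter additionally XORs in the message bit.
    Because each shared coin appears in exactly two announcements,
    the coins all cancel, leaving only the message -- with no way to
    tell which participant injected it."""
    # coins[i] is the flip shared between participant i and
    # participant (i + 1) % n, going around the table.
    coins = [secrets.randbits(1) for _ in range(n_participants)]
    announcements = []
    for i in range(n_participants):
        bit = coins[i] ^ coins[i - 1]  # XOR of my two shared coins
        if i == transmitter:
            bit ^= message  # the anonymous sender folds in their bit
        announcements.append(bit)
    result = 0
    for a in announcements:
        result ^= a  # XOR of all announcements reveals the message
    return result

# Everyone learns the bit; no announcement alone reveals the sender.
print(dc_net_round(3, transmitter=1, message=1))  # 1: a diner paid
print(dc_net_round(3))                            # 0: nobody transmitted
```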

The dining cryptographers protocol was designed by David Chaum in the 80s. It was pure theory. Until now, we simply didn’t have the bandwidth to carry out this intricate protocol, with its requirement that only one participant transmit per round and its overhead for detecting collisions and correcting errors. It was just a dream.

Until now.

Now, I plan on writing a working version of this protocol to demonstrate its efficacy in protecting the privacy of journalists, to keep them from meeting the same fate as Jamal Khashoggi.

Of course, such a network is predicated on the underlying system being totally secure. Phones are out. The only hope we have is a secure desktop OS, and with initiatives like Intel’s “trusted” computing platform, which runs an entire fucking operating system at the most privileged level of execution, what hope do we have?

We have the hope of AMD, although I trust any large corporation to protect my freedoms about as far as I can throw it (and that, you see, is a rare example of absolute trust, or the lack thereof). I would suggest that the government step in and regulate this horrific practice, but the three-letter agencies (FBI, NSA, CIA, etc.) benefit far too much from this privileged level of execution to simply give it up.

What we need, my dears, is open source hardware. I’m a big fan of RISC-V.

But what hope do we have of any consumers, let alone corporations, adopting this technology? My bet: massive casualties. As gruesome, as grotesque, as morbid as it may be, software is eventually going to kill people en masse. It’s going to get ugly. And at that point, we’re either going to have to self-regulate, or be regulated. Swim or die. If we want to self-regulate, a huge part of that is going to be open source. After all, how can you regulate a black box?

I’ve been a huge advocate of open source self-driving cars. Sharing the code. Sharing the safety of systems to create something greater than the sum of its parts. Like the three-point seat belt, whose patent Volvo opened to the world, safety technology should be shared for the good of humanity. Hell, self-driving cars might actually drive us to the point of needing open source hardware (no pun intended), by causing such mass death and destruction in their open-street beta testing that legislators demand action.

What am I getting at? (I wonder myself, sometimes.)

I’m getting at the point that open source hardware is not just desirable for the purposes of a decentralized, anonymous social network; it will become necessary, vital to our survival as a high-tech society. Just as standardized screw threads became necessary once manufacturers had created a plethora of incompatible types and sizes, so will open source hardware become necessary, and then a reality. I’d be willing to make a long bet on that one.

This is getting long winded, and it’s getting late. Until we shall meet again, this is Mister Fahrenheit, wishing you open systems and closed doors in the age of ever-encroaching surveillance.