But beyond a statement about internet infrastructure, this is also about finding a solution to avoid losing the internet. To do so, we need to define the atomic components that will make it resilient to any attack, whether it comes from repressive dictators or over-reaching corporations.
Genesis of the Particle Protocol
Fortunately, the internet happens to be at a key point in its evolution, with the transition from IPv4 to IPv6 upon us. With one major effort to upgrade a large part of the infrastructure already underway, it seems that new efforts could help us increase the overall resilience of the net.
A few years ago, I put together some basic requirements for a tool that I would like to see, something I had called a “Personal Relationship Manager.” A few years later, there are several of those so I’m thinking that I can start planting similar ideas into the ground for possible implementations by people who understand protocols much better than I do. The following are imperfect thoughts based on my understanding of core internet protocols and discussions I’ve had around them with several people over the last couple of years.
A lot of this, of course, has substantial precedent. The idea of completely rewiring connectivity, for example, is not really a new one. Here’s Doc Searls, about three years ago:
Connectivity-as-infrastructure is soft in several senses. One is that you don’t need a big utility company to provide it. Another is that data and its protocols are soft. They have no physical substance, yet they have supportive qualities that are substantive in the extreme. That’s because the Net is a way of connecting. It is not the wires and waves that do the connecting.
… and there has been a lot of work put into making internet protocols faster and more reliable, but few efforts have taken the radical approach of making the net completely self-reliant.
So without further ado, here are some basic requirements:
- Open
- Light
- Easy to use
- Without ends
I will now go into the thinking behind each of these points.
Open

At its core, a protocol is simply a set of guidelines or rules.
The challenge then becomes: who is responsible for setting those guidelines or rules? That responsibility should be diffused across the internet’s community of interest. In that sense, the particle protocol should be a protocol without a head group. Decisions about what to include in and exclude from its core should come from the community as a whole, with no central office, no central committee, and no central individual ultimately responsible for it.
Open is the way of the net, where ideas are given prominence based on their individual value and not on the value of the individuals who brought them forth.
Because it is headless, open is uncontrollable. One could argue that peer-to-peer networks are the closest thing we have to open networks: every node in the network serves and routes traffic for every other node, and the disappearance of an individual node does not impact the network as a whole for very long.
A corollary to open, then, is that the network will be peer-to-peer, making it impossible to shut the network down altogether. Peer-to-peer networks have been the bane of the music and movie industries for a decade precisely because they cannot be shut down; if we are to build a network that cannot be shut down, we can learn from that model.
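The resilience property described above can be sketched in a few lines. The mesh below is a toy example of my own, but it shows the point: in a peer-to-peer topology where every node has more than one path to the others, removing any single node leaves the rest of the network fully connected.

```python
from collections import deque

def reachable(adjacency, start):
    """Breadth-first search: return the set of nodes reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in adjacency.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(neighbour)
    return seen

def drop_node(adjacency, dead):
    """Return a copy of the network with one node (and its links) removed."""
    return {n: {p for p in peers if p != dead}
            for n, peers in adjacency.items() if n != dead}

# A small mesh: every node has at least two independent paths to the others.
mesh = {
    "a": {"b", "c"}, "b": {"a", "c", "d"},
    "c": {"a", "b", "e"}, "d": {"b", "e"}, "e": {"c", "d"},
}

# Losing node "b" does not partition the rest of the network.
survivors = drop_node(mesh, "b")
print(reachable(survivors, "a") == {"a", "c", "d", "e"})  # True
```

The more redundant links each node keeps, the more node losses the network can absorb before partitioning, which is exactly why the particle protocol should favor a dense, headless mesh.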
Open also means unencumbered by any pre-existing patent. The particle protocol should be owned by absolutely everyone and by no one in particular. The reason is that, with no owner, there is no one for an organization or government to lean on. With that point of friction removed, creating backdoors or shutting down such a protocol would be far harder, requiring substantial effort on the part of those trying to do the shutting down.
Open also means that the particle protocol should be sitting at the lowest level of the infrastructure stack with little or nothing below it. Once again, this is to ensure its resilience as the closer it is to the foundation, the harder it is to remove.
Last but not least, open is not about money: the core portions of the particle protocol should be free in a monetary sense too. Beyond the core, however, innovation should be allowed, so anyone can build (and make money from) extra components for the particle protocol. But the people doing so must realize that any changes they make to the core are governed by the protocol’s underlying principles and must be redistributed in the same open fashion.
Light

The particle protocol should have the lightest CPU and memory footprint possible. Some may feel this is too much of a constraint, but the particle protocol should be so light that it can run on most devices. For its initial version, I believe it should be able to run, without impacting their pre-existing operations, on mobile phones, computers, and devices with as low a footprint as a 400MHz CPU and 128MB of RAM (Apple watchers may recognize this as the original specification for the first iPhone: that is no accident, as I believe the particle protocol should run on any smartphone in the future).
Light, in my view, also means unattached, which means that the particle protocol would be wireless by default. Sure, devices could be created to connect some points of the network to some wired network (and this could turn into a whole new sector for the telecom infrastructure industry).
Finally, light also means unencumbered by extras. The problem to be solved here is resilience (i.e. the network can’t be shut down); anything beyond that is extra. So the particle protocol should allow TCP/IP to run on top of it, but things like extra security, guarantees of service, and so on should not be part of its core. However, I’d like to see some kind of plug-in approach that would allow the protocol to be extended with such features by anyone who wants to.
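One possible shape for that plug-in approach is a minimal registry: the core stays tiny, and optional features (extra security, quality of service, and so on) register themselves as handlers that packets pass through. All names here are hypothetical, a sketch rather than a specification.

```python
class ParticleCore:
    """A deliberately tiny core that knows nothing about optional features."""

    def __init__(self):
        self._plugins = []          # extensions, applied in registration order

    def register(self, plugin):
        """Attach an optional feature without touching the core."""
        self._plugins.append(plugin)

    def deliver(self, packet):
        """Run a packet through every plug-in; any of them may transform it."""
        for plugin in self._plugins:
            packet = plugin(packet)
        return packet

core = ParticleCore()
# e.g. a hypothetical integrity-checking plug-in added on top of the core:
core.register(lambda pkt: {**pkt, "checked": True})
print(core.deliver({"payload": b"hello"}))  # {'payload': b'hello', 'checked': True}
```

The design choice is that nothing in `deliver` depends on which plug-ins exist, so a bare core remains fully functional when every extra is stripped away.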
Easy to Use
The first dotcom boom taught me an important lesson about technology: if it is not easy to use, people won’t use it. The internet was around for a long time prior to 1995, but it wasn’t until then that people adopted it. Why? I think it was due to two factors: first, Microsoft built a TCP/IP stack into its operating system, making internet access a question of configuration; second, AOL started splattering the world with its disks, making access to the online world just a question of setting up a username and password and handing your credit card information over to them. The rest was automated.
In order for the particle protocol to succeed, it should be easy to install and easy to use. By easy to install, I mean that it should be a question of downloading it and, if needed, clicking on an icon to install it but that would be it. The software would install itself, look for ways to connect to its peers, identify any peers nearby, and automatically connect, becoming another node in the network.
By easy to use, I mean that there ought to be no actual work to use it once installed. The first thing the installer would do is look for all the ways the device can connect to others (wired, e.g. via a modem or ethernet; wireless, e.g. WiFi, Bluetooth, EDGE, 3G, 4G, etc.) and attach itself to all the available modes without disturbing the other software using them. Embedded in the protocol itself should be a logic for prioritizing connectivity, based on how many nodes are available in a particular connectivity mode and how reliant other nodes are on its ability to bridge more than one connection (e.g. tying 3G communication to WiFi links).
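That prioritization logic might look something like the sketch below. The scoring rule and field names are my own assumptions, not part of any specification: each link is scored by how many peers it reaches, with a heavy bonus for links that bridge otherwise-isolated peers (the 3G-to-WiFi case mentioned above).

```python
def score(link):
    """Hypothetical priority: peer count, plus a bonus for bridging links."""
    return link["peers"] + (10 if link["bridges_isolated_peers"] else 0)

# Example links a device might discover at install time (illustrative data).
links = [
    {"mode": "wifi",      "peers": 6, "bridges_isolated_peers": False},
    {"mode": "bluetooth", "peers": 2, "bridges_isolated_peers": False},
    {"mode": "3g",        "peers": 1, "bridges_isolated_peers": True},
]

# Use every link, but prioritize the ones the network depends on most:
# here the lone 3G link outranks WiFi because it is the only way out
# for the peers behind it.
for link in sorted(links, key=score, reverse=True):
    print(link["mode"], score(link))
```

Note how the bridging bonus inverts the naive ordering: a link with a single peer can matter more than one with six, because its value lies in what it connects, not in how many.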
By being completely invisible, the protocol could exist without being acknowledged and be installed without attracting much notice afterward. So, to take Libya as an example again, hacktivists could work to install the particle protocol on every communication device the government owns, and protesters could leverage those installations for their own communication.
The only way to stop such a protocol would be to completely shut down every electronic device in an area or country. While it is not impossible that some strongmen could go down that route (I’m thinking of places like North Korea, maybe), the consequence is that the only way to shut things down is to shut down your own communication lines as well. A shutdown of that kind could then create a race over who brings their own network back up first in order to communicate. In terms of network theory, this creates resilience by ensuring that the information asymmetry created by a network shutdown forces ALL the players to rush back to restoring it, thus restoring nodes for all sides at the same time. In a perverse way, it leverages the asymmetry to get rid of it.
Without Ends

Many years ago, my good friends Doc Searls and David Weinberger argued that the internet was a World of Ends. The principles were sound but, unfortunately, by creating a view based on ends, they opened the possibility of creating points of control.
If the internet has ends, it can be closed down.
But what if it didn’t have end-points? What if it had addresses that changed on a more random basis? Then exerting control over one point would not necessarily work. What if the addressing were to change based on time and location as well as other factors, such as sudden changes in traffic (spikes or drops), with violent drops in traffic resulting in a complete re-assignment of the addressing space along with a drastic change in how long devices attach to that space before changing address again?
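One way to sketch that idea: derive each node’s address from a hash of a shared secret, a location hint, and a rotating epoch counter, and bump the epoch violently when traffic collapses. Everything here (the inputs, the 80% drop threshold, the epoch jump) is an illustrative assumption, not a proposed standard.

```python
import hashlib

def current_address(node_secret, region, epoch):
    """Derive this epoch's 128-bit address from time, location, and a secret."""
    material = f"{node_secret}|{region}|{epoch}".encode()
    return hashlib.sha256(material).digest()[:16]

def next_epoch(epoch, traffic_now, traffic_before):
    """Rotate addresses on schedule; rotate the whole space immediately on a
    violent drop in traffic (a possible sign of an attack or shutdown)."""
    if traffic_before and traffic_now < 0.2 * traffic_before:
        return epoch + 1000   # jump far ahead: full re-assignment of the space
    return epoch + 1          # normal, time-based rotation

addr_a = current_address("node-secret", "region-7", epoch=42)
addr_b = current_address("node-secret", "region-7", epoch=43)
print(addr_a != addr_b)  # True: a new epoch yields an unrelated address
```

Because the hash output is unpredictable without the inputs, an attacker who maps the address space at one moment holds a map that expires with the next epoch.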
Without those ends, a network protocol that could carry traffic while seeing radical changes in its addressing space would create a situation where an attack against a portion of the network is seen as an attack against the network as a whole, and solutions are handled on a global basis.
So whether the network is shut down because a political strongman decides to do so or because an earthquake damages a region, the network as a whole would have some form of self-healing capacity, rearranging the damaged parts quickly and without any involvement from the users in the affected areas (network management should be the least of people’s problems in a time of crisis).
Beyond the principles: Addressing
Since this would be a relatively new protocol, I would throw some backward compatibility away. Given how long protocol development takes, I can only assume that we won’t see the first implementations of this until 2012. As a result, I would venture that the particle protocol should not have to worry about IPv4 addressing and should focus on working with IPv6 instead, since IPv6 will increasingly be the standard for addressing beginning in 2012. IPv4 support would be nice for legacy systems, but this is about fixing problems in the future, so let’s support the systems that are future-proof.
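One practical upside of the IPv6-only choice: its 128-bit addresses are exactly the size of the hash-derived identifiers discussed earlier, so a derived value can be packed straight into a real IPv6 address. The sketch below is an assumption of mine, using the unique-local range (fd00::/8, reserved for private addressing) rather than any allocated prefix.

```python
import hashlib
import ipaddress

# Hypothetical: derive a 128-bit value and pack it into an IPv6 address.
digest = hashlib.sha256(b"node-secret|region-7|epoch-42").digest()
raw = bytearray(digest[:16])
raw[0] = 0xFD                     # force the unique-local (fd00::/8) prefix
addr = ipaddress.IPv6Address(bytes(raw))
print(addr)                       # a valid, routable-within-the-mesh address
```

Nothing comparable is possible with IPv4: 32 bits cannot hold a collision-resistant, frequently rotating identifier, which is one more argument for leaving IPv4 behind.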
Beyond the principles: Implementations
Ultimately, protocols live and die by their implementations. The first step towards implementation would be a lightweight version of the particle protocol that could work on Linux, Android, and iOS devices.
Why those first?
First, Linux. Linux is available in a variety of forms, including as an embedded OS for devices. In the future, I think we could see the particle protocol available on embedded devices (particle boxes) that could be assembled cheaply and connected to power and network sources. Such boxes should be relatively inexpensive to produce (in discussions, I’ve been using the price of $25 in parts as a stake in the ground) and all schematics should be open-sourced.
The challenge with a hardware-only solution, however, is that Linux is not something the general population uses on a regular basis. A mostly Linux-based solution would attract the attention of people who want to take the network down, pointing them straight to those devices and making it easy to get them disconnected.
More difficult to disconnect, however, is an overall telecom infrastructure, and here I am making a technical bet: that iOS and Android will be the major operating systems powering mobile phones in the future. Taking that approach, a version of the particle protocol working on those devices could turn every smartphone running those OSes into a network point. I’m sure this might make some people unhappy (Apple would probably not approve), but I suspect it could allow for quick deployment in regions needing it.
Any other implementations would be welcome, of course.
Protocols are agreements, and this set of concepts is only a proposal. I’d like to see discussion around these concepts in the technical community but, at the core, the problem is simple: we need a communication network that works based on network effects, growing much stronger with every node that joins it. Recent events, both geopolitical (Egypt, Libya) and environmental (the earthquake and tsunami in Japan), have shown that our networks are still brittle.
The particle protocol is the beginning of a discussion to strengthen the network at one of its lowest layers and ensure that disruption in one physical location can be healed by its proximity to other locations.