Editorial: The dangers of openness
Below, my introduction from last night's Innovation Reading Circle, on Jonathan Zittrain's The Future of The Internet. Things we touched on in the subsequent debate which I don't mention below included tethering of devices, data portability and tinkering. More to be posted at the event site in due course.
In its calm and reserved way, The Future of the Internet is a call to arms. In it, Jonathan Zittrain argues that we are rapidly approaching the point where we will need to take positive steps to preserve those aspects of the internet’s character that, he contends, have powered its success and widespread adoption and, as a result, have positively influenced wider society.
His core argument is that we have experienced and benefitted from a ‘generative internet’; that is, ‘a platform that invites contributions from anyone that cares to make them’. However, the seeds of the platform’s destruction are sown in that openness: ‘the generative features that invite contribution and that worked so well to propel the first stage of innovation begin to invite trouble and reconsideration, as the third-party contribution destabilizes its first set of gains.’ (p18) One of the counters to this destabilization is the rise of ‘sterile appliances’ and an ‘appliancized network, that incorporates some of the most powerful features of today’s Internet while greatly limiting its innovative capacity’. (p8) Appliances such as the iPhone: tethered and generally difficult to tinker with.
In the main, Zittrain views this shift as negative, and the concern of the book is to show how this state can be, if not reversed, stopped; and if not stopped, at least entered into with an awareness that we are entering into it.
He is not blind to the reasons why this shift is happening. After his opening chapters outline how the internet came to have its current character, he shows how the crisis in cybersecurity is changing assumptions about what can and should be permitted. Generally, a combination of less technically able mainstream users and corporate interests means that there is less willingness to run the sort of risks that a generative environment implies. Which is, of course, an opportunity cost.
For Zittrain, a generative environment is analogous to – but not the same as – ideas and movements such as theories of the commons, the Free Software movement and network neutrality. As he makes clear in his discussion of the latter, the difference is that for him generativity must have ‘participation’ at its core, relative to the layer of the internet one might be involved with at any given moment.
And it is that principle of participation that underpins the majority of his solutions. In the main, his evidence for them is drawn from his analysis of Wikipedia. He writes:
The elements of Wikipedia that have led to its success can help us come to solutions for problems besetting generative successes at other layers of the Internet. They are: verkeersbordvrij, a light regulatory touch coupled with an openness to flexible public involvement, including a way for members of the public to make changes, good or bad, with immediate effect; a focus of earnest discussion, including reference to neutral dispute resolution policies, as a means of being strengthened rather than driven by disagreements; and a core of people prepared to model an ethos that others can follow. (p146)
These ideas are based on what I’ll pretentiously call a Lessigian framework. Let me unpack that. In Lawrence Lessig’s ‘Code’, his seminal book on cyberlaw, he sets out a general theory of regulation, explaining how behaviour can be controlled. One means is recourse to law and legal instruments. But there are others: price and market mechanisms, social norms, and the architecture and design of systems themselves – giving rise to Lessig’s notion of software as a kind of ‘West Coast’ law.
For the majority of his recommendations, Zittrain is concerned to bolster social norms and the design of systems – and social norms through the design of systems – rather than market-based remedies or legal tools.
Frankly, his recommendations are buried in the text, and difficult to discern – even when tentatively applied in his concluding discussion on the future of privacy – but follow this broad thrust, best summed up in this quote from early internet network engineers: ‘We reject: kings, presidents and voting. We believe in: rough consensus and running code.’ (p 28)
Zittrain uses the example of ‘robots.txt’, a voluntary but generally accepted standard that lets site owners publish material online while asking that it not be indexed by search engines. ‘Through robots.txt, site owners can indicate preferences about what parts of the site ought to be crawled and by whom.’ (p 223) He goes on to say that such ‘a simple, basic standard created by people of good faith can go a long way toward resolving or forestalling a problem containing strong ethical or legal dimensions.’ (p 225)
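To make the convention concrete (my illustration, not the book's): robots.txt is plain text, and crawlers of good faith simply read it and comply. Python's standard library even ships a parser for it. The site, crawler names and paths below are hypothetical.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt: let any crawler index the site,
# except for /private/, and turn one crawler away entirely.
robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: BadBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching.
print(parser.can_fetch("GoodBot", "https://example.com/articles/post.html"))  # True
print(parser.can_fetch("GoodBot", "https://example.com/private/notes.html"))  # False
print(parser.can_fetch("BadBot", "https://example.com/articles/post.html"))   # False
```

Note that nothing here is enforced: a crawler that ignores the file suffers no technical penalty, which is exactly Zittrain's point about norms doing regulatory work that law and code do not.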
For me, this hope that change can be effected at all layers of the internet via only two of Lessig’s four modes of regulation is optimistic to say the least. It is the market that is driving the rise of the appliancized network, and at the moment I can’t see – and the book didn’t persuade me – how social norms can overcome, trump or even move in a more progressive harmony with this development.
And I was also left with a nagging question: how bad will the internet as a non-generative system be anyway? The parallel I drew was with cars. There are hobbyists and tinkerers in the automobile market, for sure, but innovation and creativity haven’t necessarily been stifled by the absence of a wider ability to get under the hood of all vehicles. How do we value future possibilities that we can’t know about?
Overall, The Future of The Internet is a dense, supple and subtle book, but one that doesn’t ultimately fulfill its subtitle. Sometimes steel is needed to persuade.