Nothing Hurts The Godly

by Misha Lepetic

One fish says, “So, how's the water?”
The other fish replies, “What water?”

Ladies and gentlemen, I give you Richard Stallman, shuffling onto the stage at Cooper Union's Great Hall, accompanied by the veritable Platonic Ideal of a potbelly; his shoes are almost immediately discarded and left by the podium. Padding around the same stage where, in 1860, Abraham Lincoln gave the speech that ignited his political career, Stallman proceeded to subject his New York audience to a rambling disquisition on freedom and computer code. It consisted of oftentimes astonishingly petty invective, peppered with requests that veered from the absurd to the hopelessly idealistic, and it ultimately drove away a good portion of the audience, myself included, well before its conclusion nearly three hours later.

Why is this recent encounter with a nerd's nerd worth recounting at all? (Entertaining as they were, I will forgo the petty bits, though you can view the whole talk here.) Simply because, in computing circles, Stallman is an archetype: the avenging angel of free software. Some 30 years ago, he founded the Free Software Foundation (FSF), which has since been developing the GNU system, a free operating system that was completed by the addition of Linus Torvalds's Linux kernel. It is no exaggeration to say that the smooth functioning and scalability of much of the Internet is owed to the availability and robustness of the GNU/Linux operating system and its various derivative projects. These, in turn, are the result of probably millions of hours of volunteer labor.

So when Stallman says ‘free,' he really means it, and this is where the trouble begins. According to the FSF, free software allows anyone

(0) to run the program,

(1) to study and change the program in source code form,

(2) to redistribute exact copies, and

(3) to distribute modified versions.

This is a simple and powerful set of axioms. It also requires certain conditions to be met, the most challenging of which is access to the code in its source form. Any time the chain of modification and distribution is broken – say, if the person modifying the code makes the source unavailable, or attaches restrictions on how the result may be used or redistributed – the code is no longer considered free. (Note that charging money is not, by itself, the problem: on the FSF's definition, ‘free' is a matter of liberty, not price.) Of course, ‘unfree' code can also be made free; this is in fact what Torvalds did with Linux.
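These axioms are enforced in practice through copyleft licensing, chiefly the GNU GPL. Here is a toy sketch of the definition, headed by the standard notice the FSF recommends placing at the top of each GPL-licensed source file; the module name and the `is_free` helper are hypothetical, for illustration only:

```python
# frobnicate.py -- a hypothetical module released under the GPL.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.

# The FSF's four freedoms, numbered 0-3 as above.
FREEDOMS = {
    0: "run the program for any purpose",
    1: "study and change the program in source code form",
    2: "redistribute exact copies",
    3: "distribute modified versions",
}

def is_free(granted: set) -> bool:
    """Software is 'free' in the FSF sense only if all four freedoms hold."""
    return set(FREEDOMS) <= granted

# Withholding source code breaks freedom 1, and with it the chain:
assert is_free({0, 1, 2, 3})
assert not is_free({0, 2})
```

The all-or-nothing check is the point: withholding any single freedom from any user is enough to make the software unfree.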

Stallman is an idealist and makes no bones about it – in his ongoing capacity as GNU's leading light, he enjoys referring to himself as “the Chief GNUisance.” I admire this – like many purists, he is as constant as the North Star. You always know where you stand with him, which generally means the only question is how short you fall of his ideals. As with any purist, I suspect that there are only two kinds of people in his worldview: free software advocates and everyone else. Unfortunately, this jihadi attitude leads some of us to consider a different binarism: that the world consists of those who are free software advocates, and those who think that free software advocates are insufferable assholes. This is unfortunate.

*

Here is something else that is unfortunate: three brief critiques that do not undermine the axioms above, but rather make those axioms irrelevant, or at the very least vastly less impactful than FSF advocates might hope.

1) Not everyone can read source code, or wants to. When I'm not mouthing off on 3QuarksDaily, I help to design, develop and run a custom-coded internal learning technology platform for a fairly large multinational. On Friday afternoon, the developers pushed through an update to the platform that did not seem to be particularly intricate but that nevertheless wound up breaking much of the platform's functionality. Given that this internal site is viewable by upwards of 50,000 people, I issued an all-hands-on-deck (in the spirit of inventing new collective nouns, I would like to propose ‘a compile of developers' for such occasions) and, following a six-hour conference call, we managed to return the platform to a more-or-less steady state.

What I want to point out here is not the fact that software breaks – that is more often the case than not, since software, despite its name, is inherently brittle. More salient is the fact that it took five or six contracted professionals a good chunk of time to understand and fix what had gone wrong in an information system of, frankly, only mild complexity. Software has reached a state of complexity that challenges even the people who originally wrote the code. So we can confidently say that the number of people who can evaluate almost any non-trivial source code is drastically limited. This is to say nothing of whether anyone is being held accountable, via compensation, for the stability and integrity of said code. It is one thing to be able to fire your developers for incompetence, since you can just as easily hire others to fix their work. But when the entire system of free software is predicated on potlatch principles, institutional actors lose the leverage to get time-sensitive work done, and done to their specifications.

2) Not all outcomes on the Internet are driven by whether code is free. There has recently been much talk about the demise of “net neutrality,” especially as a result of the piss-up between Netflix and Comcast. This is a complex topic (with excellent explanations here and here), but suffice it to say that net neutrality is the principle that all content traveling across the network is treated the same. In theory, the Internet is designed not to favor the delivery of cat videos over the State of the Union address. The relevance to free software is simply this: the Internet depends on more than software. The argument once leveled against free software advocates was that you still needed a vast infrastructure of hardware to make that software, free or otherwise, relevant. No one was going to build a server farm for free. Indeed, whoever came up with the term ‘the cloud' earned their marketing stripes, since the cloud is nothing more than the outcome of decades of exponential progress in, and falling costs of, computing power, bandwidth and memory. The materiality of this technology has not decreased at all but, like factory farming, has merely been removed from view. The philosophy of the FSF, however, concerns software, not hardware.

In the case of net neutrality, the burning question concerns the system of payments that guarantees the distribution of content. What is fair and equitable, and who gets to decide? Until recently – that is, until the advent of video streaming – existing agreements and competition were sufficient to guarantee the timely delivery of content to users; the decentralized architecture of the Internet was, rather fortuitously, able to absorb demand. But with Netflix and YouTube together accounting for about half of downstream Internet traffic, we now have a giant tug-of-war between the firms that handle traffic from its point of origin to its point of consumption.

In the logic of network economics, one way to resolve this tug-of-war is for firms to merge, sometimes horizontally but especially vertically. While this may improve service, competition suffers. Such mergers draw companies ever closer to monopoly, and things reach a toxic boil when a single company combines access provision (the classic Internet Service Provider, interested only in providing the pipes) with content provision (e.g., Comcast, which in addition to providing access also owns or co-owns NBC, E!, Hulu, etc.). Suddenly the access provider is incentivized to privilege its own traffic over that of content providers like Netflix.

The FCC has been caught flat-footed by this eruption and, in the resulting regulatory vacuum, players like Comcast and Netflix have proceeded to make their own arrangements. Aside from being ultimately detrimental to consumers (has anyone seen their cable bill go down as a result of mergers praised for their intention to create economies of scale?), the landscape is much sparser, and until the government catches up and begins regulating the Internet as a utility, there is little recourse for content providers, let alone consumers. If you don't think the Internet is important enough to be considered a utility like electricity or telephony, consider that the much-derided healthcare.gov is the first major government service to be offered exclusively online – and it will scarcely be the last.

Note that in the entire discussion above, there is no mention of whether the code being used to run all this is free or proprietary. That's because it just doesn't matter. It's why the old joke about fish and water is appropriate here. The fish have more important things to think about, like where dinner is coming from, and how to avoid becoming someone else's dinner.

3) Not all devices are accessible, even if you have access to source code. Concerning the Internet's future, this is probably the most important category of all. In fact, it's a combination of the two preceding critiques: individual ability/willingness and access to hardware.

Encapsulated in the term ‘the Internet of Things' is the entirely reasonable, and in fact inevitable, sensorization of everything, and the ensuing connection of all those sensors to the Internet. The classic example is the refrigerator that notices you are low on milk and helpfully puts it on your shopping list, or just goes ahead and orders it for you. At the same time, it seems that these same fridges have been recruited by hackers to send out spam email (technology is occasionally not without its moments of irony), so obviously there is plenty of room for improvement.

But say you want to fix your fridge so that the only spam you get out of it is some kind of dodgy meat product. Even if you had access to the source code and the ability to read and modify it, where would you plug in your laptop? Perhaps the handy USB port provided for just such an occasion by General Electric? Fat chance. It is the rare manufacturer that is interested in opening its hardware to the masses (although Jaron Lanier, former roommate and current nemesis of Richard Stallman, strong-armed Microsoft into doing so for its Kinect hardware, with great results). We can argue as much as we like about the general disarray in which intellectual property law finds itself, or about how an overly litigious culture discourages companies from letting people tinker with their stuff, but the point is that free software, in Stallman's stern formulation, does not begin to address the much more salient question of access to devices in the actual, physical world. And, as with net neutrality, almost no one but an overarching regulatory agency will ever be able to mandate such access.

The point becomes even broader when we consider that the Internet of Things goes well beyond toasters and thermostats (although the latter are big business indeed). To a large degree, the entire concept of “smart cities” is predicated upon the generation of enormous amounts of data – data that can only be conjured by millions of sensors placed throughout the built environment. This is, to put it mildly, a double-edged sword, with the promised efficiencies inextricable from the specter of command-and-control tyranny. Yet the charge towards smart cities is driven wholly by corporations, and bought and paid for by governments. I can't think of two entities that, working in concert, would be less amenable to the idea of opening source code to all comers.

Indeed, the Internet of Things points to another, even more explosively fragmented future: one in which computers themselves are limited to specific tasks. In a fascinating talk delivered in 2011, entitled “The Coming War on General Purpose Computation,” author and general gadfly Cory Doctorow lays out a computing landscape in which firms manufacture purpose-built appliances: general-purpose computers locked down so as to run only approved software. In such a world, none of the software built up over the past thirty years by the free software movement will even be allowed to run on these machines. Forget about free vs. proprietary: for Doctorow, the fight is about keeping tomorrow's devices able to run software unintended for them at all.

In all three critiques, we can actually come to an understanding of why free software was successful, because that is inextricably linked to where and when it was successful. The GNU/Linux OS has been supremely successful – and vital – in providing the Internet's software backbone, a very deep and unfamiliar place to most of us. You basically had to be an expert even to find the conversation in the first place. Moreover, this was technology developed primarily in the 1980s and early 1990s, when the World Wide Web didn't quite yet exist and the Internet was non-commercial. There were simply fewer players, and there was also less at stake. This is not to say that the hacker ethos does not live on, nor that people aren't choosing to become further involved in re-making their digital (and physical) lives. But these movements are either decidedly on the periphery or, once they become visible or useful to the mainstream, quickly assimilated, bought, or legislated out of existence.

One could argue that the free software movement made the contribution it did precisely because its form of social organization and ethos was exceptionally well suited to the circumstances of the time. The uncompromising stance created a legacy that lives on today – for example, an astonishing 61% of web servers run on Apache, another free software project (albeit one independent of GNU). But at the same time this purity points to a fatal flaw: if free software is so great and so obviously the best way to go, why isn't it everywhere? Back at Cooper Union I thought I caught a glimpse of the answer. Richard Stallman, for all his quirky grandstanding, awful joke-telling and Bush-bashing (yes, it is 2014 and he was gleefully Bush-bashing), never once admitted that he or the free software movement had ever made a mistake. This is the problem with purists – all controversies were settled long ago, whether about dinosaur fossils, the number of virgins awaiting us in heaven, or the real value of gold. I dearly wanted to ask Stallman whether there was anything he would have done differently – perhaps the gentlest form that sort of question can take – but, weighing his right to speech against my right to a drink, I left to have a few beers around the corner instead.