“All these – all the meanness and agony without end
I sitting look out upon
See, hear and am silent.”
~ Walt Whitman
On a recent Facebook thread – about what, heaven help me remember – someone posted a comment along the lines of “This is what happens when we live in a post-truth society.” Politics? GamerGate? Climate change? Who knows – take your pick, and in the end it's not really that important. The comment struck me as misguided, though, and led me to contemplate not so much the state of ‘truth' as a category, which has always been precarious (see: 2,500 years of philosophy), but the conditions that may or may not lead to the delineation and bounding of what we may consider sufficiently, acceptably truthful, and how technology has both helped and hindered this understanding today.
I responded to the commenter by suggesting that we live not so much in a ‘post-truth' society as a ‘post-accountability' society. It is not so much that truth is disrespected, distorted or ignored more than ever before, but rather that the consequences for doing so have (seemingly) dwindled to nearly zero. One could argue that this is vastly more damaging, because the degree of our accountability to one another profoundly influences how, and whether, we can arrive at any sort of truth at all. Prior to the onset of information technology, there were well-established (and, of course, deeply flawed) mechanisms for generating and enforcing accountability. Now the information technology that has relieved us of accountability is already so deeply woven into our society that not only will we never put the genie back in the bottle, we are at a loss to imagine how ever to get this genie to play nice. Except the problem is that this kind of righteous outrage is, in fact, entirely an illusion.
Instead of arguing about truth as an objective, abstract and hopefully attainable category, let's assume that truth (or whatever you want to call it) is a sort of consensus, and that consensus is reached through processes of trust (we respect each other's right to have a say) and accountability (we take some responsibility for what we say to each other). These are all fundamentally social processes, and as such haven't really changed very much over time. What interests me is how the insertion of technology into this discourse has changed our perceptions of the burdens that these concepts – truth, consensus, trust and accountability – are expected to bear.
Roughly speaking, technology has begotten two completely contradictory streams of development in this regard. This is old news – one person finds a better way to make fertilizer and someone else finds a way to build a better bomb using that fertilizer. In this sense technology merely functions as an amplifier for whatever tendencies are coursing through society's veins. Within the context of accountability, the two streams may seem paradoxical, but the paradox is only superficial. Let's first touch on how technology has played a largely beneficial role in the elaboration of the paradigm of accountability.
Most obviously, there are the successes that have allowed a tremendous blossoming of commerce. An early, pressing problem faced by ecommerce was the creation of trust between buyers and sellers in an anonymous, disembodied marketplace. Buyers were interested in what they could buy online, but reluctant to fork over cash to anonymous strangers. In 1995, eBay was one of the first to propose a simple accountability mechanism for trader-to-trader transactions: buyers and sellers left feedback for one another confirming (or critiquing) speed of shipping, quality of goods, and so on. Today the approach is received wisdom, but at the time no one knew if it would actually work. Yet this feedback system has continued to underpin the success of eBay and many other ecommerce sites, as witnessed by Alibaba, the current record-holder for the world's largest stock market IPO. It's no mean feat to create trust between buyers and sellers in a market as notoriously dodgy as China's.
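To make the mechanism concrete, here is a minimal sketch, in Python, of how such a feedback score might work. This is a toy illustration under my own assumptions – the seller name and the scoring rule are invented for the example, not eBay's actual system:

    from collections import defaultdict

    # A toy feedback ledger: after each completed transaction,
    # the buyer rates the seller +1 (positive) or -1 (negative).
    feedback = defaultdict(list)

    def leave_feedback(seller, rating):
        feedback[seller].append(rating)

    def reputation(seller):
        # Share of positive ratings: the one number a stranger
        # sees before sending money to another stranger.
        ratings = feedback[seller]
        if not ratings:
            return 0.0
        return sum(1 for r in ratings if r > 0) / len(ratings)

    leave_feedback("acme_traders", +1)
    leave_feedback("acme_traders", +1)
    leave_feedback("acme_traders", -1)
    print(round(reputation("acme_traders"), 2))  # 0.67

The score is crude – a single number – but it is public, cumulative and hard to escape, which is precisely what lets strangers transact.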
Moreover, the applications of this mechanism seem to have grown well beyond the simple trader-to-trader transaction. We are now accustomed to reading book reviews on Amazon, restaurant reviews on Yelp, and accommodation reviews on TripAdvisor, among many others. Reviews are also arguably being used to put the screws on part-time entrepreneurs such as AirBnB hosts and Uber drivers, but that is a topic for another time. It is sufficiently uncontroversial to say that, in a very concrete sense, we are becoming ever more reliant on an army of anonymous commenters to help us in our sensemaking of what to read, eat, buy or see.
Trust and accountability mechanisms have expanded in even subtler ways, specifically in the way that machine participants trust one another within a given system. Perhaps the most compelling example of this is bitcoin, the crypto-currency whose wild price oscillations (and shady applications) managed to grab global headlines for, well, at least a few minutes. The obvious need to prevent a party from double-spending an amount of bitcoin, which after all is a bunch of numbers sitting on a hard drive somewhere, led bitcoin's designers to include the notion of a block chain. The block chain accomplishes this through a concept called proof-of-work:
[Proof-of-work] is counterintuitive and involves a combination of two ideas: (1) to (artificially) make it computationally costly for network users to validate transactions; and (2) to reward them for trying to help validate transactions. The reward is used so that people on the network will try to help validate transactions, even though that's now been made a computationally costly process. The benefit of making it costly to validate transactions is that validation can no longer be influenced by the number of network identities someone controls, but only by the total computational power they can bring to bear on validation.
Basically, each machine on the network must validate all transactions, and all transactions must match across all machines. In the meantime, all transactions remain anonymous, even though the block chain, stored on each participant's machine, retains the entire record of all transactions (you can really go down the rabbit hole here). The computational intensity required means that no one individual can fake a transaction and fool the other participants. This is counterintuitive because we usually think of the goals of software design as privileging lighter, faster and simpler solutions.
A waggish take might see this as little more than make-work for the digital age. Nevertheless, the critical element here is that there is no central authority that vets the transactions. The network validates itself as it goes along, and, if everything works as it should, participants that act in bad faith are rooted out as a matter of course. I suspect that this sort of decentralized, distributed trust mechanism will find itself refined and deployed in many ways – for example, in credit systems for validating bottom-of-the-pyramid consumers. But it also occupies an important place within our narrative: this is what accountability looks like if you're a machine. From the point of view of a machine, it is a straight line from accountability to trust, and from there to consensus and truth. You just need plenty of electricity.
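For the technically curious, here is a minimal sketch, in Python, of the two ideas working together: hash-chained blocks plus proof-of-work. It is a toy model of the concept, with invented transaction strings and an artificially low difficulty – not the bitcoin protocol itself:

    import hashlib

    DIFFICULTY = 4  # leading hex zeros required; a toy value, real networks demand far more work

    def block_hash(data, prev_hash, nonce):
        payload = f"{data}|{prev_hash}|{nonce}".encode()
        return hashlib.sha256(payload).hexdigest()

    def mine(data, prev_hash):
        # Proof-of-work: grind through nonces until the hash meets the
        # target. Costly to produce, trivial for anyone else to verify.
        nonce = 0
        while not block_hash(data, prev_hash, nonce).startswith("0" * DIFFICULTY):
            nonce += 1
        return nonce

    def valid(chain):
        # Any participant can re-check the whole history: each block's
        # hash must meet the target and chain to its predecessor.
        prev = "genesis"
        for data, nonce in chain:
            h = block_hash(data, prev, nonce)
            if not h.startswith("0" * DIFFICULTY):
                return False
            prev = h
        return True

    chain, prev = [], "genesis"
    for tx in ["alice->bob:5", "bob->carol:2"]:
        nonce = mine(tx, prev)
        chain.append((tx, nonce))
        prev = block_hash(tx, prev, nonce)

    print(valid(chain))   # True
    chain[0] = ("alice->bob:500", chain[0][1])
    print(valid(chain))   # False: tampering invalidates this block and all that follow

Faking one old transaction means redoing the proof-of-work for that block and for every block after it, faster than the rest of the network extends the honest chain – which is exactly why no lone actor can fool the other participants.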
*
The looming problem with all the cases I have described so far is that they fall within a very narrow category: that of trader-to-trader transactions. In every case, the subject under discussion is clearly an object or service that is to be consumed (or evaluated or whatever – but the final purpose is consumption, let's be clear about that). There is always an implied value at stake – the feedback or ranking or other process being applied to it is simply there to clarify, refine or nudge the final value one way or the other. This is the meat and potatoes of not just microeconomics, but almost every “disruptive” idea to come out of Silicon Valley. As a result, the amount of attention these cases command is far out of proportion to the share of our sensemaking they actually represent. In this worldview, truth is indistinguishable from, or rather interchangeable with, price discovery.
But there is still all that squishy stuff where technology has hung us out to dry. Why has technology failed to help us resolve, on a social level, issues like the purported link between vaccines and autism, or whether Barack Obama was born on American soil? Let alone the realities of climate change or evolution? Why do sites like Snopes.com or the Annenberg Public Policy Center's FactCheck.org seem to be engaged in a Sisyphean struggle to disabuse us of disinformation – and why do we need them at all? Most importantly, why has technology, which otherwise has been such a staunch ally in concretizing the invisible hand, been unable to bring us any closer to a shared set of values?
At the beginning of the second essay of In the Shadow of the Silent Majorities, French philosopher Jean Baudrillard writes:
The social is not a clear and unequivocal process. Do modern societies correspond to a process of socialisation or to one of progressive desocialisation? Everything depends on one's understanding of the term and none of these is fixed: all are reversible. Thus the institutions which have sign-posted the “advance of the social” … could be said to produce and destroy the social in one and the same movement.
Baudrillard asserted that political action – or at least, the kind of political action that mattered – becomes impossible when social processes disallow the “masses” from anything but the observation of spectacle. This process takes protest – or for that matter any kind of political action – and subsumes it into media, which then converts it into merely another object for consumption. Writing in 1978, Baudrillard was essentially finishing off Marxism as a plausible revolutionary theory. But he was mostly concerned with top-down media technologies and the manner in which once-meaningful events are rendered into meaningless theater – or rather, theater whose meaning resides exclusively in its own theatricality. A good example is his examination of the transformation of political party conventions here in the United States. Once political conventions became televised, decisions of any consequence ceased to be made at those events. They simply became spectacle; the spectacle of the thing in question becomes the thing itself. If you want a good overview of what he had in mind, see Paddy Chayefsky's “Network”, filmed a few years earlier: Howard Beale and the Ecumenical Liberation Army are essentially Baudrillardian poster children.
A good twenty years later, the World Wide Web began its inexorable crawl across (and of) the globe. Baudrillard was a troublemaker and a provocateur, so I assume he would have gleefully jumped on the subject, but in a 1996 interview he admitted, “I don't know much about this subject. I haven't gone beyond the fax and the automatic answering machine…. Perhaps there is a distortion [of oneself online], not necessarily one that will consume one's personality. It is possible that the machine can metabolize the mind.” In one of his last major works, The Vital Illusion, he lamented in a Nietzschean fashion that “The corps(e) of the Real – if there is any – has not been recovered, is nowhere to be found.”
Fifteen years after the publication of The Vital Illusion, we are in a better place to evaluate the effects of technology, and the view is not encouraging. For the same mechanisms that have allowed such a preternatural calibration of transactional value seem to be eroding the very possibility of consensus around values that cannot be transacted. The fact is that there is an entirely different set of assumptions at work here. Venkatesh Rao put it well on his stimulating blog, Ribbon Farm, when he discussed the differing nature of transactions when participants are price-driven (i.e., traders) or values-driven (as he puts it, saints):
Traders view deviations from markets as distortions, and fail to appreciate that to saints, it is recourse to markets that is distortionary, relative to the economics of pricelessness. Except that they call it “corruption and moral decay” instead of “distortion.” To trade at all is to acknowledge one's fallen status and sinfulness.
If we consider the insertion of technology into this dynamic, it becomes clear that we have not designed technology to help us in our, shall I say, more saintly endeavors. Technology subsumes these squishier, values-driven behaviors into itself as best it can, but it cannot ever do so completely. What's left is the flotsam and jetsam of Reddit, White House petitions, comment threads everywhere, Anonymous and LulzSec, and cross-platform flame wars ranging from Mac vs PC to Palestine vs Israel. There is no shortage of bridges under which Internet trolls lurk, waiting to pounce on anyone who displeases them.
For anyone who doubts that there are real-life consequences to all this, GamerGate is perhaps the best example. When the women targeted in this shitstorm are confronted with such a quantity of death and rape threats that they flee their homes, or are forced to cancel speaking engagements because a university cannot guarantee that someone won't bring a concealed weapon to a lecture, I am left with a distinct pining for that good old Baudrillardian unreality. Whether there will be any real-life consequences for the people who commit such acts remains to be seen. Furthermore, there is no reason why unaccountability cannot, and will not, continue its expansion. Like cosmic inflation, it does not need a reason to keep going, nor a boundary to contain it.
There is an old Wall Street adage about any significant market downturn: “When the tide goes out, you see who's been swimming naked.” The Web has flipped this on its head: the tide just keeps coming in, and more and more people are leaving their trunks on the beach. Moreover, it is simply too late to redesign the Internet for greater accountability. The last (or first?) idea that had any hope of accomplishing this was Ted Nelson's Xanadu Project. Nelson invented the very idea of hypertext, but in his world – originally conceived in 1960, and detailed in one of the best articles ever to appear in Wired – every image or piece of text would be traceable back to its source. This past June, in an Onion-worthy headline, The Guardian announced the “World's most delayed software released after 54 years of development”.
Perhaps in another, alternative universe, Xanadu became the default design template for an Internet that encouraged accountability in more than just prices. In the meantime, and back in this universe, what technology has exposed is only what we have always known: that we are a fractious, quarrelsome and undependable lot. This is why I maintain that any hand-wringing about the state of the conversation on the Web is ultimately a red herring. That we haven't designed one of our most extraordinary technological infrastructures to help us get closer to any sort of ‘truth' shouldn't surprise us in the least. As for the original Facebook conversation that sparked this contemplation: after I made my ‘post-accountability' suggestion, my comment received a dutiful ‘like' or two. As far as civilized dialogue goes, I'll take it.