by Muhammad Aurangzeb Ahmad
A cultural shift in our understanding of the arbitration of truth is afoot. The shift is subtle, but it has been creeping up in the collective unconscious for the last decade or so. Discourse on alternative facts and fake news has come to prominence since the last US presidential election and Brexit. The phenomenon is not new, however, but has a long and notorious lineage: propaganda is as old as human civilization. The Nazi minister of propaganda, Joseph Goebbels, is well known for developing a master plan for spreading false information and influencing the German populace. The Soviet Union and the Warsaw Pact nations had their own versions of “truth,” in which all of history was rewritten through the singular lens of Marxism. The supposed end of history with the fall of the Soviet Union did not end the need for propaganda; people and states still spread misinformation as before. One thing, however, has changed about spreading mass misinformation: in prior times, spreading falsehood on a massive scale almost always required access to state resources. Then came the internet, followed by the rise of social media, and together they have made the spread of misinformation on a global scale a truly democratic endeavor.
What, then, is the effect of this democratization? For all practical purposes, “truth” has become synonymous with whatever gels with one’s values and whatever is accessible. When people look for information online, search engine ranking determines what information they are exposed to, and most people do not click past the first few pages of results. The implication is that to get your version of the facts heard, you need your web pages at the top of the search results; in other words, search engine optimization becomes part of the propaganda process. Alternatively, you may be one of those people who do not trust Google or Bing because of their supposed liberal bias. In that case Facebook is your friend, as you can readily get information from friends who are likely to share your biases. This is not to suggest that information bubbles are limited to the conservative segments of society: manipulating Wikipedia to suit one’s agenda is also a well-studied phenomenon. And if you think Wikipedia itself is too liberal and too mainstream, there is Conservapedia, which seeks to provide a ‘corrective’ to Wikipedia from a conservative perspective. What these examples have in common is the democratization of the means of producing information.
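To make the mechanics concrete, here is a minimal sketch of position bias, the tendency of searchers to click the first plausible thing they see. The click-through rates below are invented for illustration (no real search engine publishes these numbers); the shape of the outcome is the point: whoever holds the top slot captures most of the attention, which is exactly why search engine optimization is worth a propagandist’s effort.

```python
import random

# Toy model of position bias in search results: the probability that a
# searcher clicks a result decays sharply with its rank. These rates are
# illustrative assumptions, not measurements of any real search engine.
CLICK_THROUGH = [0.30, 0.15, 0.10, 0.07, 0.05, 0.04, 0.03, 0.02, 0.02, 0.01]

def simulate_searches(n_searches, seed=0):
    """Count how many clicks each of the top-ranked pages receives."""
    rng = random.Random(seed)
    clicks = [0] * len(CLICK_THROUGH)
    for _ in range(n_searches):
        for rank, p in enumerate(CLICK_THROUGH):
            if rng.random() < p:
                clicks[rank] += 1
                break  # the searcher stops at the first result they click
    return clicks

if __name__ == "__main__":
    clicks = simulate_searches(100_000)
    total = sum(clicks)
    for rank, c in enumerate(clicks, start=1):
        print(f"rank {rank:2d}: {c / total:5.1%} of all clicks")
```

Under these assumed rates, the top result collects roughly half of all clicks, about as much as the other nine positions combined.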
The amount of misinformation that can be pumped into the news cycle has traditionally been constrained by the number of people dedicated to spreading it, their access to the technologies for spreading it, and plain biological limits on the amount of effort one can expend in a day (everyone needs to sleep to function properly). The arrival of intelligent bots and other automation tools on social media now allows wannabe propagandists to transcend such limitations. So what does the future of misinformation look like? We already have systems that can write news stories. In the near future, one will be able to give these systems cues about what kind of news to generate and, lo and behold, they will churn out tens of thousands of reasonable-sounding news stories with varying levels of untruth embedded in them.
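As a toy demonstration of how cheaply that scale is achieved, the sketch below uses nothing more sophisticated than template filling; no machine learning is required. All templates and fillers are invented for illustration, and the point is purely combinatorial: a few short lists multiply into hundreds of distinct headlines, and every list you extend multiplies the count again.

```python
import itertools

# Deliberately crude illustration of scale: naive template filling turns
# a handful of cues into hundreds of distinct "stories". Extend any list
# below and the total multiplies. Everything here is invented.
TEMPLATES = [
    "BREAKING: {actor} caught {action} in {place}, sources say.",
    "Report: {actor} secretly {action} in {place} for years.",
    "Why is no one talking about {actor} {action} in {place}?",
]
ACTORS = ["a senator", "a tech CEO", "a foreign agency", "a celebrity"]
ACTIONS = ["funneling funds", "leaking documents", "rigging polls"]
PLACES = ["Washington", "Brussels", "an offshore account", "a private server"]

def generate_stories():
    # Every combination of template and fillers yields a distinct headline.
    for template, actor, action, place in itertools.product(
        TEMPLATES, ACTORS, ACTIONS, PLACES
    ):
        yield template.format(actor=actor, action=action, place=place)

stories = list(generate_stories())
print(f"{len(stories)} distinct headlines from a few lines of input")
print(stories[0])
```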
Just as there are signs that we are already outsourcing part of human decision-making, and even morality, to algorithms and computers, we as a culture may end up making computers the arbiters of truth. The case of Tom Scocca’s article on caramelizing onions is telling: if one searches for something as innocuous and simple as the time it takes to caramelize onions, Google returns incorrect information. Even though the corrected figure is available online, the incorrect results still come out on top because other sources cite them without any check. And even though Scocca and others pointed the problem out multiple times, the Google results, and thus the “facts,” did not change to reflect reality until there were enough counter-links to unbias the “truth.” The problem is that once incorrect information surfaces at the forefront of search results, it snowballs from there and becomes authoritative in the eyes of the populace. Tracing the genealogy of knowledge thus becomes an important part of asking the right questions about it. Picasso once stated, “Computers are useless. They can only give you answers.” Bots that can generate conversations, and even entire histories, in an instant will allow computers both to give answers and to lead one to more questions.
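The snowballing can be captured in a few lines. The following is a minimal rich-get-richer simulation (preferential attachment), under the assumption that each new page cites an existing source in proportion to how often that source is already cited; the source names and parameters are purely illustrative.

```python
import random

# Rich-get-richer sketch of the "snowballing" described above: each new
# page cites an existing source with probability proportional to how
# often that source is already cited (preferential attachment). An early,
# wrong source can stay on top long after a correction appears.
def simulate_citations(n_pages, correction_at, seed=1):
    rng = random.Random(seed)
    citations = {"wrong_source": 1}           # the incorrect page exists first
    for step in range(n_pages):
        if step == correction_at:
            citations["correct_source"] = 1   # the correction arrives late
        # Pick a source to cite, weighted by its current citation count.
        sources = list(citations)
        weights = [citations[s] for s in sources]
        chosen = rng.choices(sources, weights=weights)[0]
        citations[chosen] += 1
    return citations

print(simulate_citations(n_pages=10_000, correction_at=500))
# Typical outcome: the wrong source holds the overwhelming majority of
# citations, so any count-based ranking keeps surfacing it first.
```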
With image and video rendering technologies reaching parity with human vision and going beyond it, it is only a matter of time before footage of entire incidents is generated by algorithms on the fly. At that point we will have reached a milestone where the value of evidence itself comes into question: whom does one trust if the actual footage of an event and its algorithmically altered version are indistinguishable? Even without such technologies, people hold onto their beliefs despite evidence to the contrary; with evidence-generation tools, they will be able to make up evidence on the spot. Is the death of data close at hand? In 1984, George Orwell describes how the Ministry of Truth shreds records of the past and rewrites new versions of history that conform to its ideology. With the democratization of AI, one will not need a whole ministry to do that; alternate histories, just like the alternate facts of today, may be just a click away.
It won’t be long before people, especially those on opposite sides of the political or religious spectrum, pick up their cellphones, appeal to the authority of their respective politically biased apps, and ask them to arbitrate the truth. This is not what Leibniz had in mind when he predicted that people would one day settle disputes by saying “let us calculate.” Nonetheless, such a future might await us, with conversations of the form: “My app says that you are lying. Obviously, my app is powered by unbiased data and unbiased algorithms, unlike your app, which is heavily biased by your ideology.”