by Fabio Tollon
Human beings are rather silly creatures. Some of us cheer billionaires into space while our planet burns. Some of us think vaccines cause autism, that the earth is flat, that anthropogenic climate change is not real, that COVID-19 is a hoax, and that diamonds have intrinsic value. Many of us believe things that are not fully justified, and we continue to believe these things even in the face of new evidence that goes against our position. This is to say, many people are woefully irrational. However, what makes this state of affairs perhaps even more depressing is that even if you think you are a reasonably well-informed person, you are still far from being fully rational. Decades of research in social psychology and behavioural economics have shown that not only are we horrific decision makers, we are also consistently horrific. This makes sense: we all have fairly similar ‘hardware’ (in the form of brains, guts, and butts) and thus it follows that there would be widely shared inconsistencies in our reasoning abilities.
This is all to say, in a very roundabout way, we get things wrong. We elect the wrong leaders, we believe the wrong theories, and we act in the wrong ways. All of this becomes especially disastrous in the case of climate change. But what if there was a way to escape this tragic epistemic situation? What if, with the use of an AI-powered surveillance state, we could simply make it impossible for us to do the ‘wrong’ things? As Ivan Karamazov notes in the tale of The Grand Inquisitor (in The Brothers Karamazov by Dostoevsky), the Catholic Church should be praised because it has “vanquished freedom… to make men happy”. By doing so it has “satisfied the universal and everlasting craving of humanity – to find someone to worship”. Human beings are incapable of managing their own freedom. We crave someone else to tell us what to do, and, so the argument goes, it would be in our best interest to have an authority (such as the Catholic Church, as in the original story) with absolute power ruling over us. This, however, contrasts sharply with liberal-democratic norms. My goal is to show that we can address the issues raised by climate change without reinventing the liberal-democratic wheel. That is, we can avoid the kind of authoritarianism dreamed up by Ivan Karamazov.
It is widely accepted in the scientific and political communities that we are on the precipice of a climate catastrophe. Through our own actions (and increasingly, inactions) we have plundered our planet and entered what some have called the ‘Anthropocene’: an age in which we have agential powers equivalent to geological forces. This has given us the capacity to dominate the earth and spread the environmental disaster that is human civilization all over the planet. One good feature of civilization, however, has been the establishment (and defence) of liberal democracy. Such participatory democracies (at least in theory) encourage robust debate, diversity, and inclusivity. They also preserve individual liberty: freedom is an important political value (although of course not the only one that matters), and manipulation and coercion would be seen as unjustifiable in the pursuit of some political objective or other (even if this objective is climate change mitigation).
This brings us to the problem I introduced at the beginning of this piece: our liberal-democratic framework, by giving us the ‘freedom to choose’, has resulted in human stupidity being the guiding logic in many attempts to ameliorate the climate crisis. Might we be better off giving up our freedom and establishing an authoritarian state that could force compliance with global climate goals? The dilemma is as follows: is there a way to deal with an existential threat such as climate change whilst also preserving liberal democracy? My own view (and that of Mark Coeckelbergh in his recent book on the topic) is that liberal democracy is worth fighting for, and it might just be possible for us to preserve it and not go extinct. To do so, however, we have to take seriously the authoritarian threat posed by AI-powered systems as ‘solutions’ to the climate crisis.
Advanced AI systems are ubiquitous. From social media timelines to video recommendations on YouTube to the kinds of adverts we see online, AI, in a very real sense, filters the world we see. Whether that world is rose-tinted or stained red ultimately depends on what the algorithm thinks will maximise revenue for its owner. These systems also play on our emotions: we are fed information that confirms our biases, exposes us to a very narrow range of interests, and promotes extreme views. This content is often inflammatory, divisive, and polarizing. Combine this with work done on nudging in social psychology and behavioural economics, and what you get is the potential for an AI-driven authoritarian monster. Proponents of nudge theory argue that due to how easily ‘hackable’ the human mind is, we ought to design our world in such a way that makes it easier for us to ‘do the right thing’. For example, we might stop placing sugary treats at children’s eye level in stores in the hope that they might not see this form of legal poison, thus making it easier to lower their refined sugar intake. Termed ‘libertarian paternalism’ by its defenders, this approach, it is claimed, does not necessarily infringe on the freedom of individuals.
However, it is difficult to see how having ‘nudgers’ decide on our behalf what they think is good for us ‘nudgees’ is not an infringement on our freedom. Nudgers decide for us. Why? Because we are not to be trusted. And here, I think, is where we see the guiding presupposition that has been operative in my discussion thus far: that we cannot trust human beings. If left to our own devices, we will inevitably muck things up (evidence for this can be found by either looking around or reading a book). Thus, our freedom can be curtailed justifiably because when given freedom we do Very Bad Things with it. And if data-driven AI systems operated by surveillance states are the way to implement such restrictions, and they mitigate the climate crisis, then so be it (call this the Hobbesian solution, named after Thomas Hobbes and his view of human nature).
Nonetheless, this view operates with an impoverished understanding of freedom. More precisely, we can see freedom or liberty in both positive and negative terms. On the mistrustful view of human nature outlined above, freedom is framed as purely negative. Negative freedom is our freedom ‘from’ things: freedom from interference by the state or other persons. Positive freedom, on the other hand, is about our freedom ‘to’: what we can do with our freedom, and what its limits and possibilities are. On the Hobbesian solution, regulations and restrictions are justified, but are nonetheless framed as infringing on our negative liberties. What such an approach is silent on, however, is positive liberty. What if we are not inherently awful? What about those times when we are rational, and the potential we have to do Very Good Things? What if we can be trusted?
From a democratic perspective this would be preferable. Trust in human nature means that we can trust our leaders to get it right when it matters, and that we can trust citizens to vote for virtuous leaders. Of course, in practice, this is very difficult to achieve. However, when we see our political options through the lens of positive freedom, we begin to see how it might be possible. For example, one approach could be to enhance certain capabilities. In this way we can increase our positive freedom by creating conditions and policies that enhance individual and collective well-being. Moreover, the means by which we achieve this should be democratic. Following Dewey, we can see such democratic processes as ‘organized intelligence’, where we can bring conflicts to the table, air them out, potentially solve them, and make progress. The goal of our liberal-democratic political institutions should therefore be to create the conditions that enhance human beings’ capability to increase overall welfare.
In this way it is possible to move beyond the Hobbesian solution and to place trust in our democratic institutions. Additionally, we can see how constraints and regulations to mitigate the climate crisis are not simply infringements on our negative liberties. Rather, they can be viewed as enhancing positive liberty by safeguarding our biosphere for both currently existing humans and future generations. This is not to say that we should limit our political discussions to anthropocentric perspectives. Human-caused climate change is of course also devastating to animal and plant life, and it is essential that we include these in our political discussions. An important part of dealing with the problems posed by climate change will involve establishing non-anthropocentric political discourse. This would involve protecting the rights of nature, and the interests of animals. With this in mind, it is no use to blindly defend negative liberty when such a defence results in the destruction of our environment. Dealing with the climate crisis will require us to go beyond such narrow framings of our freedom.
What does any of this have to do with AI? To enhance positive liberty with respect to climate change we need regulation and oversight, and the same is true of AI. While many tout the wonderful things that will be made possible by new technologies, we must also remain cognizant of how such technologies can reinforce existing social inequalities. Not just that, but overreliance on algorithmic systems can blind us to (social) problems that might not have technical solutions. The climate crisis is not a problem that we can solve with science alone: it is also a deeply political issue, a kind of ‘hybrid’, and so if we want to use AI as a means to ameliorate it, we cannot ignore such political questions. The notion of positive liberty provides us with a framework which we can use to evaluate both attempts to align AI with human values and interventions in the service of our biosphere. Instead of focusing only on negative liberty, we should also be cognizant of the ways that restrictions, regulations, and the like might enhance positive liberty.
What unites both the climate crisis and emerging technologies like AI is that they call into question various liberal-democratic norms. In both cases, the tug of authoritarianism on the one side, and unfettered libertarianism on the other, put strain on our political institutions. What I have hoped to do here is to show that we can escape this binary way of framing things, while still preserving that which is good about democratic participation.