by Malcolm Murray

I recently finished reading Rewiring Democracy: How AI Will Transform Our Politics, Government and Citizenship – a book by Bruce Schneier and Nathan Sanders on the effects of AI on democracy. It comes out soon (October 25). It is worth reading for its myriad examples of AI in action at every level of the democratic system. Ultimately, though, it feels like a missed opportunity, failing to engage with many of the larger ways in which AI might affect democracy.
The book’s strength lies in its meticulous, hyper-granular description of the ways AI might affect elements of a democratic society, from enabling citizen power, to assisting in court cases, to empowering politicians. It offers many examples of how AI has been, will be, or could be adopted, for good and for ill. It maintains an admirably balanced and neutral stance throughout, detailing both how AI can be used to empower individual citizens and how it could empower powerful vested interests. It is thoroughly organized, with separate sections on politics, legislation, administration, citizens, and the courts, each opening with a brief primer on the relevant AI capabilities before outlining use cases and providing examples. The book also admirably outlines the need for Public AI – AI as common infrastructure provided by government, akin to water and electricity.
On the whole, however, the book feels like a missed opportunity to take the authors’ highly detailed knowledge and push their conclusions further. These limitations show up on a few levels. First, the book feels dated in its conception of AI. Although some examples are from this year, such as DOGE’s attempted hollowing-out of government, many pre-date the LLM era and could easily have appeared in a book written five years ago – an eternity in AI. Many examples concern models trained on thousands of examples, not billions, and the authors do not distinguish clearly enough between examples relevant to pre-ChatGPT models and those relevant to today’s General-Purpose AI.
Second, the authors pull their punches. They make clear that they do not want to engage with the more speculative risks of the “AI doomers”, but in staying away from anything that smells of speculation, they stay too close to the world of yesterday, excluding many risks from AI to democracy that we are already seeing signs of. They outline a framework early on for how AI increases scale, scope, speed and sophistication, but never follow it to its conclusion: what might machine speed and scale actually result in? Take deepfakes. It seems remiss for a book on AI and democracy not to delve deeply into how deepfakes may threaten democracy through the loss of a consensus view of reality. It is true that deepfakes have so far done far less damage in elections than some Cassandras proclaimed in 2024. However, it seems naïve to assume this will necessarily remain the case. Three years into the ChatGPT revolution, anyone with a computer can generate completely lifelike photos, audio and short videos. The average voter may soon have no idea whether the words they hear from their local politician were actually said. This would seem to affect democracy far more than politicians being able to place a few more robocalls than before.
Similarly, one of the main trends of recent years has been AI companies’ steadfast focus on developing increasingly autonomous AI agents. We already have agents that can act independently for hours at a time, taking numerous actions on their own, while the cost per token continues to drop precipitously. We are not far from a world in which massive numbers of agents operate independently in the public sphere. We need not even invoke misalignment to see how the sheer quantity and speed of AI agents could overwhelm existing governance mechanisms. Assuming that door-to-door voter canvassing will still be a relevant tactic in the age of AI agents seems myopic.
Another missed opportunity, doubly so given the cyber credentials of one of the authors, is that of LLM-enabled cyberattacks undermining democratic infrastructure. A recent Delphi study we conducted shows potential double-digit increases in damages from cyberattacks in the coming year. As lower-level cybercriminals become able to generate sophisticated malware at the touch of a button, they could disrupt election systems and wound democracy far more than slightly better-personalized voter targeting ever could.
The authors make reference to the risk of concentration of power, with AI companies soon in charge of large parts of society’s cognitive infrastructure – a risk that a growing number of AI scholars are concerned about. We have already seen with social media how a few tech companies wield enormous power over citizens’ beliefs, and this risks becoming much worse as we rely more on AI for information gathering and decision making. However, the risk is only mentioned in passing, not given proper attention as a full-fledged threat to democracy.
The authors sometimes veer closer to more transformative scenarios, such as an AI trained on your preferences voting on your behalf, or AI models trained on legislators so that their legal intent can be consulted forever. However, these are quickly labeled speculative and the authors return to safer ground. These scenarios do not even seem that speculative by now. Speculative, as of 2025, would be envisioning a society where AI welfare has become validated and AIs far outnumbering humans have been given voting rights. Or one where mass labor disruption has put large parts of the population on Universal Basic Income with severely decreased agency, so that politicians no longer have to petition for their votes. Or one where AI-enabled surveillance makes 1984 seem like child’s play and court systems are fully AI-automated, with no access to human recourse.
The authors conclude the book by stating that AI is just another technology and that all the risks it poses already existed, making AI risks merely sped-up versions of existing ones. This seems questionable, as we are already seeing new-in-kind risks appear, such as AI psychosis. They also state that AI is just a tool whose evolution will remain fully under human, democratic control – another highly questionable conclusion. We are already seeing signs of AI being put in charge of decision-making in areas from which it will be hard to disentangle. In the military realm, once all sides have an AI-driven speed advantage on the battlefield, no single actor can unilaterally unplug it. In computer programming, once a majority of code is generated by AI, we will not be able to understand and edit it without using more AI.
Overall, this is a book worth reading. It brings the reader up to speed on the many interesting uses of AI in societal institutions and provides a foundation for thinking about the ways AI could affect democracy. It offers a detailed map of the current state. Readers who want to look more than a year or two into the future and envision how the territory might change, however, are left with a rapidly aging map.
