The Magic Ponies of AI Advocacy

by Dwight Furrow

Let’s grant, for the sake of argument, the relatively short-range ambition that organizes much of the rhetoric about artificial intelligence. That ambition is called artificial general intelligence (AGI), understood as the point at which machines can perform most economically productive cognitive tasks better than most humans. The exact timeline for reaching AGI is contested, and some serious researchers think the concept is ill-defined. But these debates are not all that relevant, because we don’t need full-blown AGI for the social consequences to arrive. We need only technology that is good enough, cheap enough, and widely deployable across the activities we currently pay people to do.

On that narrower and more concrete point, there is a lot of disturbing data. The global management consulting firm McKinsey estimates that current generative AI, combined with existing automation technologies, has the potential to automate work tasks that absorb 60–70% of employees’ time today. The International Monetary Fund, looking at the world economy, predicts that AI is likely to affect around 40% of jobs globally, with advanced economies more exposed. MIT’s Iceberg project reports that “AI technical capability extends to cognitive and administrative tasks spanning 11.7% of the labor market—approximately $1.2 trillion in wage value across finance, healthcare, and professional services.”

So the question is not whether job disruption is likely. The question is what kind of thinking is smuggled in when pro-AI commentators describe that disruption as painless, self-correcting, and—this is the favorite word—“inevitable.” The pattern I want to diagnose is magical thinking: the tendency to treat a desired outcome as if it follows automatically from the introduction of a powerful tool, as if social coordination, political conflict, and institutional design were minor implementation details. I will call each instance of magical thinking a magic pony, because the confidence with which these claims are asserted often has the character of a bedtime story: comforting, frictionless, and uninterested in real-world constraints.

Magic pony #1: “New technologies always create new jobs.”

This is the standard lullaby. Don’t worry: automation eliminates tasks, but it creates new roles; history shows it. History does in fact show that, and the Industrial Revolution surely created many new jobs. The problem is that this is offered as a general law when it is at best a historically local pattern with very specific enabling conditions: labor movements with bargaining power, fast-growing sectors of the economy that absorbed displaced workers, and, most importantly, technologies that complemented large portions of the workforce rather than directly substituting for them.

What changes when the target is “most cognitive tasks”? In that scenario, new jobs offer no automatic refuge, because the underlying capability, cognitive performance, generalizes across most domains, including those new jobs. If the claim is that AI will be able to do “most cognitive tasks better than most humans,” then why assume the newly created jobs won’t also be within AI’s competence? One can reply: “because some work is embodied, interpersonal, or ethically sensitive.” True. But it is a non sequitur to move from “some work will remain human” to “enough work will remain human at sufficient scale to stabilize incomes and social order.”

The early data on jobs, although limited and contested, already point toward pressure on entry-level positions. A Stanford Digital Economy Lab paper reports that, in the most AI-exposed occupations, workers aged 22–25 saw a 13% decline in employment from late 2022 to July 2025, while employment among more experienced workers was stable or growing. And Gallup’s recent survey finds that “The percentage of U.S. employees who reported using AI at work at least a few times a year increased from 40% to 45% between the second and third quarters of 2025. Frequent use (a few times a week or more) grew from 19% to 23%, while daily use moved less, ticking up from 8% to 10% during the same period.”

ChatGPT was released only three years ago, and only in the past year or so have LLMs begun to have economic impact outside the tech industry. The penetration these numbers report will grow as the models become more capable and firms figure out how to insert AI into their workflows. The mechanism is easy to understand. Entry-level jobs are often bundles of routinized cognitive tasks that constitute a slow apprenticeship into more complex work requiring judgment and learned experience. If AI performs the routinized portion, organizations can thin the bottom of the ladder, hiring fewer juniors while demanding more from the ones they keep. But that pipeline problem is not a side issue. It is how professions reproduce themselves.

The magic pony, then, is not merely the claim that “jobs will be created.” It is the stronger, usually unspoken claim that job creation will be timely, distributed, and accessible, and, more importantly, that the new jobs will not themselves be replaceable by AI. Given the continuing improvement of AI models, which may be slowing but is still robust, why assume that?

Magic pony #2: “Productivity gains will shower wealth on society.”

A second pony appears when the discussion shifts from labor markets to macroeconomics. The story goes like this: AI raises productivity; productivity raises output; output raises wealth; therefore, we all win. But this is a fairy tale with two missing chapters.

The first missing chapter is distribution. Productivity gains do not, by a natural law of the universe, flow to wages. They can flow to profits, monopoly rents, executive compensation, or capital gains. They can also be absorbed as lower prices, which is helpful but not a substitute for income if your job is gone. When proponents pass over the distribution worry with a cheery “the economy will be richer,” they are changing the subject from who benefits to whether someone benefits. Of course someone benefits, but it likely won’t be you.

The second missing chapter is demand. Suppose we really do replace a significant share of paid cognitive labor with machines. Well, then, we increase supply capacity while undercutting the wage-based demand that purchases what is supplied. This is not an exotic theoretical puzzle. A firm does not become wealthy because it can manufacture more goods that no one can buy. If we accelerate the capacity to produce while collapsing the purchasing power of large parts of the population, instead of utopia, we get a politically unstable condition of overcapacity plus resentment. It’s an economy that can do everything except justify itself to the people it no longer needs.

This is magic pony #2: the fantasy that “massive wealth generation” can be decoupled from mass purchasing power, as if capitalism could run on productivity alone. This is bonehead supply-side economics run amok.

Magic pony #3: “UBI will solve it—and someone else will pay.”

At this point the conversation often turns, with a kind of performative benevolence, to a post-work future: people won’t have to work; we’ll provide a universal basic income (UBI). Lovely. Now do the arithmetic.

The U.S. population estimate for July 1, 2024 was 340,110,988, of which 21.5% were under 18. That puts the adult population at roughly 267 million. The Census Bureau reports real median earnings for full-time, year-round workers in 2022 of $60,070. Providing that level of income to ~267 million adults entails a transfer on the order of $16 trillion per year, over half of total U.S. GDP.

And what is the available tax capacity? The OECD’s Revenue Statistics puts the U.S. tax-to-GDP ratio at 25.2% in 2023, counting all levels of government. Apply something like that to a roughly $29–30 trillion U.S. economy (the BEA puts current-dollar GDP in Q3 2024 at about $29.35 trillion) and we are in the neighborhood of $7–8 trillion in total tax revenue.

This is not close to what such a UBI requires, and that is before funding defense, healthcare, debt service, education, infrastructure, and the rest of the expenses of government.
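For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch in Python. The inputs are exactly the figures cited above; the variable names and the derived ratios are mine.

```python
# Back-of-envelope UBI arithmetic, using only the figures cited above.

population = 340_110_988      # U.S. population estimate, July 1, 2024
share_under_18 = 0.215        # Census share of the population under 18
median_earnings = 60_070      # real median earnings, full-time year-round workers, 2022

gdp = 29.35e12                # BEA current-dollar GDP, Q3 2024 (annualized)
tax_to_gdp = 0.252            # OECD U.S. tax-to-GDP ratio, 2023 (all levels of government)

adults = population * (1 - share_under_18)
ubi_cost = adults * median_earnings
tax_revenue = gdp * tax_to_gdp

print(f"Adults: ~{adults / 1e6:.0f} million")                          # ~267 million
print(f"UBI at median earnings: ~${ubi_cost / 1e12:.1f}T/year")        # ~$16.0 trillion
print(f"UBI as a share of GDP: {ubi_cost / gdp:.0%}")                  # ~55%
print(f"Total tax revenue: ~${tax_revenue / 1e12:.1f}T/year")          # ~$7.4 trillion
print(f"UBI cost vs. all tax revenue: {ubi_cost / tax_revenue:.1f}x")  # ~2.2x
```

On these numbers, even confiscating every tax dollar currently raised, at every level of government, would cover less than half the bill.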

Now maybe the UBI wouldn’t be $60,000. But can people actually live on less income without severe downward mobility? If you fund a modest UBI that doesn’t replace wage income, you haven’t solved mass displacement; you’ve subsidized an economy that is still taking away people’s primary means of survival.

“Tax the hell out of the rich,” meaning the winners of the AI race, is often proposed as the solution. That is conceptually possible but politically and administratively problematic. Even with much higher tax rates, the numbers above still don’t add up. Furthermore, given the unprecedented political and economic power these masters of the universe will have, why think they will consent to higher taxes?

“Yes, but won’t AI expand the wealth base?” Perhaps, but that magic pony has already been fed and sheltered above. We now have compounded uncertainties: optimistic productivity assumptions, optimistic diffusion assumptions, optimistic political assumptions, and optimistic distribution assumptions. Stack enough optimism and it becomes less like analysis and more like prayer.

The international dimension makes the pony even more extravagant. The IMF analysis cited above makes a further point about unequal capacity: advanced economies are likely to feel both benefits and disruptions sooner. Meanwhile, much of the AI industry’s capital remains heavily concentrated in the U.S. and China. “Global UBI” is not even a proposal; it is a weak gesture in the direction of moral comfort. The idea that the world’s governments could cooperate on something like this is, on its face, absurd, especially after our experience with climate change.

Magic pony #4: “The physical world will politely cooperate.”

AI discourse tends to be disembodied. Models live in the cloud; the cloud lives nowhere; therefore, scaling is mainly a matter of “innovation.” But compute is not “spirit.” It runs on chips, water, land, and an increasingly stressed electrical grid.

The U.S. Department of Energy, summarizing an LBNL report, notes that data centers consumed about 4.4% of total U.S. electricity in 2023 and are expected to consume 6.7% to 12% by 2028, with total data-center electricity rising from 176 terawatt-hours (TWh) in 2023 to 325–580 TWh in 2028. The IEA projects that global data-center electricity demand will more than double by 2030 (to around 945 TWh), with AI a major driver; it also projects that demand from AI-optimized data centers will more than quadruple by 2030.
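If anything, these figures understate the buildout problem. Here is a quick sanity check, using only the numbers quoted above; the derived quantities are my own arithmetic, not the report’s.

```python
# Sanity check on the DOE/LBNL data-center figures cited above.
# Inputs are the quoted numbers; everything derived is back-of-envelope arithmetic.

dc_2023_twh = 176        # U.S. data-center electricity consumption, 2023
dc_2023_share = 0.044    # 4.4% of total U.S. electricity, 2023

total_2023 = dc_2023_twh / dc_2023_share
print(f"Implied total U.S. electricity use, 2023: ~{total_2023:,.0f} TWh")  # ~4,000 TWh

# If the grid stayed at its 2023 size, the 2028 projections would be:
for dc_2028 in (325, 580):
    print(f"{dc_2028} TWh would be {dc_2028 / total_2023:.1%} of a 2023-sized grid")
# Prints 8.1% and 14.5%, both above the quoted 6.7%-12% shares.
```

The projected shares only come out as low as 6.7%–12% because the denominator, total U.S. electricity demand, is itself expected to grow. In other words, data centers must be powered by a grid that is simultaneously expanding for everything else.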

This matters because a “post-work” story that cannot speak concretely about energy, infrastructure buildout, and environmental tradeoffs is not a plan. The magic pony here is the assumption that the material preconditions of limitless scaling will simply fall into place: cheap energy, abundant chips, stable geopolitics, permissive regulation, and no serious backlash from communities asked to host the physical footprint.

Magic pony #5: “It’s not our responsibility.”

Finally, the most ethically revealing pony: Silicon Valley can build the tool; it’s society’s job to figure out how to deal with the consequences. Sometimes this is stated bluntly; sometimes it appears as a shrug dressed up as libertarian realism.

But technology is not an exogenous meteor headed toward Earth. It is a set of choices about deployment. Firms decide whether to use AI to automate or to augment. They decide whether to eliminate jobs or to redesign them. They decide whether to treat labor as a cost center to be minimized or as a capability to be developed. In short: “not our responsibility” is itself a political stance, one that assumes a great deal of social patience, which, if you pay attention to the news, appears to be in limited supply.

Do proponents really believe that billions of people will watch their livelihoods erode, their social status collapse, and their local economies hollow out, and simply accept it as progress? Social order is a delicate, negotiated achievement, maintained by institutions that distribute burdens as well as benefits. If the benefits are privatized while the burdens are so blatantly and comprehensively socialized, the likely outcome is not acquiescence but conflict. The backlash will be enormous, and I don’t get the sense the tech lords are prepared for it.

This is not a call to ban AI. I am not an AI skeptic; the tools are useful and will become more so in time. My point is narrower and practical. Pro-AI discourse often behaves as if the hardest parts of the transition are technical, and as if the rest will take care of itself: employment, distribution, demand, fiscal capacity, global inequality, energy infrastructure, and political legitimacy. These are magic ponies all the way down.

If we want fewer magic ponies, we need fewer bedtime stories and more institutional realism: policies that assume disruption and budget for it, treat training pipelines as civic infrastructure, confront tax capacity honestly, and constrain the temptation to use AI as a substitute for wages and social insurance. Otherwise, the “future of work” will be decided the way many things are decided, not by the elegance of our models, but by the ugliness of our violence.