by Dwight Furrow
Let’s grant, for the sake of argument, the relatively short-range ambition that organizes much of the rhetoric about artificial intelligence. That ambition is called artificial general intelligence (AGI), understood as the point at which machines can perform most economically productive cognitive tasks better than most humans. The exact timeline for reaching AGI is contested, and some serious researchers think AGI is ill-defined. But these debates are not all that relevant, because we don’t need full-blown AGI for the social consequences to arrive. We need only technology that is good enough, cheap enough, and widely deployable across the activities we currently pay people to do.
On that narrower and more concrete point, the data are disturbing. The global management consulting firm McKinsey estimates that current generative AI plus existing automation technologies have the potential to automate work tasks that absorb 60–70% of employees’ time today. The International Monetary Fund, surveying the world economy, predicts that AI is likely to affect around 40% of jobs globally, with advanced economies being more exposed. MIT’s Iceberg project reports that “AI technical capability extends to cognitive and administrative tasks spanning 11.7% of the labor market—approximately $1.2 trillion in wage value across finance, healthcare, and professional services.”
So the question is not whether job disruption is likely. The question is what kind of thinking is smuggled in when pro-AI commentators describe that disruption as painless, self-correcting, and—this is the favorite word—“inevitable.” The pattern I want to diagnose is magical thinking, the tendency to treat a desired outcome as if it follows automatically from the introduction of a powerful tool, as if social coordination, political conflict, and institutional design were minor implementation details. I call each instance of magical thinking a magic pony, because the confidence with which such claims are asserted often has the character of a bedtime story: comforting, frictionless, and uninterested in real-world constraints.


We have slid almost imperceptibly and, to be honest, gratefully, into a world that offers to think, plan, and decide on our behalf. Calendars propose our meetings; feeds anticipate our moods; large language models can summarize our desires before we’ve fully articulated them. Agency is the human capacity to initiate, to be the author of one’s actions rather than their stenographer. The age of AI is forcing us to answer a peculiar question: what forms of life still require us to begin something, rather than merely to confirm it? The best answer I’ve been able to come up with is that we preserve agency by carving out zones of what the philosopher 


Vitamins and self-help are part of the same optimistic American psychology that makes some of us believe we can actually learn the guitar in a month and de-clutter homes that resemble 19th-century general stores. I’m not sure I’ve ever helped my poor old self with any of the books and recordings out there promising to turn me into a joyful multi-billionaire and miraculously develop the sex appeal to land a Margot Robbie. But I have read an embarrassing number of books in that category with embarrassingly little to show for it. And I’ve definitely wasted plenty of money on vitamins and supplements that promise the same thing: revolutionary improvement in health, outlook, and clarity of thought.


Someone else gets more quality time with your spouse, your kids, and your friends than you do. Like most people, you probably get just about an hour of that time each day, while your new rivals are claiming a whopping 2 hours and 15 minutes. But save your jealousy. Your rivals are tremendously charming, and you have probably fallen for them as well.