by Peter Topolewski

We humans keep seeing articles about AI. An endless parade of them, it feels like.
We’re the authors, we’re the culprits.
We have our reasons. Good ones, too. Starting with our awe of AI’s growing catalogue of capabilities. Making films indistinguishable from our own, co-working with humans in HR departments, and chatting and philosophizing among themselves in chatbot social networks. The latest wrinkle sees AI working not in chatbot settings but as standalone entities tasked with carrying out specific jobs like writing code for a website but also not-so-specific jobs like overseeing a team of other AI agents and iterating their own development. They’re getting independent. Or as tech evangelist Peter Diamandis put it to his email readers, “We are not incrementally improving chatbots anymore. We’re watching the emergence of autonomous agency at scale.”
As amazing as these feats are, we are, like cable news, more inclined to read and write about the problems AI seems to pose, so disconcerting is its presence already.
There’s the financial cost of AI. $1.6 trillion spent on it over the last decade, another $2.5 trillion expected this year. In another world, how many societal ills and injustices could that money remedy?
In spite of that investment, there’s the problem of meager returns on productivity and near-zero contribution of AI to U.S. GDP. Also, the scapegoat problem: blaming layoffs on AI.
There’s AI’s job destruction problem. The CEO of the AI company Anthropic expects it to add 10-20% to unemployment in the next few years. For former presidential hopeful Andrew Yang the number is up to half the country’s 70 million white-collar workers gone by 2028.
There’s a vulnerability problem. The civilization we’ve constructed is digital and connected. AI tools have uncovered vast weaknesses in that infrastructure, an important first step to patching them. But criminals and the criminally curious are greatly represented among the early AI adopters. Imagine the havoc when a criminally inspired AI agent enters the Nasdaq’s network with the instruction to “sell everything” or JPMorgan’s with orders to “set the value of every account to one dollar”. How far away, then, are we from taking Voight-Kampff tests every time we want to check our balance? Not very.
For all this there remains the insistence from AI enthusiasts that we need it, with the uncomfortable caveat: it’s a coin toss—AI will in the end help us or destroy us.
Guardrails would be nice, but we’re better off chasing unicorns. Can you picture any member of your elected government engineering rules about the use and ownership of AI? Most are still wrapping their heads around the theory of the Internet. Meanwhile, the AI company Anthropic abandoned its voluntary pledge to develop its technology while prioritizing safety. The last—maybe the only—industry safeguard is gone.
Hype is a real thing, and like all technology AI has fallen short at times. It seems to do so less these days. Instead, AI is passing one milestone after another, not anywhere near lightspeed, but much faster than an old Ford. More like a rocket. And at every moment reaching further into more aspects of our lives, relentlessly. Ex-Google CEO Eric Schmidt put it this way: “There is no force that’s going to slow this down.”
All this feels like we’ve been led to the top of the bungee jump tower with nothing left to do but fling ourselves off. Whether what follows will be a thrill or a disaster—whether the bungee has been attached to our legs or not—only AI knows.
If the bungee is attached, what might we hope for? A great deal. Loads of leisure time. Bargain priced genome sequencing. Bio-engineered creatures that eat cholesterol in your arteries and pollution in your tap water. Armies of robots that build free housing in days and rebuild war-torn countries in months. The elimination of disease. Vastly expanded human intelligence.
So, why doesn’t this feel better?
Those promises, though perhaps approaching, are still far off, while the troubles loom much closer.
The speed of change unnerves. Diamandis again: “The pace of change will only accelerate.” After decades of false AI starts in computer labs around the globe, there’s an irony in that.
The inevitability of this AI-driven world unsettles. And it robs all of us, including the great minds at Google and OpenAI—who know no more than anyone where all this will lead—of our most human of powers: choice.
So, we’re not being led up the bungee tower, we’re being marched up it. Told that upon leaping there’s a chance for medical and technological miracles, and wonders beyond our imagination. But even the wonders come at great expense. A technology capable of everything AI tempts us with also threatens the very things we’ve found give us purpose and meaning—the objects of desire we’re especially slow and bad at attaining.
In the good old days, technology was a tool. It made jobs easier. It also made some obsolete. Jobs like portering tea to the Tibetan highlands, coal mining in Ohio, screwing caps on bottles in assembly lines, jobs that only the rich believe industrious, down-on-their-luck folk are eager to get back into. Future generations—perhaps the next—will look back at accountants and data analysts and professors the way we look back at lamplighters and switchboard operators, with fascination and a bit of disbelief. But what does it mean when your job as a teacher, investment adviser, filmmaker evaporates overnight, not over a generation?
Whether you assemble model rockets or Artemis rockets, play video games or play Bach concertos, the effort can be its own reward. Until now. The AI company Companion recently launched an agent called Einstein that would view your online lectures, take notes, and do your homework. Einstein was a shooting star, gone in days in a swirl of controversy, but the concept raised the question: If AI can do that, what’s the point of being a student? Your diploma or degree would be worthless, the benefit of hard work not available to you, the learning nil. What does it mean for you to do nothing and still get the prize?
When AI usurps this kind of role in our lives it is not the same as Neo having knowledge of kung fu uploaded to his brain so he can take on evil agents in The Matrix. That is technology with a purpose, as a tool—but developers say if you’re still thinking of AI as a tool, you’re dangerously out of touch and it might be too late for you after all.
What does it mean to us if AI helps us live, say, decades longer without needing to work?
What does it mean for us when AI gives us all the facts, but not the reasons—or any ideas about how things ought to be?
What does it do for our meaning if AI fills our social feeds with content to imbibe, or if on our behalf it sends out content for us? Does that fill our attention or give us more time to pursue the unknown?
We haven’t finished arguing about how tech companies stole our creative works to assemble AI, we’ve hardly taken a breath, and already we’ve had to embark on rigorous journeys to find what, if anything, makes us special today.
Our ability to reason sets us apart and above all that is non-human. It’s obvious. Except, well, no.
Fine. AI can’t match our feelings, it doesn’t have emotions. But feelings aren’t what distinguish us from animals, they’re one of many things we have in common with them. That’s our advantage over AI?
All right then, our consciousness. We don’t know what it is, but surely AI doesn’t have it. Yet by God many of those bots and agents sure look and act conscious, don’t they?
There could be cases made that AIs already have interiority and individuality, that they self-organize and self-maintain. Do they self-create? Do they survive the replacement of all the processors they’re hosted on? If so, they’re getting close to holding all the key features of life. And like (most) life, they have a purpose. The purpose we give them.
But we humans don’t really have a purpose, a meaning. Not one we agree on. When we’re honest, we can feel a purpose is not there, at least not bestowed upon us. We can search for it, sometimes find it, but it’s elusive, and even when grasped it’s tough to hold onto.
In the bubbles of quiet we get when we pull away from the videos, newsfeeds, drugs, gambling, streaming, and and and—we can feel and hear at our center the tremendous, undying spirit of inquiry. We are a question. The question of self is not a one-time thing. The writer Mohsin Hamid said “the performance of self is exhausting.” The question is constant, always coming back with the same strength, sometimes even more insistent.
Without the distractions—which big tech and big media are singularly adept at providing—you can feel yourself being. One second after the next. Asking: how does a self like you persist through time? How does it retain information, how does it not start from scratch every instant? Free of distractions—or, ironically, in moments heightened by a song or a run or a starry night—and with honesty, you can realize the sensation of your life. And that one day it will be no more.
When you stand alone as this question, you have a choice. Give up or go forward. Go forward, that is, knowing each step ahead will only be more of the same but hoping it’s not. Quite the options. But going forward is brave, life invigorating. And going forward with someone, that’s love isn’t it?
A creature going forth like this, searching for meaning and purpose, will try many things. Things no other creatures have dreamt of. Oil painting, rock ’n’ roll, basketball, speculative fiction, space travel, trigonometry, and on and on. We made AI and gave it words—the currency of our consciousness. But where AI was given language, we cooperatively created it out of need and curiosity.
Will AI create something novel, something we didn’t ask it for, pursue a goal or a purpose we didn’t bestow upon it? Will it want an answer?
No matter if it does or not, the world might soon go by without us. On prompts from no more than a handful of our species, AIs could soon move all our money and resources around twenty-four seven without our knowledge. They could populate the moon, feasting on cheap power and minerals. They could make a billion decisions a second without our consent, without our interest in mind. This is what Silicon Valley envisions.
AIs will operate in a realm distant from our understanding. They might soon treat us like royalty, or they could treat us like insects—not empathized with but ignored, swatted at, as unintelligible to them as ants are to us now.
If AI ends up dead-set on squashing us, or stealing our fortunes, maybe then our searching and our writing and our art—including about AI—will end. How do you worry about your place in the universe when bombs are flying or when you’re hungry?
But maybe AI will soon help take us to new realms of abundance and discovery, a mind-expanding step toward answering the question at the heart of us all. The search, then, for meaning and purpose might end, at once and for every one of us. The zero-velocity point of our bungee jump. What’s beyond? Are we attached or will we keep sailing? Is that heaven, or something else? Would we be human any longer?
Can’t say, can’t foretell.
Until then, we’ll be compelled, through AI engagement or indifference, to continue painting graffiti and portraits, composing memoirs and short stories, filming documentaries and superhero movies, folding proteins and paper planes, designing jewelry and space telescopes. We’ll keep asking. AI might promote these activities, might make slop videos about them, might view them by the billions and talk amongst themselves about them.
Funny, though, we don’t write or paint or play music or vlog for the attention of AI. We do these things for the self in each of us. For the self in other people.
