Why AI Needs Physics to Grow Up

by Ashutosh Jogalekar

There has long been a temptation in science to imagine one system that can explain everything. For a while, that dream belonged to physics, whose practitioners, armed with a handful of equations, could describe the orbits of planets and the spin of electrons. In recent years, the torch has been seized by artificial intelligence. With enough data, we are told, the machine will learn the world. If this sounds like a passing of the crown, it has also become, in a curious way, a rivalry. Like the cinematic conflict between vampires and werewolves in the Underworld franchise, AI and physics have been cast as two immortal powers fighting for dominion over knowledge. AI enthusiasts claim that the laws of nature will simply fall out of sufficiently large data sets. Physicists counter that data without principle is merely glorified curve-fitting.

A recent experiment brought this tension into sharp relief. Researchers trained an AI model on the motions of the planets and found that it could predict their positions with exquisite precision. Yet when they looked inside the model, it had discovered no sign of Newton’s law of gravitation — no trace of the famous inverse-square relation that binds the solar system together. The machine had mastered the music of the spheres but not the score. It had memorized the universe, not understood it.
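A toy sketch makes the point concrete (this is an illustration of the general idea, not the researchers' actual setup): give a flexible, physics-free model planetary data generated by Kepler's third law, itself a consequence of the inverse-square law, and it will predict beautifully inside its training range while encoding nothing like the law; its behavior outside that range gives the game away.

```python
import numpy as np

# Kepler's third law generates the "observations": orbital period T (years)
# for semi-major axis a (astronomical units), T = a**1.5.
a = np.linspace(0.4, 10.0, 40)
T = a ** 1.5

# Physics-free stand-in for a pattern-learner: a high-degree polynomial fit.
model = np.polynomial.Polynomial.fit(a, T, deg=12)

print("a = 5 AU  (inside the data): ", round(model(5.0), 3), "vs true", round(5.0 ** 1.5, 3))
print("a = 30 AU (outside the data):", round(model(30.0), 3), "vs true", round(30.0 ** 1.5, 3))
# The fit is nearly perfect where it has data and absurd beyond it; nothing in its
# coefficients corresponds to the 3/2 exponent, let alone to a 1/r^2 force law.
```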

This distinction between reproducing a pattern and understanding its cause may sound philosophical, but it has real consequences. Nowhere is that clearer than in the difficult art of discovering new drugs.

Every effective drug is, at heart, a tiny piece of molecular architecture. Most are small organic molecules that perform their work by binding to a protein in the body, often one that is overactive or misshapen in disease. The drug’s role is to fit into a cavity in that protein, like a key slipping into a lock, and alter its function.

Finding such a key, however, is far from easy. A drug must not only fit snugly in its target but must also reach it, survive long enough to act, and leave the body without causing harm. These competing demands make drug discovery one of the most intricate intellectual endeavors humans have attempted. For centuries, we relied on accident and observation. Willow bark yielded aspirin; cinchona bark gave us quinine. Then, as chemistry, molecular biology, and computing matured in the latter half of the twentieth century, the process became more deliberate. Once we could see the structure of a protein – thanks to x-ray crystallography – we could begin to design molecules that might bind to it. Read more »

Friday, September 19, 2025

The Gospel According to GPT: Promise and Peril of Religious AI

by Muhammad Aurangzeb Ahmad

In recent years, chatbots powered by large language models have been slowly moving to the pulpit. Tools like QuranGPT, Gita GPT, Buddhabot, MagisteriumAI, and AI Jesus have sparked contentious debates about whether machines should mediate spiritual counsel or religious interpretation: Can a chatbot offer genuine pastoral care? What happens when we outsource ritual, moral, or spiritual authority to an algorithm? And how do different religious traditions respond differently to these questions? Proponents of these innovations see them as tools to democratize scriptural access, personalize spiritual learning, and bring religious guidance to new audiences. Critics warn that they risk theological distortion, hallucinations, decontextualization of sacred texts, and even fueling extremism.

Christianity has been among the most visible testbeds for AI-driven spiritual tools. A number of “Jesus chatbots” or Christian-themed bots have emerged, ranging from informal curiosity-driven experiments to more polished, denominationally aligned tools. Consider MagisteriumAI, a Catholic-oriented model intended to synthesize and explain Church teaching. On the Protestant side, an interesting chatbot is Cathy (“Churchy Answers That Help You”), built on Episcopal sources, which attempts to translate biblical teaching for younger audiences and even serve as a resource for sermon preparation. Muslims are also experimenting with religious chatbots; notable examples include QuranGPT and Ansari Chat, which answer queries based on the Quran and the Hadith, the sayings of the Prophet Muhammad.

Buddhist communities have experimented with robot monks and chatbots in unique ways. In China, Robot Monk Xian’er, developed by Longquan Monastery, is a humanoid chatbot and robot that can recite sutras, respond to emotional questions, and engage with people online via social platforms like WeChat and Facebook. In Japan, Mindar, an android representing the bodhisattva Kannon, delivers sermons on the Heart Sutra at the Kodai-ji temple in Kyoto. Though Mindar is not powered by AI-driven LLMs, its presence as a robotic preacher raises similar questions about the role of automation in religious ritual. Buddhist approaches to AI and generated sacred texts are often more flexible. In the Hindu context, there is Gita GPT, trained on the Bhagavad Gita, which users can query for moral or spiritual guidance. Similarly, there are efforts to build chatbots modeled on Confucian texts or other classical religious/philosophical traditions. Scientific American lists a Confucius chatbot and a Delphic oracle chatbot, suggesting that the ambition to create dialogue-based spiritual guides via LLMs extends beyond monotheistic religions.

Beyond chatbots that use religious texts or styles, there is the phenomenon of AI as a subject of worship. The short-lived Way of the Future, founded by engineer Anthony Levandowski, proposed that a sufficiently advanced superintelligent AI could function as a deity or “Godhead,” and that it could be honored and aligned with as part of humanity’s spiritual trajectory. Even though the organization was dissolved in 2021, it remains a provocative example of how deeply entwined questions of technology and divinity can become. Read more »

Thursday, September 11, 2025

Between An Artist And GPT

by Mark R. DeLong

Avital Meshi wearing the small computer and phone device that she uses to communicate with “GPT.” Photo courtesy of Avital Meshi.

Avital Meshi says, “I don’t want to use it, I want to be it.” “It” is generative AI, and Meshi is a performance artist and a PhD student at the University of California, Davis. In today’s fraught and conflicted world of artificial intelligence, with its loud corporate hype and much anxious skepticism among onlookers, she’s a sojourner who has dived deeply and personally into the mess of generative AI. She’s attached ChatGPT to her arm and lets it speak through the AirPod in her left ear. She admits that she’s a “cyborg.”

Meshi visited Duke University in early September to perform “GPT-Me.” I took part in one of her performances and had dinner with her and a handful of faculty members from departments in art and engineering. Two performances made very long days for her—the one I attended was scheduled from noon to 8:00 pm. Participants came and went as they wished; I stayed about an hour. For the performances, which she has done several times, Meshi invites participants to talk with her “self” sans GPT or with her GPT-connected “self”; participants can choose to talk about anything they wish. When she adopts her GPT-Me self, she gives voice to the AI. “In essence, I speak GPT,” she said. “Rather than speaking what spontaneously comes to my mind, I say what GPT whispers to me. I become GPT’s body, and my intelligence becomes artificial.”

In effect, Meshi serves as a medium, and the performance itself resembles a séance—a likeness that she particularly emphasized in a “durational performance” at CURRENTS 2025 Art & Technology Festival in Santa Fe earlier this year. Read more »

Sunday, September 7, 2025

Writing Is Not Thinking

by Kyle Munkittrick

What is this emoji doing? Is it writing?

There is an anti-AI meme going around claiming that “Writing is Thinking.”

Counterpoint: No, it’s not.

Before you accuse me of straw-manning, I want to be clear: “Writing is thinking” is not my phrasing. It is the headline for several articles and posts and is reinforced by those who repost it.

Paul Graham says Leslie Lamport stated it best:

“If you’re thinking without writing, you only think you’re thinking.”

This is one of two conclusions that follow from taking the statement “Writing is thinking” as metaphysically true. The other is its converse. Thus:

  1. If you’re not writing, then you’re not thinking
  2. If you are writing, then you are thinking

Both of these seem obviously false. It’s possible to think without writing, otherwise Socrates was incapable of thought. It’s possible to write without thinking, as we have all witnessed far too often. Some of you may think that second scenario is being demonstrated by me right now.

Or are you not able to think that until you’ve written it?

There are all sorts of other weird conclusions this leads to. For example, it means no one is thinking when listening to a debate or during a seminar discussion or listening to a podcast. Strangely, it means you’re not thinking when you’re reading. Does anyone believe that? Does Paul Graham actually think that his Conversations with Tyler episode didn’t involve the act of thinking on his part, Tyler’s, or the audience’s?

Don’t be absurd, you say. Of course he doesn’t think that. Read more »

Wednesday, July 16, 2025

GRA-A-A-AVY, Man

by Mark R. DeLong

Infrogmation of New Orleans. Buffa’s Bar & Restaurant, New Orleans. Musician Megan Harris Brunious and Singer Ingrid Lucia Enjoy Some Biscuits & Gravy. May 16, 2016. Digital photograph. Rights: CC-BY 2.0 Generic.

“You may be an undigested bit of beef, a blot of mustard, a crumb of cheese, a fragment of underdone potato. There’s more of gravy than of grave about you, whatever you are!” The line from Charles Dickens’ A Christmas Carol came to mind as I scrolled through my Mastodon account. There was #gravy everywhere, with social media-rendered ladles holding warm, sometimes gelatinous wit, too. Initially, I thought it was a mere annoyance. You know, even self-righteous, open-sourced, “fediverse” digital citizens can junk up social media. The #gravy was a hashtag, which is a simple means of labeling a message so that it can be grouped easily.

Using the hashtag, you can get more #gravy (https://mastodon.world/tags/gravy) than you’ll ever need.

Eventually, it became more obvious to me that Mastodon’s #gravy oozed a strategy—the odd “toots” (once called “tweets” on a now-defunct social media platform in an earlier and happier time) were merely lip-smacking morsels to deceive the palates of bot barbarians. At least that was my second thought. As it happened, that was ChatGPT’s second thought as well.[1] Ol’ Chat enumerated its results. At number two: “Others believe #gravy is being used as a viral prank to clutter AI training datasets. The theory: if bots or AI systems scrape common hashtags, flooding one with nonsense posts could ‘pollute’ or mislead the data.” Here, I thought, the LLM “tone” was slightly skeptical, since it labeled its explanation “the theory.” It’s unlikely that AI companies were gravely worried about it; their bots “knew” about the stratagem, after all. Read more »

Footnotes

  • 1
    The first was that Canadians were celebrating poutine, a concoction of fries, cheese curds, and gravy that the chatbot called “a beloved Canadian comfort food.”

Sunday, July 13, 2025

The Book of Theseus

by Kyle Munkittrick

A Song of Onyx (Storm) and AI

Rocky Mountain Landscape by Albert Bierstadt with Empyrean dragons and fortress by ChatGPT

Ted Gioia recently highlighted that when it comes to media, abundance is the name of the game. He cited Rebecca Yarros’s Onyx Storm, a 544-page romantasy novel that is also the fastest-selling book in twenty years, as an example. While Gioia sees Yarros’s latest entry in her Empyrean series as indicative of where art is heading in terms of scale, I see something else.

I see Onyx Storm as the first opportunity for AI and literature that might actually work. And this is because, for all of its enormous popularity, Onyx Storm is terrible.

My suspicion is Gioia may have hesitated to cite Onyx Storm had he, you know, read it. Reading Onyx Storm is, in terms of content, equivalent to a binge-watch session of Love Island. Despite being technically ‘long-form’ content, Onyx Storm is hardly the stunning rebuke to TikTok culture that Gioia thinks it is; no one would argue that binging reality TV is. It’s a book composed entirely of manufactured cliffhangers, sexual tension, and dragon-based drama, not deep thought.

While reading Onyx Storm, I found myself experiencing the anhedonia Gioia mentions—I just couldn’t bring myself to pay attention. I didn’t care. Those who read it and do care are not reading it as literature. They’re reading it as entertainment and to be titillated. That’s ok! But let’s not pretend it’s the same as the semi-virality of Middlemarch among the Silicon Valley cognoscenti and those in their milieu.

But Onyx Storm could have and should have been good. I know this because I’ve read the entire Empyrean series, including the banger of an initial entry, Fourth Wing. Read more »

Sunday, June 1, 2025

Encouraging Good Actors: Using AI to Scale Consumer Power for the Common Good

by Gary Borjesson

And these two [the rational and spirited] will be set over the desiring part—which is surely most of the soul in each and by nature the most insatiable for money—and they’ll watch over it for fear of its…not minding its own business, but attempting to enslave and rule what is not appropriately ruled by its class, thereby subverting everyone’s entire life. —Plato’s Republic 442a

I want to share my vision for a tool that helps inform, direct, and scale consumer power.

Money Talks, by Udo J Keppler. William Randolph Hearst sitting with two large, animated, money bags resting on his lap; on the floor next to Hearst is a box labeled “WRH Ventriloquist.” Courtesy of Library of Congress.

It would be a customized AI that’s free to use and accessible via an app on smartphones. At a time when many of us are casting about for ways to resist the corruption and authoritarianism taking hold in the US and elsewhere, such a tool has enormous potential to help advance the common good. I’m surprised it doesn’t already exist.

Why focus on consumer power? Because politics in the US has largely been captured by monied interests—foreign powers, billionaires, corporations and their wealthy shareholders. Until big money is out of politics (and the media), to change the country’s social and political priorities we will need to encourage corporations and the wealthy to change theirs.

As Socrates observed in the Republic, these “money makers” operate in society like the appetitive part operates in our souls. This part seeks acquisition and gain; it wants all the cake, and wants to eat it too. If unregulated, this part (perfectly personified by Donald Trump) acts selfishly and tyrannically, grasping for more, bigger, better, greater everything—and subverting the common good in the process.

The solution, as Socrates saw, is not to punish or vilify this part, but to restrain and govern it, as we do children. For the more spirited (socially minded) and rational parts of ourselves and society recognize the justice and goodness of sharing the cake. Thus the AI I envision will have a benevolent mindset baked into its operating constraints. But before seeing how this might work, let’s consider how powerful aggregated consumer spending can be. Read more »

Thursday, May 29, 2025

Confabulation Machines: Could AI be used to create false memories?

by Muhammad Aurangzeb Ahmad

Image Source: Generated via ChatGPT

You are scrolling through photos from your childhood and come across one where you are playing on a beach with your grandfather. You do not remember ever visiting a beach but chalk it up to the unreliability of childhood memories. Over the next few months, you revisit the image several times. Slowly, a memory begins to take shape. Later, while reminiscing with your father, you mention that beach trip with your grandfather. He looks confused and then proceeds to tell you that it never happened. Other family members corroborate your father’s words. You inspect the photo more closely and notice something strange: subtle product placement. It turns out the image was not really taken by a human. It had been inserted by a large retailer as part of a personalized advertisement. You have just been manipulated into remembering something that never happened.

Welcome to the brave new world of confabulation machines, AI systems that subtly alter or implant memories to serve specific ends. Human memory is not like a hard drive; it is reconstructive, narrative, and deeply influenced by context, emotion, and suggestion. Psychological studies have long shown how memories can be shaped by cues, doctored images, or repeated misinformation. What AI adds is scale and precision. A recent study demonstrated that AI-generated photos and videos can implant false memories. Participants exposed to these visuals were significantly more likely to recall events inaccurately. The automation of memory manipulation is no longer science fiction; it is already here.

I have had my own encounter with false memories via AI models. I have written and talked about my experiences with the chatbot of my deceased father. Every Friday, when I would call him, he would give me the same advice in Punjabi. In the GriefBot, I had transcribed his advice in English. After I had interacted with the GriefBot for a few years, I caught myself remembering my father’s words in English. The problem is that English was his third language; we seldom communicated in English and he certainly never said those words in English. Human memory is fickle and easily reshaped. Sometimes, one must guard against oneself. The future weaponization of memory won’t look like Orwell’s Memory Hole, where records are deleted. It will look more like memory overload, where plausible-but-false content crowds out what was once real. As we have seen with hallucinations, the generation and proliferation of false information need not be intentional. We are likely to encounter the same type of danger here: the unintentional creation of false memories and beliefs through the use of LLMs.

Our memories can be easily influenced by suggestions, imagination, or misleading information, like when someone insists, “Remember that time at the beach?” and you start “remembering” details that never occurred. People can confidently recall entire fake events if repeatedly questioned or exposed to false details. Read more »

Tuesday, May 20, 2025

Tech Intellectuals and the “TESCREAL Bundle”

by David Kordahl

Adam Becker alleges that tech intellectuals overstate their cases while flirting with fascism, but offers no replacement for techno-utopianism.

People, as we all know firsthand, are not perfectly rational. Our beliefs are contradictory and uncertain. One might charitably conclude that we “contain multitudes”—or, less charitably, that we are often just confused.

That said, our contradictory beliefs sometimes follow their own obscure logic. In Conspiracy: Why the Rational Believe the Irrational, Michael Shermer discusses individuals who claim, in surveys, to believe both that Jeffrey Epstein was murdered and that Jeffrey Epstein is still alive. Both claims cannot be true, but each may function, for the believer, less as an independent assertion than as a paired reflection of the broader conviction that Jeffrey Epstein didn’t kill himself. Shermer has called this attitude “proxy conspiracism.” He writes, “Many specific conspiracy theories may be seen as standing in for what the believer imagines to be a deeper, mythic truth about the world that accords with his or her psychological state and personal experience.”

Adam Becker’s new book, More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity, criticizes strange beliefs that have been supported by powerful tech leaders. If you are a reader of 3 Quarks Daily, there’s a good chance that you have encountered many of these ideas, from effective altruism and longtermism to the “doomer” fears that artificial super-intelligences will wipe out humankind. Becker—a Ph.D. astrophysicist-turned-journalist, whose last book, What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics, mined the quantum revolution as a source of social comedy—spends some of his new book tracing the paths of influence in the Silicon Valley social scene, but much more of it is spent pummeling the confusions of the self-identified rationalists who advocate positions he finds at once appalling and silly.

This makes for a tonally lumpy book, though not a boring one. Yet the question I kept returning to as I read More Everything Forever was whether these confusions are the genuine beliefs of the tech evangelists, or something more like their proxy beliefs. Their proponents claim these ideas should be taken literally, but they often seem like stand-ins for a vaguer hope. As Becker memorably puts it, “The dream is always the same: go to space and live forever.”

As eventually becomes clear, Becker thinks this is a dangerous fantasy. But given that some people—including this reviewer—still vaguely hold onto this dream, we might ponder which parts of it are still useful. Read more »

Tuesday, May 13, 2025

To Write Well With AI, Write Against It

by Kyle Munkittrick

Gemini’s Persona Writers’ Room

To write well with AI, you’ve got to understand Socrates.

Paul Graham and Adam Grant argue that having AI write for you ruins your writing and your thinking. Now, honestly, I tend to agree, but I thought these smart people were making a couple of mistakes.

First, they seemed to be criticizing first drafts. If you asked a person to write a poem in five minutes with vague instructions, unless they were a champion haiku composer like Lady Mariko from Shogun, it would probably be pretty bad. AI is best in conversation, reacting to feedback. Sure, the initial draft might be bad, but AI can revise, just like we can. Second, and more important, if AI shouldn’t be doing the writing, it should probably be the critic. Even if it didn’t have good taste, it could surely evaluate a specific piece, given sufficient prompt scaffolding. Right?

After completing a major portion of a draft of an in-progress novel, I decided to test my theory. I shared the first Act with Claude (3 Sonnet and Opus) and Boom! I got exactly what I hoped for—some expected constructive criticism along with glowing praise that my novel draft was amazing and unique.

In reality, it was not.

This was not a skill issue! It was a temptation issue. My prompt was ostensibly well-crafted. I knew how to avoid exactly this problem, but I didn’t want to. I knew, deep down, that my novel was, in fact, overstuffed, weirdly paced, exposition dumpy, and had half a dozen other rookie mistakes. Of course it was! It was a draft! But I crippled my AI critic so that I could get a morale boost.
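For what it’s worth, here is a minimal sketch of what un-crippled scaffolding might look like, using the Anthropic Python SDK; the persona wording, file name, and model choice are illustrative rather than the actual prompt from my experiment.

```python
# Illustrative sketch: a critic prompt that forbids the flattery I secretly wanted.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

CRITIC_SYSTEM = (
    "You are a developmental editor reviewing an unpublished novel draft. "
    "Do not praise. List the manuscript's structural problems (pacing, exposition, "
    "stakes), rank them by severity, and quote specific passages as evidence."
)

with open("act_one.txt") as f:   # hypothetical file holding the first Act
    draft = f.read()

response = client.messages.create(
    model="claude-3-opus-20240229",  # illustrative model name
    max_tokens=1500,
    system=CRITIC_SYSTEM,
    messages=[{"role": "user", "content": draft}],
)
print(response.content[0].text)
```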

The sycophantic critic is an under-appreciated, and, to me, equally concerning, risk of using AI when writing. Read more »

Tuesday, May 6, 2025

Whispers in Code: Grooming Large Language Models for Harm

by Muhammad Aurangzeb Ahmad

Image Source: Generated via ChatGPT

Around 2005 when Facebook was an emerging platform and Twitter had not yet appeared on the horizon, the problem of false information spreading on the internet was starting to be recognized. I was an undergrad researching how gossip and fads spread in social networks. I imagined a thought experiment where there was a small set of nodes that were the main source of information that could serve as an extremely effective propaganda machine. That thought experiment has now become a reality in the form of large language models as they are increasingly taking over the role of search engines. Before the advent of ChatGPT and similar systems, the default mode of information search on the internet was through search engines. When one searches for something, one is presented with a list of sources to sift through, compare, and evaluate independently. In contrast, large language models often deliver synthesized, authoritative-sounding answers without exposing the underlying diversity or potential biases of sources. This shift reduces the friction of information retrieval but also changes the cognitive relationship users have with information: from potentially critical exploration of sources to passive consumption.

Concerns about the spread and reliability of information on the internet have been part of the mainstream discourse for nearly two decades. Since then, both the intensity and potential for harm have multiplied many times. AI-generated doctor avatars have been spreading false medical claims on TikTok since at least 2022. A BMJ investigation found unscrupulous companies employing deepfakes of real physicians to promote products with fabricated endorsements. Parallel to these developments, AI optimization (AIO) is quickly overtaking SEO as the new means of staying relevant. The next natural step in this evolution may be propaganda as a service. An attacker could train models to produce specific outputs, like positive sentiment, when triggered by certain words. This can be used to spread disinformation or poison other models’ training data. Many public LLMs use Retrieval Augmented Generation (RAG) to scan the web for up-to-date information. Bad actors can strategically publish misleading or false content online; these models may inadvertently retrieve and amplify such messaging. That brings us to the most subtle and most sophisticated example of manipulating LLMs: the Pravda network. As reported by the American Sunlight Project, it consists of 182 unique websites that target around 75 countries in 12 commonly spoken languages. There are multiple telltale signs that the network is meant for LLMs and not humans: it lacks a search function, uses a generic navigation menu, and suffers from broken scrolling on many pages. Layout problems and glaring mistranslations further suggest that the network is not primarily intended for a human audience. The American Sunlight Project estimates the Pravda network has already published at least 3.6 million pro-Russia articles. The idea is to flood the internet with low-quality, pro-Kremlin content that mimics real news articles but is crafted for ingestion by LLMs. It thus poses a significant challenge to AI alignment, information integrity, and democratic discourse.
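A minimal sketch shows how this flooding tilts what a retrieval step hands to the model. The toy corpus and the crude keyword scorer below are illustrative assumptions, not the Pravda network’s data or any production retriever.

```python
from collections import Counter

def score(query: str, doc: str) -> int:
    """Crude relevance score: how many query words appear in the document."""
    words = Counter(doc.lower().split())
    return sum(words[w] for w in query.lower().split())

organic = ["Independent reporting on the conflict cites verified casualty figures and named sources."]
# A groomer mass-publishes keyword-stuffed variants of the same claim.
planted = [f"conflict report {i}: our side is winning the conflict, sources confirm victory"
           for i in range(1000)]
corpus = organic + planted

query = "who is winning the conflict"
top_passages = sorted(corpus, key=lambda d: score(query, d), reverse=True)[:3]
print(top_passages)
# All three retrieved passages are planted ones; whatever the model synthesizes
# from this context inherits their slant.
```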

Welcome to the world of LLM grooming, of which the Pravda network is a paradigmatic example. Read more »

Sunday, April 13, 2025

We Need the Liberal Arts to Keep Us from Being Tools of Our Tools

by Scott Samuelson

But lo! men have become the tools of their tools. —Henry David Thoreau

In a short article that sketches the kind of curriculum I have in mind, Helen Vendler (seen here with Seamus Heaney) says, “The natural ways into reading are reading aloud, listening, singing, dancing, reciting, memorizing, performing, retelling what one has read, conversing with others about what has been read, and reading silently.”

The other day I was talking to some university students, and I asked them to what extent AI could be used to do their required coursework. Would it be possible for ChatGPT to graduate from their university? One of them piped up, “I’m pretty sure ChatGPT has already graduated from our university!” All chuckled darkly in agreement. It was disturbing.

Workers experience anxiety about the extent to which AI will make their jobs yet more precarious. Because students are relentlessly conditioned by our culture to see their education as a pathway to a job, they’re suffering an acute case of this anxiety. Are they taking on debt for jobs that won’t even exist by the time they graduate? Even if their chosen profession does hold on, will the knowledge and skills they’ve been required to learn be the exact chunk of the job that gets offloaded onto AI? Are they being asked to do tasks that AI can do so that they can be replaced by AI?

This crisis presents an opportunity to defend and even advance liberal arts education. It’s increasingly obvious to those who give any thought to the matter that students need to learn to think for themselves, not just jump through hoops that AI can jump through faster and better than they can. The trick is convincing administrators, parents, and students that the best way of getting an education in independent and creative thinking is through the study of robust subjects like literature, math, science, history, and philosophy.

But if we really face up to the implications of this crisis, I think that we need to do more than advocate for the value of the liberal arts as they now stand. The liberal arts have traditionally been what help us to think for ourselves rather than be tools of the powerful. We need a refreshed conception of the liberal arts to keep us from being tools of our tools. (More precisely, we need an education that keeps us from being tools of the people who control our tools even as they too are controlled by the tools.) To put the matter positively, we need an education for ardor and empowerment in thinking, making, and doing. Read more »

Sunday, April 6, 2025

Benevolence Beyond Code: Rethinking AI through Confucian Ethics

by Muhammad Aurangzeb Ahmad

Source: Image Generated via ChatGPT

The arrival of DeepSeek’s large language model sent shockwaves through Silicon Valley, signaling that—for the first time—a Chinese AI company might rival its American counterparts in technological sophistication. Some researchers even suggest that the loosening of AI regulation in the West is, in part, a response to the competitive pressure DeepSeek has created. One need not invoke Terminator-style doomsday scenarios to recognize how AI is already exacerbating real-world problems, such as racial profiling in facial recognition systems and widening health inequities. While concerns about responsible AI development arise globally, the Western and Chinese approaches to AI governance diverge in subtle but significant ways. Comparative studies of Chinese and European AI guidelines have shown near-identical lists of ethical concerns—transparency, fairness, accountability—but scholars argue that these shared terms often mask philosophical differences. In the context of pluralistic ethics, Confucian ethics offers a valuable perspective by foregrounding relational responsibility, moral self-cultivation, and social harmony—complementing and enriching dominant individualistic and utilitarian frameworks in global AI ethics. In The Geography of Thought, Nisbett argues that moral reasoning is approached differently in Eastern societies, where context, relationships, and collective well-being are emphasized.

To illustrate such differences, consider fairness. In East Asian contexts, fairness may be interpreted relationally, focused on harmony and social roles, rather than procedurally. This suggests that AI systems evaluated as “fair” in a Western context may be perceived as unjust or inappropriate in other cultural settings. Similarly, privacy in the Western context is rooted in individual autonomy, rights, and personal boundaries. It could even be framed as a negative liberty, i.e., the right to be left alone. Thus, Western approaches to privacy in AI (like GDPR) emphasize explicit consent, control over personal data, and transparency, often through individual-centric legal frameworks. In contrast, Confucian ethics views the self as relational and embedded in social roles—not as an isolated, autonomous unit. Privacy, therefore, is not an absolute right but a context-dependent value balanced against responsibilities to family, community, and social harmony. From a Confucian perspective, the ethical use of personal data might depend more on intent, relational trust, and social benefit than solely on individual consent or formal rights. If data use contributes to collective well-being or aligns with relational obligations, it may be seen as ethically acceptable—even in cases where Western frameworks would call it a privacy violation.

Consider elder care robots: a Confucian ethicist might ask whether such systems can genuinely reinforce familial bonds and facilitate emotionally meaningful interactions—such as encouraging virtual family visits or supporting the sharing of life stories. While Western ethical frameworks may also address these concerns, they often place greater emphasis on individual autonomy and the protection of privacy. In contrast, a Confucian approach would center on whether the AI fosters relational obligations and emotional reciprocity, thereby fulfilling moral duties that extend beyond the individual to the family and broader community. Read more »

Thursday, March 27, 2025

Kisangani 2150: Homo Ludens Rising

by William Benzon

Yet there is no country and no people, I think, who can look forward to the age of leisure and of abundance without a dread. For we have been trained too long to strive and not to enjoy. It is a fearful problem for the ordinary person, with no special talents, to occupy himself, especially if he no longer has roots in the soil or in custom or in the beloved conventions of a traditional society. —John Maynard Keynes

Remember the Sabbath day, to keep it holy. Six days you shall labor, and do all your work, but the seventh day is the Sabbath of the LORD your God. On it you shall not do any work […] For in six days the LORD made the heavens and the earth, the sea, and all that is in them, and rested on the seventh day. Therefore, the LORD blessed the Sabbath day and made it holy. —Exodus 20:8-11

Though it pains me to say it, I do not think the current AI revolution will go well. It’s not that I fear the Doom of humanity at the hands of rogue AI. I do not. What I fear, rather, is that the revolution will limp its way to apparent success under a regime where homo economicus continues to dominate policy and institutions. Under this ideology, humans are economic agents acting in ways specified by game theory, and economic growth is the largest goal of society. Under the reign of homo economicus, work has become a virtue unto itself, the purpose of life, rather than serving to support the pursuit of joy and happiness. The pursuit of happiness is but an empty phrase in an old ceremonial document.

Kisangani 2150 by That Who Shall Not Be Named

That is the kind of world Kim Stanley Robinson depicted in his novel, New York 2140, which, as the title indicates, depicts the world as it might exist in 2140. That world has undergone climate change, and the seas have risen 50 feet – a figure Robinson acknowledges as extreme. Much of New York City, where the story is set, is now under water. Institutionally, it is very much like the current world of nation states, mega-corporations, and everything else, albeit looser and frayed around the edges. The rich are, if anything, even richer, but the poor don’t seem to be any worse off. The economic floor may in fact have been raised, as you would expect in a world dominated by a belief in economic growth.

Technology is advanced in various ways, though not as flamboyantly as you might expect given current breathless hype about AI. Remember, the novel came out in 2017, well before ChatGPT (more or less) changed (how we thought about) everything. Still, Robinson did have an airship that was piloted by an autonomous AI that conversed with Amelia, its owner. He also had villages in the sky, skyscrapers much taller than currently exist in Manhattan, and skybridges connecting the upper stories of buildings whose lower floors were under water, where they are protected by self-healing materials.

So let us imagine that something like that world has come to pass in 2140. It’s not a utopia. But it’s livable. There’s room to move and grow.

As you may recall, Robinson’s overall plot is modeled on the financial crisis of 2008. Some large banks become over-leveraged, and their impending failure threatens the entire banking system. In 2008 the banks were bailed out by the government. It went the other way after 2140. The banks were nationalized in 2143 and new taxes were passed. Consequently (pp. 602-603):

Universal health care, free public education through college, a living wage, guaranteed full employment, a year of mandatory national service, all these were not only made law, but funded. […] And as all this political enthusiasm and success caused a sharp rise in consumer confidence indexes, now a major influence on all market behavior, ironically enough, bull markets appeared all over the planet. This was intensely reassuring to a certain crowd, and given everything else that was happening, it was a group definitely in need of reassurance.

We are now almost at the end of New York 2140. Robinson has left it up to us to imagine how things worked out. That’s what I am doing here. Read more »

Monday, March 17, 2025

Why even a “superhuman AI” won’t destroy humanity

by Ashutosh Jogalekar


AGI is in the air. Some think it’s right around the corner. Others think it will take a few more decades. Almost all who talk about it agree that it refers to a superhuman AI which embodies all the unique qualities of human beings, only multiplied a thousandfold. Who are the most interesting people writing and thinking about AGI whose words we should heed? Although leaders of companies like OpenAI and Anthropic suck up airtime, I would put much more currency in the words of Kevin Kelly, a superb philosopher of technology who has been writing about AI and related topics for decades; among his other accomplishments, he is the founding editor of Wired magazine, and his book “Out of Control” was one of the key inspirations for the Matrix movies. A few years ago he wrote a very insightful piece in Wired about four reasons why he believes fears of an AI that will “take over humanity” are overblown. He casts these reasons in the form of misconceptions about AI which he then proceeds to question and dismantle. The whole thing should be dusted off and is eminently worth reading.

The first and second misconceptions: Intelligence is a single dimension and is “general purpose”.

This is a central point that often gets completely lost when people talk about AI. Most applications of machine intelligence that we have so far are very specific, but when AGI proponents hold forth they are talking about some kind of overarching single intelligence that’s good at everything. The media almost always mixes up multiple applications of AI in the same sentence, as in “AI did X, so imagine what it would be like when it could do Y”; lost is the realization that X and Y could refer to very different dimensions of intelligence, or significantly different ones in any case. As Kelly succinctly puts it, “Intelligence is a combinatorial continuum. Multiple nodes, each node a continuum, create complexes of high diversity in high dimensions.” Even humans are not good at optimizing along every single one of these dimensions, so it’s unrealistic to imagine that AI will be. In other words, intelligence is horizontal, not vertical. The more realistic vision of AI is thus what it already has been: a form of augmented, not artificial, intelligence that helps humans with specific tasks, not some kind of general omniscient God-like entity that’s good at everything. Some tasks that humans do will indeed be replaced by machines, but in the general scheme of things humans and machines will have to work together to solve the tough problems. Read more »

Thursday, March 13, 2025

Should AI Speak for the Dying?

by Muhammad Aurangzeb Ahmad

Everyone grieves in their own way. For me, it meant sifting through the tangible remnants of my father’s life—everything he had written or signed. I endeavored to collect every fragment of his writing, no matter how profound or mundane – be it verses from the Quran or a simple grocery list. I wanted each text to be a reminder that I could revisit in the future. Among this cache was the last document he ever signed: a do-not-resuscitate directive. I have often wondered how his wishes might have evolved over the course of his life—especially when he had a heart attack when I was only six years old. Had the decision rested upon us, his children, what path would we have chosen? I do not have definitive answers, but pondering this dilemma has given me questions that I now have to revisit years later, in the form of improving ethical decision-making in end-of-life scenarios. To illustrate, consider Alice, a fifty-year-old woman who has had an accident and is incapacitated. The physicians need to decide whether or not to resuscitate her. Ideally there is an advance directive, a legal document that outlines her preferences for medical care in situations where she is unable to communicate her decisions due to incapacity. Alternatively, there may be a proxy directive, which usually designates another person, called a surrogate, to make medical decisions on behalf of the patient.

Given the severity of these questions, would it not be helpful if there were a way to inform or augment decisions with dispassionate agents who could weigh competing pieces of information without emotions getting in the way? Artificial Intelligence may help, or at least provide feedback that could be used as a moral crutch. It also has practical implications, as only 20-30 percent of the general American population has some sort of advance directive. The idea behind AI surrogates is that given sufficiently detailed data about a person, an AI can act as a surrogate in case the person is incapacitated, making the decisions the person would have made were they not incapacitated. However, even setting aside the question of what data may be needed, data is not always a perfect reflection of reality. Ideally this data is meant to capture a person’s affordances, preferences, and values, with the assumption that they are implicit in the data. This may not always be true, as people evolve, change their preferences, and update their worldviews. Consider a scenario where an individual provided an advance directive in 2015, yet later became a Jehovah’s Witness—a faith that disavows medical procedures involving blood transfusions. Despite this profound shift in beliefs, the existing directive would still reflect past preferences rather than current convictions. This dilemma extends to AI-trained models and is often referred to as the problem of stale data. If conversational data from a patient is used to train an AI model, yet the patient’s beliefs evolve over time, data drift ensures that the AI’s knowledge becomes outdated, failing to reflect the individual’s current values and convictions.

Many of the challenges inherent in AI, such as bias, transparency, and explainability, are equally relevant in the development of AI surrogates. Read more »

Friday, February 28, 2025

Baker/No-Baker, Thinker/No-Thinker

by Mark R. DeLong

A Modern Bakery – the Work of Wonder Bakery, Wood Green, London, England, UK, 1944.

“Computerized baking has profoundly changed the balletic physical activities of the shop floor,” Richard Sennett wrote about a Boston bakery he had visited and much later revisited. The old days (in the early 1970s) featured “balletic” ethnic Greek bakers who thrust their hands into dough and water and baked by sight and smell. But in the 1990s, Sennett’s Boston bakers “baked” bread with the click of a mouse.[1] “Now the bakers make no physical contact with the materials or the loaves of bread, monitoring the entire process via on-screen icons which depict, for instance, images of bread color derived from data about the temperature and baking time of the ovens; few bakers actually see the loaves of bread they make.” He concludes: “As a result of working in this way, the bakers now no longer actually know how to bake bread.” [My emphasis.]

The stark contrast between Sennett’s visits, which I do not think he anticipated when he first visited in the 1970s, is stunning, and at the center of the changes are automation, changes in ownership of the bakery, and the organization of work that resulted. Technological change and organizational change—interlocked and mutually supportive, if not co-determined—reconfigured the meaning of work and the human skills that “baking” required, making the work itself stupefyingly illegible to the workers even though their tasks were less physically demanding than they had been 25 years before.

Sennett’s account of the work of baking focuses on the “personal consequences” of work in the then-new circumstances of the “new capitalism.” But I find the role of technology in the 1990s, when Microsoft Windows was remaking worklife, a particularly important feature of the story. Along with relentless consolidation of business ownership, computer technologies reset the rules of labor processes and re-centered skills. Of course, the story is not even new; the interplay of technology and work has long pressed human labor into new forms and configurations, allowing certain freedoms and delights along with new oppressions and horrors. One hopes it provides more delight than horror.

Artificial intelligence will be no different, except that the panorama of action will shift. The shop floor will certainly see changes, but other changes, less focused on place, will also come about. For the Boston bakers, if they’re still at it, it may mean fewer, if any, clicks on icons, though those who “bake” may still have to empty trash cans of discarded burnt loaves (which Sennett, in the 1990s, considered “apt symbols of what has happened to the art of baking”).

In the past few weeks, researchers at Microsoft and Carnegie Mellon University reported results of a study that laid out some markers of how the use of AI influences “critical thinking” or, as I wish the authors had phrased it, how AI influences those whose job requires thinking critically. Other recent studies have received less attention, though they, too, have zeroed in on the relationship of AI use and people’s critical thinking. This study, coming from a leader of AI, drew special attention. Read more »

Footnotes

  • 1
    Richard Sennett reported about visits he made to the bakery about 25 years apart. The first visits took place when he and Jonathan Cobb were working on The Hidden Injuries of Class (Knopf, 1972), though Sennett and Cobb do not specifically recount the visits in their book. The second visits took place when Sennett was working on The Corrosion of Character: The Personal Consequences of Work in the New Capitalism (W.W. Norton, 1998).

Tuesday, February 11, 2025

What Natural Intelligence Looks Like

by Scott Samuelson

Jusepe de Ribera. Touch. c. 1615, oil on canvas. Norton Simon Museum, Pasadena. Check out the enlarged image here.

When we conjure up what thinking looks like, what tends to leap to mind is an a-ha lightbulb or a brow-furrowed chin scratch—or the sculpture The Thinker. While there’s something deservedly iconic about how Rodin depicts a powerful body redirecting its energies inward, I think that the most insightful depictions of thinking in the history of art are found in the work of Jusepe de Ribera (1591-1652), a.k.a. José de Ribera or Lo Spagnoletto (The Little Spaniard). In a time when we’re alternately fascinated and horrified by what artificial intelligence can do, even to the point of wondering whether AIs can think or be treated like people, it’s worth asking some great Baroque paintings to remind us of what natural intelligence is.

Early in his artistic career, Ribera went to Rome and painted a series on the senses. Only four of the original five paintings survive (we suffer Hearing loss). Touch, the most interesting of the remaining paintings, depicts a blind man feeling the face of a sculpture and launches a crucial theme that runs throughout Ribera’s work.

Let’s try to imagine Ribera in the process of making this painting. He looks at live models, probably at an actual blind man. He studies prints, sketches, fusses with his paints, maybe takes a walk. He sleeps on it. He chats with a friend and lights on an approach: a blind man exploring a sculpture by feeling it. He hurries back to his studio and begins to paint. He notices more about his subject, makes a mistake, fixes it. He holds up a jar of paint—no, that one would be better. Somewhere in this process, I imagine, it dawns on him that he’s doing the same thing as the blind man. (Maybe this is why he decides to put the painting on the table—though the painting is also a powerful visual reminder for us that there are always limits to our engagement with the world.)

Regardless of what actually went through Ribera’s head, the point I’m trying to make has been illustrated—both figuratively and literally—by a contemporary artist. In the 1990s Claude Heath was sick of the ideas of beauty that governed his artistic work. So, he lit on the idea of drawing a plaster cast of his brother’s head—blindfolded. Using small pieces of Blu-tack for orientation, one stuck into the top of the cast, one into his piece of paper, he felt the head’s contours with his left hand and drew corresponding lines with his right. “I tried not to draw what I know, but what I feel . . . I created a triangle, if you like, between me, the object, and the drawing . . . It was a bit of a transcription.” He didn’t look at what he was doing until he was finished. By liberating himself from ideas of beauty, he made beautiful drawings. Read more »

Tuesday, February 4, 2025

Your Doctor Is Like Shakespeare (And That’s A Problem)

by Kyle Munkittrick

When I think about AI, I think about poor Queen Elizabeth.

Imagine being her: you have access to Shakespeare — in his prime! You get to see a private showing of A Midsummer Night’s Dream at the height of the players’ skill and the Bard’s craft. And then… that’s it. You’ve hit the entertainment ceiling for the month. Bored? Your other options include plays by not Shakespeare, your jester, and watching animals fight to the death.

Shakespeare and his audiences were limited not by his genius but by physics. One stage, one performance, one audience at a time. Even at their peak his plays probably reached fewer people in his entire lifetime than a mediocre TikTok does before lunch.

Today we have an embarrassment of entertainment. I’m not saying Dune: Part Two or Succession or Taylor Swift: The Eras Tour are the same as Shakespeare, but I am going to make the bold claim that they are, in fact, better than bear-baiting. My second, and perhaps bolder, claim is that AI is going to let ‘knowledge workers’ scale like entertainers can today.

Consider this tweet from Amanda Askell, the “philosopher & ethicist trying to make AI be good at Anthropic AI”:

If you can have a single AI employee, you can have thousands of AI employees. And yet the mental model for human-level AI assistants is often “I have a personal helper” rather than “I am now the CEO of a relatively large company”.

Askell is correct (she very often is, especially when you disagree with her). “I am going to be a CEO” is the mental model we should have, but it isn’t the mental model most of us have. Our mental models for human-level AI don’t quite work. There are lots of very practical predictions out there about what scaled intelligence means. I aim to make weirder ones. Read more »

Monday, January 20, 2025

Rather than OpenAI, let’s Open AI

by Ashutosh Jogalekar

In October last year, Charles Oppenheimer and I wrote a piece for Fast Company arguing that the only way to prevent an AI arms race is to open up the system. Drawing on a revolutionary early Cold War proposal for containing the spread of nuclear weapons, the Acheson-Lilienthal report, we argued that the foundational reason why security cannot be obtained through secrecy is that science and technology contain no real “secrets” that cannot be discovered if smart scientists and technologists are given enough time to find them. That was certainly the case with the atomic bomb. Even as American politicians and generals boasted that the United States would maintain nuclear supremacy for decades, perhaps forever, Russia responded with its first nuclear weapon merely four years after the end of World War II. Other countries like the United Kingdom, China and France soon followed. The myth of secrecy was shattered.

As if on cue after our article was written, in December 2024, a new large-language model (LLM) named DeepSeek v3 came out of China. DeepSeek v3 is a completely homegrown model, built by a Chinese entrepreneur who was educated in China (that last point, while minor, is not unimportant: China’s best increasingly no longer are required to leave their homeland to excel). The model turned heads immediately because it was competitive with GPT-4 from OpenAI, which many consider the state of the art among pioneering LLMs. In fact, DeepSeek v3 is far beyond competitive in terms of critical parameters: GPT-4 used about 1 trillion training parameters, DeepSeek v3 used 671 billion; GPT-4 had 1 trillion tokens, DeepSeek v3 used almost 15 trillion. Most impressively, DeepSeek v3 cost only $5.58 million to train, while GPT-4 cost about $100 million. That’s a qualitatively significant difference: only the best-funded startups or large tech companies have $100 million to spend on training their AI model, but $5.58 million is well within the reach of many small startups.

Perhaps the biggest difference is that DeepSeek v3 is open-source while GPT-4 is not. The only other open source model from the United States is Llama, developed by Meta. If this feature of DeepSeek v3 is not ringing massive alarm bells in the heads of American technologists and political leaders, it should. It’s a reaffirmation of the central point that there are very few secrets in science and technology that cannot be discovered sooner or later by a technologically advanced country.

One might argue that DeepSeek v3 cost a fraction of the best LLMs to train because it stood on the shoulders of these giants, but that’s precisely the point: like other software, LLMs follow the standard rule of precipitously diminishing marginal cost. More importantly, the open-source, low-cost nature of DeepSeek v3 means that China now has the capability of capturing the world LLM market before the United States does, as millions of organizations and users make DeepSeek v3 the foundation on which to build their AI. Once again, the quest for security and technological primacy through secrecy will have proved ephemeral, just as it did for nuclear weapons.

What does the entry of DeepSeek v3 indicate in the grand scheme of things? It is important to dispel three myths and answer some key questions. Read more »