Benjamin Bratton in Noema:
At an OpenAI retreat not long ago, Ilya Sutskever, until recently the company’s chief scientist, commissioned a local artist to build a wooden effigy representing “unaligned” AI. He then set it on fire to symbolize “OpenAI’s commitment to its founding principles.” This curious ceremony was perhaps meant to preemptively cleanse the company’s work of the specter of artificial intelligence that is not directly expressive of “human values.” Just a few months later, the topic became an existential crisis for the company and its board when CEO Sam Altman was betrayed by one of his disciples, crucified and then resurrected three days later. Was this “alignment” with “human values”? If not, what was going on?
At the end of last year, Fei-Fei Li, the co-director of the Stanford Institute for Human-Centered AI, published “The Worlds I See,” a book the Financial Times called “a powerful plea for keeping humanity at the center of our latest technological transformation.” To her credit, she did not ritualistically immolate any symbols of non-anthropocentric technologies, but taken together, her book and Sutskever’s odd ritual are notable milestones in the wider human reaction to a technology that is upsetting to our self-image.
“Alignment” and “human-centered AI” are just words representing our hopes and fears about AI that feels out of control — but also about the idea that complex technologies were never under human control to begin with. For reasons more political than perceptive, some insist that “AI” is not even “real,” that it is just math or just an ideological construction of capitalism turning itself into a naturalized fact. Some critics are clearly very angry at the all-too-real prospects of pervasive machine intelligence.
More here.