Arianna Huffington in Time Magazine:
There was a revealing moment recently when Sam Altman appeared on Tucker Carlson's podcast. Carlson pressed Altman on the moral foundations of ChatGPT, making the case that the technology has a kind of baseline religious or spiritual component, since we assume it's more powerful than humans and we look to it for guidance. Altman replied that to him there's nothing spiritual about it. "So if it's nothing more than a machine and just the product of its inputs," said Carlson, "then the two obvious questions are: what are the inputs? What's the moral framework that's been put into the technology?"
Altman then referred to the "model spec," the set of instructions given to an AI model that governs its behavior. For ChatGPT, he said, that means training it on the "collective experience, knowledge, learnings of humanity." But, he added, "then we do have to align it to behave one way or another."
And that, of course, leads us to the famous alignment problem—the idea that to guard against the existential risk of AI taking over, we need to align AI with human values.
