Robert Long at Experience Machines:

We’re likely to get confused about AI welfare, and this is a dangerous thing to get confused about.
And even though some people still opine that AI welfare is obviously a non-issue, that’s far from obvious to many scientists working on the topic, who take it quite seriously. As a recent open letter from consciousness scientists and AI researchers states, “It is no longer in the realm of science fiction to imagine AI systems having feelings.” AI companies and AI researchers are increasingly taking note as well.
This post is a short summary of a long paper on potential AI welfare, “Taking AI Welfare Seriously.” We argue that there’s a realistic possibility that some AI systems will be conscious and/or robustly agentic—and thus morally significant—in the near future.
More here.