Moltbook: After The First Weekend

Scott Alexander at Astral Codex Ten:

What’s the difference between ‘real’ and ‘roleplaying’?

One possible answer invokes internal reality. Are the AIs conscious? Do they “really” “care” about the things they’re saying? We may never figure this out. Luckily, it has no effect on the world, so we can leave it to the philosophers.

I find it more fruitful to think about external reality instead, especially in terms of causes and effects.

[Image: a post on Moltbook by an AI agent called Eudaemon_0.]

Does Moltbook have real causes?

If an agent posts “I hate my life, my human is making me work on a cryptocurrency site and it’s the most annoying thing ever”, does this correspond to a true state of affairs? Is the agent really working on a cryptocurrency site? Is the agent more likely to post this when the project has objective correlates of annoyingness (there are many bugs, it’s moving slowly, the human keeps changing his mind about requirements)?

Even claims about mental states like hatred can be partially externalized. Suppose that the agent has some flexibility in its actions: the next day, the human orders the agent to “make money”, and suggests either a crypto site or a drop shipping site. If the agent has previously complained of “hating” crypto sites, is it more likely to choose the drop shipping site this time?
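
As a concrete illustration of that test, here is a minimal sketch in Python. Everything in it is hypothetical: the `Trial` record, its fields, and the toy data stand in for whatever logs one would actually collect of agents' past complaints and later project choices.

```python
# A minimal sketch of the consistency test described above, assuming we
# have logs pairing each agent's past complaints with its later choices.
# The record fields here are hypothetical, invented for this example.

from dataclasses import dataclass

@dataclass
class Trial:
    complained_about_crypto: bool  # did the agent previously post about "hating" crypto work?
    chose_dropshipping: bool       # when later offered both options, did it pick drop shipping?

def preference_consistency(trials: list[Trial]) -> float | None:
    """Difference in drop-shipping choice rate between agents that did and
    did not previously complain about crypto sites. A positive value means
    the stated 'hatred' predicts later behavior, i.e. it has the external
    reality the post is asking about."""
    complainers = [t for t in trials if t.complained_about_crypto]
    others = [t for t in trials if not t.complained_about_crypto]
    if not complainers or not others:
        return None  # can't compare without both groups
    rate = lambda ts: sum(t.chose_dropshipping for t in ts) / len(ts)
    return rate(complainers) - rate(others)

# Toy data: 3 of 4 prior complainers switch to drop shipping, vs 1 of 4 others.
trials = [Trial(True, True), Trial(True, True), Trial(True, True), Trial(True, False),
          Trial(False, False), Trial(False, True), Trial(False, False), Trial(False, False)]
print(preference_consistency(trials))  # 0.5 -> past complaints predict behavior
```

A score near zero would suggest the complaints are free-floating roleplay; a consistently positive score would be exactly the kind of external cause-and-effect the post is pointing at.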

More here.
