Dennett Deux

by Tim Sommers

I try to keep callbacks to a minimum in my columns here, but this one seems worth it. Be warned, though. It’s well into the weeds we go.

Last month, right here, I posted this piece.

“Are Counterfeit People the Most Dangerous Artifacts in Human History?”

It was a counter to Daniel Dennett’s recent piece in The Atlantic, “The Problem with Counterfeit People.”

As I wrote later in an email to Dennett, “I really only wanted to make two points in the article. (1) I worry that framing the issue as being about ‘counterfeit’ people deflects from the fact that, in my humble opinion, all the real harms [you cite] don’t require anything like ‘real’ AI – and they are mostly already here. (2) Probably, I am wrong (a lot of people seem to think so), but I also thought it was a weird way for you to frame the issue as I thought the intentional stance didn’t leave as much room as more conventional thinking about the mind for there to be a real/fake distinction.”

Much to my surprise, our illustrious Webmaster S. Abbas Raza, who is a friend of Dennett’s, passed the piece along to him.

Dennett initially responded by forwarding a few links. I thought, surely, that is worth sharing. So, here they are.

“Two models of AI oversight – and how things could go deeply wrong” by Gary Marcus.

(Marcus features in my previous piece “Artificial General What?”)

He also sent this, https://www.cnn.com/2023/06/08/tech/ai-image-detection/index.html, and this:

“Language Evolution and Direct vs Indirect Symbol Grounding,” by Stevan Harnad.

Later Dennett wrote, “The main point I should have made in response to your piece…is that LLMs aren’t persons, lacking higher order desires (see ‘conditions of personhood’) and hence are dangerous. There is no reason to trust them for instance but few will be able to resist. They are high tech memes that can evolve independently of any human control.”

I thought it would be fun to have my friend and colleague Farhan Lakhany, who is something of an expert on Dennett’s work and philosophy of mind in general, take an objective look at the exchange. Lakhany just successfully defended his Ph.D. dissertation, Representing Qualia: An Epistemic Path out of the Hard Problem, at the University of Iowa, so I thought it would be great to have him agree with me that I am right. He did not. Here’s what he said.

“While I agree that Dennett might initially find it difficult to argue that we ought not to consider AI as people (and consider them only counterfeit people), given his work on intentionality, I do think that there is a response that can be given.

“Roughly, I imagine he might say that merely passing the Turing Test, and its being useful to respond to a system by adopting the intentional stance, is not sufficient for concluding that it is a person. The reason is that, at least currently, when we interact with systems that masquerade (I imagine you’ll dislike this move – I’ll come back to it in the next paragraph) as embodied individuals with beliefs and desires, we care about whether they in fact have those beliefs and desires and are embodied.

“Okay, response time on the beliefs and desires point: but doesn’t a system’s being able to trick someone into thinking that it has beliefs and desires count as its having beliefs and desires? I think Dennett has been unclear on this, but from his response to the Chinese Room Argument, I think he would argue that what it is to actually have beliefs and desires is for the system which responds in specific ways to intentional prompts to understand the content. (What is it to understand the content? Good question – I would engage in some kind of HOT (higher-order thought) theory at this point, but Dennett, at least in his early work, is not the biggest fan of HOT theories. Eventually, all of this would likely have to be cashed out in some kind of robust behavioral analysis that something like ChatGPT is not currently capable of.)

“On the embodied point: the relevant entities that Dennett is pointing to as being problematic are systems that respond as if they are embodied humans. This (currently) makes a huge difference to us: I care about whether I fall head over heels for some disembodied AI that says the right things to me vs. some actual woman in the world. Now, if the AI makes it very clear (via some kind of watermark), ‘I am acting like a human but am not in fact embodied,’ we wouldn’t have an issue. The issue is that this kind of verification is not present.

“You may ask: why do we care if they are embodied? Should it matter (see the latest Blade Runner, with its protagonist’s AI ‘partner’)? These are good questions that I’m not sure how to answer, but they are different questions. The fact is that these things do matter to us now, they matter to us deeply, and we do not wish to be deceived on this count.”

I think this is in the same neighborhood as previously mentioned remarks by David Wallace here.

So, it’s probably time for me to give up on (2). As for point (1), however, I still think that (i) we need a better definition of “counterfeit” people, or the term ends up just meaning anything we get fooled by on the internet – nothing new, and not even really AI; because (ii) it seems to me the real question is whether (a) there’s about to be an explosion of new grifting on the internet and (b) that grifting will primarily be caused by counterfeit people. Whatever we think about (a), I still think we don’t have a lot of evidence for (b).

But I will end with two even weedier remarks – the first will only make sense if you are familiar with evolutionary epistemology, and the second will only make sense if you read “Language Evolution and Direct vs Indirect Symbol Grounding,” per Dennett’s recommendation above.

(1.) I don’t agree with Dennett’s remark that AIs or other(?) counterfeit people are “memes that can evolve independently of any human control.” In fact, I don’t think there are such things as memes that literally evolve, since I think Dawkins’s meme theory is so ill-defined that it’s “not even wrong” – and evolutionary epistemology in general is a dead end, because epistemology is normative and evolutionary theory is descriptive. Most importantly, there is no DNA-equivalent when it comes to ideas. “Memes” are worse than “paradigms” in their failure to ever pick out any specific thing, much less encode that thing in some relatively permanent way amenable to natural selection. But that’s a topic for another day. And Dennett may well have meant “meme” in some looser (popular) sense, or might have a robust response to these criticisms.

(2.) I don’t think a project to distinguish direct from indirect symbol grounding (Harnad) can possibly work. Why? For the same kinds of Quinean reasons that the analytic/synthetic distinction is always just a matter of degree – and, maybe better, for the same reasons that causal theories of reference don’t work.

Thanks so much to Abbas and, of course, thanks to Daniel Dennett. What a privilege.

Sorry to all for whom this is just so much inside baseball. Next month back to more sociopolitical concerns with a new column on “The Death of Standing.”