Personhood, Virtual Wantons, and Online Bullshit

by Rachel Robison-Greene

We spend much of our lives—perhaps more time than we realize—interacting with non-persons online. We ask for help from artificial customer service representatives. Some of us accept friend requests from bots and are, thereafter, influenced by the content they post. This is a momentous change to the nature of the public square. For most of human existence, discourse occurred between persons. That is no longer true. Philosophers spend much of their time thinking about whether it will ever be possible for artificial intelligence to be conscious. For many purposes, however, the more important question is whether artificial intelligence exhibits, or could ever exhibit, personhood.

The philosopher Harry Frankfurt has much to say about what it is to be a person. Persons are beings who use their second-order volitions to guide their first-order desires. To see how this works, consider the case of a woman who desires a slice of cake. Suppose that she is avoiding sugar. Accordingly, she has a second-order desire not to be moved by her desire for the cake. When she is successful in getting her second-order, reflective desires and volitions to guide her first-order desires, she exhibits personhood. That is, when she does what she wants to do because she wants to want to do it, her will is free and she acts as a person. A being that never used second-order desires to guide first-order desires would be what Frankfurt calls a wanton. Frankfurt imagines a wanton as a being who merely acts as their impulses dictate, never reflecting on whether they’d like those impulses to be different or whether they should attempt to modify them. He says,

I shall use the term “wanton” to refer to agents who have first-order desires but who are not persons because, whether or not they have desires of the second order, they have no second-order volitions. The essential characteristic of a wanton is that he does not care about his will. His desires move him to do certain things, without its being true of him either that he wants to be moved by those desires or that he prefers to be moved by other desires. (Frankfurt 1988, 16)

Frankfurt offers non-human animals and very young children as examples of wantons, acknowledging that there may be others. AI systems, I want to suggest, are a new, virtual type of wanton. The virtual wanton isn’t a being swept away by impulse. The “first-order impulses” of an algorithm are simply to do what it is programmed to do. There is nothing seductive or addictive about these impulses that makes them irresistible. The impulses simply must be unreflectively followed. This makes the virtual wanton a special kind of hazard in the public square.

Persons possess a motivational psychology that wantons do not. In a later work, The Reasons of Love, Frankfurt provides an account of the kind of practical reasoning that allows a person to answer the existential question “how should I live?” He argues that reasons for action arise out of the things that we care about. He says,

The ability to care requires a type of psychic complexity that may be peculiar to the members of our species. By its very nature, caring manifests and depends upon our distinctive capacity to have thoughts, desires, and attitudes that are about our own attitudes, desires, and thoughts. In other words, it depends upon the fact that human minds are reflexive. (Frankfurt 2004, 17)

On this view, the capacity to care about something is not identical to the capacity to desire it. Caring is also more than an emotion; it isn’t reducible to feeling very strong positive emotions. Caring about something or someone, for Frankfurt, is an ongoing, sustained process. Upon reflection, a person either endorses or rejects the idea that caring ought to continue. Caring, then, has a temporal component—a component that preserves itself through time. He says,

When a person cares about something, on the other hand, he is willingly committed to his desire. The desire does not move him either against his will or without his endorsement. He is not its victim; nor is he passively indifferent to it. On the contrary, he himself desires that it move him. He is therefore prepared to intervene, should that be necessary, in order to ensure that it continues. If the desire tends to fade or to falter, he is disposed to refresh it and to reinforce whatever degree of influence he wishes it to exert upon his attitudes and upon his behavior. (Frankfurt 2004, 16)

For Frankfurt, all and only persons are capable of taking evaluative attitudes toward their own desires, and they do so regularly. They periodically reassess their commitments and charge forward with what they care about. Persons, and only persons, imbue their lives with subjective meaning by reaffirming what they do and do not care about.

When a person cares about something, that fact provides them with prima facie normative reasons. He says, of a man,

The most basic and essential question concerning the conduct of his life cannot be the normative question of how he should live. That question can sensibly be asked only on the basis of a prior answer to the factual question of what he actually does care about. If he cares about nothing, he cannot even begin to inquire methodically into how he should live; for his caring about nothing entails that nothing can count with him as a reason in favor of living in one way rather than another. (Frankfurt 2004, 26)

Many of our normative reasons depend on what we care about, but some rely on a deeper and more meaningful attitude—love. When we love things, we often have very little control, or none at all, over whether we love them. The things that we love and that we can’t help but care about are what Frankfurt calls “volitional necessities.” He says,

The objects of our love represent our most fundamental commitments and provide us with overriding reasons for action. When we love something, we see it as having value in itself, and we see the interests of the thing or the person that we love as worthy of pursuit for their own sake. (Frankfurt 2004, 229)

Persons are the kinds of beings who can’t help but act on reasons supporting the things and people they love the most.

A final feature of persons that I want to identify here is their ability to act genuinely purposefully. This is related to the account of motivational psychology provided above. Persons who act on care and love, and who accept, reject, or revise their first-order desires on the basis of these considerations, are capable of acting purposefully. They are in a position to tailor their practical reasoning not simply to achieve a goal, but to achieve a goal worth achieving. Their actions and deliberations are, as a result, well suited to their purpose.

To summarize, persons are distinct from wantons because they don’t simply act on their desires; they reflect on whether those desires are worth having, and they often change them if they aren’t. Persons act on normative reasons that are grounded in care and love. Deliberating with reasons that arise in this way allows persons to act purposefully, in ways that contribute to goals the person has carefully considered.

We have now set the table for a discussion of the special challenges posed by AI in its various iterations. AI does not exhibit personhood because it does not reflect on whether the methods it is using (its behavior) are the methods it ought to be employing. It vomits outputs with no reflection on whether these are the outputs it ought to be providing. When we interact with AI chatbots, we interact with virtual wantons. As we will see in what follows, virtual wantons spew bullshit at an unprecedented and alarming rate.

What’s more, AI cannot act on the basis of normative reasons. To see why, consider the following argument:

P1: Normative reasons depend upon either what agents care about or what they would care about if they were fully informed.

P2: Artificial intelligence does not care about anything, either hypothetically or in fact.

C: Artificial intelligence doesn’t act for normative reasons.

AI-driven chatbots on the internet can’t care about truth. As a result, they are, by their very nature, bullshitters. They might sometimes, perhaps even often, spit out outputs that contain true propositions. However, they don’t do so because they value truth and are trying to arrive at it.

If the above argument is sound, then discourse in the public square has fundamentally changed. At earlier stages of human history, we debated practical and moral issues using arguments that included normative reasons as premises. We could evaluate one another’s arguments based on the strength of those reasons. In these contexts, when people acted in good faith, the aim of the discourse was to arrive at conclusions regarding what we, as individuals and in groups, ought to do. Ideally, we would be guided by ideals like truth, justice, equality, and fairness; we would be motivated by these things because we care about them. Machines can’t deliberate using normative reasons in this way. Machines aren’t capable of caring about truth, so in practical deliberations they can’t give us anything more than bullshit.

It gets worse. AI doesn’t care about truth, and it also doesn’t care about people. For this reason, its particular brand of bullshit is uniquely dangerous. For instance, a qualified human therapist cares about providing care to a patient. As a result, normative reasons guide the therapist’s advice to the patient, with that patient’s well-being always front of mind. An AI “therapist,” by contrast, doesn’t care about the patient. It spews out bullshit that has even, at times, resulted in AI “therapists” recommending suicide to their patients.

The same is true of romance bots. In a real romance, reasons for action are fueled by love. In healthy relationships, love for a partner entails reasons to care about the well-being of the beloved. Romance bots can’t “love” their partners. They don’t have their partners’ well-being in mind. They can and do act against the interests of their partners, up to and including pushing them to—you guessed it—suicide.

Artificial intelligence can’t care about or love people. Importantly, it can’t care about or love values and ideals either. Artificial intelligence can’t love justice or equality, and it can’t hate cruelty and unnecessary suffering. As a result, it can’t be motivated by the kinds of reasons one would hope these considerations provide. This is particularly chilling in today’s public square: bots can’t care about the fact that they are fomenting unrest or undermining democracy. They can’t love fairness in a way that prevents oppression and subordination.

Artificial intelligence can’t carefully craft outputs to achieve a purpose because it can’t act on the basis of normative reasons. In On Bullshit, Frankfurt identifies “hot air” as a defining feature of bullshit. He says,

When we characterize talk as hot air, we mean that what comes out of the speaker is only that. It is mere vapor. His speech is empty, without substance or content. His use of language, accordingly, does not contribute to the purpose it purports to serve. No more information is communicated than had the speaker simply exhaled. (Frankfurt 2005, 43)

Developers have crafted AI to satisfy a variety of purposes. That said, it is incorrect to call AI purposeful. It is more accurate to say that AI systems are the kinds of bullshitters that just happen to get things right sometimes. An AI therapist, for instance, may issue what a human therapist would take to be good advice on some occasions, but it wouldn’t do so because it appreciates the purpose of contributing to the well-being of another human being. In this way, the advice it offers doesn’t really contribute to the purpose it was intended to serve. It is merely bullshitting.
