Deepfake porn is not going away, so we should find a way to live with that

by Thomas R. Wells

In a world in which anyone can create fake sexually explicit images of anyone else, we should not be surprised when it happens, and we should not get especially upset if it happens to us.

Synopsis of my argument

Premise 1: It is now trivially easy to use generative AI image apps to produce realistic-looking deepfake nudes and explicit pornographic videos of anyone without their consent

Premise 2: P1 is, or should be, common knowledge (everyone knows it, and everyone knows that everyone knows it)

Conclusion 1: Therefore, everyone knows that everyone knows that sexually explicit pictures of non-porn stars are almost certainly deepfakes created without that person’s consent

Premise 3: Privacy is the right to be mysterious to others: to determine for yourself what different people know, or think they know, about you.

Premise 4: If the deepfake images circulated of you were considered real by those friends and strangers who might find them then that would be a grave violation of your privacy and it would be reasonable to feel very upset about it

Premise 5: However, by C1, everyone knows that everyone knows that these sexualised images are not real

Conclusion 2: Therefore, it is not reasonable to get upset about finding deepfake nudes of ourselves circulating on the internet. The correct response is more of a shrug.

I. Social norms and institutions need updating for a deepfake world

There are thousands of free or nearly free generative image AI apps, many of them deliberately marketing themselves on their lack of censorship. The ease of using them and the realism of their outputs continue to improve. Unsurprisingly, vast numbers of (mostly) women have already been ‘deepfake porned’ – with sexualised images and videos produced from original real photos. Some of these women have discovered that their deepfakes are circulating on the internet or in chatgroups and have felt deeply violated and ashamed. They worried in particular that other people – friends, co-workers, potential employers, and so on – would assume that the deepfakes were real and pass moral judgements on them for their fictional licentiousness.

This is a textbook example of social institutions and norms being outdated and no longer fit for purpose in the circumstances of the modern world. Believing anything you see, for example. Or following the aphorism, ‘no smoke without fire’. Or conflating prudishness with professionalism to justify severe though informal punishment for anyone whose sexual being is not kept securely locked in their bedroom.

Many people regret this and would like to return the world to the point where the institutions and norms they grew up with still worked. For example by passing laws against making and sharing deepfake porn, or requiring all generative AI models to be porn-proof. I can see how such new laws could help – especially around the sharing of deepfake porn and its use in cyber-bullying and harassment. However, I find it unlikely that a non-totalitarian state could ‘fix’ the problem by such means. As is often the case when new technologies change the world, we are going to have to accommodate ourselves to those changes.

In addition, many of the norms and institutions that people would like to rescue are not especially worth keeping. They are prized for their comfortable familiarity, and the complacency that allows, rather than for their actual value. We should be glad of the opportunity for conscious reflection and revision that the deepfake porn crisis has created, and which will help us to navigate the broader transition to the deepfake world in general (as so often with technology, pornography is pioneering technological developments that will soon appear in other spaces).

The starting point is reconciling us all to the obvious fact that we now live in a deepfake world whether we like it or not. Everyone knows – or should be brought to know – that highly realistic-seeming images and videos can now be entirely made up by computers and cannot be distinguished from real recordings without considerable technical expertise. Hence we can no longer trust that a picture shows what really happened. This is not a novel situation – for the overwhelming bulk of humanity’s existence we have had to get by with easily faked words. (And photos were anyway never the solid, reliable, context-independent evidence we were so willing to take them for: they were always framed.)

With regard to the specific issue of deepfake porn, the character of our acceptance of our new epistemic circumstances is particularly important. It should be ‘common knowledge’ – meaning that everyone knows that everyone knows that everyone knows – that the overwhelmingly most likely explanation for the appearance of sexually explicit images of non-pornstars on the internet is that they are deepfakes. Coworkers do not have to wonder whether their Maddie really does have an S&M fetish and how to incorporate that into their understanding of her character. They can safely assume that the reason they are so surprised by what these pictures reveal is that the pictures are made up.

And of course everyone should also know that everyone knows that being deepfaked is something that can happen to anyone and doesn’t have any wider meaning or implications to be worried about. Employers do not have to worry that the disturbing pictures that turn up when googling the candidate for junior financial analyst represent any kind of potential liability in hiring them, for example by raising questions from clients about the conventional middle-class propriety (‘professionalism’) of their employees.

Thus, once society accepts that deepfake porn is not going away and updates relevant norms and institutions, most of the real harms – to relationships, careers, and so on – that its subjects have had to worry about up to now will disappear.

II. If no one believes that deepfakes are real, then they do not harm our privacy

The modern ‘liberal’ conception of the individual recognises and supports our personal sovereignty: that we each have a life of our own to live, a life that matters just because it is ours. Privacy is essential to maintaining this individuality. By protecting our freedom from having to explain ourselves to others and offer justifications for our every thought and feeling it gives us the space to live our life for ourselves. Privacy is thus the right to be mysterious to others, to choose how we are known and by whom.

Our privacy, and hence our individuality, is endangered by various circumstances of modern life, such as the ‘context collapse’ imposed by first-generation social media and the pervasive corporate and government electronic surveillance enabled by life in a digital world.

The first people unlucky enough to be the subjects of deepfake porn also suffered a violation of their privacy. Images purporting to be of them were created and shared without their consent, and those who discovered them – friends, co-workers, family members, and complete strangers – would have believed that they now knew something important about their subjects and drawn various conclusions from it. Questions were posed. Answers disbelieved. Lives derailed. Though the subjects had done literally nothing to feel guilty about, there was often the devastating sense of shame that accompanies the knowledge that others believe you have.

Fortunately – once its ubiquity is common knowledge – deepfake porn is no longer a threat to our privacy or to the precious individuality that privacy protects. For once everyone knows that it’s all fake, those who come across such images will not be under the impression that they know something about us or have the right to ask follow-up questions about intimate parts of our lives and our inability to properly manage who we share that with. Shame is an inappropriate response when no reasonable person could think you have done anything wrong.

Regrettably, deepfake porn is now a fact of life. But there are advantages to recognising that fact. One can see it as just one of those unfortunate things that can happen to anyone – like stepping in dogshit. Unpleasant but not life-destroying.

***

Thomas Wells teaches philosophy in the Netherlands and blogs at The Philosopher’s Beard
