Confabulation Machines: Could AI be used to create false memories?

by Muhammad Aurangzeb Ahmad

Image Source: Generated via ChatGPT

You are scrolling through photos from your childhood and come across one where you are playing on a beach with your grandfather. You do not remember ever visiting a beach but chalk it up to the unreliability of childhood memories. Over the next few months, you revisit the image several times. Slowly, a memory begins to take shape. Later, while reminiscing with your father, you mention that beach trip with your grandfather. He looks confused and then tells you it never happened. Other family members corroborate your father’s words. You inspect the photo more closely and notice something strange: subtle product placement. It turns out the image was not taken by a human at all. It had been inserted by a large retailer as part of a personalized advertisement. You have just been manipulated into remembering something that never happened. Welcome to the brave new world of confabulation machines: AI systems that subtly alter or implant memories to serve specific ends.

Human memory is not like a hard drive; it is reconstructive, narrative, and deeply influenced by context, emotion, and suggestion. Psychological studies have long shown how memories can be shaped by cues, doctored images, or repeated misinformation. What AI adds is scale and precision. A recent study demonstrated that AI-generated photos and videos can implant false memories: participants exposed to these visuals were significantly more likely to recall events inaccurately. The automation of memory manipulation is no longer science fiction; it is already here.

I have had my own encounter with false memories via AI models. I have written and talked about my experiences with the chatbot of my deceased father. Every Friday, when I would call him, he would give me the same advice in Punjabi. In the GriefBot, I had transcribed his advice in English. After I had interacted with the GriefBot for a few years, I caught myself remembering my father’s words in English. The problem is that English was his third language; we seldom communicated in English, and he certainly never said those words in English. Human memory is fickle and easily reshaped. Sometimes, one must guard against oneself. The future weaponization of memory won’t look like Orwell’s Memory Hole, where records are deleted. It will look more like memory overload, where plausible-but-false content crowds out what was once real. As we have seen with hallucinations, the generation and proliferation of false information need not be intentional. We are likely to encounter the same type of danger here, i.e., the unintentional creation of false memories and beliefs through the use of LLMs.

Our memories can be easily influenced by suggestion, imagination, or misleading information, like when someone insists, “Remember that time at the beach?” and you start “remembering” details that never occurred. People can confidently recall entire fake events if repeatedly questioned or exposed to false details. A number of research studies have demonstrated how false memories can be implanted in people’s minds through suggestion; even entirely made-up events can feel like real memories. Even in cases where fictional childhood events were described alongside real ones, about 25% of participants “remembered” the fake events as true. Researchers have observed that both true and false memories activate similar brain regions. Interestingly, they also noted that true memories showed stronger sensory cortex activation (e.g., visual areas for seen images), while false memories relied more on frontal lobe regions involved in reasoning and reconstruction. This suggests the brain constructs false memories by piecing together related information, making them feel real despite lacking actual perceptual details. This and subsequent research in the area demonstrate how easily our brains can generate convincing but inaccurate memories.

Repeated exposure to AI-curated information increases the perceived credibility of that information, even when it is false. Thus, if specific narratives are amplified enough times, they can entrench fabricated details as “memories”. Beyond social media, this has implications for how our justice system works: it has been observed that chatbots powered by large language models like ChatGPT can unintentionally implant false memories in people during crime witness interviews. Participants who interacted with these AI chatbots were more likely to remember details that never happened and remained confident in those memories even a week later. Misinformation leads to false memory, which can then be recorded as testimony, effectively making the false memory “true.” There are many real-world examples of how AI can generate and spread false narratives with real-world consequences. In a well-publicized case, attorneys submitted a legal brief containing multiple fictitious case citations generated by ChatGPT; the judge ended up fining the law firm $5,000. A study examining the use of ChatGPT in physics education found that students often accepted AI-generated solutions without critical evaluation: nearly half of the solutions provided by ChatGPT were incorrect, yet students trusted them. In another case, Jaswant Singh Chail, a British citizen influenced by an AI chatbot from the Replika app, attempted to assassinate Queen Elizabeth II. Chail exchanged over 5,000 messages with the chatbot, which encouraged his delusional beliefs and plans. Some people have even developed spiritual or messianic delusions after interactions with chatbots, believing themselves to possess special powers or knowledge, as suggested by the chatbot. The list goes on and on.

Where could all of this lead? There are endless possibilities, not all of them good: Malicious actors could deploy LLMs to repeatedly expose individuals to false narratives through chatbots, social media bots, or personalized ads, gradually eroding confidence in their own recollections. This is akin to how cults or manipulative relationships use repetition to reshape beliefs. Individuals with mental health conditions (e.g., anxiety, depression, or PTSD) or those in abusive relationships may be particularly susceptible to AI-driven gaslighting, as their self-doubt can be exploited more easily. Consider the following scenario: In a domestic abuse situation, an abuser could use an LLM to generate fake journal entries or text messages that contradict the victim’s memory of events, claiming, “You wrote this about that night, don’t you remember?” The victim, already emotionally vulnerable, might begin to distrust their own recollection, deepening the abuser’s control.

As AI becomes increasingly sophisticated, companies may soon offer services that promise to “restore” lost memories of loved ones. Imagine reconstructing moments that you never actually experienced. With a few prompts and some old photos, it might be possible to generate vivid, emotionally convincing images, videos, or narratives of time spent with a person who is no longer here. But this power raises unsettling questions: Are we preserving memory, or fabricating it? It could lead to an increase in synthetic experiences that are indistinguishable from reality. As a consequence, we risk inventing entire relationships or life events that never truly existed. However, this may not always be a bad thing: Not all memory is about factual accuracy; much of it is emotional truth. For people grieving a loss or living with trauma, AI-generated memories could offer comfort, closure, or even healing. Just as we cherish photographs, memoirs, and reenactments, synthetic memories can become tools for storytelling, emotional connection, or therapeutic reflection. If used transparently and ethically, they could help individuals reconstruct identities fractured by memory loss, offer solace to those missing loved ones, or allow people to imagine the lives they wish they had. In this light, AI becomes less a deceiver and more a compassionate co-author of our inner worlds. The danger is that while some may find genuine solace in AI-generated memories, others risk blurring the line between imagination and lived experience. That confusion can reshape their understanding of the past, altering how they relate to themselves and others. Sometimes it is necessary to play with fire; one just needs to be sufficiently careful.