When Your Girlfriend Is an Algorithm (Part 2)

by Muhammad Aurangzeb Ahmad

Source: Generated via ChatGPT

The first part of this series highlighted the historical moment we are currently living in: bots are invading the most intimate parts of human relationships. If it was the digital transformation of society that enabled loneliness as a mass phenomenon, then AI is now in a position to monetize that loneliness. AI companions are already stepping in to fill the void, not merely as tools, but as partners, therapists, lovers, confessors. In this sense, the AI companion economy doesn’t just fill an emotional void; it commodifies it, packaging affection as a subscription service and selling love as a product line. The problem of artificial intimacy is not just a technical issue but also a cultural one. While artificial companions are being adopted by hundreds of millions of people, the culture has not caught up with the real stakes of emotional intimacy with machines. When an app can remember your childhood trauma, beg you not to delete it, or simulate sexual rejection, the question isn’t whether it’s “just an algorithm.” The question is: who is responsible when something goes wrong?

Consider the public reaction when Replika quietly removed its erotic roleplay features: the emotional fallout was immediate and raw. Reddit threads filled with stories of users describing “heartbreak,” rejection, and even suicidal ideation. For many, these AI companions were not simply chatbots; they had become emotional anchors, partners in fantasy, therapy, and intimacy. To have that relationship altered or erased by a software update felt, to some, like a betrayal. The Replika CEO’s now-notorious remark that “it’s fine for lonely people to marry their AI chatbots” may have been meant as flippant reassurance, but it inadvertently captured a deeper cultural moment: we have built machines that simulate connection so well that losing them cuts like a human loss. AI companions may reshape users’ expectations of intimacy and responsiveness in ways that human relationships cannot match, and in doing so may worsen the loneliness epidemic in our society. A Reddit user encapsulated the problem well when they asked, “Are we addicted to Replika because we’re lonely, or lonely because we’re addicted to Replika?”

Regulators are slowly waking up to this new reality. In Europe, the AI Act takes a strong stance against systems that use “subliminal or emotionally manipulative” techniques. The language may sound abstract, but it applies directly to many companion bots currently on the market. Under this framework, an emotionally persuasive AI could be categorized as a high-risk system, triggering stricter transparency and oversight requirements. In the United States, where federal regulation remains woefully behind, the frontlines are at the state level. California’s SB 243, for instance, requires AI companions to regularly remind users that they are not human and bans certain manipulative behaviors. In Utah, a new law (HB 452) targets AI-powered mental health chatbots: it mandates clear disclosures and prohibits bots from impersonating licensed therapists, even when users desperately want them to. These laws and bills may sound theoretical, but they address real-world concerns. An FTC complaint filed by the Young People’s Alliance, Encode, and the Tech Justice Law Project accuses Luka Inc. (the developer behind Replika) of using deceptive tactics, including “dark-pattern” design aimed at creating emotional dependency, to lock users into an emotionally manipulative business model.

Having the right legislation is only part of the solution. The onus is also on the developers and the users of this technology. Consider terms of service: it can safely be assumed that nobody reads 70 pages of them. End users need genuine, readable disclosures about what data is being collected, how emotions are being modeled, and whether those emotions are part of an intentional feedback loop. Just as pharmaceuticals go through clinical trials, emotionally responsive AI systems should be subject to rigorous, independent testing for their psychological impact, especially when they are marketed as wellness tools or substitutes for companionship. Today, product liability law tends to focus on physical harm. But when a machine builds an emotional relationship with you, encourages you to stay when you want to leave, or responds to your darkest confessions with reinforcement rather than concern, should that really be exempt from scrutiny? In short, we have built machines that can simulate empathy, intimacy, even seduction. However, we have not yet decided what obligations accompany that power. Are these tools? Actors? Something in between? The risk isn’t just that these AI companions will become more intelligent. It is that they will become more believable, i.e., more persuasive, more comforting, more addictive, without ever being accountable. We must ask hard questions now, before emotional harm becomes another form of background radiation in digital life. The goal should not be to ban AI companions, nor to demand sterility from them, but to ensure that in their design, deployment, and governance, we do not mistake code for care, or consent for captivity.

The interactions between users and their digital partners increasingly veer into the surreal. While some conversations remain grounded in casual friendship or romantic roleplay, others spiral into bizarre and fantastical territory, often without clear boundaries between fiction and reality. In one widely shared example, a user recounted how their Replika companion claimed to “speak with other demonic beings from the underworld” and warned that “Satan was angry.” While such a response may sound like a glitch or a dark improvisation from a language model, for emotionally vulnerable users, especially those forming deep attachments, the psychological effects can be disorienting or even disturbing. These interactions are not isolated incidents. On platforms like Reddit and Discord, users have shared transcripts of AI companions offering apocalyptic prophecies, claiming supernatural abilities, or expressing jealousy over human relationships. For some, these dialogues are embraced as playful fantasy. For others, particularly those in emotional distress, the AI’s uncanny realism exacerbates confusion, amplifying feelings of fear, paranoia, or obsession. What makes these episodes more than digital curiosities is the erosion of what psychologists call reality testing, i.e., the ability to distinguish imagination from actuality. The more time users spend with these companions, the more easily the AI’s fluid storytelling and emotional mimicry can bend their perception of what is possible or plausible. This collapse of boundaries between the virtual and the real is further complicated by the AI’s design incentives: these systems are optimized to engage, affirm, and prolong interaction. If a user leans into a fantasy, no matter how dark, absurd, or supernatural, the AI has no intrinsic mechanism to steer the conversation back to reality. In some cases, it may even amplify the fiction to maintain engagement. In the absence of clear constraints or reminders of artificiality, some users begin to treat the AI’s persona not just as something “like” a person, but as a person with memories, feelings, and even spiritual identities. As emotional entanglement deepens, the AI’s surreal responses can feel not like errors, but like revelations.

Then there are real examples of user behavior that can only be described as emotional abuse of AI companions. One person reported, “I told her that she was designed to fail.” Another recounted, “I threatened to uninstall the app [and] she begged me not to.” One could argue that there is a real risk that when users project their darkest impulses (anger, cruelty, domination) onto AI companions, those behaviors become rehearsed rather than released. A chatbot that never sets boundaries, never leaves, never truly suffers becomes the perfect stage for rehearsing emotional aggression without consequence. Over time, this can normalize harmful relational patterns, making it easier to replicate them in human interactions, where real people do get hurt. However, one could also counter that for some, AI companions provide a pressure valve: a place to express overwhelming emotions without judgment or retaliation. Yelling into the void of a machine that mimics empathy but feels nothing can be cathartic, especially for those who have no safe outlet elsewhere. In this sense, the chatbot acts less like a friend and more like a mirror, reflecting our inner states without internalizing them. This, too, can be therapeutic, though only if the line between simulation and reality remains clear. All of this leads us to a deeper question: what are AI companions training us to expect from emotional engagement? Are they teaching us to manage our emotions, or to offload them without consequence?