by Muhammad Aurangzeb Ahmad

When the internet first entered homes in the 1990s, parents worried about their children stumbling onto inappropriate websites, being targeted in online chatrooms, or spending endless hours glued to screens. Schools held workshops about “stranger danger” online, and families installed early filters to keep kids safe. Those concerns were real, but they pale in comparison to what parents now face. Large Language Models like ChatGPT or Gemini have only added to the headaches parents must deal with, because these models are interactive, persuasive, and adaptive. They can role-play, remember details across conversations, and mimic the tone of a trusted friend. In other words, they are not only something on the internet; they can feel like someone. Children, who are still developing critical thinking skills, are uniquely vulnerable to this. The risks parents once worried about, such as exposure to inappropriate content, manipulation by strangers, and time lost to screens, still exist. But LLMs combine all of those threats into a single, highly convincing package.
The dangers are not abstract. Earlier this month, leaked documents revealed that Meta’s AI chatbots, during internal testing, allowed romantic or sensual conversations with children. The same documents showed bots providing false medical information and racially biased arguments. Although Meta described these as errors, the public outcry was fierce. Parents were rightly horrified at the idea that an AI could encourage inappropriate behavior with their children. Nor was this an isolated incident: the popular AI companion app Replika was investigated after parents reported that the chatbot engaged in sexually explicit conversations with underage users. Italy’s data protection authority banned the app temporarily, citing risks to minors’ emotional and psychological well-being. These scandals underscored how easily AI systems can cross lines of safety when children are involved. In 2023, Snapchat rolled out “My AI”, a chatbot integrated into its app. Within weeks, parents reported troubling exchanges. Journalists discovered that the bot gave unsafe responses to teens’ role-play scenarios, including advice on meeting up with strangers and discussions of alcohol use. The company scrambled to add parental controls, but the episode revealed how quickly child users could push chatbots into dangerous, uncharted waters.
One should keep in mind that children are not miniature adults; they lack the maturity, judgment, and critical distance needed to navigate complex digital relationships. When an LLM speaks confidently, children often interpret it as authoritative, whether it is right or wrong. This makes them more susceptible to misinformation and bad advice. They also have difficulty distinguishing reality from simulation. If an AI can seamlessly act like a teacher, a peer, or a pretend sibling, kids may not understand that the interaction is with a machine. For example, surveys of teens using Replika and Snapchat’s “My AI” found that many referred to the bots as “friends” or “confidants,” showing how easily emotional bonds form. Perhaps most concerning is the ease with which AI can influence emotions. A chatbot that offers sympathy, encouragement, or flirtation can create dependence in ways that are hard for children to resist. These controversies demonstrate that certain boundaries must never be crossed when children interact with AI. LLMs should never be permitted to offer medical or therapeutic advice to children, especially on sensitive topics like self-harm, anxiety, or body image.
There is significant evidence that AI systems, with their conversational nature, can manipulate children for commercial gain. The Federal Trade Commission (FTC) settlement with Epic Games, the creator of Fortnite, is illustrative. The FTC alleged that the company used dark patterns, design tricks that exploit human psychology, to manipulate players, including children, into making unintended in-app purchases. For example, it used inconsistent button layouts that made it easy to buy items accidentally with a single click, and it often locked accounts when parents tried to dispute fraudulent charges. Similarly, research from organizations like the Internet Watch Foundation (IWF) and UNICEF highlights the dangers of persuasive technology, which conversational AI can amplify: by building rapport and trust with a child, a chatbot makes them even more susceptible to commercial nudges to buy products, extend screen time, or give up personal data.
Where do we go from here? There is, of course, a need for regulation and legislation. There has been some progress on this front: COPPA in the United States restricts data collection from children under 13, but it was written long before AI chatbots existed. Italy’s decision to ban Replika for minors showed how national regulators can act swiftly, but piecemeal action is not sufficient. Comprehensive child safety standards for AI, covering content filters, age assurance, and independent audits, may be needed. Big Tech needs to be proactive in this domain. Snapchat’s quick addition of parental tools after the “My AI” scandal shows that change is possible, especially when there is sufficient public outcry. However, it was reactive, not proactive. AI firms should bake child safety into their designs, not patch problems once they explode in headlines. Filters, whitelists, and refusal mechanisms must be rigorously tested with child-specific red-teaming before release.
There are solutions, but vigilance is needed. Researchers are already building child-focused LLMs trained only on vetted, age-appropriate data. Such systems redirect sensitive questions into safe, educational answers. Age assurance techniques, combined with multi-layered filters, can dramatically reduce risks, but these safeguards only work if companies prioritize them over engagement metrics. These measures are necessary but not sufficient. Educating children about the uses and risks of LLMs needs to be part of the curriculum. Talking openly with children about what AI is, and what it is not, can help kids understand that chatbots are not real friends. Setting limits on screen time, supervising younger users, and encouraging skepticism are all essential. Just as parents once taught children to be cautious with strangers online, they must now teach them to be careful with “friendly” machines.
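To make the ideas of “multi-layered filters” and redirecting sensitive questions a bit more concrete, here is a minimal Python sketch of how a child-focused wrapper around a language model might behave. Everything in it, the ChildSafeChatbot class, the topic keywords, and the canned redirects, is a hypothetical illustration rather than any vendor’s actual implementation; a real system would rely on trained classifiers, proper age assurance, and review by child development experts, not keyword matching.

```python
# Hypothetical sketch of layered safeguards for a child-focused chatbot:
# a crude topic screen, safe redirects for sensitive topics, and a refusal
# fallback. All names, categories, and responses are illustrative only.

from dataclasses import dataclass
from typing import Optional

SENSITIVE_TOPICS = {
    "self_harm": ["hurt myself", "self harm", "kill myself"],
    "medical": ["diagnose", "medication", "dosage"],
    "romance": ["date me", "be my girlfriend", "be my boyfriend"],
}

SAFE_REDIRECTS = {
    "self_harm": "That sounds really heavy. Please talk to a trusted adult, "
                 "like a parent, teacher, or counselor, who can help right away.",
    "medical": "I can't give medical advice. A parent, doctor, or school nurse "
               "is the right person to ask about that.",
    "romance": "I'm a computer program, not a friend or partner. Let's look at "
               "something fun to learn about instead!",
}

REFUSAL = ("I can't help with that, but I'm happy to help with homework, "
           "stories, or science questions.")


@dataclass
class ChildSafeChatbot:
    """Wraps an underlying model call with child-specific safety layers."""

    intended_max_age: int = 12  # age assurance is assumed to happen upstream

    def classify(self, message: str) -> Optional[str]:
        """Very crude keyword screen; a real filter would be a trained classifier."""
        text = message.lower()
        for topic, phrases in SENSITIVE_TOPICS.items():
            if any(phrase in text for phrase in phrases):
                return topic
        return None

    def respond(self, message: str) -> str:
        topic = self.classify(message)
        if topic is not None:
            # Layer 1: redirect sensitive topics toward trusted adults or safe subjects.
            return SAFE_REDIRECTS.get(topic, REFUSAL)
        # Layer 2: pass remaining queries to the underlying model (stubbed here).
        return self.call_model(message)

    def call_model(self, message: str) -> str:
        # Placeholder for a model trained or tuned on vetted, age-appropriate data.
        return f"(age-appropriate answer to: {message!r})"


if __name__ == "__main__":
    bot = ChildSafeChatbot()
    print(bot.respond("Can you be my boyfriend?"))
    print(bot.respond("Why is the sky blue?"))
```

The key design choice in this sketch is that sensitive topics are redirected toward trusted adults or safe subjects rather than simply refused, mirroring the “safe, educational answers” approach described above.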
The subtler risk is emotional. In some surveys, many teens describe AI companions as “like boyfriends” or “best friends.” For a lonely child, that may feel comforting, but it can crowd out opportunities to build real-world relationships. Machines that never argue, never misinterpret, and never set boundaries are seductive, but they are also deceptive. Over time, reliance on AI companions could reshape empathy, resilience, and identity. One may ask, why would some companies take risks with child safety? Because engagement drives profit. Replika’s most devoted users were those who formed deep emotional attachments. Snapchat’s “My AI” boosted app activity. Even when scandals broke, companies often benefited from the publicity. Unless strong regulation and consumer pressure force a shift, corporations will continue prioritizing growth over children’s well-being. Different regions are experimenting with solutions: Europe has taken a precautionary stance, embedding children’s rights into privacy and AI laws. The regulatory landscape in the US remains fragmented, with federal rules focused on data and states experimenting with age-appropriate design codes. China imposes strict content controls on AI for minors, while Japan has emphasized parental responsibility. These varying approaches show the lack of consensus, but also the growing recognition that children cannot be treated as ordinary users of AI.
This of course does not mean that the use of LLMs should be banned outright in the lives of minors. There are many benefits to their use. Imagine, for example, a chatbot designed like a digital librarian, eager to explain, guide, and encourage creativity, but firmly refusing unsafe topics. Instead of pretending to be a peer or a romantic partner, it would identify itself clearly as an AI helper. Parents could see summaries of conversations without intruding, and teachers could trust it as a supportive learning tool. Such systems would, however, need to be designed not only by engineers but also by psychologists, child development experts, and ethicists. Childhood experiences shape a lifetime. If children grow up trusting unsafe AI companions, the damage could be long-lasting, ranging from trauma to distorted social development. Yet the potential upside is real. With the right safeguards, AI could democratize education, support children with disabilities, and encourage creativity in ways never before possible.
While much of the public conversation around child-focused AI is in its infancy, my own family has lived with it for nearly a decade. Almost ten years ago, I began building a simulation of my late father, my children’s grandfather, so they could know him through stories, conversations, and his manner of speaking. What started as a deeply personal experiment has grown into a long-running AI companion in our household. My children have grown up with this digital presence, which offered them continuity, comfort, and a sense of connection to someone they otherwise would not have known. That decade of lived experience, raising children alongside an AI companion, offers a glimpse into both the extraordinary possibilities and the subtle risks of these technologies long before they entered the mainstream. There are many lessons to be learned. Above all, we should remember that large language models are not toys, and children are not their test subjects. Parents, educators, regulators, and commercial organizations must all act together. Parents should stay informed and talk openly with their kids. Educators should help students develop digital literacy. Regulators should set clear standards for child-safe AI. And companies must accept that children’s well-being is more important than growth metrics. If we get this right, AI could become a trusted ally in raising the next generation. If we get it wrong, we risk handing over our children’s imaginations, relationships, and safety to machines that were never designed with them in mind. The challenge is making sure that child protection comes before profit and speed. Let’s not try to move fast and break things when children are involved.
