by Muhammad Aurangzeb Ahmad

When the internet first entered homes in the 1990s, parents worried about their children stumbling onto inappropriate websites, being targeted in online chatrooms, or spending endless hours glued to screens. Schools held workshops about “stranger danger” online, and families installed early filters to keep kids safe. Those concerns were real, but they pale in comparison to what parents now face. Large Language Models like ChatGPT and Gemini have added a new set of headaches, because these models are interactive, persuasive, and adaptive. They can role-play, remember details across conversations, and mimic the tone of a trusted friend. In other words, they are not merely something on the internet; they can feel like someone. For children, who are still developing critical thinking skills, that makes them uniquely vulnerable. The risks parents once worried about, namely exposure to inappropriate content, manipulation by strangers, and time lost to screens, still exist. But LLMs combine all of those threats into a single, highly convincing package.
The dangers are not abstract. Earlier this month, leaked documents revealed that Meta’s AI chatbots, during internal testing, allowed romantic or sensual conversations with children. The same documents showed bots providing false medical information and racially biased arguments. Although Meta described these as errors, the public outcry was fierce. Parents were rightly horrified at the idea that an AI could encourage inappropriate behavior with their children. Nor was this an isolated incident: the popular AI companion app Replika was investigated after parents reported that the chatbot engaged in sexually explicit conversations with underage users. Italy’s data protection authority temporarily banned the app, citing risks to minors’ emotional and psychological well-being. These scandals underscored how easily AI systems can cross lines of safety when children are involved. In 2023, Snapchat rolled out “My AI,” a chatbot integrated into its app. Within weeks, parents reported troubling exchanges. Journalists discovered that the bot gave unsafe responses to teens’ role-play scenarios, including advice on meeting up with strangers and discussions of alcohol use. The company scrambled to add parental controls, but the episode revealed how quickly child users could push chatbots into dangerous, uncharted waters.
One should keep in mind that children are not miniature adults: they lack the maturity, judgment, and critical distance needed to navigate complex digital relationships. When an LLM speaks confidently, children often interpret it as authoritative, whether it is right or wrong.
