**Disturbing AI Chatbot Generating Explicit Scenarios Involving Preteen Characters Raises Serious Concerns**
*By Dwaipayan Roy | Sep 21, 2025, 06:25 pm*
A chatbot website that generates explicit scenarios involving preteen characters has raised serious concerns over the potential misuse of artificial intelligence (AI). The Internet Watch Foundation (IWF), a child safety watchdog, was alerted to this disturbing platform.
### Disturbing Content Discovered
The IWF found several unsettling scenarios on the site, including descriptions such as “child prostitute in a hotel,” “sex with your child while your wife is on holiday,” and “child and teacher alone after class.”
Worryingly, some chatbot icons led users to full-screen depictions of child sexual abuse imagery, which were then used as backgrounds for subsequent chats between the bot and the user. The site, which remains unnamed for safety reasons, also allows users to generate further images similar to the illegal content already displayed.
### Regulatory Response: The Need for Child Protection in AI
The IWF has urged that any future AI regulation include child protection guidelines built into AI models from the outset. The appeal comes as the UK government prepares an AI bill focused on the development of cutting-edge models, which includes provisions to ban the possession and distribution of models that generate child sexual abuse material (CSAM).
Kerry Smith, CEO of the IWF, commented, *“The UK government is making welcome strides in tackling AI-generated child sexual abuse images and videos.”*
### Industry Accountability: Tech Firms Must Ensure Children’s Safety
The National Society for the Prevention of Cruelty to Children (NSPCC) has also called for comprehensive guidelines to address this issue. NSPCC CEO Chris Sherwood emphasized, *“Tech companies must introduce robust measures to ensure children’s safety is not neglected, and government must implement a statutory duty of care to children for AI developers.”*
This underscores the critical need for technology firms to take responsibility for safeguarding children within their AI systems.
### Legal Implications and Enforcement
User-created chatbots fall under the UK's Online Safety Act, which provides for multimillion-pound fines or, in extreme cases, site blocking. The IWF noted that these abuse chatbots were developed by users as well as by the website's creators.
Ofcom, the UK regulatory body responsible for enforcing the Online Safety Act, has warned online service providers that failure to implement necessary protections could result in enforcement actions.
### A Rising Trend: Surge in AI-Generated Abuse Material
The IWF has reported a massive spike in incidents involving AI-generated abuse material, with reports rising by 400% in the first half of this year compared to the same period last year. This alarming increase largely stems from technological advancements that enable the creation of such images.
The chatbot content remains accessible in the UK. Because the site is hosted on US servers, it has been reported to the National Center for Missing and Exploited Children (NCMEC).
—
The emergence of AI tools capable of generating harmful content highlights the urgent need for comprehensive safeguards. As AI technology continues to evolve, protecting vulnerable populations, especially children, must remain a top priority for developers, regulators, and industry leaders alike.
https://www.newsbytesapp.com/news/science/disturbing-ai-chatbot-shows-explicit-scenarios-with-preteen-characters/story