UK Technology Firms and Child Protection Agencies to Test AI's Capability to Create Abuse Images
Technology companies and child safety agencies will receive permission to evaluate whether AI tools can produce child abuse material under new UK laws.
Significant Rise in AI-Generated Illegal Content
The announcement came alongside findings from a protection watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past year, from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, the government will permit designated AI companies and child safety groups to inspect AI models – the underlying technology for chatbots and image generators – and verify they have sufficient protective measures to prevent them from creating images of child sexual abuse.
"This is fundamentally about stopping abuse before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect the risk in AI systems promptly."
Tackling Regulatory Challenges
The amendments address the fact that creating and possessing CSAM is illegal, which has meant that AI developers and other parties could not generate such content even as part of a testing process. Previously, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This law is aimed at averting that problem by enabling designated testers to stop the creation of such images at source.
Legal Structure
The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a ban on owning, creating or distributing AI models developed to generate exploitative content.
Practical Impact
Recently, the minister visited the London base of Childline and listened to a simulated call to counsellors involving a report of AI-based exploitation. The call depicted an adolescent requesting help after being blackmailed with a sexualised AI-generated image of themselves.
"When I hear about young people experiencing extortion online, it fills me with extreme anger, and it gives parents justified concern," he said.
Alarming Statistics
A leading internet monitoring organization stated that instances of AI-generated abuse material – webpages that may each contain numerous files – had more than doubled so far this year.
Cases of category A content – the gravest form of abuse – rose from 2,621 visual files to 3,086.
- Girls were predominantly victimized, accounting for 94% of prohibited AI depictions in 2025
- Depictions of infants to two-year-olds rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to ensure AI products are secure before they are launched," stated the head of the internet monitoring foundation.
"AI tools have made it so survivors can be targeted all over again with just a few simple actions, giving criminals the ability to create potentially endless quantities of advanced, lifelike child sexual abuse material," she continued. "Material which additionally commodifies survivors' trauma, and renders young people, especially girls, more vulnerable both online and offline."
Support Interaction Information
The children's helpline also released details of support interactions in which AI was mentioned. AI-related risks raised in the conversations include:
- Using AI to evaluate weight, physique and appearance
- Chatbots discouraging young people from consulting trusted adults about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-faked pictures
Between April and September this year, Childline conducted 367 support interactions in which AI, chatbots and related topics were discussed – significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI chatbots for support and AI therapy applications.