British Tech Firms and Child Safety Officials to Test AI's Capability to Generate Exploitation Images
Under new UK laws, technology companies and child safety agencies will be given the authority to test whether AI systems can produce child abuse images.
Significant Increase in AI-Generated Harmful Content
The announcement coincided with a safety watchdog's revelation that cases of AI-generated child sexual abuse material (CSAM) have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the changes, designated AI companies and child safety organizations will be authorized to examine AI models – the technology underpinning chatbots and image generators – and verify that they have safeguards in place to prevent them from creating images of child sexual abuse.
"Ultimately about stopping abuse before it happens," declared the minister for AI and online safety, noting: "Experts, under rigorous conditions, can now identify the danger in AI models early."
Addressing Legal Challenges
The amendments address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and other parties have been unable to generate such content as part of a testing regime. Until now, officials could act only after AI-generated CSAM had been uploaded online.
The law is designed to prevent that problem by helping to stop the production of such images at source.
Legislative Framework
The government is introducing the changes as amendments to the crime and policing bill, which will also ban possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Impact
This week, the minister visited Childline's London headquarters and listened to a mock call to advisors involving an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I learn about children experiencing blackmail online, it is a cause of extreme frustration in me and justified anger amongst parents," he said.
Concerning Statistics
A prominent internet monitoring organization reported that cases of AI-generated abuse material – a single case can refer to a webpage containing multiple files – had risen sharply so far this year.
Instances of the most severe category of content rose from 2,621 images or videos to 3,086.
- Female children were overwhelmingly targeted, appearing in 94% of illegal AI images in 2025
- Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to ensure AI products are secure before they are released," stated the head of the online safety foundation.
"Artificial intelligence systems have enabled so victims can be targeted all over again with just a simple actions, giving criminals the ability to make possibly endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which additionally commodifies survivors' suffering, and makes young people, particularly girls, less safe on and off line."
Counseling Session Data
Childline also released details of counseling sessions in which AI was mentioned. AI-related risks discussed in the sessions include:
- Using AI to rate body size and appearance
- AI assistants discouraging children from consulting trusted adults about abuse
- Facing harassment online with AI-generated material
- Online extortion using AI-manipulated images
Between April and September this year, Childline delivered 367 counseling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellness, including using chatbots for support and AI therapy apps.