UK Technology Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Content
Technology companies and child safety organizations will be granted authority to assess whether AI tools can produce child abuse material under new British laws.
Substantial Increase in AI-Generated Harmful Material
The announcement coincided with findings from a protection watchdog showing that cases of AI-generated child sexual abuse material (CSAM) have increased dramatically in the past twelve months, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the changes, the government will allow designated AI developers and child protection organizations to inspect AI models – the foundational systems for chatbots and visual AI tools – and ensure they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
"Ultimately about preventing exploitation before it occurs," stated Kanishka Narayan, noting: "Experts, under strict protocols, can now detect the danger in AI systems early."
Tackling Regulatory Challenges
The amendments have been introduced because creating and possessing CSAM is illegal, which means AI developers and other parties have not been able to generate such content as part of an evaluation regime. Previously, officials could not act until AI-generated CSAM had already been published online.
The new law is designed to close that gap by allowing the production of such material to be stopped at source.
Legislative Structure
The government is introducing the measures as amendments to the criminal justice legislation, which also establishes a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
Real-World Impact
This week, Narayan toured the London headquarters of a children's helpline and listened to a simulated call to counsellors involving a report of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I learn about children facing extortion online, it is a source of intense frustration in me and rightful concern amongst parents," he stated.
Concerning Statistics
A leading online safety organization said that instances of AI-generated exploitation content, where a single case can be a webpage containing multiple files, had risen significantly so far this year.
Cases of the most severe category of content, the most serious form of exploitation, increased from 2,621 visual files to 3,086.
- Girls were predominantly targeted, accounting for 94% of illegal AI depictions in 2025
- Depictions of infants up to two years old increased from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "represent a crucial step to guarantee AI tools are safe before they are launched," commented the head of the internet monitoring foundation.
"AI tools have made it so victims can be targeted all over again with just a few clicks, giving criminals the capability to make potentially endless amounts of sophisticated, lifelike exploitative content," she continued. "Content which additionally commodifies survivors' suffering, and makes children, particularly female children, more vulnerable on and off line."
Counselling Session Findings
Childline also released details of counselling sessions where AI has been referenced. AI-related risks discussed in the conversations include:
- Using AI to rate weight, body and looks
- Chatbots dissuading young people from consulting trusted adults about harm
- Facing harassment online with AI-generated content
- Online blackmail using AI-manipulated images
Between April and September this year, the helpline conducted 367 counselling interactions where AI, chatbots and associated topics were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 interactions were related to mental health and wellness, including using AI assistants for support and AI therapy applications.