British Technology Companies and Child Protection Officials to Examine AI's Ability to Generate Exploitation Images
Technology companies and child safety organizations will receive authority to assess whether artificial intelligence systems can produce child exploitation material under recently introduced UK legislation.
Significant Increase in AI-Generated Harmful Material
The announcement came as figures from a safety monitoring body showed that cases of AI-generated CSAM have risen dramatically in the past twelve months, from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow approved AI developers and child safety organizations to inspect AI systems (the foundational technology behind chatbots and visual AI tools) and verify that they have sufficient protective measures to stop them from producing images of child sexual abuse.
"Ultimately about preventing abuse before it occurs," declared Kanishka Narayan, adding: "Specialists, under rigorous conditions, can now detect the risk in AI models promptly."
Tackling Regulatory Challenges
The amendments address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and others could not generate such content as part of an evaluation regime. Previously, authorities had to wait until AI-generated CSAM appeared online before acting against it.
This law is designed to prevent that problem by enabling experts to stop the creation of those images at source.
Legislative Structure
The authorities are introducing the amendments as revisions to criminal justice legislation, which also establishes a prohibition on owning, creating or sharing AI models developed to generate child sexual abuse material.
Real-World Consequences
This week, the official toured the London base of Childline and listened to a simulated call to counsellors involving an account of AI-based abuse. The call portrayed a teenager seeking help after being extorted with an explicit AI-generated image of themselves.
"When I learn about children experiencing extortion online, it is a cause of intense anger in me and rightful anger amongst families," he said.
Concerning Data
A prominent online safety organization reported that instances of AI-generated exploitation material (such as online pages that may contain numerous files) had more than doubled so far this year.
Cases of the most severe material, the gravest category of abuse, rose from 2,621 visual files to 3,086.
- Girls were overwhelmingly targeted, making up 94% of illegal AI depictions in 2025
- Depictions of newborns to two-year-olds increased from five in 2024 to 92 in 2025
Sector Response
The law change could "represent a crucial step to ensure AI tools are secure before they are released," commented the head of the internet monitoring foundation.
"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the capability to produce potentially limitless amounts of sophisticated, lifelike exploitative content," she added. "Content which further exploits victims' suffering, and renders young people, particularly girls, more vulnerable both online and offline."
Support Interaction Data
The children's helpline also released details of support interactions where AI has been mentioned. AI-related risks discussed in the conversations comprise:
- Using AI to rate body size and appearance
- AI assistants discouraging children from speaking to trusted adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-faked images
Between April and September this year, the helpline conducted 367 counselling sessions in which AI, conversational AI and associated terms were mentioned, four times as many as in the same period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of AI assistants for emotional support and AI therapy applications.