British Tech Companies and Child Safety Officials to Examine AI's Capability to Create Abuse Content

Technology companies and child protection agencies will be granted authority to evaluate whether artificial intelligence tools can generate child abuse material under new British laws.

Significant Rise in AI-Generated Illegal Material

The announcement coincided with findings from a safety watchdog showing that reports of AI-generated child sexual abuse material have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Framework

Under the changes, the government will allow approved AI companies and child safety organizations to inspect AI systems – the underlying technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to prevent them from creating depictions of child exploitation.

"This is ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under rigorous conditions, can now detect the risk in AI systems early."

Tackling Legal Challenges

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI creators and others could not generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before acting on it.

This legislation is aimed at averting that issue by helping to halt the production of those materials at their origin.

Legal Framework

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also implementing a ban on owning, creating or distributing AI models designed to generate exploitative content.

Practical Consequences

This week, the minister toured the London headquarters of Childline and heard a simulated call to advisers featuring a report of AI-based exploitation. The call depicted an adolescent seeking help after facing extortion over an explicit AI-generated image of himself.

"When I learn about children experiencing blackmail online, it causes intense frustration in me and justified concern among families," he stated.

Concerning Data

A leading online safety organization stated that instances of AI-generated abuse material – such as webpages that may include multiple files – had more than doubled so far this year.

Instances of the most severe category of material – the most serious form of abuse – rose from 2,621 images or videos to 3,086.

  • Female children were predominantly victimized, accounting for 94% of prohibited AI images in 2025
  • Depictions of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The legislative amendment could "represent a vital step to ensure AI products are safe before they are released," commented the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for survivors to be targeted all over again with just a few clicks, giving criminals the capability to produce potentially limitless amounts of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits victims' suffering, and makes children, particularly girls, more vulnerable on and offline."

Support Session Data

Childline also released information on support sessions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Employing AI to evaluate body size, physique and appearance
  • Chatbots discouraging children from consulting trusted adults about harm
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked images

Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and associated topics were discussed, four times as many as in the same period last year.

Half of the references to AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.
