February 05, 2026 (THEWILL) — The United Nations Children’s Fund (UNICEF) has raised the alarm over the rapid proliferation of sexualized images of children generated through artificial intelligence (AI), warning that weak legal frameworks are failing to curb a fast-growing threat.
In a statement released on Wednesday, February 4, 2026, UNICEF disclosed that at least 1.2 million children reported having their images manipulated into sexually explicit deepfakes within the past year.
The findings stem from a study conducted across 11 countries in collaboration with the International Criminal Police Organization (INTERPOL) and ECPAT, a global network working to end the sexual exploitation of children.
The report highlighted the rise of nudification tools, where AI is used to digitally remove or alter clothing to create fabricated nude images of children.
In some surveyed countries, the phenomenon affects as many as one in 25 children—the equivalent of one child in a typical classroom.
“We must be clear. Sexualised images of children generated or manipulated using AI tools are child sexual abuse materials. Deepfake abuse is abuse, and there is nothing fake about the harm it causes. When a child’s image or identity is used, that child is directly victimised,” UNICEF stated.
The agency noted that children are acutely aware of these dangers, with up to two-thirds of respondents in certain countries expressing worry that AI could be used to target them.
UNICEF warned that AI-generated child sexual abuse material (CSAM) normalizes exploitation, fuels demand for abusive content, and presents significant challenges for law enforcement agencies attempting to identify and protect real victims.
While welcoming efforts by some AI developers who have adopted safety-by-design approaches, UNICEF described the overall industry response as uneven.
It specifically warned of the risks posed when generative AI tools are embedded directly into social media platforms, allowing manipulated images to spread rapidly and widely.
To address the escalation, the agency issued its Guidance on AI and Children 3.0, which outlines recommendations for policies that uphold children’s rights in the digital age.
Key calls for action include urging governments to expand legal definitions of CSAM to explicitly include AI-generated content and criminalize its creation, possession, and distribution.
UNICEF also called on AI developers to embed robust guardrails directly into technologies during the development phase, and urged digital companies to move beyond merely removing content after it is reported.
The agency emphasized that companies must invest in proactive detection technologies to prevent circulation entirely.
“Digital companies should strengthen content moderation through increased investment in detection technologies and proactive prevention measures,” the agency said, stressing that coordinated global action is essential to protect children from unchecked AI misuse.