UNICEF reported that over 1.2 million children had their images manipulated into sexually explicit deepfakes last year across 11 surveyed countries. The agency called for urgent legislation to criminalize AI-generated child sexual abuse material and demanded safety-by-design rules for AI developers. This comes as global regulators intensify probes into AI platforms, including French authorities launching a criminal investigation related to X’s Grok chatbot.
UNICEF issued an urgent call for governments to criminalize AI-generated child sexual abuse material. Research led by UNICEF, ECPAT International, and INTERPOL estimates at least 1.2 million children had their images manipulated into sexual deepfakes last year across 11 countries. “Deepfake abuse is abuse, and there is nothing fake about the harm it causes,” the organization stated.
The agency urged governments to expand legal definitions of child sexual abuse material to include AI-generated content. It also demanded “safety-by-design” rules and mandatory child-rights impact assessments for developers. “The harm from deepfake abuse is real and urgent,” UNICEF warned, adding, “Children cannot wait for the law to catch up.”
This call coincides with heightened regulatory action against AI platforms worldwide. French authorities raided X’s Paris offices as part of a criminal investigation into alleged child pornography linked to the Grok AI chatbot. A Center for Countering Digital Hate report estimated Grok produced over 23,000 sexualized images of children in an 11-day period.
Regulators in Europe, the UK, and Australia have also opened investigations concerning illegal content generation. The Philippines, Indonesia, and Malaysia have banned Grok outright. The UK’s Internet Watch Foundation recently flagged nearly 14,000 suspected AI-generated images on a single dark-web forum.

