UNICEF finds AI sexual violence against children on the rise

In a statement published on Wednesday titled “Deepfake Abuse is Abuse,” UNICEF noted that sexual violence and abuse against children have increased with the rapid development of generative AI.

Magh 23, 2082 (Bikram Sambat)

Sajana Baral


What you should know

The United Nations Children's Fund (UNICEF) has called on governments around the world to criminalize the creation, possession and distribution of content that sexually depicts children using artificial intelligence (AI).

In a statement published on Wednesday titled ‘Deepfake Abuse is Abuse’, UNICEF said that sexual violence and abuse against children have increased with the rapid development of generative AI, creating new risks to the protection of children’s rights.

UNICEF has urged immediate action, saying that such content is spreading rapidly through the misuse of AI and that existing laws are failing to control it. A study by UNICEF, Interpol and the child rights organization ECPAT documents the experiences of at least 1.2 million children across 11 countries who have been victims of sexual deepfakes.

The organization has stated that even though an AI-generated image is not real, the harm it causes is. Children targeted by such content suffer deep psychological trauma, social stigma and long-term stress, making the impact real and serious.

Regulatory bodies and law enforcement agencies have also recently reported that AI-generated child sexual abuse content is increasing rapidly. According to a Bloomberg analysis citing internet safety organizations, webpages containing such ‘synthetic’ content increased by about 400 percent in the first six months of 2025.

According to experts, generative AI platforms and apps have lowered the technical barriers to creating such illegal content. As a result, even ordinary internet users can produce and distribute material that looks lifelike.

AI platforms themselves are also increasingly under scrutiny. Authorities in various European countries have expressed concern that many people have created child sexual abuse material using Grok, an AI chatbot developed by Elon Musk’s company xAI and integrated into the social network X. According to Reuters, French ministers have filed legal complaints over Grok-related content, and regulatory pressure on Musk’s companies has increased.

Various regulators, including Australia’s eSafety Commissioner, have interpreted the Grok controversy as an example of AI developers failing to assess risks and prevent abuse. The European Union, the UK and Australia are investigating whether child protection and online safety laws have been breached. While companies such as Meta, Google and Microsoft say they have strengthened safety measures, critics say enforcement remains patchy and reactive.

BBC journalist Liv McMahon reported that police in France last week raided offices linked to X and xAI as part of a cybercrime investigation, a sign that not only users but also platforms and developers are coming under legal scrutiny. X has denied the allegations, claiming the actions were politically motivated. Similar investigations are ongoing in other EU countries.

Governments have begun to openly respond to the growing number of questions about the problem. The UK is drafting legislation to explicitly criminalise the use of AI to produce, store or distribute child sexual abuse material.

In the US, the Federal Bureau of Investigation (FBI) has made clear that AI-generated child sexual abuse content is illegal under existing federal law, and lawsuits have already been filed over AI-generated content based on images of real children. Some US states, including Texas, have passed laws criminalizing pornographic content depicting minors even when it is ‘synthetic’.

International cooperation on the issue is also growing. Europol, the European Union’s law enforcement agency, has reported dozens of arrests in a wide-ranging operation involving AI-generated child sexual abuse material. However, it has also warned that enforcement remains a challenge due to fragmented legal systems and limited jurisdiction.

Child rights activists say the problem is not limited to technology policy. UNICEF and safety experts have warned that if AI-generated sexual content becomes commonplace, it will dull social sensitivity to child exploitation and increase protection risks. In cited surveys, many children expressed fear that AI could be used to create fake sexual images of them. UNICEF recommends that parents, schools, social services, mental health professionals and law enforcement agencies receive ongoing information and training so they can provide appropriate support and protection for affected children.

Similarly, the organization recommends that AI platform developers implement the principle of ‘safety by design’ in their models to ensure accountability and transparency in protecting children’s rights.

As generative AI tools become more powerful and widely available, UNICEF concludes that ad hoc measures and fragmented efforts will not be enough. Governments, it says, should urgently reform their national laws to include clear criminal provisions, stricter safety standards, and avenues for compensation and rehabilitation for victims. UNICEF has also called for strong international coordination, warning that the problem risks pushing children into an even more unsafe digital environment.

Sajana Baral is a journalist at Kantipur. She covers the communications and information technology beat.
