The Dark Side of AI: How Artificial Intelligence Fuels Sextortion, Child Exploitation, and Revenge Porn

Content Warning: This article discusses topics such as sextortion and child sexual exploitation. If you have been affected by the topics raised, please speak to an adult you trust or contact the following organisations for more support:

- Childline: call 0800 1111

- Victim Support (for sextortion): call 0808 168 9111 or visit their website: https://www.victimsupport.org.uk/children-and-young-people/

Author: Nimrod W-F

Introduction

As the capabilities of artificial intelligence (AI) continue to grow, its potential to transform industries and enhance daily life is clear, whether it's boosting efficiency in marketing and healthcare or, more controversially, enabling students to cheat on assignments. Yet the same technology that fuels innovation can also be weaponised for malicious purposes. Among the most disturbing misuses of AI are its roles in child sexual exploitation and sextortion. AI's capability to create hyper-realistic images, videos, and even automated conversations has equipped predators with powerful tools to manipulate, extort, and exploit young people. From generating deepfake child sexual abuse material (CSAM) to automating online grooming, AI has amplified these crimes, allowing offenders to operate with greater ease while leaving law enforcement struggling to keep pace. As these threats escalate, it is vital to address how AI is being misused and the urgent steps tech companies must take to curb these dangers and protect young people.

AI-Generated Content and Deepfakes

AI is increasingly being used to create realistic, fabricated images and videos known as deepfakes through advanced machine learning techniques such as Generative Adversarial Networks (GANs). In this process, two neural networks, a generator and a discriminator, are trained against each other: the generator produces fake media while the discriminator tries to tell real content from fake, and with each round of training the generator's output becomes more convincing (Deloitte, 2024). This technology allows AI to swap faces, replicate voices, and reproduce hyper-realistic details such as accurate lighting and facial expressions, making it difficult to distinguish between real and fake content.
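To make that adversarial loop concrete, here is a minimal sketch of GAN training on harmless toy data (learning to imitate simple one-dimensional numbers, not images). It is purely illustrative, not a deepfake system; the network sizes, learning rates, and toy distribution are all assumptions for the example, written with the PyTorch library.

```python
# Minimal sketch of the generator-vs-discriminator training loop on toy 1-D data.
# All hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

# Generator: turns random noise into a fake "sample".
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 2.0  # toy "real" data: numbers near 2.0
    noise = torch.randn(64, 8)
    fake = G(noise)

    # 1) Train the discriminator to separate real samples from fakes.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real), torch.ones(64, 1)) +
              loss_fn(D(fake.detach()), torch.zeros(64, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator into scoring fakes as real.
    opt_g.zero_grad()
    g_loss = loss_fn(D(G(noise)), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The same push-and-pull dynamic, scaled up to images and video, is what makes deepfake output steadily harder to distinguish from genuine media.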

It can be argued that deepfake technology has legitimate uses, such as dubbing and subtitling in film and television. In practice, however, the technology does far more harm than good. Sextortion and revenge porn are common examples, where deepfakes are used to create false, explicit media of individuals, which is then distributed or used to blackmail them. As AI tools for creating deepfakes grow more user-friendly, even individuals with limited technical skills can generate highly realistic fabricated media, significantly increasing the risks of misuse for exploitation, manipulation, and blackmail. A recent high-profile case of malicious deepfake use involves Taylor Swift, who, at the beginning of 2024, had ‘deepfake pornographic images’ circulating on X (Twitter) that garnered 47 million views (Saner, 2024). Only through mobilisation and mass reporting by her fans was the post taken down within a day. This unfortunately isn't the case for other celebrities, such as actress Xochitl Gomez (who portrayed America Chavez in Doctor Strange in the Multiverse of Madness), whose team has tried to get the content removed from X, to no avail (Tenbarge, 2024). Another celebrity who recently became a victim was rapper Megan Thee Stallion, who had a sexually explicit deepfake video circulated on X (Jones, 2024). It leaves me wondering: if such high-profile individuals can be affected by this, how much of an impact is it having on young people, who have nowhere near a celebrity's level of support?

Child Sexual Exploitation and AI

AI-generated CSAM (AI-CSAM) poses a significant challenge for law enforcement, as advanced algorithms can now create hyper-realistic images and videos without involving actual victims. This complicates investigations, as authorities struggle to distinguish between real and AI-generated content, creating potential legal loopholes where no identifiable child is harmed yet harmful content proliferates (Piccolo, 2024). Such AI-generated material often circulates on dark web networks, feeding illegal markets that demand this kind of exploitation. These hidden platforms provide a haven for perpetrators to share, trade, and profit from such content, further complicating efforts to track and stop offenders. According to research by the Internet Watch Foundation (2024), more than 20,000 AI-CSAM images were found on a single dark web forum within a one-month period in 2023. Furthermore, more than half of the images assessed by IWF analysts were realistic enough to be assessed under the same law as real CSAM. This not only emphasises how unprecedented this legal landscape is, but also raises concerns about where the base material used to commit such heinousness is even collected from. Young girls' Instagram selfies? Parents sharing cute videos and images of their young child(ren) with the world?

Additionally, the rise of AI-powered chatbots and other automated tools has enabled criminals to streamline the grooming process, using these technologies to engage with and manipulate young people at scale, making extortion and exploitation more efficient and harder to detect. Law enforcement agencies around the world face a growing burden as they attempt to combat these evolving threats, hindered by outdated laws that may not fully account for AI-generated abuse and by the overwhelming task of sifting through vast amounts of both real and artificial exploitation material. This combination of technological sophistication and legal grey areas poses a significant threat to child safety, and it highlights how internet safety resources need to be continuously adapted alongside rapid innovation.

Anonymity and Scalability of Exploitation via AI

AI has drastically lowered the barriers for perpetrators to produce and distribute exploitation material at scale. With minimal effort or technical skill, criminals can now use AI to generate realistic abusive content, enabling them to flood the dark web with vast amounts of material. This ease of access has significantly amplified the scope and speed of exploitation (Lee, 2024). Furthermore, AI-powered tools bolster criminals' anonymity through encrypted platforms and privacy tools, making it increasingly difficult for law enforcement to track their activities locally, nationally and, especially, internationally (Shanmugaesan, 2024).

For victims, this technological evolution increases their vulnerability, as AI allows for exploitation without the victim's involvement or knowledge. A victim can be repeatedly targeted with altered or newly generated content based on their likeness, re-victimising them and extending the cycle of abuse. Even if the individual is not re-victimised in person, AI-generated materials can resurface and be redistributed over time, continuously perpetuating harm. The psychological trauma of knowing these images or videos are circulating online is compounded by the legal and emotional challenges of trying to remove the content from the internet, where complete eradication is nearly impossible. As a result, AI-CSAM leaves lasting real-world impacts on victims' mental health and sense of safety.

The impact on young people is highlighted by figures from the National Center for Missing & Exploited Children (NCMEC) in the US. In 2023, the organisation received 26,718 global reports of financially motivated extortion, more than double the 10,731 reports made in 2022 (Vaughan, 2024). In England and Wales specifically, just 23 cases of sextortion were recorded in 2014; within a decade, that figure had risen to over 30,000.

Coupled with this significant increase in reports is a tragic rise in suicides among victims of financial sextortion. In 2022, 17-year-old Jordan DeMay took his own life just six hours after ‘he started communicating with the scammers’ (Tidy, 2024). A year later, a young Australian boy took his own life after ‘being threatened with sextortion if $500AUD wasn't paid’ (Ransley and Brennan, 2024). There are others, like Robin Janjua (Weichel, 2024) and William Doiron (Silberman, 2024), as well as the many victims who are too scared to speak out and so go unnoticed. These staggering figures highlight the need for more support, resources, and clear guidance to help young people protect themselves from sextortion and other types of exploitation. Parents, carers, and teachers should also have access to these tools so they can spot the signs of sextortion early.

Solutions and Progress

Some of the world's biggest tech and AI firms, such as Google, Amazon, Meta and Microsoft, have started mobilising and collaborating to ‘combat the creation and spread of AI-CSAM’ (Landi, 2024). These companies, alongside others, have pledged to do so through their commitment to the Safety by Design principles co-authored by Thorn and All Tech Is Human (Lee, 2024). This sets a precedent across the tech industry and holds companies accountable for treating child online safety as paramount. Other large tech companies, such as X (Twitter), however, have not pledged their commitment to the principles.

On the legislative front, policymakers worldwide are working to update existing frameworks to address emerging technological threats, with a particular focus on strengthening international cooperation and establishing clear legal standards for digital forensics. Examples of such work include the presentation of proposals and drafting of reports in the European Parliament (2019) on tackling AI-CSAM. Whilst this is a big step from one of the largest parliaments in the world, much remains to be done: the UK still has no dedicated legislation on AI-CSAM, nor a regulatory body to oversee all AI-related matters (Bhatnagar and Gajjar, 2024).

Equally important is the role of public education and awareness. Schools have already implemented comprehensive digital literacy sessions that teach critical thinking skills for evaluating online content and understanding privacy protection. These sessions usually cover practical safety measures like recognising the warning signs of grooming and manipulation, understanding digital footprints, and knowing how to report concerning content. However, there is still a lot more that can be done in how resources on child online safety are structured and disseminated. By combining technological solutions, legal frameworks and educational initiatives, communities can build stronger defences against online exploitation while empowering young people to protect themselves in digital spaces.

To tackle the misuse of AI, collaboration is essential. I hope this call to action sparks a collective effort between governments and tech companies not only to raise awareness but to put in place effective safeguards, ethical standards and accessible resources that match the speed at which AI is advancing. By acting together, we can ensure that AI's benefits are not outweighed by its dangers, helping to create a safer online environment for young people and children.

