Question: Is there any research on the psychological impacts of being a victim of synthetic sexual images (aka deepfake porn)?
Answer: Although generative AI (artificial intelligence that can generate text, images, or sound) has many potentially positive uses, it can be used in negative ways too.
To best understand the research on this topic, it is important to have a shared understanding of what we mean by deepfakes, synthetic pornography, and other related terms. Please review our portal response Defining deepfakes, synthetic pornography & virtual child sexual abuse material if any of these terms are unfamiliar to you.
How are AI-altered images dangerous, especially for kids?
If a child is a victim of AI-generated, image-based sexual abuse, they may experience humiliation, shame, anger, violation, and self-blame. These outcomes can contribute to immediate and ongoing emotional distress, withdrawal from family and school life, and difficulty sustaining trusting relationships. Some cases can lead to self-harm and suicidal thoughts. If deepfakes circulate through a school community or peer group, the victim may be bullied, teased, and harassed, and the trauma is amplified each time the content is shared. A child whose face appears in deepfake pornography may also experience reputational harm, declining school performance, and decreased confidence about the future and available opportunities, driven by the worry that the images will remain permanently available online (even though the content is fake!).
Research with youth revealed that 1 in 6 minors who are involved in a potentially harmful online sexual interaction never disclose it to anyone. Boys are less likely to tell others when they have been victims of deepfake pornography or CSAM. Being portrayed in a deepfake can instill fear of not being believed by others, intensifying barriers to help-seeking.
Beyond spreading fake news and misinformation, creating hoaxes, bullying, and sexually exploiting minors, bad actors who weaponize AI-altered images may also seek to financially sextort children. In these schemes, adults blackmail kids online, coercing them to pay money to prevent intimate photos or recordings from being shared. Boys aged 14-17 years are the most frequent targets. Previously, an adult might pose as a teenage girl and persuade a young boy to send a nude image of himself; generative AI removes the need for that step. All a bad actor now needs is a photo of the boy's face to fabricate digital CSAM in which the boy appears to be engaged in explicit sexual activity.
Negative effects & psychological concerns of deepfakes – what studies show:
It's important to note that, for ethical reasons, the evidence on AI-generated, image-based sexual abuse predominantly draws on participants aged 18 years and older. The studies below therefore reflect input from adults, but the perspectives highlighted are those most relevant to child health, safety, and psychological well-being.
In one descriptive study, the authors conducted an online survey examining attitudes toward synthetic pornography. Negative impacts of consuming AI-generated sexual content (synthetic pornography specifically, in this study) were described as:
- Addiction and dependency risks, lack of control over viewing
- Lowered interest in real sexual interactions due to the combination of customization and instant gratification
- Distorted expectations of real sexual interactions and romantic and/or sexual relationships
- Harm to body image of viewers
- Exploitation of women, people of color, and children in this content (These identities and ages are disproportionately featured in synthetic pornography.)
Another study analyzed a dataset of 13,293 Reddit comments about deepfake porn and hentai (i.e., anime- and manga-style pornography, drawings/cartoons) to explore viewpoints on the sexualization of minors (SOM) online. The Reddit discussions centered on several major themes about the negative impacts of this content:
- Illegality – Involving a minor in deepfake porn makes it a sexual crime and a form of child abuse.
- Promotion of pedophilia – Consuming content depicting SOM increases sexual exploitation of and offending against children. Allowing or accepting SOM in digital spaces is harmful: past research has linked violent porn viewing to a heightened likelihood of sexual offending, and viewing CSAM also correlates strongly with sexual offending.
- General harmfulness – Promoting a culture of sexualization in the US and normalizing SOM online desensitizes people to the distress this content can cause youth.
This issue is anxiety-inducing and scary! If you're a parent or caregiver of a child affected by AI-generated, image-based sexual abuse, here are tips on how to move forward and resources that may be helpful.
References
- Eelmaa, S. (2022). Sexualization of Children in Deepfakes and Hentai. Trames, 26(2), 229–248.
- Okolie, C. (2023). Artificial Intelligence-Altered Videos (Deepfakes) and Data Privacy Concerns. Journal of International Women’s Studies, 25, 13.
- Kim, S., & Banks, J. (2024). Expanding experiences and anxieties: Gender-identity and sexual-orientation differences in attitudes toward synthetic pornography. Porn Studies, 1–19.
- Tenbarge, K. (2024, July 24). Defiance Act passes in the Senate, potentially allowing deepfake victims to sue over nonconsensual images. NBC News.
Age: 10-24
Topics: deepfakes, synthetic sexual images, synthetic pornography, sexual abuse, sexual violence, image-generative AI
Role: Clinician
Last Updated
03/13/2025
Source
American Academy of Pediatrics