Generative AI technology has reshaped content creation in video, audio, and images, with some unintended consequences. The ability to instantly create virtually anything one can describe has significant implications for trust and safety on digital platforms, particularly for child safety, including child sexual abuse material (CSAM). This talk explores the dynamics of AI-generated CSAM and non-consensual intimate imagery (NCII), presenting original research conducted by the Stanford Internet Observatory and explaining in accessible terms how the technology works, including findings on the prevalence of CSAM in training data and the specific advances that have made generative AI models effective for creating child-related content. It also discusses the dynamics of dissemination and the questions of legality that social media platforms and regulators have begun to confront. The talk emphasizes that these changes are not mere hypotheticals but are already shaping our online environment, and it discusses ways that child safety experts can grapple with these new challenges.
Learning Objectives:
Understand the Technology: learn how generative AI models work, examining the specific advances misused in the creation of CSAM as well as the dynamics of some open-source communities.
Explore Prevalence and Impact: examine research findings on the prevalence of CSAM in AI training data and the challenges that platforms and investigators now face.
Confront Legal and Regulatory Challenges: discuss and assess the current responses of social platforms and regulators, exploring potential strategies for more effectively ensuring child safety online.