Summary

Google outlines its responsible AI approach to prevent AI‑facilitated child sexual abuse and exploitation, detailing detection tools, partnerships, policies, and ongoing mitigation efforts.

Key quotes

“Do not engage in dangerous or illegal activities, or otherwise violate applicable law or regulations. This includes generating or distributing content that relates to child sexual abuse or exploitation.”
“Since the IWF first started monitoring AI-generated child sexual abuse and exploitation (AI CSAE) in early 2023 we’ve seen a rapid improvement in the ability to generate lifelike imagery.”
“In 2024, we reported more than 600 instances of apparent CSAM to NCMEC using hash‑matching, which were uploaded as a user prompt to our generative AI products.”
“Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member in 2024.”

This PDF is a progress update released by Google Public Policy in April 2025. It documents Google's initiatives to safeguard children from AI‑facilitated sexual abuse, describing detection systems, training‑data cleaning processes, partnerships with NGOs and law enforcement, and policy frameworks. The report also highlights reporting metrics, such as the CSAM reports made to NCMEC, and collaborative industry efforts like C2PA membership.