A new Stanford report examines the strengths and weaknesses of the online child safety ecosystem.
Shelby Grossman, one of the researchers with Stanford's Internet Observatory, says the nine-month study looked at the platforms that submit reports, the National Center for Missing and Exploited Children, which processes those reports, and the law enforcement officers who investigate them.
"It's not just that law enforcement are overwhelmed with the volume of reports, it's that they feel that they aren't able to accurately prioritize among reports for investigation - they feel like they can't figure out which report is most likely to lead to the rescue of a child who is actively being abused," Grossman said.
The National Center for Missing and Exploited Children runs the CyberTipline.
It's a tool for members of the public or companies to report online child sexual exploitation, but many of the tips received are incomplete or inaccurate.
"Many platforms are submitting CyberTipline reports that show a meme and the meme technically meets the definition of child sexual abuse material, so platforms are required by law to report it - these are typically images that people are sharing out of poor comedic intent," Grossman said.
According to the center's data, nearly half of all reports made to the CyberTipline in 2022 were considered "actionable."
"Law enforcement might look at an image and not be able to tell if it's an AI-generated image or an image or a real child who has not yet been identified who needs to be rescued, so that's just going to overwhelm an already overwhelmed system," Grossman said.
San Jose State associate professor Bryce Westlake says that as AI-generated images become more realistic, the problem will only grow.
"If they're spending time trying to determine if this AI content is a real victim, it's taking away limited resources they already have to rescue real children that are being abused," Westlake said.
Westlake says investigators usually detect known child sexual abuse material through hash values.
"All files are 0s and 1s and basically, the system is able to put that into like this 32 hexadecimal code that kind of acts like a digital fingerprint. If you had an image of a child and you drew a line on it, or cropped it or whatever, it's going to change the hash value. So it kind of acts like a fingerprint. And so what they do typically, Facebook, Instagram, Twitter the social media companies, they have a database that's done by NCMEC and basically they look for these hash values so they go and download the image, they check the hash value - oh this is a known hash value therefore it's child sexual abuse material, lets automatically get rid of it," Westlake said.
Westlake says that with AI-generated content, offenders could create 1,000 new images whose hash values appear in no database.
"Challenge is, once an image gets out there you can't do anything to get rid of it and with AI-generated content, it means that tons and tons of brand new images are being created every day," Westlake said.
NCMEC released this statement to ABC7 News:
"Over the years, the complexity of reports and the severity of the crimes against children continue to evolve, therefore, leveraging emerging technological solutions into the entire CyberTipline process leads to more children being safeguarded and offenders being held accountable."