US receives thousands of reports of AI-generated child abuse content in growing risk
2024-02-02
[Jpost] The US National Center for Missing and Exploited Children (NCMEC) said it had received 4,700 reports last year about content generated by artificial intelligence that depicted child sexual exploitation.

The NCMEC told Reuters the figure reflected a nascent problem that is expected to grow as AI technology advances.

In recent months, child safety experts and researchers have raised the alarm about the risk that generative AI tech, which can create text and images in response to prompts, could exacerbate online exploitation.

INCREASING CHILD EXPLOITATIVE MATERIAL
The NCMEC has not yet published the total number of child abuse content reports from all sources that it received in 2023, but in 2022 it received reports of about 88.3 million files.

"We are receiving reports from the generative AI companies themselves, (online) platforms and members of the public. It's absolutely happening," said John Shehan, senior vice president at NCMEC, which serves as the national clearinghouse to report child abuse content to law enforcement.

The chief executives of Meta Platforms, X, TikTok, Snap and Discord testified in a Senate hearing on Wednesday about online child safety, where lawmakers questioned the social media and messaging companies about their efforts to protect children from online predators.

Researchers at Stanford Internet Observatory said in a report in June that generative AI could be used by abusers to repeatedly harm real children by creating new images that match a child's likeness.
Not nearly as harmful as repeatedly doing whatever-it-was to the actual child.
Content flagged as AI-generated is becoming "more and more photo-realistic," making it challenging to determine if the victim is a real person, said Fallon McNulty, director of NCMEC's CyberTipline, which receives reports of online child exploitation.

OpenAI, creator of the popular ChatGPT, has set up a process to send reports to NCMEC, and the organization is in conversations with other generative AI companies, McNulty said.

Posted by: Skidmark

#7  Water pistols mean that you'll shoot the Dems.


Well...yeah.
Posted by: Skidmark   2024-02-02 17:40  

#6  Think it's going to be a bumpy ride. Sure that the story is already written that Hunter's laptop was all an AI hit-piece.
Posted by: swksvolFF   2024-02-02 15:51  

#5  re: #4

Wait till they apply this principle to firearms.

Water pistols mean that you'll shoot the Dems.
Posted by: AlanC   2024-02-02 15:36  

#4  Chaff for the real thing.
Posted by: swksvolFF   2024-02-02 13:05  

#3  if no actual person is harmed, how is it illegal?

Gateway drug, ET. Impure thoughts.
Posted by: Skidmark   2024-02-02 12:30  

#2  So, if no actual person is harmed, how is it illegal? We've reached the point of punishing people for having bad thoughts.
Posted by: ed in texas   2024-02-02 08:54  

#1  One of many pending 'infrastructure targets' for Biden's upcoming cyber war.

Haven't heard much about China's washable latex dolls of late.
Posted by: Skidmark   2024-02-02 04:18  
