
Meta’s AI Floods Police with Junk Child Abuse Tips, Lawsuit Alleges


Meta’s AI systems are overwhelming child exploitation investigators with thousands of low-quality reports, according to law enforcement testimony. Officers from Internet Crimes Against Children task forces state the automated tips are often unusable “junk,” draining resources and slowing cases. Meta defended its processes, citing fast response times to emergencies and praise from oversight bodies, but critics link the issue to reduced human moderation.


Meta’s artificial intelligence tools are generating a flood of low-quality cyber tips that are hindering child abuse investigations, law enforcement officials testified. Officers with the Internet Crimes Against Children Task Force program said the automated reports overwhelm investigators with unusable information.

“We get a lot of tips from Meta that are just kind of junk,” Special Agent Benjamin Zwiebel stated during a state trial against the company. An anonymous ICAC officer told The Guardian that cybertips doubled from 2024 to 2025, with serious quality issues.

“It’s pretty overwhelming because we’re getting so many reports, but the quality of the reports is really lacking in terms of our ability to take serious action,” the officer said. Investigators stated some reports lack criminal evidence or are not criminal in nature, complicating their work.

A Meta spokesperson highlighted the company’s cooperation, noting it resolved over 9,000 U.S. emergency requests in 2024 within an average of 67 minutes. The company said it supports the National Center for Missing & Exploited Children by labeling urgent cybertips.

Meta remains the largest source of tips to NCMEC, accounting for about two-thirds of the 20.5 million tips received in 2024. Its integrity report shows it sent over 2 million CyberTip reports in Q2 2025 alone, more than 1.5 million of which involved child sexual abuse material.

Policy advocate JB Branch linked the problem to tech companies laying off human content moderators. “As a result, there is an overabundance of false positives being selected out of an overabundance of caution,” he stated.

Branch argued that human reviewers once served as an effective filter that is now absent from the reporting chain. In his assessment, the reliance on AI casts a broader net that captures content that does not qualify as criminal.

The increase in reports followed the REPORT Act, signed into law in May 2024, which expanded mandatory reporting requirements for platforms. Investigators now say the volume of faulty tips is damaging team morale and operational capacity.

“It is killing morale. We are drowning in tips, and we want to get out there and do this work,” an ICAC officer reported. The officer added that personnel levels cannot sustain the incoming flood of automated alerts.
