DOJ Investigators Slam Meta for Flooding Child Abuse Tips with AI ‘Junk’
U.S. investigators say the AI systems Meta uses to monitor its social platforms are overwhelming law enforcement with low-quality tips about suspected child abuse, hindering real investigations.
Officers from the Internet Crimes Against Children (ICAC) Task Force, which works in collaboration with the United States Department of Justice, testified in a New Mexico court that most of the tips have little investigative value. Officers are nonetheless required to follow up on every tip, even when it points to no crime.
Benjamin Zwiebel, an ICAC special agent, said in testimony that many reports sent by Meta are “junk.” Officers described thousands of monthly alerts that lack key details such as images, videos, or readable text. Without that information, police cannot identify suspects or move cases forward.
One officer said the volume of reports doubled between 2024 and 2025. Many alerts suggest misconduct but arrive with redacted or missing material, and investigators cannot legally open the attached files without a warrant, adding delay to urgent cases.
Meta vs. New Mexico: The Clash Over Child Safety, Encryption, and Reporting Volume
Meta rejected claims that its systems harm investigations. A company spokesperson said Meta has worked with law enforcement for years and has helped secure arrests through rapid responses to emergency requests. The company also pointed to new safety measures, including protected teen accounts, and said it prioritizes reports involving child safety.
The testimony is part of a broader lawsuit in which New Mexico Attorney General Raúl Torrez accuses Meta of putting profits ahead of children's safety. Torrez has acknowledged in court that Meta remains a valuable source of leads through its reports of suspected abuse to the National Center for Missing & Exploited Children (NCMEC), the national clearinghouse that relays tips to law enforcement agencies.
U.S. technology companies are legally required to report child sexual abuse material to NCMEC when they detect it. NCMEC forwards those reports to federal, state, and local law enforcement agencies without screening them first.
Meta generates more reports than any other company. NCMEC data shows the company submitted 13.8 million reports in 2024 out of 20.5 million total tips received that year. Investigators say the surge has created a heavy workload, forcing officers to spend hours reviewing cases that do not lead to charges.
Court filings also revealed earlier internal concerns at Meta about safety risks tied to encryption. In 2019, as the company prepared to expand end-to-end encryption in Messenger, policy chief Monika Bickert warned colleagues that encryption could limit the company’s ability to detect child exploitation or terror threats. Internal estimates suggested encryption might prevent proactive reporting in hundreds of abuse and security cases.
How Meta’s AI and New Laws Are Overwhelming Child Safety Investigators
Meta later introduced safety tools designed to work within encrypted chats. Spokesperson Andy Stone said those concerns drove the development of new detection systems aimed at preventing abuse while preserving private messaging.
Investigators argue that recent legal changes also explain the spike in reports. The REPORT Act, which took effect in November 2024, expanded mandatory reporting rules. Companies must now report suspected grooming, trafficking, and planned abuse, not just confirmed illegal imagery. They must also retain evidence longer and face stiffer penalties for failing to report.
Officers believe Meta may now send broader AI-generated alerts to avoid legal risk. Some reports flagged normal online conversations, including teenagers discussing celebrities, investigators said. Zwiebel told the court that such errors appear consistent with automated systems rather than human review.
Each incoming tip requires manual assessment. Officers say the growing backlog harms morale and pulls attention away from serious abuse cases that demand immediate action.
“We are drowning in tips,” one investigator said. “We want to focus on real victims, but we do not have the staff to keep up.”
The case highlights a growing tension between large technology platforms and law enforcement. AI allows companies to scan vast amounts of content and report potential harm at scale. Yet investigators warn that quantity without accuracy risks overwhelming the very systems meant to protect children.