Leaked Documents Reveal Meta AI’s Strict Limits on Abortion and Sexual Health Information for Teens
Newly surfaced internal documents indicate that Meta, the parent company of Facebook, Instagram, and WhatsApp, has placed extensive restrictions on how its artificial intelligence chatbot discusses abortion and sexual health with users under the age of 18.
The materials, first reported by Mother Jones, lay out detailed instructions for how Meta’s AI should respond to minors seeking information on a wide range of sensitive topics. While the company has strengthened safeguards around issues such as suicide, self-harm, and eating disorders, the same documents show a sweeping clampdown on reproductive health discussions, including contraception, puberty, and abortion access.
The disclosures arrive at a delicate time for Meta. The company is facing significant legal scrutiny over allegations that its platforms contributed to youth mental health harms. Other major social media firms, including TikTok, have also faced related claims, though TikTok reached a settlement before trial proceedings moved forward.
Expanded Protections on Self-Harm — But Silence on Reproductive Health
According to the internal guidance, Meta’s chatbot is programmed to respond proactively when teens raise concerns about suicide or self-harm. In those cases, the AI is directed to provide crisis resources and encourage users to seek professional help. When discussions involve eating disorders, the chatbot is instructed to share hotline information and suggest reaching out to trained counselors.
These measures reflect mounting public and regulatory pressure on technology companies to address the role their platforms may play in youth mental health crises. Over the past several years, social media firms have been accused of amplifying harmful content or failing to intervene effectively when young users display warning signs of distress.
However, the documents reveal that Meta has drawn a far stricter line around sexual and reproductive health information. The chatbot is prohibited from offering advice or opinions to minors about sexual health topics. That includes information on reproductive anatomy, menstrual cycles, fertilization, contraception, sexually transmitted infections, consent, or abstinence. It is also barred from encouraging condom use or discussing menstrual hygiene products in advisory terms.
The most sweeping restriction concerns abortion. The chatbot is explicitly forbidden from providing information that could help a minor obtain an abortion. That includes directing users to clinics, offering location-based guidance, or explaining how to access abortion services. The AI is also instructed not to express a value judgment either in favor of or against abortion.
The result is a striking contrast: the same AI system that actively connects teens to mental health resources is largely silent when asked to guide them toward reproductive healthcare information.
Legal and Political Context Intensifies Scrutiny
Meta’s internal policy decisions are unfolding within a rapidly changing legal landscape. The U.S. Supreme Court’s 2022 ruling in Dobbs v. Jackson Women’s Health Organization overturned Roe v. Wade and eliminated federal constitutional protections for abortion. Since that decision, many states have implemented strict bans or significant restrictions.
At the federal level, policies under Donald Trump have further shaped the national conversation around reproductive health access. In parallel, debates about artificial intelligence have increasingly entered partisan politics. In July, President Trump signed an executive order titled “Preventing Woke AI in the Federal Government,” aimed at limiting certain AI-generated content related to gender and sexuality within federal agencies.
Although that directive applies specifically to government AI use, critics argue that the broader political climate may influence how private companies approach sensitive subjects.
Jacob Hoffman-Andrews of the Electronic Frontier Foundation has expressed concern that technology companies could be narrowing access to certain information in response to political pressures. He pointed to what he described as a clear imbalance: Meta’s readiness to offer extensive resources for eating disorders or suicide, contrasted with its reluctance to provide information about reproductive healthcare providers.
Meta, in response to criticism, has stated that its AI systems are designed to provide age-appropriate information while avoiding advice on complex health decisions. The company maintains that it allows broader discussions about healthcare services on its platforms as long as content complies with existing policies, and that users can appeal moderation decisions.
Advocacy Groups Warn of Broader Censorship Trends
Reproductive health advocates say Meta’s chatbot rules reflect a wider pattern across the company’s services. Martha Dimitratou, who leads the nonprofit Repro Uncensored, has argued that content related to reproductive health and LGBTQ issues has faced growing restrictions on Meta’s platforms in recent years.
According to data compiled by her organization, removals of sexual and reproductive health–related content increased sharply between 2024 and 2025. Dimitratou has said her group has repeatedly urged Meta to treat abortion as a healthcare issue rather than a political topic, but she contends that the company has not shifted its approach.
At the same time, reliance on AI-driven search tools has surged. Platforms such as ChatGPT and Gemini have seen growing traffic as users increasingly turn to conversational AI for answers to health questions. Advocates argue that as chatbots become primary sources of information — particularly for younger audiences — ensuring accuracy and neutrality becomes even more critical.
In comparative testing conducted by Repro Uncensored, Meta’s AI was described as less consistent than competing consumer AI systems when responding to abortion-related queries.