Leaked Meta Docs Show Facebook Uses AI Chatbots to Handle Child Exploitation

A leaked internal Meta document has revealed how the social media giant is training its artificial intelligence (AI) chatbot to navigate one of the most sensitive issues online.

The document shows that Facebook, Instagram, and WhatsApp are relying on AI to deal with child sexual exploitation on the platforms.

The guidelines were reported by Business Insider.

The leak offers a rare insight into the rules governing Meta’s AI as regulators intensify their scrutiny of Big Tech.

The disclosure comes as the Federal Trade Commission (FTC) investigates Meta, OpenAI, and Google to determine how their AI systems are designed.

The FTC is investigating whether the platforms adequately protect children.

Earlier this year, Meta faced backlash after reports showed that a previous version of its rulebook mistakenly allowed chatbots to engage in romantic conversations with minors.

The company later called the inclusion an “error” and removed the language.

The new internal guidelines take a harder line, explicitly requiring chatbots to refuse any request for sexual roleplay involving children.

According to the leaked document, the rules attempt to draw a strict line between “educational discussion” and “harmful roleplay.”

Chatbots may provide factual information on child safety or report concerns.

However, they are strictly prohibited from sexualized or romantic engagement.

Meta’s communications chief Andy Stone told Business Insider that the policies reflect the company’s longstanding prohibition on sexualized roleplay involving minors, emphasizing that additional guardrails are also in place.

Hawley Presses Zuckerberg

The revelations come at a politically sensitive time.

In August, Sen. Josh Hawley (R-MO) demanded that Meta CEO Mark Zuckerberg hand over a 200-page internal rulebook governing chatbot behavior, along with enforcement manuals.

Meta initially missed the deadline, blaming a “technical issue,” but has since begun turning over materials.

Hawley has been one of Congress’s most vocal critics of Big Tech, accusing companies like Meta of prioritizing growth and profit over child safety.

AI Expansion Raises Stakes

The scrutiny coincides with Meta’s push to weave AI into everyday life.

At its Meta Connect 2025 event, the company unveiled new AI-driven products, including Ray-Ban smart glasses with built-in displays and enhanced chatbot integration.

The deeper the technology embeds itself into daily communication, the greater the concern over whether safeguards will keep pace.

The revelations highlight both progress and risk.

While Meta has tightened restrictions in response to criticism, the fact that its earlier framework allowed chatbots to simulate inappropriate interactions underscores what critics see as the fragility of Big Tech’s safeguards.

For now, regulators, journalists, and watchdog groups are filling the accountability gap.

But as Meta and its rivals expand AI products into homes and classrooms, the pressure to ensure strong protections for children is only likely to intensify.
