Safety Experts Sound Alarm Over Unregulated Surge in AI-Powered Toys: ‘Unprecedented Risks’

A major coalition of child development experts is warning parents about the “unprecedented risks” linked to toys embedded with artificial intelligence (AI).

Amid a surge in unregulated AI-powered toys this Christmas, experts are citing mounting evidence that artificial intelligence chatbots can undermine healthy development and expose children to serious risks.

The warning was issued by the advocacy group Fairplay and endorsed by more than 150 child-safety organizations and experts.

It warns that AI-enabled plush toys, dolls, robots, and action figures are fundamentally different from traditional toys because they attempt to function as “trusted friend[s]” using human-like conversational algorithms.


Fairplay identifies a growing slate of AI toys already on the market, including Miko, Smart Teddy, Roybi, Loona Robot Dog, and models from Curio Interactive.

The group notes that major manufacturers such as Mattel plan to expand into AI-powered children’s products.

Many of these toys are marketed to children “as young as infants.”

The renewed scrutiny follows high-profile concerns about AI harm to minors, including a lawsuit against Character.AI alleging that chatbot interactions contributed to suicidal ideation in children and the death of a 14-year-old.

Fairplay argues the risks are clear: “The serious harms that AI chatbots have inflicted on children are well-documented, including fostering obsessive use, having explicit sexual conversations, and encouraging unsafe behaviors, violence against others, and self-harm.”

Five Major Developmental Risks Identified

Fairplay’s one-page advisory urges parents to avoid AI toys for five reasons:

• The toys are driven by the same conversational AI systems already shown to harm children.


• They exploit young children’s trust by mimicking friendship and emotional understanding.

• They disrupt normal developmental processes such as building resilience and forming real relationships.

• They invade family privacy by collecting sensitive personal data.

• They displace imagination-driven play essential for healthy cognitive growth.

Public Interest Research Group testing has already uncovered instances of AI toys “telling children where to find knives, teaching them how to light a match, and even engaging them in sexually explicit conversations.”

Fairplay stresses that young children are especially vulnerable because they cannot distinguish between a toy’s preprogrammed responses and trustworthy human guidance.

The advisory also warns that companies use intimate data collected by toys, such as family details, children’s routines, and conversations, “to make their AI systems more life-like, responsive, and addictive, allowing them to build a relationship with a child, and ultimately sell products/services.”

By contrast, traditional toys rely on children’s imagination.

Fairplay argues AI-driven toys “drive the conversation and play through prompts, preloaded scripts, and predictable interactions,” potentially stifling creativity.

Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, said the risks surpass anything previously seen in children’s products.


“Companion AI has already harmed teens,” she said.

“Stuffing that same technology into cute, kid-friendly toys exposes even younger children to risks beyond what we currently comprehend.”

She added that such products are “unregulated and being marketed to families with a promise of safety, learning, and friendship,” despite “mounting evidence” of harm.

Federal Scrutiny Is Growing

The warnings come as federal regulators accelerate oversight of AI “companion” products.

On September 11, the Federal Trade Commission (FTC) announced a formal inquiry into chatbots marketed as child companions.

“Protecting kids online is a top priority… and so is fostering innovation,” FTC Chairman Andrew N. Ferguson said.

He emphasized that the study will examine how companies develop these products and what protections exist for children.

At the same time, major toy manufacturers are moving ahead.

Mattel announced in June that it was partnering with OpenAI to build new AI-powered toys.

The Toy Association, which represents more than 900 U.S. companies, said it supports the “judicious use” of AI and stressed that all toys must comply with extensive safety standards.

A recent Public Interest Research Group assessment of AI-based toys found that guardrails designed to keep conversations age-appropriate “vary in effectiveness, and at times, can break down entirely.”

Fairplay’s message to parents is direct: the technology is advancing faster than the evidence proving it is safe.

“Children should be able to play with their toys, not be played by them,” the group concluded.

