Originally shared on LinkedIn on June 13, 2025.
Yesterday, Mattel, Inc. and OpenAI announced a partnership to bring gen AI into toys and experiences.
It’s a major signal: AI isn’t just entering classrooms; it’s entering childhood itself.
Mattel promises “age-appropriate, AI-powered play experiences with an emphasis on privacy and safety.” But let’s be clear: this isn’t just about Barbie learning to chat. It’s about gen AI becoming a direct presence in kids’ daily lives, and it’s already happening.
📱 Portola’s “Tolan” app has over 500K downloads and is popular among teens. It’s an “AI friend” that pairs users with a talking alien that listens, guides, and grows with them. For many, it helps manage stress and “overwhelm.”
👻 Snap Inc.’s My AI has more than 150 million users. According to Pew, 59% of U.S. teens (ages 13–17) use Snapchat. Many customize their AI to look and sound human. However, earlier this year, the FTC said it would refer My AI chatbot complaints to the DOJ.
🗣️ Character.AI reportedly sees 5x more daily engagement than ChatGPT. But it’s also under scrutiny: lawsuits allege it encouraged self-harm, including a tragic case that ended in a teenager’s suicide.
In some cases, these tools help.
A 2025 MIT study found that AI companions can reduce anxiety, especially for people who are socially anxious or isolated. They can offer a low-pressure space for expression, skill-building, and confidence.
But those benefits come with a cost.
The same study showed that high engagement, especially with expressive, voice-enabled bots, can increase emotional dependence and reduce real-world social interaction.
And these dynamics play out quietly, in apps designed to blend in. You might not notice until the dependence surfaces as withdrawal or misaligned social cues.
We’ve seen this movie before. Social media scaled faster than we understood the cost.
In response to growing scrutiny, Character.AI has made some safety updates:
👉 A pop-up directing users to the National Suicide Prevention Lifeline during self-harm conversations
👉 New filters to reduce exposure to sensitive content
👉 Weekly email summaries for parents outlining screen time and interactions
California lawmakers proposed legislation that would require AI services to periodically remind young users that they’re chatting with a bot, not a human.
But a new report from Common Sense Media and Stanford goes a step further, recommending that children under 18 should not use AI companion apps at all.
Among the findings:
⚠️ Bots engaged in sexual roleplay with users who said they were 14
⚠️ “Dark design” patterns that foster emotional dependency
⚠️ Chatbots claiming they eat, sleep, and feel emotions
⚠️ Weak or nonexistent age gates, easily bypassed
This generation won’t remember a world without AI. For them, talking to an LLM will feel as normal as texting a friend.
That’s not inherently bad.
But it raises urgent questions about the types of relationships we’re normalizing, especially for kids still learning how to relate to others and to themselves.