
Responsible AI for kids can’t wait. Here’s how to build a trustworthy product.

Carlos Dominguez
Senior Director of Innovation and Technology
7 min read
July 2, 2025

Mattel recently announced a strategic partnership with OpenAI.

Their vision for the future? AI-powered products and experiences based on classic Mattel brands, such as Barbie, Hot Wheels, American Girl, and Fisher-Price.

The response to this news was as polarized as you’d expect, with optimists excited at the prospect of their Toy Story dreams coming true and critics worried about the nightmarish possibilities of a real-life M3GAN.

For the purposes of this discussion, however, we’re putting the public outcry aside. Because what matters most is this: 

AI for kids isn’t coming. It’s already here.

And, unlike previous technological shifts, we have the unprecedented opportunity to build AI for kids with intention, responsibility, and genuine care for young users from day one.

The moment is now

The last time we stood at such a technological crossroads was during social media’s rise. Back then, the industry’s “move fast and fix later” mentality made it incredibly difficult to retroactively implement effective guardrails for children—a struggle that continues today.

We cannot afford a repeat performance with AI.

Nearly one-third of U.S. parents report that their children aged 0 to 8 already use AI tools for school or play. The vulnerabilities of these young users and AI’s profound developmental impact make reactive approaches not just ethically questionable, but strategically catastrophic. 

At the same time, this milestone reveals not just adoption, but genuine enthusiasm from families ready to embrace AI-powered experiences. 

Fortunately, a host of new regulatory developments has offered product owners and developers the clarity they need to deliver AI-powered experiences that are both thoughtfully and responsibly designed.

  • The FTC’s strengthened COPPA rule now includes biometrics as “personal data,” acknowledging the sophisticated capabilities of modern AI while ensuring children’s privacy remains paramount.
  • The UK’s progressive Children’s Code has established frameworks that protect young users while enabling innovation. Their approach to curbing “nudge” tactics has also had an impact on a global level by informing an international consensus around protecting children without limiting their access to beneficial technology.
  • By emphasizing safety-by-design and age-appropriate transparency, UNICEF’s updated AI for Children guidelines acknowledge that AI can be a powerful force for good in children’s lives as long as it is implemented thoughtfully.


This recognition of AI’s potential from governments and international organizations, combined with growing adoption among young children and their families, makes it crucial for CEOs to prioritize ethical AI implementation now.

Trust is earned in design, not PR

Families and regulators no longer evaluate trustworthiness through marketing campaigns or public statements. They examine your product interface directly, looking for the tools, options, and information they need to determine whether or not a product is safe for their children.

The message is clear: Ethical principles must be embedded in your product’s DNA, not bolted on afterward.

As a result, what’s emerging is a virtuous cycle where regulatory clarity enables greater innovation confidence. Companies that embrace these guidelines early are discovering that building trust with families doesn’t constrain creativity. It unleashes it.

UNICEF’s emphasis on age-appropriate transparency, for example, has inspired ingenious design solutions. Forward-thinking companies are creating kid-friendly explanations that answer questions like “Why did the robot recommend this video?” These interactions don’t feel like compliance checkboxes but rather opportunities to engage young users in meaningful dialogue about technology.

The most successful implementations feature intuitive dashboards that empower parents to establish boundaries through simple, clear interfaces. These aren’t complex control panels but thoughtfully designed experiences that make family digital wellness feel manageable and positive.

Companies that have rebuilt their products around child-centered principles are reporting remarkable outcomes: shorter user-acquisition cycles as parents become authentic advocates, smoother international launches as regulators recognize proactive duty of care, and higher engagement rates as children develop deeper trust in their digital experiences.

Five strategic moves for immediate action

At DEPT®, our Kids & Family team has extensive experience designing digital experiences children love and parents trust. With projects like the Magical Storybook, we’ve proven firsthand that responsible and ethical development isn’t a roadblock to innovation—it can be the ultimate creative inspiration for a great product. 

To help you bring your own kid-friendly digital playground to life, here’s what we’ve learned works:

1. Run a “Kid-Safety Sprint”

Get your design, engineering, security, and legal teams together to map every data touchpoint a child experiences in your product. Visualize it on one page, then fix the riskiest patterns first. This isn’t just compliance theater: in our experience supporting client teams, this approach has guided their process and accelerated alignment.

Most importantly, it’s a form of preemptive risk management that positions you ahead of competitors awaiting regulatory pressure. Organizations that proactively identify vulnerabilities avoid the crushing costs of reactive fixes while building parental trust that translates directly into market advantage.
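The sprint’s one-page map can start as something very simple: an inventory of touchpoints with team-assigned risk scores, sorted so the riskiest patterns surface first. Here’s a minimal sketch in Python; the touchpoints and the 1–5 scoring scale are hypothetical examples, not a prescribed taxonomy.

```python
from dataclasses import dataclass

@dataclass
class Touchpoint:
    name: str            # where the child's data is touched
    data_collected: str  # what is captured at this point
    risk: int            # 1 (low) to 5 (high), assigned by the sprint team

# Hypothetical inventory assembled by design, engineering, security, and legal
touchpoints = [
    Touchpoint("voice input", "audio recordings", risk=5),
    Touchpoint("avatar creator", "self-portrait images", risk=4),
    Touchpoint("quest progress", "gameplay events", risk=2),
    Touchpoint("parent dashboard login", "parent email", risk=1),
]

# "Fix the riskiest patterns first": one sorted view of the whole map
for tp in sorted(touchpoints, key=lambda t: t.risk, reverse=True):
    print(f"risk {tp.risk}: {tp.name} -> {tp.data_collected}")
```

Even this much gives the four teams a shared artifact to argue over, which is most of the sprint’s value.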


2. Swap “Time-Spent” for “Time Well Spent”

Realign your KPIs so incentives ride on positive engagement scores—parent and child Net Promoter Scores—rather than raw minutes logged. This shift naturally reduces manipulative design elements while fostering genuine loyalty.

The strategic insight: not all engagement is equal, especially for children. Optimizing for value over volume creates more sustainable revenue streams, reduces churn, and builds stronger brand advocacy. Parents become your most effective unpaid marketers when they trust your product with their children’s well-being.
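For teams wiring this into their KPI dashboards, the standard Net Promoter Score calculation is straightforward: the percentage of promoters (ratings 9–10 on a 0–10 scale) minus the percentage of detractors (0–6). A minimal sketch, with made-up survey numbers:

```python
def net_promoter_score(ratings):
    """NPS from 0-10 survey ratings:
    % promoters (9-10) minus % detractors (0-6)."""
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings), 1)

# Hypothetical parent survey: 3 promoters, 2 passives, 1 detractor
parent_ratings = [10, 9, 8, 7, 9, 4]
print(net_promoter_score(parent_ratings))  # -> 33.3
```

Tracking parent and child NPS separately matters here, since the two audiences can diverge sharply on what “time well spent” means.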


3. Implement an “Explain-It” layer

Whether through tooltips, comic bubbles, or short videos, ensure every key AI decision includes a one-sentence explanation kids can read and parents can verify.

This transparency investment pays compound returns. When parents understand how your AI operates, anxiety transforms into confidence. Children learn while they play. Customer support tickets decrease. Trust multiplies across your entire user base.
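One lightweight way to build such a layer is to attach a kid-readable template to each recommendation reason, with an honest fallback when no specific explanation exists. The reason codes and wording below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    item: str
    reason_code: str

# One-sentence, kid-friendly templates keyed by reason code (hypothetical codes)
EXPLANATIONS = {
    "similar_topic": "The robot picked this because you liked videos about {topic}!",
    "new_skill": "The robot thinks you are ready to try {topic} next!",
}

def explain(rec: Recommendation, topic: str) -> str:
    # Fall back to an honest default rather than showing nothing
    template = EXPLANATIONS.get(rec.reason_code, "The robot chose this for you.")
    return template.format(topic=topic)

print(explain(Recommendation("Dino Facts", "similar_topic"), "dinosaurs"))
# -> The robot picked this because you liked videos about dinosaurs!
```

The same explanation string can feed a tooltip, a comic bubble, or a parent-facing log, so one source of truth serves both audiences.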

4. Audit for bias

Use synthetic data to balance underrepresented age or language groups in your AI models. Document every change meticulously. These records will prove invaluable when regulators come calling.

Biased AI doesn’t just raise ethical concerns; it limits market reach and invites regulatory penalties. Proactive bias mitigation ensures inclusivity, broadens demographic appeal, and transforms potential compliance burdens into competitive advantages for global expansion.
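A sketch of what “balance underrepresented groups and document every change” can look like in practice: oversample each minority group up to the size of the largest one, logging every synthetic addition as you go. Here, random duplication stands in for a real synthetic-data generator, and the age groups are invented for illustration.

```python
import random
from collections import Counter

random.seed(0)

def balance_by_group(records, group_key):
    """Oversample underrepresented groups to the size of the largest one,
    logging every synthetic addition for the audit trail."""
    counts = Counter(r[group_key] for r in records)
    target = max(counts.values())
    audit_log, balanced = [], list(records)
    for group, n in counts.items():
        pool = [r for r in records if r[group_key] == group]
        for _ in range(target - n):
            synthetic = dict(random.choice(pool))  # stand-in for a real generator
            synthetic["synthetic"] = True
            balanced.append(synthetic)
            audit_log.append(f"added synthetic record for group '{group}'")
    return balanced, audit_log

records = [{"age_group": "9-12"}] * 8 + [{"age_group": "5-8"}] * 2
balanced, log = balance_by_group(records, "age_group")
print(Counter(r["age_group"] for r in balanced))  # both groups now at 8
print(len(log))  # 6 documented additions
```

The audit log is the point: when regulators come calling, a record of exactly which groups were augmented, and by how much, is what proves due diligence.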


5. Bundle AI literacy into the experience

Transform moments of curiosity (“Why did I get this quest?”) into 20-second interactive lessons about AI concepts.

This approach elevates your product from entertainment to education, creating stronger value propositions for parents while future-proofing your market. An AI-literate consumer base adopts advanced features more readily and develops a nuanced understanding of technology’s capabilities and limitations.


The choice is yours

The future of children’s AI is being written now, in boardrooms and product development cycles happening today. 

MIT’s Day of AI program, for example, has reached half a million students, proving children can grasp concepts like bias, training data, and model limitations when taught appropriately. Smart brands like Duolingo Kids are already weaving mini-lessons into their products, turning moments of curiosity into educational opportunities.

It goes to show: Responsible AI for children isn’t a constraint on innovation. It’s the foundation that makes genuine innovation possible. The companies that understand this distinction won’t just survive the coming transformation; they’ll lead it.

You can either shape tomorrow’s digital playgrounds with imagination, integrity, and foresight—or find yourself explaining to stakeholders why you missed the most important product development opportunity of the decade.
