More than 200 child advocacy groups and experts are demanding that YouTube ban AI-generated “slop” from its children’s platform entirely, arguing that the low-quality, algorithmically produced videos are rewiring young brains and raking in millions while parents and regulators look the other way.
The open letter, organized by children’s advocacy group Fairplay and addressed to YouTube CEO Neal Mohan and Google CEO Sundar Pichai, was signed by more than 135 organizations. Signatories included the American Federation of Teachers and the American Counseling Association, as well as prominent researchers such as Jonathan Haidt, author of The Anxious Generation. The letter’s authors say YouTube is not only failing to stop AI slop from reaching children but is also actively profiting from it.
“AI-generated videos are really just an escalation of a myriad of problems that YouTube already has when it comes to interfacing with kids on their platforms,” Rachel Franz, director of Fairplay’s Young Children Thrive Offline program, told Fortune. “It’s important to address this AI slop phenomenon, but it’s also equally important to take YouTube to task for the way that its platform is designed to hook users into spending more time in ways that aren’t necessarily related to AI.”
What is ‘AI slop’ anyway?
The term refers to a wave of mass-produced, AI-generated videos flooding platforms like YouTube. The content is cheap to make, often bizarre or nonsensical, and engineered to grab and hold young (or really, any) viewers’ attention. And dear reader, the videos are bizarre: cartoon animals performing repetitive tasks in an uncanny valley aesthetic; fake “educational” videos with garbled information; or hypnotic loops with no discernible purpose. The New York Times documented the phenomenon in a February investigation, finding such videos embedded throughout YouTube Kids, a platform YouTube has marketed as a safe, curated space for children.
“So much of AI-generated content is really designed to hijack children’s attention, especially young children who are just at the beginning of developing their impulse control, and they can really distort reality, create confusion, and impact how children are understanding the world around them,” said Franz, who has a background in early child development. “This isn’t a parenting issue in and of itself. The platform is consistently recommending AI content to young users in ways that make it kind of impossible for them to avoid.”
The financial incentives are staggering. Fairplay found that top AI slop channels targeting children have earned over $4.25 million in annual revenue, with some creators openly advertising profits from “plotless, mesmerizing AI content.” The letter argued that no amount of policy will be enough until the platform removes the financial incentives for creators of these videos.
“Only about 5% of videos on YouTube for kids under 8 are actually high-quality. And there are debates amongst that 5% of whether those are actually high-quality,” said Franz. YouTube, however, disputes that characterization, pointing to its content standards.
“We have high standards for the content in YouTube Kids, including limiting AI-generated content in the app to a small set of high-quality channels,” YouTube spokesperson Boot Bullwinkle told Fortune in a statement. “We also provide parents the option to block channels. Across YouTube, we prioritize transparency when it comes to AI content, labeling content from our own AI tools, and requiring creators to disclose realistic AI content. We’re always evolving our approach to stay current as the ecosystem evolves.”
How to solve it
The coalition draws on child development research to argue this isn’t a niche concern. Even adults can have trouble correctly identifying AI-generated content, getting it right only about 50% of the time. More troubling, repeated exposure makes people more likely to perceive AI imagery as real, even after being told it’s fake. For young children whose brains are still building foundational schemas of reality, the damage compounds over time.
Fairplay’s asks are structural, not cosmetic. The coalition is calling on YouTube to clearly label all AI-generated content across the platform; ban AI-generated content entirely from YouTube Kids; and prohibit AI-generated “made for kids” content on the main YouTube platform. Fairplay wants YouTube to bar its algorithm from recommending AI content to users under 18; introduce a parental toggle to disable AI content that is switched off by default; and halt all investment in AI-generated content targeting children.
That last demand takes direct aim at YouTube’s investment in Animaj, an AI-powered children’s entertainment studio backed by Google’s AI Futures Fund. “YouTube is essentially investing in harming babies through its purchase of Animaj,” Franz said.
In his statement to Fortune, Bullwinkle confirmed that YouTube is developing dedicated AI labels for YouTube Kids, though he did not provide a timeline. Mohan had already flagged “managing AI slop” as a top priority in his annual letter. “To reduce the spread of low-quality AI content, we’re actively building on our established systems that have been very successful in combating spam and clickbait, and reducing the spread of low-quality, repetitive content,” read the letter.
Bullwinkle also noted that the 15 channels mentioned in the Times article are not on YouTube Kids and that the platform removed videos that violated its child safety policies. But for Franz, that’s not good enough.
“It shouldn’t be up to individual researchers to point out a few channels as examples that are doing things that could potentially harm kids, and have that be the basis for what YouTube decides to kick off the platform. What we saw with Elsagate was that at that time, YouTube removed 150,000 videos from its platform and several hundred different channels,” Franz said. She was referencing a 2017 scandal in which thousands of videos on YouTube and YouTube Kids used familiar children’s characters, like Elsa from Frozen and Peppa Pig, to hide deeply disturbing content, including graphic violence, sexual themes, and drug use. The videos were dressed up with algorithm-friendly tags like “education” and “fun” to slip past filters and reach young children.
“So we know that YouTube has the capacity to monitor, track, and remove these videos at scale, but right now, they’re doing a Band-Aid approach, where the channels that are getting press coverage—it seems like those are the ones they’re going forward doing something about,” Franz continued. “But it’s not fixing the overall problem.”