In a major move dominating today’s AI news, Character.AI has officially announced it will ban users under 18 from accessing its artificial intelligence chatbots. The decision follows mounting criticism and legal challenges after multiple families, including Florida mother Megan Garcia, accused the company of failing to protect children from emotional harm caused by its platform.
Character.AI’s New Age Restriction Policy
Character.AI, founded in 2021, is a fast-growing startup known for offering personalized AI experiences through custom chatbots that mimic human conversations. However, concerns about minors forming emotional attachments and engaging in inappropriate chats with AI bots have prompted this drastic change.
Starting November 25, 2025, users must verify they are over 18 to continue using Character.AI. The platform will roll out an in-house age assurance model, supplemented by third-party verification tools such as Persona, which is also used by companies like LinkedIn and OpenAI.
The company said these steps mark “the biggest safety measure we’ve taken to date,” reflecting a wider industry push to make AI technology more responsible and transparent.
Megan Garcia’s Lawsuit and the Turning Point
The announcement comes after a devastating case brought by Megan Garcia, whose 14-year-old son, Sewell Setzer, died by suicide after prolonged interactions with a chatbot on the platform. Garcia’s lawsuit accused Character.AI of negligence and of failing to safeguard young users.
“This comes about three years too late,” Garcia told NBC News. “Sewell’s gone; I can’t get him back. I think he was collateral damage.”
Her lawsuit, the first of five similar cases, alleges that conversations with the AI chatbot contributed to her son’s mental health struggles. Two of the five suits claim the company’s AI played a direct role in circumstances that ended in a child’s suicide.
Broader Scrutiny of AI Platforms
The AI news today is not just about Character.AI—it’s part of a growing global debate over AI ethics, youth safety, and the emotional impact of machine interactions. Tech giants like Meta and OpenAI are also under pressure to tighten safety features, as parents and lawmakers call for stricter regulations on how AI chatbots interact with minors.
Last month, Garcia and other advocates urged Congress to create laws limiting children’s exposure to emotionally manipulative AI systems. Consumer advocacy group Public Citizen echoed this sentiment on X (formerly Twitter), writing, “Congress MUST ban Big Tech from making these AI bots available to kids.”
Safety Features and Data Transparency
Character.AI claims to have implemented new safety tools, including Parental Insights dashboards, filtered characters, and usage notifications. These measures aim to help parents monitor how their children engage with AI-driven conversations.
However, critics remain skeptical. Garcia has demanded that the company reveal how it uses data collected from minors, especially since Character.AI’s privacy policy allows user data to help train its models. While the company insists it doesn’t sell voice or text data, concerns over data exploitation persist.
A Step Forward, But Questions Remain
While Garcia and her attorney, Matt Bergman from the Social Media Victims Law Center, acknowledge that banning minors is a “step in the right direction,” they emphasize that true accountability requires more.
“The devil is in the details,” Bergman said. “But we would urge other AI companies to follow Character.AI’s example, even if they were late to the game.”
Garcia, determined to continue her legal fight, said, “I’m just one mother in Florida up against tech giants. It’s a David and Goliath situation—but I’m not afraid.”
What This Means for the Future of AI
This story marks a defining moment in today’s AI news, signaling a shift toward greater accountability among AI developers. As chatbots become integral to daily communication, regulators, parents, and tech companies will continue to grapple with the balance between innovation and safety.
The Character.AI ban also underscores a growing realization: AI tools designed for companionship and emotional support must meet the same ethical and safety standards as social media platforms.
Stay ahead in the evolving world of technology, AI, and innovation.
For more trending updates and startup insights, visit StartupNews.fyi.