As long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin. But another threat entirely—that of kids forming unhealthy bonds with AI—is pulling AI safety out of the academic fringe and into regulators’ crosshairs.
This has been bubbling for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that their models contributed to the suicides of two…