
UK details requirements to protect children from ‘toxic algorithms’


The UK is calling on search and social media companies to “tame toxic algorithms” that recommend harmful content to children, or risk billions in fines. On Wednesday, the UK’s media regulator Ofcom outlined more than 40 proposed requirements for tech giants under its Online Safety Act rules, including robust age-checks and content moderation measures aimed at better protecting minors online, in compliance with upcoming digital safety laws.

“Our proposed codes firmly place the responsibility for keeping children safer on tech firms,” said Ofcom chief executive Melanie Dawes. “They will need to tame aggressive algorithms that push harmful content to children in their personalized feeds and introduce age-checks so children get an experience that’s right for their age.”

Specifically, Ofcom wants to prevent children from encountering content related to eating disorders, self-harm, suicide, pornography, and any material judged violent, hateful, or abusive. Platforms also have to protect children from online bullying and promotions for dangerous online challenges, and must let children leave negative feedback on content they don’t want to see so they can better curate their feeds.

Bottom line: platforms will soon have to block content deemed harmful in the UK even if it means “preventing children from accessing the entire site or app,” says Ofcom.

The Online Safety Act allows Ofcom to impose fines of up to £18 million (around $22.4 million) or 10 percent of a company’s global revenue, whichever figure is greater. That means large companies like Meta, Google, and TikTok risk paying substantial sums. Ofcom warns that companies that don’t comply can “expect to face enforcement action.”

Companies have until July 17th to respond to Ofcom’s proposals before the codes are presented to Parliament. The regulator is set to release a final version in spring 2025, after which platforms will have three months to comply.


by The Verge

Anthropic is one of the world’s leading AI model providers, especially in areas like coding. But its AI assistant, Claude, is nowhere near as popular as OpenAI’s ChatGPT. According to chief product officer Mike Krieger, Anthropic doesn’t plan to win the AI race by building a mainstream AI assistant. “I hope Claude reaches as many people as possible,” Krieger told me onstage at the HumanX AI conference earlier this week. “But I think, [for] our ambitions, the critical path isn’t through mass-market consumer adoption right now.” Instead,…

by The Verge

Meta will begin testing its X-style Community Notes starting March 18th. The feature will roll out on Facebook, Instagram, and Threads in the US, but Meta won’t make the notes public at first as it tests the Community Notes writing and rating system. Meta first announced plans to replace its fact-checking program with Community Notes in January, saying it would be “less prone to bias.” So far, around 200,000 potential contributors have signed up for the waitlist. Not everyone will be able to write and rate Community Notes at launch, as…

by The Verge

An arbitrator has decided in favor of Meta in a case the company brought against Sarah Wynn-Williams, the former Meta employee who wrote a memoir published this week detailing allegations of misconduct at the company. Macmillan Publishers and its imprint that published the memoir, Flatiron Books, were also named as respondents. The memoir, titled Careless People: A Cautionary Tale of Power, Greed, and Lost Idealism, details allegations of sexual harassment, including by current policy chief Joel Kaplan, who was her boss, according to NBC News. In…