AI

YouTube now lets you request removal of AI-generated content that simulates your face or voice

Meta is not the only company grappling with the rise of AI-generated content and how it affects its platform. YouTube also quietly rolled out a policy change in June that allows people to request the takedown of AI-generated or other synthetic content that simulates their face or voice. The change lets people request the removal of this type of AI content under YouTube's privacy request process. It's an expansion of the company's responsible AI agenda, first announced in November.

Instead of requesting the content be taken down for being misleading, like a deepfake, YouTube wants the affected parties to request the content's removal directly as a privacy violation. According to YouTube's recently updated Help documentation on the topic, the company requires first-party claims, outside a handful of exceptions, such as when the affected individual is a minor, lacks access to a computer, or is deceased.

Simply submitting the request for a takedown doesn’t necessarily mean the content will be removed, however. YouTube cautions that it will make its own judgment about the complaint based on a variety of factors.

For instance, it may consider whether the content is disclosed as being synthetic or made with AI, whether it uniquely identifies a person, and whether it could be considered parody, satire, or something else of value in the public interest. The company additionally notes that it may consider whether the AI content features a public figure or other well-known individual, and whether it shows them engaging in "sensitive behavior" like criminal activity, violence, or endorsing a product or political candidate. The latter is particularly concerning in an election year, when AI-generated endorsements could potentially swing votes.

YouTube says it will also give the content’s uploader 48 hours to act on the complaint. If the content is removed before that time has passed, the complaint is closed. Otherwise, YouTube will initiate a review. The company also warns users that removal means fully removing the video from the site and, if applicable, removing the individual’s name and personal information from the title, description and tags of the video, as well. Users can also blur out the faces of people in their videos, but they can’t simply make the video private to comply with the removal request, as the video could be set back to public status at any time.

The company didn’t broadly advertise the change in policy, though in March it introduced a tool in Creator Studio that allowed creators to disclose when realistic-looking content was made with altered or synthetic media, including generative AI. It also more recently began a test of a feature that would allow users to add crowdsourced notes that provide additional context on videos, like whether it’s meant to be a parody or if it’s misleading in some way.

YouTube is not against the use of AI, having already experimented with generative AI itself, including with a comments summarizer and conversational tool for asking questions about a video or getting recommendations. However, the company has previously warned that simply labeling AI content as such won’t necessarily protect it from removal, as it will still have to comply with YouTube’s Community Guidelines.

In the case of privacy complaints over AI material, YouTube won’t jump to penalize the original content creator.

“For creators, if you receive notice of a privacy complaint, keep in mind that privacy violations are separate from Community Guidelines strikes and receiving a privacy complaint will not automatically result in a strike,” a company representative last month shared on the YouTube Community site where the company updates creators directly on new policies and features.

In other words, YouTube’s Privacy Guidelines are different from its Community Guidelines, and some content may be removed from YouTube as the result of a privacy request even if it does not violate the Community Guidelines. While the company won’t apply a penalty, like an upload restriction, when a creator’s video is removed following a privacy complaint, YouTube tells us it may take action against accounts with repeated violations.


AI
by The Economic Times

IBM said Tuesday that it planned to cut thousands of workers as it shifts its focus to higher-growth businesses in artificial intelligence consulting and software. The company did not specify how many workers would be affected, but said in a statement the layoffs would "impact a low single-digit percentage of our global workforce." The company had 270,000 employees at the end of last year. The number of workers in the United States is expected to remain flat despite some cuts, a spokesperson added in the statement. A massive supplier of technology to…

AI
by The Economic Times

The number of Indian startups entering famed US accelerator and investor Y Combinator's startup programme might have dwindled to just one in 2025, down from the high of 2021, when 64 were selected. But not so for Indian investors, who are queuing up to find the next big thing in AI by relying on shortlists made by YC to help them filter their investments. In 2025, Indian investors have invested in close to 10 Y Combinator (YC) AI startups in the US. These include Tesora AI, CodeAnt, Alter AI and Frizzle, all with Indian-origin founders but based in…

by Techcrunch

Lovable, the Stockholm-based AI coding platform, is closing in on 8 million users, CEO Anton Osika told this editor during a sit-down on Monday, a major jump from the 2.3 million active users the company reported in July. Osika said the company, which was founded almost exactly one year ago, is also seeing "100,000 new products built on Lovable every single day."