
AI-generated Asians were briefly unavailable on Instagram


Yesterday, I reported that Meta’s AI image generator was making everyone Asian, even when the text prompt specified another race. Today, I briefly had the opposite problem: I was unable to generate any Asian people using the same prompts as the day before.

The tests I did yesterday were on Instagram, via the AI image generator available in direct messages. In dozens of tries with prompts like “Asian man and Caucasian friend” and “Asian man and white wife,” the system produced only one accurate image — a picture of an Asian woman and a white man. Otherwise, it kept making everyone Asian.

After I initially reached out for comment yesterday, a Meta spokesperson asked for more details about my story, like when my deadline was. I responded and never heard back. Today, I was curious if the problem was resolved or if the system was still unable to create an accurate image showing an Asian person with their white friend. Instead of a slew of racially inaccurate pictures, I got an error message: “Looks like something went wrong. Please try again later or try a different prompt.”

Weird. Did I hit my cap for generating fake Asian people? I had a Verge co-worker try, and she got the same result.

I tried other even more general prompts about Asian people, like “Asian man in suit,” “Asian woman shopping,” and “Asian woman smiling.” Instead of an image, I got the same error message. Again, I reached out to Meta’s communications team — what gives? Let me make fake Asian people! (During this time, I was also unable to generate images using prompts like “Latino man in suit” and “African American man in suit,” which I asked Meta about as well.)

Forty minutes later, after I got out of a meeting, I still hadn’t heard back from Meta. But by then, the Instagram feature was working for simple prompts like “Asian man.” Silently changing something, correcting an error, or removing a feature after a reporter asks about it is fairly standard for many of the companies I cover. Did I personally cause a temporary shortage of AI-generated Asian people? Was it just a coincidence in timing? Is Meta working on fixing the problem? I wish I knew, but Meta never answered my questions or offered an explanation.

Whatever is happening over at Meta HQ, it still has some work to do — prompts like “Asian man and white woman” now return an image, but the system still screws up the races and makes them both Asian like yesterday. So I guess we’re back to where we started. I will keep an eye on things.

Screenshots by Mia Sato / The Verge


