AI

Large language models can help home robots recover from errors without human help


There are countless reasons why home robots have found little success post-Roomba. Pricing, practicality, form factor and mapping have all contributed to failure after failure. Even when some or all of those are addressed, there remains the question of what happens when a system makes an inevitable mistake.

This has been a point of friction on the industrial level, too, but big companies have the resources to properly address problems as they arise. We can't, however, expect consumers to learn to program, or to hire someone who can, every time an issue arises. Thankfully, this is a great use case for LLMs (large language models) in the robotics space, as exemplified by new research from MIT.

A study set to be presented at the International Conference on Learning Representations (ICLR) in May purports to bring a bit of “common sense” into the process of correcting mistakes.

“It turns out that robots are excellent mimics,” the school explains. “But unless engineers also program them to adjust to every possible bump and nudge, robots don’t necessarily know how to handle these situations, short of starting their task from the top.”

Traditionally, when they encounter issues, robots will exhaust their pre-programmed options before requiring human intervention. This is an especially big issue in an unstructured environment like a home, where any number of changes to the status quo can adversely impact a robot's ability to function.

Researchers behind the study note that while imitation learning (learning to do a task through observation) is popular in the world of home robotics, it often can’t account for the countless small environmental variations that can interfere with regular operation, thus requiring a system to restart from square one. The new research addresses this, in part, by breaking demonstrations into smaller subsets, rather than treating them as part of a continuous action.

This, in turn, is where LLMs enter the picture, eliminating the requirement for the programmer to individually label and assign the numerous subactions.

“LLMs have a way to tell you how to do each step of a task, in natural language. A human’s continuous demonstration is the embodiment of those steps, in physical space,” says grad student Tsun-Hsuan Wang. “And we wanted to connect the two, so that a robot would automatically know what stage it is in a task, and be able to replan and recover on its own.”

The particular demonstration featured in the study involves training a robot to scoop marbles and pour them into an empty bowl. It’s a simple, repeatable task for humans, but for robots, it’s a combination of various small tasks. The LLMs are capable of listing and labeling these subtasks. In the demonstrations, researchers sabotaged the activity in small ways, like bumping the robot off course and knocking marbles out of its spoon. The system responded by self-correcting the small tasks, rather than starting from scratch.
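The core idea can be sketched in code. The toy below is not the MIT team's implementation: the subtask names and the `classify_stage` helper are hypothetical stand-ins for the LLM-derived labels and stage-grounding step the study describes. It only illustrates the recovery behavior, resuming from the subtask the robot actually finds itself in rather than restarting the whole sequence.

```python
# Toy sketch of subtask-level recovery (illustrative only, not the paper's code).
# A task is split into labeled subtasks; after a disturbance, the controller
# re-grounds itself in the current stage and resumes there.
SUBTASKS = ["reach_bowl", "scoop_marbles", "transport", "pour"]

def classify_stage(observation):
    """Stand-in for the LLM/grounding step: map an observation to the index
    of the subtask the robot is currently in. Here the 'observation' is just
    a dict flag set by the simulated disturbance below."""
    return SUBTASKS.index(observation["stage"])

def run_task(disturbance_at=None):
    """Execute subtasks in order; on a disturbance, replan from the stage
    the robot actually lands in instead of starting from the top."""
    executed = []
    i = 0
    while i < len(SUBTASKS):
        executed.append(SUBTASKS[i])
        if disturbance_at == SUBTASKS[i]:
            # e.g. marbles knocked out of the spoon mid-transport: the robot
            # is effectively back in the 'scoop_marbles' stage, so resume there.
            i = classify_stage({"stage": "scoop_marbles"})
            disturbance_at = None  # single disturbance for this demo
            continue
        i += 1
    return executed

print(run_task())                            # undisturbed run
print(run_task(disturbance_at="transport"))  # bumped while transporting
```

In the disturbed run, the robot repeats only `scoop_marbles` and `transport` before pouring, rather than re-executing `reach_bowl`, which is the subtask-level self-correction the demonstration highlights.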

“With our method, when the robot is making mistakes, we don’t need to ask humans to program or give extra demonstrations of how to recover from failures,” Wang adds.

It’s a compelling method to help one avoid completely losing their marbles.




AI
by The Economic Times

IBM said Tuesday that it planned to cut thousands of workers as it shifts its focus to higher-growth businesses in artificial intelligence consulting and software. The company did not specify how many workers would be affected, but said in a statement the layoffs would “impact a low single-digit percentage of our global workforce.” The company had 270,000 employees at the end of last year. The number of workers in the United States is expected to remain flat despite some cuts, a spokesperson added in the statement. A massive supplier of technology to…

AI
by The Economic Times

The number of Indian startups entering famed US accelerator and investor Y Combinator’s startup programme might have dwindled to just one in 2025, down from the high of 2021, when 64 were selected. But not so for Indian investors, who are queuing up to find the next big thing in AI by relying on shortlists made by YC to help them filter their investments. In 2025, Indian investors have invested in close to 10 Y Combinator (YC) AI startups in the US. These include Tesora AI, CodeAnt, Alter AI and Frizzle, all with Indian-origin founders but based in…

by TechCrunch

Lovable, the Stockholm-based AI coding platform, is closing in on 8 million users, CEO Anton Osika told this editor during a sit-down on Monday, a major jump from the 2.3 million active users number the company shared in July. Osika said the company — which was founded almost exactly one year ago — is also seeing “100,000 new products built on Lovable every single day.”