Gemini Robotics uses Google’s top language model to make robots more useful
Summary
The latest artificial intelligence (AI) model from Google DeepMind combines its best large language model with robotics.
It aims to give robots greater dexterity, the ability to work from natural-language commands, and the ability to generalise across tasks.
According to Stanford professor Jan Liphardt, this is the first time AI has been incorporated into advanced robots in this way.
The model is trained on both simulated and real-world data, enabling robots to navigate obstacles and understand commands such as “put the bananas in the clear container”.
Google DeepMind partnered with several robotics companies, including Agility Robotics and Boston Dynamics, on a new model, the Gemini Robotics-ER vision-language model.
The company also developed a constitutional AI mechanism for the model to improve its safety when operating around humans.