Google DeepMind’s new AI models help robots perform physical tasks, even ones they weren’t trained on
Summary
Artificial intelligence (AI) company Google DeepMind has launched two models to help robots perform a wider range of real-world tasks.
Gemini Robotics is a vision-language-action model designed to recognise and understand new scenarios, even ones it hasn’t encountered in training.
It makes robots more dexterous, allowing them to carry out more precise tasks, and improves how they interact with people and their environments.
Google DeepMind is also launching Gemini Robotics-ER, an advanced vision-language model designed to provide the reasoning and understanding needed for complex, ever-changing situations, such as packing a lunch box.
The company is working with Apptronik to build the next generation of humanoid robots, and is giving companies including Agile Robots, Agility Robotics, Boston Dynamics and Enchanted Tools access to the models.