Google DeepMind's Gemini Robotics team has introduced Gemini Robotics-ER 1.6, an advanced model designed to enhance robots' reasoning about the physical world. The upgrade significantly improves visual and spatial understanding, enabling robots to accurately identify objects in complex environments and confirm when a task is complete. One key feature is the ability to interpret complex instruments, such as analog gauges, demonstrated through a collaboration with Boston Dynamics in which robots process images during industrial inspections. Available immediately on Google AI Studio and via the Gemini API, the model gives developers new tools for building smarter robotic applications.
Gemini API: The Gemini API is Google’s programmatic interface for accessing the Gemini family of multimodal AI models, supporting agentic tools, function calling, and vision inputs. It enables seamless integration into robotics and other applications. This release makes Gemini Robotics-ER 1.6 available via the API for developers to enhance robots with physical reasoning.
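As context for how a developer might call the model, here is a minimal sketch assuming the `google-genai` Python SDK and a `GEMINI_API_KEY` environment variable; the model ID and prompt wording are illustrative assumptions, not confirmed by the announcement:

```python
def build_inspection_prompt(instrument: str) -> str:
    """Compose a hypothetical inspection prompt asking the model to read an
    instrument from an image and reply with structured JSON. The exact wording
    here is illustrative only."""
    return (
        f"You are inspecting industrial equipment. Read the {instrument} in the "
        "attached image and reply with JSON of the form "
        '{"instrument": "<name>", "value": <number>, "unit": "<unit>"}.'
    )


def read_gauge(image_bytes: bytes, instrument: str = "analog pressure gauge") -> str:
    """Send one image plus the inspection prompt to the Gemini API.
    Requires the google-genai package; the model ID below is an assumption."""
    from google import genai
    from google.genai import types

    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    response = client.models.generate_content(
        model="gemini-robotics-er-1.6",  # assumed ID; check the current model list
        contents=[
            types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
            build_inspection_prompt(instrument),
        ],
    )
    return response.text
```

Keeping the prompt builder separate from the network call makes the instruction text easy to test and reuse across instruments.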
Boston Dynamics: Boston Dynamics is a robotics company renowned for building dynamic mobile robots such as the Spot quadruped used in industrial patrols and inspections. It partners with Google DeepMind to integrate AI for enhanced autonomy in real-world tasks. The news demonstrates Spot leveraging Gemini Robotics-ER 1.6 to process complex analog dials by generating code to correct camera distortions.
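The gauge-reading workflow reportedly involves generating code to correct camera distortion before interpreting the dial. As general background (not Boston Dynamics' actual pipeline), a minimal sketch of the standard Brown-Conrady radial distortion model and its fixed-point inversion, with assumed camera parameters:

```python
import numpy as np


def radial_distort(points, k1, k2, center, focal):
    """Apply the Brown-Conrady radial model: normalize pixel coordinates,
    scale by 1 + k1*r^2 + k2*r^4, and map back to pixels."""
    pts = (np.asarray(points, dtype=float) - center) / focal
    r2 = np.sum(pts**2, axis=-1, keepdims=True)
    return pts * (1.0 + k1 * r2 + k2 * r2**2) * focal + center


def radial_undistort(points, k1, k2, center, focal, iters=10):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted normalized coordinates by the distortion factor evaluated
    at the current estimate. Converges for mild distortion."""
    distorted = (np.asarray(points, dtype=float) - center) / focal
    pts = distorted.copy()
    for _ in range(iters):
        r2 = np.sum(pts**2, axis=-1, keepdims=True)
        pts = distorted / (1.0 + k1 * r2 + k2 * r2**2)
    return pts * focal + center
```

With zero coefficients both functions reduce to the identity, which makes the round trip easy to sanity-check.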
Google AI Studio: Google AI Studio is a web-based developer platform from Google for building and testing applications with Gemini AI models, including interactive prompts and Colab notebooks. It facilitates rapid prototyping for AI integrations. Gemini Robotics-ER 1.6 is now live on Google AI Studio, providing developers with examples for embodied reasoning tasks.
Gemini Robotics-ER 1.6: Gemini Robotics-ER 1.6 is a vision-language model developed by Google DeepMind that brings agentic capabilities to robotics, specializing in embodied reasoning for physical environments. It supports advanced visual and spatial understanding, task planning, object localization, instrument reading, and safety-aware decision-making. In this upgrade, it enables robots to process multi-view scenes, confirm task completion, and handle industrial challenges like gauge reading on Boston Dynamics’ Spot.
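For object localization, Gemini Robotics-ER models are documented to return 2D points as JSON with coordinates normalized to a 0-1000 range. A hedged sketch of converting such output to pixel coordinates, assuming entries of the form `{"point": [y, x], "label": "..."}` (verify the format against the current API docs):

```python
import json


def points_to_pixels(response_text: str, width: int, height: int) -> dict:
    """Parse model point output and convert each normalized [y, x] pair
    (0-1000 range) into an (x, y) pixel coordinate keyed by label."""
    out = {}
    for entry in json.loads(response_text):
        y, x = entry["point"]
        out[entry["label"]] = (round(x / 1000 * width), round(y / 1000 * height))
    return out
```

For example, a point at `[500, 250]` in a 640x480 frame maps to the pixel (160, 240).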
```json
{
  "Developer Access": "Available immediately on Google AI Studio and Gemini API with Colab examples for configuring prompts and tool calls in robotics applications.",
  "Embodied Reasoning": "Gemini Robotics-ER 1.6 enhances spatial reasoning, world knowledge, and agentic vision to enable robots to read diverse instruments and navigate cluttered workshops.",
  "Industrial Collaboration": "Developed through partnership with Boston Dynamics, the model processes images during facility inspections to interpret distorted analog gauges."
}
```
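The tool calls mentioned above rely on the Gemini API's function-calling feature, where tools are declared with JSON-schema-style parameter descriptions. A hypothetical declaration for a gauge-reading tool; the function name and fields are illustrative assumptions:

```python
# Hypothetical function declaration in the JSON-schema style used by the
# Gemini API's function calling; all names and fields here are illustrative.
read_gauge_tool = {
    "name": "report_gauge_reading",
    "description": "Record the value read from an analog gauge.",
    "parameters": {
        "type": "object",
        "properties": {
            "gauge_id": {"type": "string", "description": "Identifier of the gauge."},
            "value": {"type": "number", "description": "Reading in the gauge's units."},
            "unit": {"type": "string", "description": "Unit printed on the gauge face."},
        },
        "required": ["gauge_id", "value"],
    },
}
```

Passing such declarations alongside a prompt lets the model emit structured tool calls instead of free text, which a robot's control loop can then validate and act on.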
