
Google Maps integrates Gemini 2.5 model
After launching Gemini 2.5, Google continues to bring artificial intelligence to one of its most familiar applications, Google Maps. For the first time, the map can not only show directions but also listen, understand, and respond in natural language, a step that turns the navigation tool into a genuine conversational assistant.
Google Maps gets smarter
According to Google DeepMind's official blog (November 2025), Gemini 2.5, an AI model that can operate browsers and process real-world data, has begun testing as an integration in Google Maps. The goal is to let users "naturally interact with the map" instead of manually typing in each location or complex option.
Gemini can understand requests like: "Find a nearby restaurant that's open late, but avoid road construction" or "Take me home but stop by a pharmacy on the way." The system will automatically read real-time traffic data, determine the speaker's location, and suggest the best route.
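How such a request might be handled can be sketched in miniature: the spoken sentence is first converted into a structured route request (destination, waypoints, things to avoid), which a routing engine can then act on. The `RouteRequest` structure and the keyword-based `parse_request` below are purely illustrative assumptions, not Google's actual pipeline; a real system would use the language model itself, not keyword rules.

```python
from dataclasses import dataclass, field

@dataclass
class RouteRequest:
    """Structured form a model might extract from a spoken request (hypothetical)."""
    destination: str
    waypoints: list = field(default_factory=list)
    avoid: list = field(default_factory=list)

def parse_request(utterance: str) -> RouteRequest:
    """Toy intent extractor: keyword rules stand in for the language model."""
    text = utterance.lower()
    req = RouteRequest(destination="home" if "home" in text else "unknown")
    if "pharmacy" in text:
        req.waypoints.append("pharmacy")      # "stop by a pharmacy on the way"
    if "construction" in text:
        req.avoid.append("road_construction")  # "avoid road construction"
    return req

req = parse_request("Take me home but stop by a pharmacy on the way")
print(req.destination, req.waypoints)  # home ['pharmacy']
```

The point of the structured intermediate form is that the same routing engine can serve both typed and spoken input: only the extraction step changes.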
"Gemini is trained to understand not just words but also user intent in real space," Google describes in a post introducing the "Ask Gemini in Maps" feature.
According to Tuoi Tre Online's research, the first test version is being deployed in the US, Canada and the UK, with plans to expand to Asian countries in 2026. Some Android users have been able to activate the voice command "Ask Gemini in Maps" to give commands or chat directly.
What's special is that Gemini does not merely look up the map: it also analyzes Street View images, community reviews, travel habits and weather data, so the map can say natural sentences like "Turn right at Mrs. Bay's coffee shop ahead" instead of "Turn right after 200 meters".
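The landmark-based phrasing described above can be sketched as a small fallback rule: if a named landmark sits close enough to the turn point, announce the turn by landmark; otherwise fall back to a distance. The function names, the coordinate scheme, and the 80-metre threshold are all assumptions for illustration, not details of Google's implementation.

```python
import math

def nearest_landmark(turn_point, landmarks, max_dist=80.0):
    """Return the closest named landmark within max_dist metres of the turn, or None."""
    best, best_d = None, max_dist
    for name, (x, y) in landmarks.items():
        d = math.hypot(x - turn_point[0], y - turn_point[1])
        if d < best_d:
            best, best_d = name, d
    return best

def phrase_turn(direction, distance_m, turn_point, landmarks):
    """Prefer a landmark-based instruction; fall back to plain distance."""
    landmark = nearest_landmark(turn_point, landmarks)
    if landmark:
        return f"Turn {direction} at {landmark}"
    return f"Turn {direction} after {distance_m} meters"

landmarks = {"Mrs. Bay's coffee shop": (10.0, 15.0)}
print(phrase_turn("right", 200, (12.0, 14.0), landmarks))
# Prints: Turn right at Mrs. Bay's coffee shop
```

The interesting part is the data source, not the code: the landmark names would come from the Street View and reviews analysis the article mentions.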
"Listen - understand - respond" technology and personalization progress
Technically, Gemini in Maps applies a multimodal model - combining language processing, images and location data to "understand" the world the way humans perceive it.
The system uses grounding techniques (associating language with specific objects and locations) to allow AI to understand natural sentences like "go towards the sunset" or "pass the intersection with the red sign".
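A minimal sketch of that grounding idea: match the words of a spoken phrase against the names and attributes of objects detected in the scene, and pick the best-scoring object. The scene representation and scoring below are assumptions for illustration; production grounding uses learned multimodal embeddings rather than word overlap.

```python
def ground_phrase(phrase, scene_objects):
    """Link a natural-language phrase to the best-matching visible object.

    scene_objects: list of dicts with a 'name' string and an 'attributes' set,
    standing in for the output of a visual detector (hypothetical format).
    """
    words = set(phrase.lower().split())
    best, best_score = None, 0
    for obj in scene_objects:
        tokens = set(obj["name"].lower().split()) | set(obj["attributes"])
        score = len(words & tokens)  # count shared words, e.g. "red", "sign"
        if score > best_score:
            best, best_score = obj, score
    return best["name"] if best else None

scene = [
    {"name": "stop sign", "attributes": {"red", "octagonal"}},
    {"name": "oak tree", "attributes": {"green", "tall"}},
]
print(ground_phrase("the intersection with the red sign", scene))
# Prints: stop sign
```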
Compared with the previous Google Assistant, Gemini responds roughly twice as fast and relies less on the cloud thanks to on-device AI on Pixel and high-end Android devices. This allows near-instant responses and better privacy protection, as many voice commands are processed directly on the user's device.
Tech observers say the arrival of Gemini in Maps could turn the map into a true conversational interface, where users can ask questions, adjust routes, or learn about locations hands-free.
This trend also opens a new phase: AI not only answers questions, but also begins to act on behalf of humans in the real world.
Google has yet to announce an official release date, but experts predict that 2026 will mark the expansion of Gemini's deeper integration into the Android ecosystem, from directions and ride booking to personal travel management.
Source: https://tuoitre.vn/google-maps-nang-cap-biet-nghe-va-tro-chuyen-voi-nguoi-dung-20251107112218748.htm