Google has introduced AI Mode for its Search service in Singapore, expanding access to an artificial intelligence-powered search tool that delivers detailed answers to complex queries and supports multimodal interaction.
The AI Mode option, now available in English via Google Search, is powered by Gemini 2.5, the latest generation of Google’s language models. It enables users in Singapore to pose longer and more intricate questions, which the system breaks down into subtopics to deliver more comprehensive results. This update follows an initial experimental launch in the United States earlier this year.
AI Mode’s core capability is handling queries that would traditionally require several separate searches. Using a query fan-out approach, the system analyses a user’s question, subdivides it into smaller components and issues multiple underlying searches, then presents a unified, thorough response.
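Conceptually, a fan-out pipeline of this kind can be sketched as below. This is an illustrative outline only, not Google’s implementation; the function names and the static decomposition are hypothetical stand-ins, and the steps a real system would delegate to a language model are stubbed out.

```python
# Illustrative query fan-out: decompose, search in parallel, synthesise.
# All names are hypothetical; a production system would use a language
# model for the decompose and synthesise steps.
from concurrent.futures import ThreadPoolExecutor


def decompose(question: str) -> list[str]:
    """Split a complex question into narrower sub-queries."""
    # Static decomposition for the travel example; a model would do this.
    return [
        "safe weekend destinations near Singapore",
        "affordable September fares from Singapore",
        "September weather at short-haul destinations from Singapore",
    ]


def run_search(sub_query: str) -> dict:
    """Issue one underlying search; stubbed with a placeholder result."""
    return {"query": sub_query, "results": ["..."]}


def synthesise(question: str, findings: list[dict]) -> str:
    """Merge per-sub-query findings into one unified answer."""
    lines = [f"Answer to: {question}"]
    lines += [f"- {f['query']}: {len(f['results'])} result(s)" for f in findings]
    return "\n".join(lines)


def answer(question: str) -> str:
    sub_queries = decompose(question)
    # Fan out: run the underlying searches concurrently.
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(run_search, sub_queries))
    # Fan in: combine everything into a single response.
    return synthesise(question, findings)


print(answer("Compare safe and affordable weekend trips from Singapore in September"))
```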
Early trials showed that users submitted queries two to three times longer than the average search, suggesting the tool is well-suited for complex questions that demand more detailed information. It has already been applied to exploratory requests such as product comparisons, travel planning, and understanding step-by-step processes.
For example, entering a query like “Compare safe and affordable travel destinations from Singapore for a short weekend trip in September” prompts AI Mode to present options for destinations, estimated travel time, weather forecasts, and additional considerations. Follow-up questions, such as “Can you suggest some activities for a solo traveller?”, can further refine the answer and provide detailed, relevant recommendations.
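Follow-ups like this imply that context is carried from one turn to the next, so the refinement is interpreted against the original question’s constraints. A minimal sketch of that idea, with hypothetical names and no claim about how AI Mode actually manages state:

```python
# Hypothetical multi-turn context: each question is appended to a
# running history so follow-ups inherit the earlier constraints.
class Conversation:
    def __init__(self) -> None:
        self.turns: list[str] = []

    def ask(self, question: str) -> str:
        self.turns.append(question)
        # A real system would pass the full history to the model so a
        # follow-up keeps the original dates, budget, and destination.
        context = " | ".join(self.turns)
        return f"[answer conditioned on: {context}]"


chat = Conversation()
print(chat.ask("Compare safe and affordable weekend trips from Singapore in September"))
print(chat.ask("Can you suggest some activities for a solo traveller?"))
```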
Behind the scenes, this fan-out lets the tool draw on a wide range of online sources, including real-time information from Google’s Knowledge Graph and shopping data covering billions of products.
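One way to picture this, purely as an assumption about the architecture rather than a description of Google’s systems, is that different sub-queries are served by different backends: a knowledge graph for factual lookups, a product index for shopping data. The routing rule and lookup functions below are invented for illustration:

```python
# Hypothetical source routing for fan-out sub-queries. The backends
# and the keyword-based router are stand-ins, not Google's API.
from dataclasses import dataclass


@dataclass
class Result:
    source: str
    payload: str


def knowledge_graph_lookup(sub_query: str) -> Result:
    return Result("knowledge_graph", f"facts about: {sub_query}")


def shopping_lookup(sub_query: str) -> Result:
    return Result("shopping", f"product listings for: {sub_query}")


def route(sub_query: str) -> Result:
    # Naive keyword routing, purely for illustration.
    if any(w in sub_query.lower() for w in ("price", "buy", "deal")):
        return shopping_lookup(sub_query)
    return knowledge_graph_lookup(sub_query)


for q in ["best price for a universal travel adapter", "entry requirements for Malaysia"]:
    print(route(q))
```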
In addition to textual input, AI Mode allows searchers to ask questions by voice or by uploading or snapping an image. These multimodal capabilities are integrated into the Google app for both Android and iOS, making the feature accessible across devices.
The design aims to make searching more natural, whether through typing, speaking, or visual input, and is particularly useful for complex or longer queries or when typing isn’t convenient. Users can, for instance, tap the microphone icon and speak directly into AI Mode to generate responses.
Google Lens is also built into the experience, enabling visual searches. For example, a user who receives a plant as a gift can take a photo and ask what it is and how to care for it. AI Mode can recognise the plant and provide step-by-step instructions, along with links to relevant guides and resources.
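A visual query of this kind can be thought of as an image paired with a text question, with recognition turning the image into something the text pipeline can search. The sketch below is a hypothetical illustration; identify() stands in for a vision model, and none of this reflects the Lens API:

```python
# Hypothetical multimodal query: recognise the image subject, then
# hand the combined question to the usual text fan-out pipeline.
from dataclasses import dataclass


@dataclass
class MultimodalQuery:
    image_path: str
    question: str


def identify(image_path: str) -> str:
    """Stand-in for a vision model that labels the photo."""
    return "monstera deliciosa"  # placeholder classification


def answer(query: MultimodalQuery) -> str:
    subject = identify(query.image_path)
    # The recognised subject converts the visual input into a text
    # query that downstream search can handle.
    return f"Searching: '{query.question}' about {subject}"


print(answer(MultimodalQuery("plant.jpg", "How do I care for this?")))
```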
Beyond delivering relevant answers, the update is designed to improve content discovery. By allowing people to describe exactly what they are looking for in greater detail, AI Mode can surface a broader range of web content and formats, which traditional keyword searches might miss.
By pairing advanced model capabilities with Google’s existing information systems, the fan-out approach delivers a deeper dive into the web and a wider pool of content presented directly within Search.
The new feature is available immediately in Singapore on the Google homepage and app, with the option to activate AI Mode for eligible queries. The launch continues Google’s strategy of incrementally introducing AI-powered tools to international markets following initial testing phases elsewhere.