Google I/O has kicked off in Mountain View, California, with a flurry of announcements centered on AI. Leading off the presentations, CEO Sundar Pichai and other Google executives have been showcasing how the company is weaving its Gemini artificial intelligence model into a wide array of its online services, as Google continues to square off against OpenAI, Microsoft and other major contenders in the AI arena. Key announcements include real-time AI-powered translation in Google Meet, upgrades to Project Astra's computer vision capabilities, and improvements to Google's Imagen and Veo image and video generation models.
As the announcements continue on stage, follow our on-site correspondent Karissa Bell and the rest of the Engadget staff for real-time updates in our liveblog below. As usual, a developer-focused keynote will follow the main presentation (4:30PM ET / 1:30PM PT). We'll keep an eye on that session as well, but our liveblog will focus primarily on the day's major highlights.
You can watch the keynote via the embedded livestream or on the company's YouTube channel. Google is also hosting breakout sessions through May 21 covering a range of developer-focused topics.
As part of its I/O 2025 announcements, Google introduced new shopping features within AI Mode designed to streamline product discovery, virtual try-on and checkout. The tools are set to launch "in the coming months" for online shoppers in the United States. For instance, a search for a specific item, like a travel bag or a rug, will prompt Google's AI to display a visually rich panel tailored to the query.
Alongside those shopping tools, Google unveiled its Veo 3 AI model, the first version of Veo capable of generating videos with synchronized sound. It can produce realistic clips such as a video of birds complete with their songs, or a city street backed by traffic noise. For now, the model is available to Gemini Ultra subscribers in the US and to enterprise users on Vertex AI.
Also highlighted was Search Live, which will let users hold an interactive, back-and-forth conversation about whatever their camera sees. Point the camera at a complex equation or an unfamiliar concept, and Search will help solve the problem or explain it. The feature, which draws on Project Astra's capabilities, marks a significant step forward in multimodal interaction.
As the presentation winds down, attendees and viewers alike are left with high expectations for where Google's technology is headed, particularly its push to integrate advanced AI capabilities across its services.