May 17, 2024
Google held its annual I/O developer conference on 14 May 2024. The event serves as Google’s platform for announcing new products, features and tech – and 2024 was no exception.
This year’s presentation was packed with software updates and advancements in artificial intelligence (AI). Google teased an array of new search functions and security features, and announced not one but two new Gemini models.
Missed the live stream? Not to worry; we’ve got you covered. Here are the biggest takeaways from Google I/O 2024.
When it comes to search, Google reigns supreme. Since 1998, the tech giant has continually pushed the boundaries, developing new capabilities and features that have kept it at the forefront of search.
In recent years, Google has utilised AI to provide a richer, more accessible and personalised search experience. So, it’s no surprise that AI was a dominant theme at I/O 2024. This year, the focus was on their new custom Gemini model, which combines Google’s search systems with Gemini’s advanced AI capabilities.
Here are four ways AI is enhancing and reshaping Google search.
Multi-Step Reasoning is a new feature designed to provide deeper insights into topics and deliver streamlined answers with more contextual depth. It enables users to pose intricate questions in a single query, eliminating the need for multiple searches.
During I/O 2024, Google demonstrated this feature using the example of planning a trip. They showcased how users can utilise Maps to seamlessly discover hotels, plan travel and explore dining options (including the ability to specify dietary requirements), all within a single search.
AI Overviews, formerly known as Search Generative Experience (SGE), is a new search function that’s set to overhaul results pages. Powered by a specialised Gemini model, AI Overviews answer queries via summaries. These summaries are generated using information from multiple sources, offering users a convenient overview of a topic that includes links to relevant sources.
The feature is already being rolled out in the United States and it’s expected to reach over a billion users by the end of 2024 (we’ve blogged about this in detail here). However, there is concern among website owners about its potential impact on their sites’ performance in SERPs (search engine results pages).
Google Lens is revolutionising visual search by introducing the ability to ask questions via video. Users no longer have to search using text, images or voice. Instead, they can find answers simply by pointing their camera and asking a question.
During a quick demo, Google showed how Lens can identify and troubleshoot problems, using a faulty turntable as an example. Lens quickly recognised the model and pinpointed the problem: the tonearm. It then provided both text and video instructions on how to adjust the tonearm and fix the issue.
Google’s Circle to Search is getting an innovative upgrade. Launched in January 2024, the feature lets users search by drawing a circle on their screen. Now, Circle to Search can also assist with academic tasks.
Android phone and tablet users can simply circle a maths problem and get help from Google. The problem is then broken down into manageable steps, helping students understand the theory and find the solution.
Next up: Gemini.
From its integration within search to the introduction of new models, Gemini stole the show at I/O 2024. Highlights included the new Gemini 1.5 Flash model, the Project Astra prototype, and the rollout of Gemini 1.5 Pro across Google Workspace.
Here’s what you need to know about the new Gemini models.
Following the launch of Gemini 1.0 in December 2023 and the introduction of 1.5 Pro in February 2024, Google has announced its latest model: Gemini 1.5 Flash.
Gemini 1.5 Flash has been designed for speed and efficiency. It’s a lightweight, cost-effective model that harnesses the power of 1.5 Pro, making it ideal for high-volume, low-latency tasks such as processing code, extracting data and captioning videos.
Step aside, ChatGPT – there’s a new chatbot in town: Project Astra. As Google’s latest chatbot innovation, Project Astra blurs the lines between chatbots and virtual assistants – Google calls it ‘a universal AI agent that is helpful in everyday life.’
Powered by multimodal AI, Project Astra extends beyond conversation. It’s a visual chatbot that can recognise landmarks and locations, interpret code and drawings, help with brainstorming and decipher your surroundings. Users simply point their phone camera and ask it questions about objects or their environment.
Google envisions that Astra will function as a do-everything virtual assistant, seamlessly integrating into users’ lives with its human-level assistance.
In addition to these new models, I/O 2024 also announced a host of new Gemini features:
Away from search, Gemini and chatbots, I/O 2024 introduced a range of other developments. Here are the highlights:
The conference also included numerous updates for developers. You can find a full list of these here on the Google for Developers blog.
Want to read about every update? Google has outlined all 100 announcements in this blog post.
As you can see, Google I/O 2024 unveiled a myriad of innovations and features. With advanced AI capabilities like Multi-Step Reasoning, cutting-edge Gemini models and revolutionary chatbots like Project Astra, Google continues to push the boundaries of AI and redefine the future of search.
Got questions about these updates? Or do you have any concerns about how they could impact your performance? Contact our experts to learn more and make sure your website is ready for the next generation of search.