Google's I/O developer conference in May 2024 demonstrated that the battle for AI dominance will be played out on smartphones.
The event included a flurry of announcements, but Google Android (Android) took centre stage, with the company introducing multiple user-friendly integrations of Google Gemini (Gemini) AI into Android and its services. The AI examples showcased at Google’s I/O were primarily focused on smartphones, including an early version of Google’s multimodal AI assistant, Project Astra.
This real-time assistant, demonstrated by Demis Hassabis, head of Google DeepMind, allows users to interact with their smartphones conversationally, identifying objects, finding lost items, and more. As rival OpenAI is rumoured to have partnered with tech giant Apple to bring its AI models to Apple iPhones, Google's inclusion of multimodal capabilities in its on-device AI model, Gemini Nano, gives it an early advantage.
User experience augmentation emerging as a use case for Google
The conference highlighted the company’s objective – to deeply integrate its Gemini AI model into all its software and hardware. The key use case that emerged at Google’s developer conference was to make day-to-day tasks easier for consumers via the apps and services they already use.
For instance, Gemini can now build travel itineraries by analysing flight and hotel details from a user’s email, in addition to gathering information from online sources. It can also interpret images, such as extracting event information from a picture and automatically adding it to a calendar, eliminating the need for manual input.
Voice will be a user experience pillar
Voice interaction is poised to be a cornerstone of the AI user experience. In a demonstration of Gemini's image interpretation capabilities, the 'Ask Photos' feature retrieved CEO Sundar Pichai's licence plate number by drawing on location and historical data in Google Photos.
This capability, coupled with the six billion photos uploaded to Google Photos daily, underscores the potential for AI to become an indispensable part of consumers' lives. Existing voice solutions, be it Apple Siri, Google Assistant, or Amazon Alexa, have not fully met user expectations in terms of command recognition, accuracy, and privacy. Whether Gemini's offering proves more effective than these incumbents will be a determining factor in its market success.
Potential for Google AI dominance
The company plans to roll out Gemini updates to 200 million devices by end-2024, an indication of the platform's potential reach. However, it is early days, and the most effective monetisation strategies are yet to be determined. Google has invested heavily in hardware because of AI's importance, but it has struggled to market its devices effectively to consumers.
Despite these challenges, Google has the best chance of dominating the AI space due to its extensive user base and growing device ecosystem.