Top 3 Updates for Building with AI on Android at Google I/O ‘24 (2024)

Posted by Terence Zhang – Developer Relations Engineer

At Google I/O, we unveiled a vision of Android reimagined with AI at its core. As Android developers, you're at the forefront of this exciting shift. By embracing generative AI (Gen AI), you'll craft a new breed of Android apps that offer your users unparalleled experiences and delightful features.

Gemini models are powering new generative AI apps both in the cloud and directly on-device. You can now build with Gen AI using our most capable cloud models with the Google AI client SDK or Vertex AI for Firebase in your Android apps. For on-device, Gemini Nano is our recommended model. We have also integrated Gen AI into developer tools: Gemini in Android Studio supercharges your developer productivity.

Let’s walk through the major announcements for AI on Android from this year's I/O sessions in more detail!

#1: Build AI apps leveraging cloud-based Gemini models

To kickstart your Gen AI journey, design the prompts for your use case with Google AI Studio. Once you are satisfied with your prompts, integrate the Gemini API directly into your app to access Google’s latest models such as Gemini 1.5 Pro and 1.5 Flash, both with one million token context windows (with two million available via waitlist for Gemini 1.5 Pro).

If you want to learn more about and experiment with the Gemini API, the Google AI SDK for Android is a great starting point. For integrating Gemini into your production app, consider using Vertex AI for Firebase (currently in Preview, with a full release planned for Fall 2024). This platform offers a streamlined way to build and deploy generative AI features.
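
To make this concrete, here is a minimal sketch of calling a cloud-hosted Gemini model from a Kotlin app with the Google AI client SDK. The model name, the dependency version, and the GEMINI_API_KEY build field are illustrative assumptions rather than fixed requirements:

    // Gradle dependency (version is illustrative):
    // implementation("com.google.ai.client.generativeai:generativeai:0.7.0")
    import com.google.ai.client.generativeai.GenerativeModel

    suspend fun summarizeNote(noteText: String): String? {
        // Gemini 1.5 Flash over the cloud. The API key is assumed to be injected
        // at build time (for example via a secrets Gradle plugin), not hard-coded.
        val model = GenerativeModel(
            modelName = "gemini-1.5-flash",
            apiKey = BuildConfig.GEMINI_API_KEY
        )
        val response = model.generateContent("Summarize the following note:\n$noteText")
        return response.text
    }

For a production app, Vertex AI for Firebase exposes a very similar generateContent call without shipping a raw API key inside the client, which is why it is the recommended path once you move beyond prototyping.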

We are also launching the first Gemini API Developer competition (terms and conditions apply). Now is the best time to build an app integrating the Gemini API and win incredible prizes! A custom DeLorean, anyone?


#2: Use Gemini Nano for on-device Gen AI

While cloud-based models are highly capable, on-device inference works offline, delivers low-latency responses, and ensures that data never leaves the device.

At I/O, we announced that Gemini Nano will be getting multimodal capabilities, enabling devices to understand context beyond text – like sights, sounds, and spoken language. This will help power experiences like TalkBack, helping people who are blind or have low vision interact with their devices via touch and spoken feedback. Gemini Nano with Multimodality will be available later this year, starting with Google Pixel devices.

We also shared more about AICore, a system service managing on-device foundation models, enabling Gemini Nano to run on-device inference. AICore provides developers with a streamlined API for running Gen AI workloads with almost no impact on the binary size while centralizing runtime, delivery, and critical safety components for Gemini Nano. This frees developers from having to maintain their own models, and allows many applications to share access to Gemini Nano on the same device.
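
As a sketch of what on-device Gen AI looks like from app code, the snippet below assumes the API shape of the experimental Google AI Edge SDK that fronts AICore. The package and class names and the configuration fields are assumptions that may differ from what ultimately ships, and access to Gemini Nano today goes through the Early Access Program on supported devices:

    // Illustrative only: these package/class names follow the experimental
    // Google AI Edge SDK and are assumptions, not a stable public API.
    import android.content.Context
    import com.google.ai.edge.aicore.GenerativeModel
    import com.google.ai.edge.aicore.generationConfig

    suspend fun summarizeOnDevice(appContext: Context, noteText: String): String? {
        // Inference is delegated to Gemini Nano via the AICore system service,
        // so the prompt and the response never leave the device.
        val model = GenerativeModel(
            generationConfig = generationConfig {
                context = appContext        // application context, required by AICore
                temperature = 0.2f
                maxOutputTokens = 256
            }
        )
        val response = model.generateContent("Summarize:\n$noteText")
        return response.text
    }

Note how the call site mirrors the cloud SDK: there is no API key, and the model is resolved, delivered, and updated by AICore rather than bundled with the app.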

Gemini Nano is already transforming key Google apps, including Messages and Recorder to enable Smart Compose and recording summarization capabilities respectively. Outside of Google apps, we're actively collaborating with developers who have compelling on-device Gen AI use cases and signed up for our Early Access Program (EAP), including Patreon, Grammarly, and Adobe.

Adobe is one of these trailblazers: it is exploring Gemini Nano to enable on-device processing for part of its AI assistant in Acrobat, providing one-click summaries and allowing users to converse with documents. By strategically combining on-device and cloud-based Gen AI models, Adobe optimizes for performance, cost, and accessibility. Simpler tasks like summarization and suggesting initial questions are handled on-device, enabling offline access and cost savings. More complex tasks such as answering user queries are processed in the cloud, ensuring an efficient and seamless user experience.
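
The hybrid pattern described here can be expressed as a thin routing layer in app code. The sketch below is a generic illustration, not Adobe's implementation; the task names and the two suspend functions it delegates to (one backed by an on-device model, one by a cloud model) are hypothetical:

    // Generic illustration of routing Gen AI work between on-device and cloud models.
    // TaskKind and the injected functions are hypothetical names for this sketch.
    enum class TaskKind { SUMMARIZE, SUGGEST_QUESTIONS, ANSWER_QUERY }

    class AssistantRouter(
        private val onDevice: suspend (prompt: String) -> String?,  // e.g. a Gemini Nano call
        private val cloud: suspend (prompt: String) -> String?      // e.g. a Gemini 1.5 call
    ) {
        suspend fun run(kind: TaskKind, prompt: String): String? = when (kind) {
            // Lightweight tasks stay on-device: they work offline and avoid per-request cost.
            TaskKind.SUMMARIZE, TaskKind.SUGGEST_QUESTIONS -> onDevice(prompt)
            // Open-ended Q&A goes to the larger cloud model for answer quality.
            TaskKind.ANSWER_QUERY -> cloud(prompt)
        }
    }

Keeping the routing decision in one place makes it easy to adjust which tasks run where as on-device models gain capability.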

This is just the beginning: later this year, we'll be investing heavily to enable even more developers, and we aim to launch with them.

To learn more about building with Gen AI, check out the I/O talks Android on-device GenAI under the hood and Add Generative AI to your Android app with the Gemini API, along with our new documentation.


#3: Use Gemini in Android Studio to help you be more productive

Besides powering features directly in your app, we’ve also integrated Gemini into developer tools. Gemini in Android Studio is your Android coding companion, bringing the power of Gemini to your developer workflow. Thanks to your feedback since its preview as Studio Bot at last year’s Google I/O, we’ve evolved our models, expanded to over 200 countries and territories, and now include this experience in stable builds of Android Studio.

At Google I/O, we previewed a number of features available to try in the Android Studio Koala preview release, like natural-language code suggestions and AI-assisted analysis for App Quality Insights. We also shared an early preview of multimodal input using Gemini 1.5 Pro, allowing you to upload images as part of your AI queries, enabling Gemini to help you build fully functional Compose UIs from a wireframe sketch.


You can read more about the updates here, and make sure to check out What’s new in Android development tools.

FAQs

What is the new AI feature in Google Search? ›

Google recently unveiled plans to integrate its search engine with artificial intelligence (AI). The company is debuting a new search engine feature called A.I. Overviews, which generates an overview of the topic a user searches and displays links to learn more.

How do I turn on AI on Google? ›

Turn on or off “AI Overviews and more” in Search Labs
  1. On your Android device, open the Google app, Chrome, or Firefox.
  2. Make sure you're signed in to your Google Account with Incognito mode turned off.
  3. At the top, tap Labs, then Manage.
  4. Toggle “AI Overviews and more” on or off.

How to get AI results in Google? ›

"AI Overviews and more" in Search Labs

People who turn on this experiment may see AI Overviews on more Google searches and may have access to additional generative AI features in Search. You'll see the Search Labs icon next to AI Overviews when you've opted into the “AI Overviews and more” experiment in Search Labs.

What is the Gemini in Google Search results? ›

Unlike traditional Google search results, which provide snippets of information or a list of links, Gemini works to deliver detailed and contextual responses. Google announced Gemini (but called it “Bard”) back in February 2023, conveniently after OpenAI and Microsoft announced their own AI chatbot systems.

What is the Google latest update in 2024? ›

Google has announced the March 2024 Core Update and multiple spam policy updates, which aim to enhance the quality of its search results. Unveiled on 5th March 2024, this first Google Algorithm core update of the year, coupled with new spam policies, is seemingly big as it involves upgrading multiple ranking systems.

What's the AI everyone is using? ›

As a leader in the AI space, Google Assistant is considered to be one of the most advanced virtual assistants of its kind on the market. Using natural language processing, it supports both voice and text commands, and can handle everything from internet searches to voice-activated control of other devices.

Is Chrome getting 3 new generative AI features? ›

Google is adding three new generative AI-powered features to the world's most popular internet browser, Chrome. These include a new way to automatically group tabs, a custom background generator and an AI-powered writing assistant.

How to make AI in Android? ›

  1. Step 1: Problem identification and setting goals.
  2. Step 2: Preparation of data.
  3. Step 3: Choosing the right tools and frameworks.
  4. Step 4: Designing and training/fine-tuning AI model.
  5. Step 5: Model integration into the app.
  6. Step 6: Model testing and iteration.
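
Step 5 in this list, integrating the model into the app, is where code comes in. As one minimal example of on-device integration, the sketch below loads a hypothetical TensorFlow Lite model bundled in the app's assets and runs a single inference; the file name and tensor shapes are assumptions for illustration:

    // Gradle dependencies (coordinates shown for reference):
    // implementation("org.tensorflow:tensorflow-lite")
    // implementation("org.tensorflow:tensorflow-lite-support")
    import android.content.Context
    import org.tensorflow.lite.Interpreter
    import org.tensorflow.lite.support.common.FileUtil

    // Illustrative only: "classifier.tflite" and the 1xN float input / 1x10 float
    // output shapes are assumptions about the bundled model.
    fun runInference(context: Context, input: FloatArray): FloatArray {
        val modelBuffer = FileUtil.loadMappedFile(context, "classifier.tflite")
        val interpreter = Interpreter(modelBuffer)
        val output = Array(1) { FloatArray(10) }   // assumes a 1x10 output tensor
        interpreter.run(arrayOf(input), output)    // assumes a 1xN float input tensor
        interpreter.close()
        return output[0]
    }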

Does Chrome have built-in AI? ›

To support built-in AI in Chrome, we created infrastructure to access foundation and expert models for on-device execution. This infrastructure is already powering innovative browser features, such as Help me write, and will soon power APIs for on-device AI.

How do I enable Google generative AI? ›

Enabling the Google Generative AI Search Feature
  1. Open Your Browser: Go to the Google homepage.
  2. Head to Settings: Located at the bottom right corner.
  3. Select Search Settings: Find it in the dropdown menu.
  4. Enable Google AI Search: Scroll to this section and turn it on.

Do I have AI on my phone? ›

You're Likely Already Using AI

If your smartphone is relatively new and updated with the latest operating system, you're using AI-powered tools without even knowing it. All of the major virtual assistants (Siri, Google Assistant, etc.) are built on AI.

Is Google's AI free? ›

The free tier of Google Cloud AI Platform allows users to access various features, including AutoML for model training and deployment, AI Platform Notebooks for collaborative Jupyter notebooks, and AI Platform Prediction for serving machine learning models in production environments.

What is Google's new search AI? ›

Soon, when you're looking for ideas, Search will use generative AI to brainstorm with you and create an AI-organized results page that makes it easy to explore. You'll see helpful results categorized under unique, AI-generated headlines, featuring a wide range of perspectives and content types.

What is Google Alpha Code 2? ›

AlphaCode 2 solves problems by first tapping a family of “policy models” that generate a number of code samples for each problem. Code samples that don't fit the problem description are filtered out, and a clustering algorithm groups “semantically similar code samples” to avoid any redundancies.

How to enable generative AI in Google Search? ›

How to Sign Up for the Google Generative AI Search Feature
  1. Visit the Google Labs Website: Start by logging in with your Google account.
  2. Find the Generative AI Section: Look for the “Sign Up” button.
  3. Agree to the Terms: Read the terms carefully, then hit “Submit”.
  4. Registration Complete: That's it!

What is the new search feature in Google? ›

Lens search with video enables users to record a video and ask questions while recording; Google will respond based on the questions asked. “We're able to take visual search to a whole new level, with the ability to ask questions with video,” Reid said.

What is the AI used in Google? ›

A majority of Google's products and services use Google AI research. Much of the technology emerging from Google AI research is incorporated into Google products, such as Google Search and Google Translate. Many Google products using Google AI come preinstalled on Android phones, such as Google Maps.

What is Google new AI chatbot? ›

AI chatbots are trained on large amounts of data and use ML to intelligently generate a wide range of non-scripted, conversational responses to human text and voice input. Virtual agents are AI bots that can be specifically trained to interact with customers in call centers or contact centers.
