Posted by Kateryna Semenova – Sr. Developer Relations Engineer
AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we're committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.
This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O '25:
#1 Leverage the efficiency of Gemini Nano for on-device AI experiences
For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model, designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits, such as local data processing and offline availability, at no additional cost for inference. To start integrating these capabilities, explore the ML Kit GenAI documentation, the sample on GitHub, and watch the "Gemini Nano on Android: Building with on-device GenAI" talk.
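As a rough sketch of what integration looks like, the snippet below summarizes an article on-device with the ML Kit GenAI summarization API. Class and builder names follow the ML Kit GenAI documentation, but verify them against the current API reference before shipping:

```kotlin
// Sketch: on-device summarization with the ML Kit GenAI APIs (Gemini Nano).
// Names are taken from the ML Kit GenAI summarization docs; confirm against
// the current reference, as the surface may have changed since I/O '25.
import android.content.Context
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions
import kotlinx.coroutines.guava.await

suspend fun summarizeArticle(context: Context, article: String): String {
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.ONE_BULLET)
        .build()
    val summarizer = Summarization.getClient(options)
    val request = SummarizationRequest.builder(article).build()
    // Inference runs entirely on-device; no network call, no per-request cost.
    return summarizer.runInference(request).await().summary
}
```

Because the model runs locally, you should first check feature availability on the device (and trigger a feature download if needed) before calling inference; the ML Kit docs cover that flow.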
#2 Seamlessly integrate on-device ML/AI with your own custom models
The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices and supports various frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you're looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
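For illustration, here is a minimal sketch of running an open model through the MediaPipe LLM Inference API. The model path is a placeholder assumption; you bundle or download a compatible `.task` model (for example, a Gemma variant) yourself, as described in the MediaPipe docs:

```kotlin
// Sketch: on-device text generation with the MediaPipe LLM Inference API,
// for devices where Gemini Nano is not available. The model path below is
// hypothetical; supply your own downloaded .task model bundle.
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

fun generateText(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma.task") // hypothetical path
        .setMaxTokens(512)
        .build()
    val llm = LlmInference.createFromOptions(context, options)
    // Blocking call returning the full response; a streaming variant with a
    // partial-result listener is also available in the API.
    return llm.generateResponse(prompt)
}
```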
Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, impacting the user experience. To improve this, we've launched Play for On-device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.
For more information, watch the "Small language models with Google AI Edge" talk.
#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic
For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They're easy to integrate into your Android app by using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API and for generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the "Enhance your Android app with Gemini Pro and Flash, and Imagen" session.
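As a sketch of the cloud path, the snippet below calls a Gemini Flash model through the Firebase AI Logic SDK. The model ID and backend choice are illustrative assumptions; check the Firebase AI Logic documentation for currently available models:

```kotlin
// Sketch: cloud inference with Gemini Flash via the Firebase AI Logic SDK.
// Assumes Firebase is already initialized in the app; the model name
// "gemini-2.5-flash" is illustrative — consult the docs for current IDs.
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

suspend fun askGemini(prompt: String): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash")
    // Inference runs in the cloud, so no model download is needed and any
    // device with a network connection is supported.
    return model.generateContent(prompt).text
}
```

Because Firebase AI Logic proxies the call for you, no API key is embedded in the app and no custom backend is required.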
These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples, and the technical session: "The future is now, with Compose and AI on Android XR".

Get inspired and start building with AI on Android today
We launched a new open-source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover more AI examples in our Android AI Sample Catalog.

Choosing the right Gemini model depends on understanding your specific needs and the model's capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, watch the AI on Android at I/O '25 playlist on YouTube and check out our documentation.
We're excited to see what you'll build with the power of Gemini!