Tested: Google’s Gemini can now control apps on your phone

Google’s Gemini AI can now navigate apps on your smartphone and complete tasks on your behalf. Allison Johnson reports for The Verge that the feature, currently in beta, is available on the Pixel 10 Pro and the Galaxy S26 Ultra, and is limited to a handful of food delivery and rideshare apps.

The feature runs in the background while users go about other activities. Gemini handles everything up to the final confirmation step, where users review and approve the order. In testing, the assistant proved surprisingly accurate, requiring few corrections before checkout.

The process is slow. One dinner order through Uber Eats took roughly nine minutes to complete. The assistant occasionally struggled to locate items clearly visible on screen. When failures occurred, they typically happened within the first two minutes, often due to missing permissions or incorrect settings.

More complex tasks showed greater promise. When given a vague prompt to book an Uber to the airport for a flight the following day, Gemini accessed the user’s calendar, identified the flight time, and suggested appropriate departure windows without detailed instructions.

Johnson notes a broader limitation: current apps are designed for humans, not AI. Visual clutter, photos, and promotional banners slow the assistant down. Google is working toward cleaner methods of AI integration, including the Model Context Protocol standard. Sameer Samat, Google’s head of Android, confirmed that the current screen-reading approach is a temporary solution until those alternatives are widely adopted.
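The contrast Johnson describes can be made concrete. Under MCP, an app exposes named tools that an assistant invokes with structured JSON-RPC calls rather than by visually tapping through its interface. A minimal sketch of such a call, assuming a hypothetical `order_food` tool (the tool name and its arguments are illustrative, not from any real delivery app):

```python
import json

# Sketch, not Google's implementation: how a delivery app exposing an MCP
# tool could let an assistant place an order via one structured call
# instead of navigating the on-screen UI.
request = {
    "jsonrpc": "2.0",        # MCP messages use JSON-RPC 2.0
    "id": 1,
    "method": "tools/call",  # standard MCP method for invoking a named tool
    "params": {
        "name": "order_food",  # hypothetical tool the app might expose
        "arguments": {
            "restaurant": "Example Pizza",
            "items": ["large margherita"],
            "confirm": False,  # leave the final confirmation to the user
        },
    },
}

payload = json.dumps(request)
print(payload)
```

A call like this skips the clutter that slows screen reading: there are no photos or banners to parse, only fields, which is why Samat frames screen reading as a stopgap.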
