Google has updated the Gemini app to generate images that draw on personal data from connected Google services. The feature combines Personal Intelligence with the Nano Banana 2 image model and, optionally, a user’s Google Photos library.
Previously, users had to write detailed prompts and upload reference photos manually to get relevant results. The update removes that step: Gemini now draws on context from connected Google apps to interpret simple prompts. A request like “design my dream house” will automatically reflect the user’s tastes as inferred from existing app data.
When users connect Google Photos to Personal Intelligence, Gemini can incorporate photos of people and pets that have already been labeled in the library. This allows prompts such as “create a claymation image of me and my family enjoying our favorite activity” to generate a specific, personalized result without any manual uploads.
Google acknowledges the feature may not always select the intended photo on the first attempt. Users can tap a Sources button to see which image guided the result, select a different reference photo, or correct the output through follow-up prompts.
On privacy, Google states that the Gemini app does not train its models directly on users’ private Google Photos libraries. Connecting Google apps to Gemini remains opt-in and can be changed in settings at any time.
Sources: Google, 9to5Google