Meta has released a new standalone Meta AI app, built on its Llama 4 model. The app offers users personalized AI interactions through voice and text conversations, image generation capabilities, and web search functionality. Available on iOS and Android in select countries, the app represents Meta’s vision for making AI more personal and integrated into daily life.
The app’s most distinctive feature is its “Discover” feed, which adds a social dimension to AI interactions. Users can see how others, including their Facebook and Instagram friends, are using Meta AI, with the option to like, comment on, share, or remix these shared prompts and creations. This social layer aims to demonstrate AI’s practical applications and inspire new use cases.
Meta AI emphasizes personalization by remembering user preferences and interests. The system draws on information users have already shared on Meta platforms to deliver more relevant responses, with deeper personalization available for those who link their Facebook and Instagram accounts through Meta Accounts Center. These personalized features are currently limited to users in the US and Canada.
Voice interaction is central to the app’s design. Meta offers an experimental “full-duplex” voice mode intended to make conversations feel more natural: the AI generates speech directly rather than reading a written response aloud. Standard voice features are available in the US, Canada, Australia, and New Zealand.
The app also integrates with Ray-Ban Meta smart glasses, replacing the former Meta View companion app. This integration allows users to start a conversation on their glasses and continue it in the app or on the web, although conversations started in the app cannot yet be resumed on the glasses.