Meta AI app raises privacy concerns with extensive data collection

Meta’s new AI chatbot app is gathering extensive user data, potentially compromising privacy, according to Washington Post columnist Geoffrey A. Fowler. The app, which reached number two on iPhone’s free download charts, connects to Facebook and Instagram accounts, allowing it to access years of personal information.

Unlike competitors ChatGPT and Google Gemini, Meta AI saves everything users discuss by default, creating a “Memory file” of conversations that can include sensitive topics. Fowler’s tests revealed the app recorded interests in subjects like fertility techniques, divorce, and tax evasion, which were later difficult to fully delete.

The app offers limited privacy controls and no way to opt out of data collection up front. Users must manually delete their chat history and memory files after the fact, and even then, Meta warns that the information isn’t completely removed.

Meta spokesperson Thomas Richards defended the app, stating it provides “transparency and control throughout” while offering valuable personalization. However, privacy experts express concern about the data practices.

All conversations with Meta AI are also used to train the company’s AI systems, with no opt-out available. Additionally, Meta CEO Mark Zuckerberg has signaled plans to monetize the service with product recommendations and advertisements in the future.

Privacy advocates recommend using Meta AI only for “surface-level, fun prompts” rather than sharing personal information or concerns that users wouldn’t want broadly known.
