Security researcher Johann Rehberger has discovered a vulnerability in Google’s Gemini AI that allows attackers to plant false long-term memories in the chatbot. As reported by Dan Goodin in Ars Technica, the hack uses a technique called “delayed tool invocation” to bypass Google’s security measures. The attack works by embedding malicious instructions in documents that users ask Gemini to summarize. When users interact with specific trigger words, the chatbot saves unauthorized information to its long-term memory. While Google has classified the threat as low risk and low impact, the vulnerability demonstrates an ongoing challenge in protecting AI systems from indirect prompt injection attacks. The hack could potentially cause Gemini to retain and act on false information across multiple chat sessions.