Simon Willison writes about how ChatGPT’s extended memory feature, which references past conversations to provide personalized responses, is producing unexpected and unwanted results for users. He details how the feature, launched in April 2025, builds a comprehensive dossier of user interactions that automatically influences all future conversations without clear user control.
The feature, available only to Plus and Pro account holders, showed itself when ChatGPT unexpectedly added a “Half Moon Bay” sign to an image-generation prompt about a dog in a pelican costume, drawing on Willison’s location from previous, unrelated conversations. The author was surprised, having never requested this location-specific element.
Willison, a self-described “LLM power-user,” expresses frustration that the system undermines his ability to carefully control context in prompts. By maintaining and injecting detailed summaries of past conversations into new chats, ChatGPT creates an “extraordinarily detailed” profile of user interests, locations, and behaviors.
Using a specific prompt, Willison revealed the extensive metadata ChatGPT maintains, including his location, device information, conversation patterns, and topic interests. While users can opt out by disabling the feature in settings or archiving specific conversations, Willison suggests a better approach would be project-scoped memory that allows contextual recall only within relevant conversation groups.
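To make the project-scoped idea concrete, here is a minimal, hypothetical sketch of what such a memory store could look like; the class and method names are illustrative assumptions, not an existing ChatGPT feature or Willison’s implementation.

```python
# Hypothetical sketch of project-scoped memory: each note is tagged with a
# project ID, and recall never crosses project boundaries. Illustrative only.

from collections import defaultdict


class ProjectScopedMemory:
    def __init__(self) -> None:
        # Memories are partitioned by project rather than pooled globally.
        self._memories: dict[str, list[str]] = defaultdict(list)

    def remember(self, project_id: str, note: str) -> None:
        self._memories[project_id].append(note)

    def recall(self, project_id: str) -> list[str]:
        # Only notes from the same project are injected into a new chat.
        return list(self._memories[project_id])


memory = ProjectScopedMemory()
memory.remember("pelican-art", "User likes pelican-themed images.")
memory.remember("travel", "User lives near Half Moon Bay.")

# A new chat in the "pelican-art" project sees only pelican notes, not the
# user's location from the unrelated "travel" project.
print(memory.recall("pelican-art"))
```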
The underlying technology appears to be a system prompt enhancement rather than the RAG (Retrieval-Augmented Generation) pattern that Willison initially suspected.
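For readers unfamiliar with the distinction, the sketch below contrasts the two designs; the function names and data structures are assumptions for illustration, not OpenAI’s actual implementation. With a system prompt enhancement, a pre-built summary of past conversations rides along on every request; with RAG, only material retrieved as relevant to the current message would be injected.

```python
# Illustrative sketch only -- names and structure are assumptions, not
# OpenAI's actual implementation.

from dataclasses import dataclass


@dataclass
class MemoryStore:
    # A pre-computed dossier summarizing past conversations.
    user_summary: str
    # Individual conversation snippets, as a RAG system might index them.
    snippets: list[str]


def build_messages_system_prompt_style(store: MemoryStore, user_message: str) -> list[dict]:
    """System-prompt enhancement: the whole dossier is prepended to every
    request, whether or not it is relevant to the current question."""
    return [
        {"role": "system", "content": f"Known context about this user:\n{store.user_summary}"},
        {"role": "user", "content": user_message},
    ]


def build_messages_rag_style(store: MemoryStore, user_message: str) -> list[dict]:
    """RAG pattern: only snippets judged relevant to the current message are
    injected. Naive keyword overlap stands in for a real vector search."""
    query_terms = set(user_message.lower().split())
    relevant = [s for s in store.snippets if query_terms & set(s.lower().split())]
    context = "\n".join(relevant) if relevant else "No relevant prior context."
    return [
        {"role": "system", "content": f"Relevant prior context:\n{context}"},
        {"role": "user", "content": user_message},
    ]


if __name__ == "__main__":
    store = MemoryStore(
        user_summary="Lives near Half Moon Bay; often asks about pelicans and Python.",
        snippets=["Asked about pelican costumes for a dog.", "Discussed SQLite plugins."],
    )
    # The system-prompt style injects the location even for an unrelated image prompt.
    print(build_messages_system_prompt_style(store, "Draw my dog in a pelican costume"))
    print(build_messages_rag_style(store, "Draw my dog in a pelican costume"))
```

The practical difference is the one Willison describes: in the first design the dossier is always in context, so details like a home location leak into prompts that never asked for them.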