AI company Anthropic will begin training its language models on user chat transcripts and coding sessions unless users actively opt out, The Verge reports.
All users must make a decision by September 28th via a pop-up notification. Clicking “Accept” allows Anthropic to begin training on their data immediately and to retain it for up to five years, a marked extension of the company's previously shorter retention period.
The policy applies to new and resumed conversations on the Claude Free, Pro, and Max subscription tiers. Past chats remain untouched unless users resume them. Commercial offerings, including Claude Gov, Claude for Work, and API access, are exempt from the changes.
New users will choose their preference during signup. Existing users can defer the decision temporarily but must respond by the September 28th deadline.