Anthropic Will Train AI Models On Chat Transcripts Unless Users Opt Out
Anthropic has announced a major update to how it trains its AI models, confirming that new chat transcripts and coding sessions will now be included in training data unless users opt out. The change raises questions about data privacy, retention policies, and how users' choices will shape the future of Anthropic's AI systems. With data retained for up to five years, users are urged to review their options carefully before the September 28th deadline.
Anthropic AI Training Policy And What It Means For Users
Under the new policy, Anthropic will automatically use data from new or resumed chats and coding sessions for AI training unless individuals explicitly opt out. Importantly, past conversations that are not reopened remain excluded. The decision aligns with the company's broader goal of improving its Claude AI models but places the responsibility for managing preferences on users. For those concerned about privacy, it is a critical moment to decide whether their interactions should contribute to future model improvements.
Opt-Out Options And Data Retention Changes
Anthropic has also extended its data retention policy, keeping user data for up to five years if consent is given. Existing users will see a pop-up notification prompting them to either accept or decline the new terms, while new users must set preferences during sign-up. Although there is an option to delay the choice, all users must confirm their decision by September 28th. This shift highlights how AI companies are balancing transparency, innovation, and user control in the evolving landscape of data management.
Impact On Claude AI Users And Consumer Tiers
The updates apply across all consumer subscription tiers, including Claude Free, Pro, and Max, as well as Claude Code. However, commercial tiers such as Claude Gov, Claude for Work, Claude for Education, and API integrations are excluded. This division reflects Anthropic's strategic focus on refining its consumer-facing AI products while leaving enterprise-level users unaffected. Still, critics worry that users may click through and accept the terms without fully understanding them, raising concerns about informed consent and the long-term consequences of data-driven AI training.