Summary

Anthropic updated its privacy policy to allow training its models on user chats and coding sessions, introducing an opt-out mechanism and extended data retention for opted-in users.

Key quotes

Anthropic is prepared to repurpose conversations users have with its Claude chatbot as training data for its large language models—unless those users opt out.
For users who allow model training on their conversations, Anthropic increased the amount of time it holds onto user data from 30 days in most situations to a much more extensive five years.

The article explains that as of October 8, 2025, Anthropic began using new user chats and coding sessions for model training by default unless users opt out. Commercial, government, and educational plans are excluded from this policy change.