Anthropic Brings Automatic Memory to Claude Pro and Max Users

Anthropic today said it is updating the Claude chatbot with a new memory feature, a capability that puts Claude on par with ChatGPT. With memory enabled, Claude will be able to recall details from past conversations.


Anthropic first added memory to Claude earlier this year, but in that initial implementation, Claude would only recall details when specifically asked. In August, Anthropic expanded the feature so that Claude could automatically remember conversation details without a specific user request, but until now, that functionality was limited to Team and Enterprise subscribers.

Claude's memory functionality is now expanding to all paid plans. Max users can turn it on immediately, while Pro subscribers will get access "over the coming days."

Memory is an opt-in feature that can be turned on in Claude's settings, where there are separate options to "search and reference chats" and "generate memory from chat history." Claude also offers an editable memory summary that lets users view and adjust what it remembers from their conversations.

In the Projects section of Claude, each project has its own separate memory. This division keeps different discussions distinct, making it possible to keep work and personal chats apart.
This article, "Anthropic Brings Automatic Memory to Claude Pro and Max Users" first appeared on MacRumors.com


Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out

Anthropic announced today that it is changing its Consumer Terms and Privacy Policy, with plans to train its AI chatbot Claude on user data.


New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes.

The popup is labeled "Updates to Consumer Terms and Policies." Unchecking the "You can help improve Claude" toggle before accepting disallows the use of chats, while accepting with the toggle enabled lets Anthropic use all new or resumed chats for training. Users will need to make a choice by September 28, 2025, to continue using Claude.

Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years, so users who opt in to training will have their information kept for that five-year period. Deleted conversations will not be used for future model training, and for users who do not opt in to sharing data, Anthropic will continue retaining information for 30 days, as it does now.

Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, and that no user information will be provided to third parties.

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude unless users submitted feedback.
This article, "Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out" first appeared on MacRumors.com
