
Anthropic Promises Claude Will Remain Ad-Free, Mocks ChatGPT Ads in Super Bowl Commercial

As OpenAI is making plans to introduce ads to ChatGPT, competitor Anthropic has promised to keep Claude ad-free. In a blog post today, the company said that there are "many good places for advertising," but a "conversation with Claude is not one of them."


According to Anthropic, including ads in Claude would not be in line with its mission of creating a helpful assistant for work and deep thinking. Anthropic claims that users should not need to second-guess whether an AI is being helpful or "subtly steering the conversation towards something monetizable."

There will be no ads or sponsored links in conversations with Claude, and Claude's responses will not be influenced by advertisers or include third-party product placements.

Our analysis of conversations with Claude (conducted in a way that keeps all data private and anonymous) shows that an appreciable portion involve topics that are sensitive or deeply personal: the kinds of conversations you might have with a trusted advisor. Many other uses involve complex software engineering tasks, deep work, or thinking through difficult problems. The appearance of ads in these contexts would feel incongruous and, in many cases, inappropriate.

Promising an ad-free experience could encourage people to choose Claude over OpenAI's ChatGPT. In January, OpenAI said that it would start testing ads in the United States for free and Go tier subscribers, though subscribers with higher paid tiers will not see ads. OpenAI claims that ads will be clearly labeled and will not influence the answers that ChatGPT provides, nor will the company provide conversation details to advertisers.

To further reinforce the difference between Claude's ad-free experience and ChatGPT's ad-supported experience, Anthropic plans to run a humorous Super Bowl commercial where a man gets an unwanted cougar dating ad after asking about his mother. "Ads are coming to AI," reads the video's text. "But not to Claude."


Anthropic plans to continue to monetize through enterprise contracts and paid subscriptions, with revenue reinvested in improving Claude. Anthropic will maintain a free tier, and the company says that it may also offer lower-cost subscription tiers and regional pricing in the future if there is demand for it. Claude Pro is priced at $20 per month, which is the same price as ChatGPT's higher-end Plus tier.

An ad-free Claude experience isn't a sure thing forever, as Anthropic gives itself an out in the blog post: "Should we need to revisit this approach, we'll be transparent about our reasons for doing so."
This article, "Anthropic Promises Claude Will Remain Ad-Free, Mocks ChatGPT Ads in Super Bowl Commercial" first appeared on MacRumors.com

Discuss this article in our forums

New Siri: Apple Almost Chose a Different Partner Before Google Gemini

In a recent interview with the tech podcast TBPN, Bloomberg's Mark Gurman revealed that Apple was initially "going to rebuild Siri around Claude," the large language model and chatbot developed by Anthropic. In the end, though, Apple announced that it had decided to use Google's Gemini platform instead.


According to Gurman, Apple went with Google due at least in part to money.

"Anthropic was holding them over a barrel," said Gurman, in a podcast clip shared by TBPN. "They wanted a ton of money from them, several billion dollars a year, and at a price that doubled on an annual basis for the next three years."

Nevertheless, Gurman said Apple currently "runs on Anthropic" internally.

"Anthropic is powering a lot of the stuff Apple's doing internally in terms of product development and a lot of their internal tools," he explained. "They have custom versions of Claude running on their own servers internally, too."

Apple was "not going to use Google" for the revamped Siri until "a few months ago," he said.

Apple announced that it plans to release a more personalized version of Siri powered by Google Gemini this year. It is expected to be part of iOS 26.4, which should enter beta testing in February and be released to the general public in March or April. The new-and-improved Siri likely requires an iPhone 15 Pro or newer.

Back in June 2024, Apple said the revamped Siri will have understanding of personal context, on-screen awareness, deeper in-app controls, and more. At the time, Apple showed an iPhone user asking Siri about their mother's flight and lunch reservation plans based on info retrieved from the Mail and Messages apps.


This article, "New Siri: Apple Almost Chose a Different Partner Before Google Gemini" first appeared on MacRumors.com


Anthropic Brings Automatic Memory to Claude Pro and Max Users

Anthropic today said it is updating the Claude chatbot with a new memory feature, which will put Claude on par with ChatGPT. With memory enabled, Claude will be able to recall past conversations.


Anthropic first added memory to Claude earlier this year, but with the initial implementation, Claude would only recall details when specifically asked. In August, Anthropic expanded the memory feature to allow Claude to automatically remember conversation details without a specific user request, but that functionality was limited to Team and Enterprise subscribers.

Claude's memory functionality is now expanding to all paid users, so Pro and Max subscribers can use the feature. Max users can turn it on now, while Pro subscribers will get access "over the coming days."

Memory is an opt-in feature that can be turned on in Claude's settings. There are options for "search and reference chats" and "generate memory from chat history." Claude offers an editable memory summary that users can view to see what Claude remembers from conversations.

In the projects section of Claude, each project will have a separate memory. The division ensures that different discussions remain distinct, allowing for separation of work and personal chats.
This article, "Anthropic Brings Automatic Memory to Claude Pro and Max Users" first appeared on MacRumors.com


Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out

Anthropic announced today that it is changing its Consumer Terms and Privacy Policy, with plans to train its AI chatbot Claude with user data.


New users will be able to opt out at signup. Existing users will receive a popup that allows them to opt out of Anthropic using their data for AI training purposes.

The popup is labeled "Updates to Consumer Terms and Policies," and when it shows up, unchecking the "You can help improve Claude" toggle will prevent chats from being used for training. Accepting the policy will allow all new or resumed chats to be used by Anthropic. Users will need to make a choice by September 28, 2025, to continue using Claude.

Opting out can also be done by going to Claude's Settings, selecting the Privacy option, and toggling off "Help improve Claude."

Anthropic says that the new training policy will allow it to deliver "even more capable, useful AI models" and strengthen safeguards against harmful usage like scams and abuse. The updated terms apply to all users on Claude Free, Pro, and Max plans, but not to services under commercial terms like Claude for Work or Claude for Education.

In addition to using chat transcripts to train Claude, Anthropic is extending data retention to five years. So if you opt in to allowing Claude to be trained with your data, Anthropic will keep your information for a five-year period. Deleted conversations will not be used for future model training, and for users who do not opt in to sharing data for training, Anthropic will continue keeping information for 30 days as it does now.

Anthropic says that a "combination of tools and automated processes" will be used to filter sensitive data, with no information provided to third parties.

Prior to today, Anthropic did not use conversations and data from users to train or improve Claude, unless users submitted feedback.
This article, "Anthropic Will Now Train Claude on Your Chats, Here's How to Opt Out" first appeared on MacRumors.com
