Google Launches Gemini Import Tool for Switching From ChatGPT, Claude, and Other AI Apps

Google is adding a new memory import feature to Gemini, making it easier for customers to switch to Gemini AI from another AI service. Users can import memories, context, and chat history from other AI apps.


Importing memory will provide Gemini with an understanding of a user's preferences, relationships, and personal context. Google says that Gemini will understand the same key facts that have been shared with other apps, so there is no need to start over from scratch when moving to Gemini from another AI service.

The import option can be accessed through the Gemini settings, and it will provide a prompt to copy and paste into an existing AI app. The prompt will ask the AI to generate a preferences summary that can be pasted into Gemini.

Google will also allow users to import their full chat history in a ZIP format, with support for searching past conversation threads and building on those threads with Gemini.
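As a rough illustration of what importing a chat-history ZIP might involve, here is a minimal Python sketch. The `conversations.json` filename and thread layout are assumptions made for illustration only; real exports differ between AI services, and Google has not published the format Gemini expects.

```python
import io
import json
import zipfile

def search_chat_archive(zip_bytes, query):
    """Search past conversation threads in an exported chat-history ZIP.

    Assumes the archive contains a conversations.json file listing threads,
    each with a title and a list of message strings (a hypothetical layout).
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        threads = json.loads(archive.read("conversations.json"))
    needle = query.lower()
    return [
        thread["title"]
        for thread in threads
        if needle in thread["title"].lower()
        or any(needle in message.lower() for message in thread["messages"])
    ]
```

Searching imported threads this way is essentially a matter of unpacking the archive in memory and scanning titles and message bodies for the query.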
This article, "Google Launches Gemini Import Tool for Switching From ChatGPT, Claude, and Other AI Apps" first appeared on MacRumors.com

Discuss this article in our forums

Google's Personal Intelligence Now Rolling Out to Free Gemini Users in the U.S.

Google is bringing Personal Intelligence to all Google Gemini users starting today, after testing the feature with its paid plans. Personal Intelligence allows Gemini AI to provide personalized responses based on information pulled from connected Google apps like Gmail, Google Photos, YouTube, and more.


Personal Intelligence is expanding in the U.S. across AI Mode in Search, the Gemini app, and Gemini in Chrome.

Gemini is able to draw on the information that it knows about you from your Google account, including emails you've sent, items you've purchased, and what you've searched for. Google says that it is designed to help you "find exactly what you need without having to give all the context."

Google provides several examples of how Gemini's Personal Intelligence can be helpful:

  • Custom shopping recommendations - Gemini can offer custom recommendations based on past purchases. If you want to find a bag to go with new shoes, for example, Gemini can narrow the search to matching products.

  • Tech help - Google says users can get troubleshooting help for a product like a refrigerator without knowing the model, because the information can be pulled from a purchase receipt.

  • Making plans - When you're traveling and need to grab a bite to eat at an airport, Gemini can make suggestions based on the types of food that you like. You can also get recommendations on places to eat and visit when traveling based on your interests and past favorites.


Users can choose to connect apps like Gmail and Google Photos to Gemini for personalization, or can opt out, and the feature is off by default. Google says that Gemini and AI Mode do not train directly on a Gmail inbox or Photos library, but prompts in Gemini and the model's responses can be used for training purposes.

Personal Intelligence is already available in the U.S. for AI Mode in Search, and it is rolling out in the Gemini app and Gemini in Chrome for free users. Google says that connected experiences are designed for personal Google accounts and not for Workspace business, enterprise, or education users.

Gemini's personalization features could compete directly with the personalization that Apple plans to bring to Siri later this year, as connecting Gmail and other apps to Gemini mirrors some of the functionality that Apple is introducing. Siri will be able to read emails, messages, files, photos, and more, learning information about the user to complete tasks and keep track of files.

The new Siri features have been delayed several times, and at this point, we may not be getting the updated version of Siri until closer to the end of the year.
Tags: Gemini, Google

This article, "Google's Personal Intelligence Now Rolling Out to Free Gemini Users in the U.S." first appeared on MacRumors.com


Report: Apple Asks Google to Run Siri on Its Servers

Apple has asked Google to investigate setting up servers in its data centers to run a future version of Siri powered by Gemini, The Information reports.


Currently, Apple sends its more complex AI queries to Private Cloud Compute, a system that runs on Apple servers using Apple silicon chips. Today, only 10% of Apple's Private Cloud Compute capacity is said to be in use on average. The usage is low enough that some servers intended for Apple's AI cloud system are still in warehouses and have not yet been installed. This could change rapidly upon the launch of the next-generation version of Siri, which could spike Apple's demand for cloud computing.

Apple has reportedly suffered from a cultural reluctance to bolster its cloud infrastructure for years, leading to the departure of some key cloud experts from the company, such as Patrick Gates. Gates pioneered the idea of bringing Apple chips to data centers, which later formed the basis of Private Cloud Compute. Despite the growth of its services business, the company still focuses heavily on hardware devices and consumer features rather than the cloud technologies that support them, neglecting the need for additional capacity.

At the time Apple realized it needed to use the cloud to support its AI efforts, its internal AI infrastructure was "beginning to decay." The company was in the midst of decommissioning old Nvidia-powered servers. Combined with financial pressure, this led the company to increasingly turn to third-party providers like Amazon.

For years, Apple banned its AI engineers from Google's cloud technologies because of privacy concerns. Apple software chief Craig Federighi repeatedly vetoed Google Cloud as an option for its AI computing requirements. In 2023, Google made changes to its security systems that satisfied Apple's privacy concerns. Apple then started to adopt Google's cloud infrastructure for artificial intelligence.

The issue has been exacerbated by problems with Private Cloud Compute, which takes longer to update than other servers. Moreover, the chips currently used in Private Cloud Compute servers were designed for consumer devices and are not optimized for AI workloads, meaning that they are not well equipped to run large models like Gemini.

Apple now wants to be prepared for a potential surge in AI use on its devices when the more powerful, Gemini-based version of Siri debuts later this year, motivating the request for Google to run Siri directly on its servers. See The Information's full report for more.

This article, "Report: Apple Asks Google to Run Siri on Its Servers" first appeared on MacRumors.com


Don't Want Ads in ChatGPT? Try Claude Instead

11 February 2026 at 01:06
If you'd like to maintain an ad-free AI experience, you might want to consider using Claude, which offers free AI tools, web chat, and clients for Mac, iPhone, and iPad, all of which are free from advertising clutter. Why are we mentioning this? Well, you might have seen that OpenAI has recently announced that ChatGPT ... Read More

Apple Explains How Gemini-Powered Siri Will Work

Apple CEO Tim Cook yesterday reiterated the structure of Apple's partnership with Google to use Gemini AI models for the next-generation version of Siri.


During the company's Q1 2026 earnings call yesterday, Apple CEO Tim Cook and CFO Kevan Parekh were asked several questions about Apple Intelligence and the company's recently announced deal with Google to power the personalized version of Siri using Gemini.


We basically determined that Google's AI technology would provide the most capable foundation for AFM (Apple Foundation Models), and we believe that we can unlock a lot of experiences and innovate in a key way due to the collaboration. We'll continue to run on the device and run in Private Cloud Compute and maintain our industry-leading privacy standards in doing so. In terms of the arrangement with Google, we're not releasing the details of that.


That description closely matches language from Apple and Google's earlier joint announcement, which said that Apple Intelligence would continue to operate on Apple hardware and Private Cloud Compute.

Cook also addressed Apple's own artificial intelligence development efforts, noting that the company continues to build its own technology alongside the Gemini partnership, but clarified that those efforts do not replace Google's role in the personalized Siri system.


You should think of it as a collaboration. And we'll obviously independently continue to do some of our own stuff, but you should think of what is going to power the personalized version of Siri as a collaboration with Google.


When asked about monetization and return on investment, Cook framed Apple Intelligence as a feature integrated across Apple's platforms rather than a discrete revenue driver.

We're bringing intelligence to more of what people love and we're integrating it across the operating system in a personal and private way, and I think that by doing so, it creates great value, and that opens up a range of opportunities across our products and services. And we're very happy with the collaboration with Google as well, I should add.


Neither Cook nor Parekh disclosed how many users currently have access to Apple Intelligence features or whether those capabilities are driving hardware upgrades. Apple previously acknowledged that Apple Intelligence is limited to devices with sufficient memory and processing capacity, which constrains availability somewhat.
This article, "Apple Explains How Gemini-Powered Siri Will Work" first appeared on MacRumors.com


Here's When Apple Plans to Unveil a New Siri Powered by Google Gemini

Apple plans to unveil a more personalized version of Siri powered by Google Gemini next month, according to Bloomberg's Mark Gurman.


"The company has been planning an announcement of the new Siri in the second half of February, when it will give demonstrations of the functionality," he wrote, in the latest edition of his weekly Power On newsletter today.

Gurman does not yet know if Apple plans to hold a full-out event to demonstrate the Siri upgrades, or if it will hold private briefings with the media.

The more personalized Siri will be part of iOS 26.4, which will be available in beta in February and released to the general public in March or early April, according to Gurman. Based on that timeframe, the new-and-improved Siri should be available to all customers with an iPhone 15 Pro or newer in just a few more months.

As previewed by Apple, the assistant "should be able to tap into personal data and on-screen content to fulfill tasks," according to Gurman.

Apple first announced the more personalized version of Siri all the way back at WWDC 2024, but it was eventually delayed. At the time, Apple showed an iPhone user asking Siri about their mother's flight and lunch reservation plans based on info retrieved from the Mail and Messages apps, as one example of a new capability.

The revamped Siri reportedly experienced issues inside Apple, leading the company to turn to Google Gemini. The revamped Siri will technically still run on a new Apple Intelligence model that has Gemini's technology baked in.

Siri will reportedly get even better on iOS 27, as Apple is said to be planning to turn the assistant into a full-out chatbot, allowing users to have sustained, back-and-forth conversations with the assistant. This will essentially turn Siri into ChatGPT or Gemini, except it will be built right into the iPhone, iPad, and Mac, with no app required.

Gurman said the Siri chatbot will be "competitive with Gemini 3," and "significantly more capable" than the more personalized Siri coming with iOS 26.4.

Siri's chatbot might run directly on Google's servers.

Related Roundups: iOS 26, iPadOS 26
Related Forum: iOS 26

This article, "Here's When Apple Plans to Unveil a New Siri Powered by Google Gemini" first appeared on MacRumors.com


Apple's New Siri Will Be Powered By Google Gemini

The smarter, more capable version of Siri that Apple is developing will be powered by Google Gemini, reports Bloomberg. Apple will pay Google approximately $1 billion per year for a 1.2 trillion parameter artificial intelligence model that was developed by Google.


For context, parameters are a measure of how a model understands and responds to queries. More parameters generally mean a more capable model, though training and architecture are also factors. Bloomberg says that Google's model "dwarfs" the parameter level of Apple's current models.

The current cloud-based version of Apple Intelligence uses 150 billion parameters, but there are no specific metrics detailing how the other models Apple is developing measure up.
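For a back-of-the-envelope sense of where parameter counts come from, here is a small Python sketch that counts the weights in a dense layer and a transformer-style feed-forward block. The dimensions used are illustrative, not those of any Apple or Google model.

```python
def linear_params(d_in, d_out, bias=True):
    """Parameters in one dense layer: a weight per input-output pair, plus a bias per output."""
    return d_in * d_out + (d_out if bias else 0)

def ffn_params(d_model, d_hidden):
    """A transformer-style feed-forward block: project up to d_hidden, then back down."""
    return linear_params(d_model, d_hidden) + linear_params(d_hidden, d_model)
```

Because the dominant terms are products of layer widths, doubling both dimensions roughly quadruples the count, which is why parameter totals climb so quickly as models get wider and deeper.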

Apple will use Gemini for functions related to summarizing and multi-step task planning and execution, but Apple models will also be used for some Siri features. The AI model that Google is developing for Apple will run on Apple's Private Cloud Compute servers, so Google will not have access to Apple data.

Gemini uses a Mixture-of-Experts architecture, so while it has over a trillion total parameters, only a fraction of them are activated for each query. The architecture allows for a large total model capacity without a proportional increase in per-query processing costs.
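The top-k routing idea behind Mixture-of-Experts can be sketched in a few lines of Python. The expert counts and parameter figures below are illustrative assumptions, not Gemini's actual configuration, which Google has not disclosed.

```python
def top_k_experts(router_scores, k):
    """Route one token: rank experts by router score and keep only the k best."""
    ranked = sorted(range(len(router_scores)),
                    key=router_scores.__getitem__, reverse=True)
    return sorted(ranked[:k])

def active_expert_params(total_expert_params, num_experts, k):
    """Expert parameters actually touched per query under top-k routing.

    Assumes parameters are split evenly across experts and ignores the
    shared (non-expert) layers that every query still passes through.
    """
    return total_expert_params * k / num_experts
```

With, say, 8 experts and top-2 routing, only a quarter of the expert parameters run for any given query, which is how a model can be enormous on paper yet comparatively cheap to serve.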

Apple weighed using its own AI models for the LLM version of Siri, and also tested options from OpenAI and Anthropic, but ultimately went with Gemini after concluding that Anthropic's fees were too high. Apple already has a partnership with Google for search results, with Google paying Apple around $20 billion per year to be the default search engine option on Apple devices.

Though Apple is planning to rely on Google AI for now, it plans to continue working on its own models and will transition to an in-house solution when its LLMs are capable enough. Apple is already working on a 1 trillion parameter cloud-based model that could be ready as soon as 2026. Apple is unlikely to publicize its arrangement with Google while it develops in-house models.

Apple was meant to debut an updated version of Siri in iOS 18, but deficiencies required the company to overhaul the underlying Siri architecture and significantly delay the rollout. The smarter Apple Intelligence Siri is expected to be introduced in an iOS 26.4 update that's coming in spring 2026.

Siri will be able to answer more complex queries and complete more complicated tasks in and between apps. It will be closer in function to Claude and ChatGPT, though Apple is not planning a dedicated chatbot app.
This article, "Apple's New Siri Will Be Powered By Google Gemini" first appeared on MacRumors.com

