New Details on the Apple and Google AI Deal Show the Partnership Is Far Deeper Than Anyone Realized


When Apple and Google announced their multi-year AI partnership in January 2026, the surface-level story was straightforward: Apple would use Google’s Gemini models to power a new version of Siri. A new report from The Information goes significantly further, revealing that the arrangement gives Apple a level of access to Gemini that is far more extensive than previously understood.

This is not just a licensing deal. It is a deep technical collaboration that changes what Apple can build, and what Siri could eventually become.

What Apple Gets Under the Agreement

The most significant detail in The Information’s reporting is that Apple has complete access to the Gemini model inside its own data centres. Apple can reach directly into Gemini and work with the model in a way described as giving the company considerably more latitude with Google’s technology than anyone expected.

Specifically, Apple uses this access to produce smaller models through a process called model distillation. Distillation is a technique in which a large “teacher” model trains a smaller “student” model: rather than learning from raw training data alone, the student learns to reproduce the teacher’s outputs, absorbing its reasoning patterns and decision-making along the way. The result is a compact model that retains much of the teacher’s performance at a fraction of the computational cost.
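To make the mechanics concrete, here is a minimal sketch of a distillation training loop in PyTorch. The tiny networks, the temperature value, and the random placeholder inputs are all illustrative assumptions; nothing here reflects Apple’s or Google’s actual models or training setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy stand-ins for illustration only: a large frozen "teacher" and a
# much smaller "student". Real distillation would involve full LLMs.
teacher = nn.Sequential(nn.Linear(64, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))

teacher.eval()  # the teacher is frozen; only the student is trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution (assumed value)

for step in range(100):
    x = torch.randn(32, 64)  # placeholder inputs; real training uses text
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions: the student learns to
    # mimic the teacher's full output distribution, not just hard labels.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The key idea is that the loss pushes the student toward the teacher’s full output distribution, which carries far more signal than a single correct answer would.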

In practical terms, this means Apple can take Gemini’s full-scale intelligence and compress it into models small and efficient enough to run directly on Apple devices without a network connection. That is a significant capability. It means future Siri features could use Gemini-level intelligence entirely on-device, maintaining Apple’s privacy architecture while dramatically expanding what the assistant can do.

How This Changes Apple Intelligence

Apple’s current on-device AI models are approximately 3 billion parameters, a size chosen specifically so they can run on the Neural Engine inside recent iPhones and Macs. These models handle tasks like notification summarisation, photo search, and basic text processing well, but they are outclassed by frontier models like Gemini and ChatGPT at complex reasoning.

The Gemini partnership was originally announced in a joint statement saying that Apple’s next-generation Foundation Models will be based on Google’s Gemini models and cloud technology. Reports suggested Apple was paying approximately $1 billion per year for access to a custom Gemini model estimated at 1.2 trillion parameters, roughly 400 times the scale of Apple’s existing on-device models.

With model distillation in the picture, the gap between what Siri does in the cloud and what it does on-device narrows significantly. Apple can produce specialised distilled models for specific tasks, optimised to run at full speed on the Neural Engine while benefiting from Gemini’s underlying knowledge.

See Also: Apple Is Opening Siri to Rival AI Assistants in iOS 27 and OpenAI Loses Its Exclusive Deal

The Information’s report also confirmed that Apple plans to unveil its big Siri changes at WWDC 2026 in June, including conversation memory and proactive features, such as suggesting you leave home early to avoid traffic ahead of an airport pickup.

The Complications Apple Is Navigating

The arrangement is not without friction. The Information noted that Apple’s goals for Siri do not always align with Gemini’s specialties. Gemini is an extraordinarily capable general-purpose model, but adapting it to power the specific kinds of tasks Siri needs, tightly integrated with Apple’s app ecosystem and privacy architecture, is a technically demanding process.

Apple’s own Foundation Models team is also still active and has not given up on building in-house AI. The tension between a team that spent years building Apple’s own models and a strategy that now relies heavily on Google’s technology is an internal challenge that has not been fully resolved. According to The Information’s sources, the Foundation Models team’s current goals remain somewhat unclear.

This is a common tension in large technology companies navigating the AI era. Building your own models is expensive, slow, and uncertain. Licensing the best available models from partners is faster but creates dependencies and alignment challenges. Apple appears to be pursuing both paths simultaneously.

Why Apple Chose Google Over OpenAI and Anthropic

Apple evaluated technologies from multiple AI providers before settling on Google for its Foundation Models partnership. OpenAI and Anthropic were both considered. Apple ultimately determined that Google’s technology provides the most capable foundation for Apple Foundation Models.

The choice also carries financial logic beyond pure capability. Google is already deeply integrated into Apple’s revenue structure, paying an estimated $20 billion per year to be the default search engine in Safari. The two companies have a long history of commercial negotiation. Apple paying approximately $1 billion per year for Gemini access adds another layer to a relationship that has been financially significant for both companies for over a decade.

OpenAI remains part of Apple’s AI ecosystem through the ChatGPT integration in Apple Intelligence. That integration allows Siri to hand off complex questions to ChatGPT. However, that arrangement is now secondary to the Gemini Foundation Models partnership.

Privacy Architecture: How Apple Keeps Gemini Invisible

A natural question from Apple users is whether this partnership undermines Apple’s long-standing privacy commitments. Apple has been explicit that it will maintain its privacy standards throughout.

The Gemini-based Apple Foundation Models run on Apple’s Private Cloud Compute servers, not on Google’s infrastructure. Private Cloud Compute is Apple’s proprietary cloud architecture designed so that Apple itself cannot see user data processed on it. Queries routed to these servers are handled in a privacy-preserving environment, with independent security researchers given the ability to verify the system’s properties.

The white-labelling of Gemini goes beyond branding. Apple users will still see and interact with Siri; the underlying Gemini technology is never exposed. Apple has designed the architecture so that it maintains complete control over which queries go to Gemini and retains the ability to swap out the underlying technology.
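The ability to swap out the underlying model is a classic abstraction-layer design. The sketch below illustrates the general pattern in Python using entirely hypothetical names; it is not based on Apple’s internal interfaces and is offered only to show why a swap can be a one-line change.

```python
from abc import ABC, abstractmethod

# Hypothetical interface: any model backend must answer a query.
class ModelBackend(ABC):
    @abstractmethod
    def complete(self, query: str) -> str: ...

class GeminiBackend(ModelBackend):
    def complete(self, query: str) -> str:
        return f"[gemini-based model] answer to: {query}"

class InHouseBackend(ModelBackend):
    def complete(self, query: str) -> str:
        return f"[in-house model] answer to: {query}"

class Assistant:
    def __init__(self, backend: ModelBackend):
        self._backend = backend  # the only place a backend is referenced

    def ask(self, query: str) -> str:
        return self._backend.complete(query)

# Swapping the underlying technology touches one constructor argument:
assistant = Assistant(GeminiBackend())
print(assistant.ask("When should I leave for the airport?"))
```

Because callers depend only on the interface, replacing one backend with another never touches the assistant logic itself, which is the property Apple appears to be preserving.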

For users who want to interact directly with the Gemini consumer service rather than the Gemini-based Siri intelligence, that option will be available through the new Extensions system coming in iOS 27, where users can route Siri queries to the Gemini app explicitly.

What to Expect at WWDC 2026

The June 8 keynote at WWDC 2026 is set to be Apple’s largest AI showcase in years. The company will reveal the redesigned Siri, including its new chatbot-like interface, conversation memory, and proactive features. The Extensions system for third-party AI integration will also be announced alongside the first developer betas of iOS 27.

The Gemini-powered Apple Foundation Models are expected to be the engine behind everything Apple shows, even if users will never see the Google branding.

Frequently Asked Questions

What is the Apple and Google Gemini AI deal?

Apple and Google announced a multi-year collaboration in January 2026 under which Apple’s next-generation Foundation Models will be built using Google’s Gemini technology and cloud infrastructure. The deal is reportedly worth approximately $1 billion per year and forms the foundation for a significantly upgraded Siri coming with iOS 27.

Can Apple distill Gemini models to run on-device?

Yes, according to new reporting from The Information. Apple has full access to Gemini in its own data centres and is using model distillation to produce smaller, efficient versions of Gemini that can run directly on Apple devices without an internet connection while maintaining high performance.

Does the Apple and Google deal mean Google can see your Siri data?

No. Apple has designed the partnership to maintain its privacy architecture. Gemini-based models run on Apple’s Private Cloud Compute servers, not on Google’s infrastructure, meaning Google does not have access to user queries processed through Siri.

Why did Apple choose Google over OpenAI or Anthropic?

Apple evaluated OpenAI and Anthropic before selecting Google. The choice was based on Apple’s assessment that Google’s technology provides the most capable foundation for its AI models. The existing commercial relationship between Apple and Google through search also likely played a role.

When will the new Gemini-powered Siri be available?

Apple plans to preview the new Siri at WWDC 2026 on June 8. The full rollout is expected alongside iOS 27, which typically ships in September.

Will Apple still use OpenAI’s ChatGPT after the Gemini deal?

Yes. ChatGPT remains integrated into Apple Intelligence as an option for handling complex queries through Siri. The Google deal governs Apple’s Foundation Models, while ChatGPT continues as one of several third-party options. OpenAI’s exclusivity ends with iOS 27, when other AI services will also be available inside Siri through the new Extensions system.

The Real Scope of Apple’s AI Bet

The Apple and Google Gemini deal was always bigger than an AI licensing agreement. It is a signal that Apple has made peace with the idea that building frontier AI models entirely in-house is not the right strategy right now. What Apple is building instead is a privacy-preserving, tightly integrated layer that brings Gemini’s intelligence to over a billion devices without ever exposing a user’s data to Google’s servers. That is a technically ambitious bet, and if WWDC 2026 delivers on what is being described, it may also be a brilliant one.
