Apple Surrenders Siri to Google, OpenAI Preempts Gemini 3 with GPT-5.1, & Microsoft's GPU Bet Makes Sense while Oracle's Vulnerable
Google offers Apple an incredible price for white-labeling Gemini, OpenAI is drowning out Google's Gemini 3 launch, and the balance sheets behind Microsoft's GPU confidence and Oracle's vulnerability
Siri Surrenders: Apple Pays Google $1B/Year to Fix AI Mess
Mark Gurman at Bloomberg reported that Apple plans to pay Google roughly $1 billion per year for a 1.2-trillion-parameter AI model that would help overhaul Siri.
We called this exact scenario in our October 15th article, where we laid out how Google’s patronage playbook would extend from search into AI. This positions Google to corner the native AI market just as it cornered search on mobile devices.
According to Bloomberg, Apple held a bake-off between Anthropic and Google, concluded that Anthropic offered the better model, but chose Google for financial reasons.
And while Gemini might be inferior to Claude, it’s still light years ahead of anything Apple can build themselves. The $1 billion price tag is pretty incredible, which explains why Google (along with their existing search relationship) won the deal.
Apple’s “Temporary” Solution Trap
Just how incredible is that $1 billion price? Training frontier LLMs costs a fortune: many estimates put OpenAI’s GPT-5 training run alone at between $1.25 billion and $2.5 billion.
Also, consider that Meta spent $18 billion on GPUs alone in 2024.
In other words, Google is charging roughly 5.5% of Meta’s 2024 GPU budget, or about half the cost of a single frontier training run, to let Apple white-label Gemini.
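Those ratios are easy to sanity-check. A quick sketch, using only the figures cited above:

```python
# Sanity-check the ratios above using the figures cited in this piece (all in $B).
gemini_fee = 1.0           # Apple's reported annual payment to Google
meta_gpu_budget = 18.0     # Meta's 2024 GPU spend
gpt5_low, gpt5_high = 1.25, 2.5  # estimated range for OpenAI's GPT-5 training run

print(f"Share of Meta's GPU budget: {gemini_fee / meta_gpu_budget:.1%}")
# → 5.6%
print(f"Share of a GPT-5-scale run: {gemini_fee / gpt5_high:.0%}-{gemini_fee / gpt5_low:.0%}")
# → 40%-80%
```

At the midpoint of the GPT-5 cost estimates, the fee works out to about half of one training run.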
Now Apple insists this is temporary while they build their own 1-trillion-parameter model, supposedly ready “as early as next year.” But will they really keep spending that money when Google is offering such a compelling price? Remember, the company is bleeding AI talent, while Google has best-in-class resources and infrastructure.
Which explains why Google got so aggressive. They used the exact same playbook to make Google Search the default on Apple devices, effectively paying Apple more than $20 billion per year not to build a competitive search engine, and they are enjoying monopoly profits as a result.
After this deal is officially announced, the big question is: will Apple accelerate their AI roadmap, or will they rely on external vendors like Google in perpetuity?
Google pays $20 billion to keep Apple out of search, and they likely offered this Gemini deal to keep them out of the AI race as well.
OpenAI Rushes Out GPT-5.1—Preempting a Gemini 3 Announcement?
Speaking of Google, OpenAI released GPT-5.1, just three months after GPT-5’s August launch.
The release feels pretty rushed. For the first time in OpenAI’s history, a commercial model meant for API use came with no API and no benchmarks. OpenAI has released open-source models like GPT-2 and GPT-OSS without APIs, but those were designed to be downloaded and run locally, not accessed through OpenAI’s services.
It seems like they rushed this out to preempt Google. We noted that Gemini 3 should be coming out soon, and code referencing “gemini-3-pro-preview-11-2025” appeared on Google’s Vertex AI platform on November 7th.
OpenAI is losing traction in the enterprise, and may have rushed GPT-5.1 to maintain mindshare in the consumer market.
OpenAI’s Competitive Pressure
OpenAI’s deficiencies are no secret in the enterprise. According to OpenRouter, they’re well behind Anthropic, Google, and xAI (which is still offering Grok Code Fast 1 for free).
But OpenAI’s consumer market share is still dominant with a reported 800 million weekly active users, so their plan could be drowning out any announcement that could cause those users to switch.
It’s pretty surprising that OpenAI is maintaining their consumer advantage despite all of Anthropic and Google’s product traction. The switching costs for consumers seem much higher than they are for enterprises, at least for now.
Microsoft’s $80B GPU Bet: Why Investor Fears About OpenAI Miss the Point
Microsoft stock dropped 4% despite beating earnings with $77.7 billion in revenue and 40% Azure growth. The culprit? Investor anxiety after Microsoft took a $3.1 billion loss on their OpenAI investment and announced they will boost AI capacity by 80% this year and double their total data center footprint over two years.
The concern is straightforward: is Microsoft building all this infrastructure just for a select few companies like OpenAI, whose value they just wrote down? And if OpenAI stumbles, does Microsoft get stuck with tens of billions in stranded, and quickly depreciating, assets?
CEO Satya Nadella addressed this during the earnings call, repeating one word eight times: “fungibility.”
In other words, Nadella stated that these GPUs aren’t sitting idle waiting for OpenAI workloads.
The Fungibility Reality
GPUs can serve several use cases for Microsoft: M365 Copilot, GitHub Copilot, Azure customers, internal R&D, gaming, and more. Microsoft argues this infrastructure is resilient regardless of OpenAI’s trajectory because demand vastly exceeds supply across all these use cases.
CFO Amy Hood admitted Microsoft is already turning away Azure customers because they don’t have enough GPUs. Nadella also shared that they’re turning down third-party requests, almost certainly including some from OpenAI, when they don’t fit “long-term interests.”
In other words, Nadella claims that they’re building for themselves, and OpenAI is just one customer among many fighting for capacity.
Meta operates the same way, building clusters with 600k H100-equivalent GPUs and using that infrastructure flexibly across Llama training, Instagram recommendations, advertising workloads, and more. The company runs thousands of training jobs daily from hundreds of different teams, with workloads spanning from single-GPU tasks to massive generative AI jobs requiring thousands of synchronized hosts.
Given Microsoft’s product suite, it seems reasonable that they could absorb much of this GPU demand, even if OpenAI disappeared tomorrow.
The Real Risk Isn’t Microsoft
Even if the worst happens, Microsoft will be fine with their current GPU spend. They have minimal debt and massive cash flows.
But this is how bubbles can start forming. Business leaders know GPU demand is high, and they see the demand on the horizon. They know if they don’t fulfill it, one of their competitors will. So everyone races to build capacity, funded increasingly by debt.
Which brings us to Oracle, which holds nearly 20% of Nvidia’s GPU shipments, while Microsoft holds 30%. Oracle’s share really stands out when you consider they have only 5% of cloud revenue share and just 16% of Microsoft’s market cap ($620 billion vs. $3.74 trillion).
How does Oracle have that many GPUs with far less revenue and value? Debt.
Oracle carries $91.3 billion in debt and just raised another $38 billion for Stargate. Microsoft has only $97.6 billion in debt, despite roughly 6x the equity value.
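A quick back-of-the-envelope comparison, using only the balance-sheet figures cited above, shows just how differently the two are levered:

```python
# Compare debt loads relative to equity value, using figures cited above (all in $B).
oracle_debt = 91.3 + 38.0   # existing debt plus the new Stargate raise
oracle_market_cap = 620.0
msft_debt = 97.6
msft_market_cap = 3740.0

print(f"Oracle debt / market cap:    {oracle_debt / oracle_market_cap:.1%}")
# → 20.9%
print(f"Microsoft debt / market cap: {msft_debt / msft_market_cap:.1%}")
# → 2.6%
print(f"Microsoft equity vs. Oracle: {msft_market_cap / oracle_market_cap:.1f}x")
# → 6.0x
```

Counting the Stargate raise, Oracle is carrying roughly eight times Microsoft’s debt burden relative to its equity.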
Which means you could look at the current AI bubble from two perspectives.
Since hyperscalers like Microsoft and Google are reasonably capitalized, this bubble could get a lot bigger if these companies start levering up.
But if not, and GPU demand keels over tomorrow, Oracle is the one in serious trouble.