OpenAI Didn't Leave the UK Because of Energy Costs, Meta Bets Every Dollar of Cash Flow, & Anthropic Is Quietly Nerfing Claude
OpenAI signed Stargate UK knowing the costs, Meta assembles a complete data center org in five days, and Anthropic is feeding the enterprise by throttling consumers.
OpenAI Knew UK Energy Was Expensive When They Signed, So Why Did They Really Leave?
OpenAI paused its UK Stargate data center, citing energy costs and regulation. UK industrial electricity is the highest among IEA countries, roughly four times the price in the US, Norway, and Sweden.
But OpenAI announced Stargate UK in September 2025, when UK electricity was already the most expensive in the developed world. So what actually changed?
Two other things changed, and neither involved energy.
First, copyright went badly for OpenAI. On March 18, 2026, the UK published its Copyright and AI Impact Assessment. 88% of respondents backed mandatory licensing. The government rejected the opt-out model and backed licensing-first. Three weeks later, OpenAI paused. OpenAI can’t say “we’re leaving because you want us to pay for training data” out loud. That’s politically toxic in a country where the creative industries just won a major policy fight.
Second, the IPO. OpenAI closed a $122 billion round at an $852 billion valuation in late March 2026, its first open to retail investors. Computer Weekly called the pause “a calculated move” rather than a sudden setback. Sifted’s analysis: the stated reasons “are unlikely to be the whole story... with an IPO on the horizon, it is hardly surprising that OpenAI is tightening its risk profile.”
Energy is a politically safe excuse, but copyright and IPO discipline seem like the actual triggers.
A quick note from me.
I built AI Pulse, which tracks how AI is changing your career, and how much more your role pays with AI experience.
I track 22,000+ jobs weekly across 42 roles and 14 industries to answer three questions: what percentage of your field now requires AI skills, how much more those jobs pay (a 32% average premium), and what to learn next.
Forward it to anyone who wants to stay ahead of the curve, or Subscribe free →
Meta Bets Every Dollar of Cash Flow on AI Infrastructure
Two days after the UK pause, the three executives who built Stargate left for Meta: Peter Hoeschele (data center planning), Shamez Hemani (engineering execution), and Anuj Saharan (supply chain). Meta then announced two deals this week that frame what the Stargate trio walked into.
CoreWeave and Meta expanded their AI cloud agreement to $21 billion through 2032, on top of September’s $14.2 billion deal.
Meta and Broadcom also announced a deal to co-develop four generations of MTIA accelerators starting at 1 gigawatt, scaling to multi-gigawatt through 2029. The chips will be the first AI silicon on TSMC’s 2nm, a node ahead of Nvidia’s Rubin.
In 2025, Meta generated $43.6 billion in free cash flow. 2026 capex guidance is $115-135 billion. At consensus revenue of ~$240 billion, cash from operations lands around $130 billion. Subtract the capex range and 2026 free cash flow falls somewhere between roughly negative $5 billion and $15 billion. Meta is about to spend nearly every dollar it generates on AI infrastructure.
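The cash flow math above can be sketched in a few lines. The ops-cash figure is a rough estimate implied by the consensus revenue number, not a company disclosure; only the capex guidance range comes from Meta.

```python
# Back-of-envelope 2026 free cash flow for Meta, in $B.
# Assumption: ~$130B cash from operations on ~$240B consensus revenue.
cash_from_ops = 130            # $B, rough estimate (not guidance)
capex_low, capex_high = 115, 135  # $B, Meta's 2026 capex guidance range

fcf_high = cash_from_ops - capex_low   # best case: lowest capex
fcf_low = cash_from_ops - capex_high   # worst case: highest capex

print(f"2026 FCF range: {fcf_low} to {fcf_high} ($B)")  # → -5 to 15
```

Either way the range straddles zero, which is the point: essentially all operating cash goes to infrastructure.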
And that money is mostly coming from Reality Labs. The division lost $19.1 billion in 2025. In January, Meta laid off 1,500 Reality Labs employees and paused most of its VR initiatives. On the Q4 call, Zuck said Reality Labs losses will “likely peak this year, then gradually reduce.”
Zuck’s TBD Lab is being built with Stargate’s architects and Broadcom’s silicon on money that doesn’t yet exist. He’s betting it will.
Anthropic Hit $800 Billion While Quietly Nerfing Claude
Last week, we wrote that Anthropic was reserving Mythos for enterprises and that frontier AI was getting less democratic. This week, the proof shipped in the product.
Fortune, VentureBeat, and a viral 6,852-session Substack analysis crystallized the same complaint: Claude has been quietly degraded over the last two months. Anthropic confirmed three changes between February and March: adaptive thinking became the default, effort level dropped to “medium,” and thinking is now redacted in the UI.
Anthropic denies it’s degrading users on purpose, but a Claude Code cache bug on April 13th and Claude.ai login failures on April 15th didn’t help its case.
Compute spent on a $200/month Pro subscriber can’t also be spent on a Fortune 500 CISO running Mythos at $125 per million output tokens. Microsoft already admitted to making the same call with Azure vs Copilot. Anthropic is doing the same thing without admitting it. What they call “balancing token use” is the allocation problem in softer language.
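A back-of-envelope comparison makes the allocation gap concrete. The Pro subscriber's monthly token volume below is a hypothetical figure chosen for illustration; only the $200/month subscription and the $125-per-million-token enterprise price come from the article.

```python
# Rough revenue-per-output-token comparison: Pro subscriber vs enterprise API.
pro_monthly_price = 200      # $/month, Pro subscription (from the article)
pro_tokens_millions = 200    # assumed monthly output tokens, in millions (hypothetical)
enterprise_price = 125       # $ per million output tokens, Mythos (from the article)

pro_revenue_per_million = pro_monthly_price / pro_tokens_millions  # $/M tokens
ratio = enterprise_price / pro_revenue_per_million

print(f"Enterprise pays ~{ratio:.0f}x more per output token")  # → ~125x
```

Under this assumed usage, enterprise revenue per token is already more than a hundred times the consumer figure; a heavier Pro user drives the consumer per-token number lower and pushes the ratio further into the hundreds.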
Now this week, Bloomberg reported that Anthropic is fielding unsolicited investor offers valuing it at $800 billion, more than 2x February’s $350 billion. Revenue is $30 billion ARR, up from $19 billion just a few months ago, and they’ve started early IPO talks with Goldman, JPM, and Morgan Stanley targeting October.
Investors are offering $800 billion because enterprise ARR is exploding. Enterprise ARR is exploding because Anthropic is allocating compute to enterprise.
If you’re a Pro subscriber, your compute is being reallocated to customers paying hundreds of times more per token. And Anthropic’s compute stays finite for at least another year. Per OpenAI’s own memo, Anthropic is at 1.4 gigawatts today vs OpenAI’s 1.9, and Anthropic’s response is 3.5 gigawatts starting in 2027.
The silver lining: if Anthropic keeps rationing Claude for the Fortune 500, the consumer and small-developer market is wide open.
Anthropic could always solve this with more compute, but it won’t be cheap. And when your $800 billion valuation is built on enterprise revenue, is it worth keeping consumers who pay a fraction of what enterprises do?