OpenAI + Google Cloud, Salesforce Blocks Glean, Apple Exposes LLM Reasoning Limitations, Meta Acquihires Scale AI CEO, Reddit Sues Anthropic
OpenAI is diversifying away from Microsoft, Salesforce blocks AI rivals like Glean, Apple throws cold water on reasoning LLMs, Meta taps Scale AI CEO for SuperIntelligence, and Reddit sues Anthropic
OpenAI Taps Google Cloud in Strategic Shift Away from Microsoft
OpenAI announced a landmark cloud deal with Google, reducing their reliance on Microsoft’s Azure infrastructure.
The deal grants OpenAI access to Google’s compute resources, including their TPUs, to power AI model training and inference.
This follows OpenAI’s $300 billion valuation announced in April 2025 and comes as Google ramps up their cloud offerings to compete with Microsoft and AWS.
By partnering with Google, OpenAI hedges their dependence on Microsoft, whose incentives are diverging from OpenAI’s.
Microsoft has access to OpenAI’s IP until 2030. Given that, Microsoft would likely prefer that OpenAI invest in frontier large language models, as those models improve Microsoft’s integrated products.
OpenAI, on the other hand, has a hit consumer product in ChatGPT, with over 1 billion users. They likely want to dominate this consumer market, as evidenced by their $6.5 billion deal with Apple’s former Chief Design Officer Jony Ive. Reducing dependence on Microsoft will help them focus on consumer applications instead of just large language models.
Which raises the question: how does Microsoft feel about this move, as well as OpenAI’s partnership with Jony Ive? Will Microsoft protect their interests by using their investment and compute leverage?
Salesforce Blocks AI Rivals & Changes Data Policy
Slack updated their API terms of service in May, barring companies from using their own Slack data to train large language models.
Concerns spread quickly, and they came to fruition when Salesforce blocked AI companies, including Glean, from accessing customer data.
This is a meaningful shift in Salesforce’s policy about customers owning their data. In fact, Slack’s privacy statement still states “your data is your own”.
But if Salesforce can overrule who you can share your data with, and how you can use that data, is it still yours?
There are many use cases customers may have for training AI models on their Slack data. Here are just a few, along with a rough sketch (after the list) of what such an export might look like:
Internal AI assistants
Customer support chatbots trained on Slack messages so they mimic the support team's tone, style, and expertise
Knowledge extraction and summarization (which is likely why Salesforce cut off Glean)
Workflow optimization, like logging Jira tickets from Slack
Assessing team dynamics and decision-making
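To make the stakes concrete, here is a minimal sketch of the kind of export the new terms restrict: pulling channel history through Slack’s Web API into a simple training corpus. It assumes a bot token with the right scopes and uses the official slack_sdk Python client; the channel ID, output file, and JSONL framing are placeholders of mine, not anything prescribed by Slack or Salesforce.

```python
# Hypothetical sketch: export Slack channel history into a JSONL corpus
# for fine-tuning an internal assistant. Assumes a bot token with the
# channels:history scope; the channel ID below is a placeholder.
import json
import os

from slack_sdk import WebClient  # official Slack Python SDK

client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
channel_id = "C0123456789"  # placeholder support channel

messages = []
cursor = None
while True:
    # conversations_history pages through a channel's message history
    resp = client.conversations_history(channel=channel_id, cursor=cursor, limit=200)
    messages.extend(m["text"] for m in resp["messages"] if m.get("text"))
    cursor = resp.get("response_metadata", {}).get("next_cursor")
    if not cursor:
        break

# One training example per line; the framing is illustrative, not any
# specific vendor's fine-tuning format.
with open("slack_support_corpus.jsonl", "w") as f:
    for text in messages:
        f.write(json.dumps({"text": text}) + "\n")
```

Under the updated API terms, feeding an export like this into LLM training is exactly what Salesforce can now disallow.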
More importantly, though, what if Salesforce cuts customers off from training on their CRM data? What if all of their sales and customer service data were barred from training AI models?
That’s the ultimate question, and the terms of service change, coupled with the blocking of AI companies like Glean, should be a cause for concern for any enterprise considering a Salesforce investment.
Apple Exposes LLM Reasoning Limitations
Apple researchers revealed critical weaknesses in large language model reasoning abilities in a new white paper. The study evaluated Anthropic’s and DeepSeek’s reasoning models, finding that even state-of-the-art LLMs suffer “complete accuracy collapse” on complex problems requiring generalizable reasoning.
The paper shows how LLMs struggle to extend narrow conclusions to broader contexts, which is a key component of human-like intelligence. For instance, when tasked with multi-step reasoning or novel problem-solving, models showed significant drops in performance.
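To make “accuracy collapse” concrete, evaluations of this kind typically scale a puzzle’s complexity and programmatically check the model’s answer. The sketch below assumes Tower of Hanoi as the puzzle and a simple move-list answer format (my assumptions for illustration, not the paper’s exact setup); a verifier like this catches any illegal or incomplete solution as the number of disks grows.

```python
# Illustrative verifier for a Tower of Hanoi answer: the kind of
# programmatic check used to score a model's solution as problem size
# (number of disks) grows. Assumed answer format: a list of
# (source_peg, target_peg) moves, pegs numbered 0-2.

def verify_hanoi(moves, n_disks):
    """Return True only if `moves` legally transfers every disk
    from peg 0 to peg 2 without placing a larger disk on a smaller one."""
    pegs = [list(range(n_disks, 0, -1)), [], []]  # peg 0 holds disks n..1, top of stack last
    for src, dst in moves:
        if not pegs[src]:
            return False  # illegal: moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False  # illegal: larger disk placed on a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n_disks, 0, -1))  # all disks ended on the target peg

# The optimal 3-disk solution (7 moves) passes; accuracy at larger
# n_disks is simply the fraction of model answers that verify.
solution_3 = [(0, 2), (0, 1), (2, 1), (0, 2), (1, 0), (1, 2), (0, 2)]
print(verify_hanoi(solution_3, 3))  # True
```

The reported pattern is that models stay accurate at small problem sizes and then fail almost entirely past a complexity threshold, rather than degrading gradually.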
These findings suggest reasoning models are more limited than many thought. Competitors like OpenAI and Anthropic could address these gaps, but it would likely cost a fortune. Google’s Gemini Ultra reportedly cost $191 million to develop, so investors may be skeptical unless these companies can show a meaningful breakthrough first.
Does this show that we’re hitting a plateau in LLM capabilities, or can novel training methods break through? This also underscores Apple’s growing clout in AI research. Since they’re behind in AI, they might as well throw cold water on the space with reports like this.
Is Apple biased because they’re behind in AI? Likely, but wouldn’t leading companies like OpenAI and Anthropic be biased in the other direction as well?
Meta Acquihires Scale AI CEO to Run SuperIntelligence Lab
Meta recruited Scale AI CEO Alexandr Wang to lead their new SuperIntelligence lab, as part of their strategy to stay competitive in the AI race.
Meta essentially acquihired Alexandr Wang and other Scale AI employees by investing $14.8 billion in Scale for a 49% stake.
This deal comes amid a broader reorganization of Meta’s AI division, which has faced internal challenges and a disappointing release with Llama 4.
Meta’s investment in Scale AI, structured as a payout to existing shareholders rather than a full acquisition, appears designed to sidestep regulatory hurdles, especially given Scale’s role as a data-labeling supplier for Meta’s competitors like Google and OpenAI.
Even so, the deal risks disrupting Scale’s business with other AI labs, which may fear their data being shared with Meta, especially as competitors like OpenAI increasingly handle data labeling in-house.
Meta’s aggressive push includes offering seven- to nine-figure compensation packages to top researchers from OpenAI, Google, and DeepMind, with some already joining.
This strategic shift underscores Meta’s view of AI as a sustaining technology to enhance their platforms like Facebook, Instagram, and WhatsApp. This is far different from Google, which faces potential disruption to their search business.
With $65 billion already committed to AI infrastructure this year, Meta’s investment in Scale AI shows how determined they are to lead in AI innovation, navigating complex regulations and intense market competition.
Reddit Sues Anthropic Over Unauthorized Data Use for AI Training
Reddit sued Anthropic, alleging that Anthropic unlawfully used Reddit's data for commercial purposes without permission or compensation.
This lawsuit comes after Reddit established formal licensing agreements with other major AI players like OpenAI and Google for their conversation-rich user data. Anthropic, while disagreeing with the claims, finds itself in a growing legal battle over the sourcing of data for AI model training.
Reddit claims they tried reaching an agreement with Anthropic but failed, and later discovered Anthropic kept accessing their site.
This lawsuit highlights how internet dynamics have changed. Previously, companies like Reddit would happily let sites like Google scrape their content, as Google’s search engine would drive more traffic in return.
LLMs like Anthropic’s Claude, on the other hand, actively steer users away from sites like Reddit. If Anthropic gives you your answer, why click through to a website filled with ads or extra content?
But even though the deal has changed, does that mean scraping sites should be illegal? Many argue it is legal under fair use, which means lawmakers would have to step in to change that.
And if lawmakers bar website scraping, what are the odds that new startups can afford to enter this space? Google pays Reddit $60 million per year for their content; how many seed-stage startups could afford that?
Finally, how would restrictions affect global competition? Put another way, even though US-based companies may be restricted from scraping, do we really think DeepSeek or Alibaba would stop scraping Reddit’s data?
Assuming companies based outside of the USA keep scraping, overregulation could cede the AI startup market to other nations.
All of this makes the lawsuit a critical test case, and lawmakers should consider all tradeoffs as AI keeps evolving.