Salesforce's Token Delusion, Anthropic's Lock-In, Cursor's $29B Bet, & OpenAI's Consumer Blind Spot
Benioff brags about small numbers, Claude Code comes to Slack, Cursor raises billions while relying on Anthropic's engine, and OpenAI chases enterprise deals while Google eats their consumer lunch
Salesforce Is Delusional About Token Processing
Marc Benioff bragged that Salesforce processed 3.2 trillion tokens last quarter, calling it proof of “real enterprise adoption of agentic AI at scale globally.”
The market responded with memes, this one courtesy of Ethan Ding.
Because 3.2 trillion tokens isn’t that big.
OpenAI processes 8.6 trillion tokens per day. Google’s Gemini processes 43 trillion tokens daily. OpenAI has 30+ enterprise customers who have each processed over 1 trillion tokens with them alone.
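The gap is easier to see when you normalize everything to a daily rate. A quick back-of-the-envelope calculation (assuming a ~90-day quarter; all figures are the ones cited above):

```python
# Normalize Salesforce's quarterly token count to a daily rate and
# compare it with competitors' daily volumes. The 90-day quarter is
# an assumption; the token figures come from the public claims above.

DAYS_PER_QUARTER = 90

salesforce_quarterly = 3.2e12                  # tokens per quarter
salesforce_daily = salesforce_quarterly / DAYS_PER_QUARTER

openai_daily = 8.6e12                          # tokens per day
gemini_daily = 43e12                           # tokens per day

print(f"Salesforce: ~{salesforce_daily / 1e9:.0f}B tokens/day")
print(f"vs OpenAI:  {salesforce_daily / openai_daily:.2%} of OpenAI's daily volume")
print(f"vs Gemini:  {salesforce_daily / gemini_daily:.2%} of Gemini's daily volume")
```

By this math, Salesforce's entire quarter amounts to well under half a percent of what OpenAI processes every single day.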
But the real issue is that Salesforce’s executive team genuinely thought 3.2 trillion was impressive. They saw the word “trillion” and ran with it. That’s how far behind they actually are.
Imagine if Marc Benioff had instead acknowledged that the number is smaller than competitors', shared a point of view on why Salesforce delivers more value while processing fewer tokens, and laid out a roadmap for increasing that consumption or delivering more value over time.
That would have given him and Salesforce far more credibility. But instead, they got memes.
Claude Code Comes to Slack, Increasing Potential Lock-In and Switching Costs
Anthropic launched Claude Code in Slack, letting developers delegate coding tasks directly from chat threads. The beta feature builds on Anthropic’s existing Slack integration by adding full workflow automation.
Previously, developers could only get lightweight coding help from Claude in Slack, such as writing snippets, debugging, and explaining code. Now they can spin up complete coding sessions by tagging Claude with Slack context like bug reports or feature requests. Claude analyzes recent messages to determine the right repository, posts progress updates in threads, and shares links to review work and open pull requests.
The integration uses Slack’s MCP server, giving Claude access to channels, messages, and files. Salesforce is already deploying Claude Code across their global engineering organization, and Rakuten reduced software development timelines from 24 days to 5 days.
AI coding assistants are quickly moving from IDEs into collaboration tools where teams already work, and that could create tremendous lock-in.
The bear case for Anthropic has been the release of a superior coding LLM by a competitor. I previously warned that if a competitor like Gemini ships a superior coding model, Anthropic could lose 10-20% of their revenue overnight just from Cursor switching away.
But integrating with the collaboration stack introduces far more workflows, and makes switching substantially harder.
DocuSign doesn’t make $3 billion per year because they have the best e-signature product at the best price; they make that much money because so many enterprises are locked in by integrating their workflows. They’re stuck, and DocuSign knows it.
Similarly, embedding into the collaboration stack could make it much harder for customers to switch away from Anthropic.
This lock-in is clearly the bull case for Google, as Gemini should be best at integrating with their stack.
But there’s one company going a different direction from workflow lock-ins, and that’s Cursor. Is their product so much better that workflows don’t matter?
Cursor vs Claude Code: The Production Automobile Still Needs Claude’s Engine
Cursor CEO Michael Truell told Fortune’s AI Brainstorm conference Monday that OpenAI and Anthropic’s coding products are “concept cars” while Cursor builds production automobiles.
“What we do is we take the best intelligence that the market has to offer from many different providers. And we also do our own product-specific models in places. We take that, we build it together and integrate it, then also build the best tool and end UX for working with AI.”
Investors are buying it. They poured $2.3 billion into the company last month at a $29.3 billion valuation. But is Cursor’s “secret sauce” actually delivering more value than using Claude or GPT directly?
Composer Answers the Wrapper Question
Cursor’s strongest answer to the “wrapper” criticism arrived in October with Composer, their first proprietary model. It’s a mixture-of-experts architecture trained with reinforcement learning inside actual coding environments.
The result: 4x faster than comparable models, completing most coding tasks in under 30 seconds. Speed keeps developers in flow during iteration cycles.
But GPT-5 and Claude Sonnet 4.5 still outperform Composer in raw coding intelligence. Cursor groups them in a “Best Frontier” class that “both outperform Composer.” Composer trades intelligence for speed.
What the Enterprise Data Shows
Truell cited a University of Chicago study that found companies using Cursor “ship 40% more code and get 40% more of their roadmap done.”
The enterprise testimonials certainly agree with that.
Coinbase: “Single engineers are now refactoring, upgrading, or building new codebases in days instead of months.”
Trimble: 50% more code shipped.
Stripe: “Cursor quickly grew from hundreds to thousands of extremely enthusiastic Stripe employees.”
These are scale-and-refactor stories. When you’re navigating million-line codebases and making coordinated changes across dozens of files, Cursor’s codebase indexing and multi-agent orchestration create real value.
The Real Question
Truell’s “production automobile” analogy assumes the engine (base models) is commoditized and the value lives in integration. But is the engine commoditized if they still rely mostly on Anthropic?
If Anthropic can make Claude Code work better with the same underlying intelligence, it might actually be the better vehicle for certain roads.
The more interesting strategic question: Cursor just demonstrated with Composer that they can build their own frontier-adjacent models optimized for speed. If they can close the intelligence gap while maintaining the speed advantage, the “production automobile” thesis holds.
If Claude Code keeps improving autonomous capabilities and embedding into workflow automations, Cursor may be on the outside looking in.
Truell is betting that speed and tooling win. Anthropic is betting on themselves, and embedding with company workflows.
Cursor is still relying on Anthropic’s engine, and it’s hard to be too bullish until that changes.
OpenAI Touts Enterprise Wins While Missing the Consumer Opportunity
OpenAI released new data showing enterprise usage has surged dramatically, with ChatGPT message volume growing 8x since November 2024 and workers reporting 40-60 minutes saved daily. Custom GPT usage jumped 19x, now accounting for 20% of enterprise messages.
This report dropped just days after CEO Sam Altman sent an internal “code red” memo about competitive threats from Google. That memo delayed advertising plans and other initiatives to focus entirely on improving ChatGPT’s core experience.
Sam Altman shared the logic for focusing on both enterprise and consumer in an October 2025 interview with Ben Thompson: “it’s not like you use Google at home and a different company at work.”
He’s right about the behavior, but the causality runs the other way: people use Google at work because Google won at home. Instead of learning from that, OpenAI is focusing on work.
Focusing on the enterprise is a big distraction from their consumer dominance. ChatGPT has a once-in-a-generation opportunity to build a dominant consumer product, the likes of which we haven’t seen since Facebook, and Google before that. This enterprise distraction is creating cracks in their consumer offering.
Google’s Gemini grew from 450 million to 650 million monthly users in months, and Sensor Tower data shows ChatGPT’s growth has plateaued since summer while Gemini’s downloads, active users, and time-in-app are all growing faster.
That plateau is especially concerning given that OpenAI sits 4th in API token share, and has been 3rd or lower all year.
The majority of OpenAI’s revenue still comes from consumer subscriptions, and Google is eroding that consumer base.
Very rarely do companies get consumer opportunities like 800 million weekly users. Facebook had it with social networking. Google had it with search. OpenAI has it now with AI assistants.
Instead, they’re distracting themselves with the enterprise, and just hired Slack’s CEO as their new CRO.
The enterprise licenses won’t matter if Google captures the consumer market OpenAI is ignoring. That’s the real code red.