Salesforce's Agentforce Mirage, Anthropic's Pentagon Standoff, & OpenAI's Cerebras Cost Play
Salesforce inflates Agentforce with a bundling trick while leadership walks out, Anthropic risks a "supply chain risk" designation after questioning a U.S. military operation, OpenAI runs on cheaper chips
Salesforce’s $1.4 Billion AI Number Is a Bundling Trick
Salesforce quietly laid off fewer than 1,000 employees on February 10, including staff on the Agentforce AI product teams. The company didn’t announce it; the news surfaced through employee LinkedIn posts.
Five senior executives departed between December and February, including Adam Evans, the EVP who architected Agentforce itself, the CEO of Slack (who left for OpenAI), and the CMO (who left for AMD). If Agentforce were working so well, why is this happening?
Start with the headline number. Salesforce claims “$1.4 billion in Agentforce ARR.” But read the investor release closely and the metric is “Agentforce and Data 360 ARR.”
Data 360 is a rebrand of Data Cloud, a product that was generating significant revenue before Agentforce even launched. Strip out Data 360 and Agentforce standalone is roughly $540 million. Salesforce took a pre-existing revenue stream, renamed it, bundled it with the new AI product, and presented a combined number that makes the AI story look roughly 2.6x bigger than it is.
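The inflation is easy to check. A quick back-of-the-envelope sketch using the figures above (the Data 360 portion is inferred as the remainder; it is not separately disclosed in the release):

```python
# Reported headline vs. the standalone Agentforce figure.
combined_arr = 1.4e9    # "Agentforce and Data 360 ARR" from the investor release
agentforce_arr = 540e6  # estimated Agentforce-only ARR

data360_arr = combined_arr - agentforce_arr  # pre-existing revenue folded in
inflation = combined_arr / agentforce_arr    # how much bigger the bundle looks

print(f"Inferred Data 360 portion: ${data360_arr / 1e6:.0f}M")  # → $860M
print(f"Headline is {inflation:.1f}x the standalone number")    # → 2.6x
```

In other words, the majority of the headline number predates the AI product it is being used to sell.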
In December, we wrote about Salesforce’s “token delusion” when Benioff bragged that Salesforce processed 3.2 trillion tokens in a quarter. OpenAI processes 8.6 trillion per day, and Google’s Gemini processes 43 trillion daily. The market responded with memes. The pattern is the same: find a number that sounds big in isolation, strip it of context, and hope nobody checks.
The 18,500 “deals” number needs context too. Benioff was asked about this directly at Dreamforce: over half of those deals are existing customers expanding, not net-new AI customers. Against Salesforce’s 150,000+ customers, that works out to roughly a 6% net-new adoption rate. Salesforce also introduced Agentic Enterprise License Agreements that bundle Agentforce into larger contracts, and a Flex Credits model that lets customers convert existing license spend toward Agentforce without spending new money.
Then there’s the product itself. Benioff claims 93% agent accuracy, but Salesforce’s own benchmark study showed only 35% of complex multi-turn flows resolved end-to-end. In Six Sigma terms, 93% accuracy on a million support cases means 70,000 wrong answers. Enterprise developers describe a “doom-prompting” cycle where identical scenarios trigger different execution paths and the only fix is rewriting prompts endlessly. Senior consultants call the failure mode “confidently wrong,” which is the worst possible outcome for customer-facing AI.
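The Six Sigma framing maps onto defects per million opportunities (DPMO), the metric that methodology actually uses. A minimal sketch; the 3.4 DPMO benchmark is the standard Six Sigma target, not a figure from the article:

```python
# Convert claimed accuracy into defects per million opportunities (DPMO).
accuracy = 0.93
opportunities = 1_000_000  # e.g., a million support cases

dpmo = round((1 - accuracy) * opportunities)
print(f"{dpmo:,} wrong answers per million cases")  # → 70,000

# The classic Six Sigma benchmark is 3.4 DPMO.
print(f"That is ~{dpmo / 3.4:,.0f}x the Six Sigma target")
```

At customer-facing scale, a seemingly high accuracy percentage still translates into tens of thousands of failures.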
The pricing tells the same story: Salesforce has overhauled Agentforce pricing three times in one year, from $2 per conversation, to Flex Credits, to a three-tier model.
And then there’s the layoff boomerang. Benioff bragged about cutting support staff from 9,000 to 5,000 because he “needs less heads with AI.” Then executives quietly admitted Salesforce was “too confident” in AI’s ability to replace human judgment. As resolution times for complex cases got longer, they started re-hiring former employees.
We wrote in December about why SaaS is in trouble, and Salesforce is the clearest case study. The stock is down 40%+ from its December 2024 peak. When Anthropic’s legal plugin wiped $285 billion from European software stocks in February, analysts at Jefferies called it the “SaaSpocalypse”. The threat isn’t that Agentforce doesn’t work. It’s that AI-native platforms like Claude and ChatGPT could reduce Salesforce to a dumb database that agents query in the background, making their entire UI obsolete.
While the AI problem is existential, Salesforce's decline has been mostly self-inflicted. They see the problem as clearly as everyone else; they just can't execute. Now their top talent is moving on.
Anthropic’s Pentagon Standoff Could Cost Far More Than $200 Million
Axios reported on February 15 that the Pentagon is threatening to cut business ties with Anthropic and designate the company a “supply chain risk” over Anthropic’s refusal to allow unrestricted military use of Claude. Anthropic holds a $200 million Pentagon contract, and Claude is the only AI model currently operating in classified military systems.
The Pentagon wants all four AI providers, Anthropic, OpenAI, Google, and xAI, to allow “all lawful purposes” including weapons development, intelligence collection, and battlefield operations. Anthropic is restricting two areas: mass surveillance of Americans and fully autonomous weaponry.
The flashpoint was the Maduro capture operation. An Anthropic executive contacted Palantir to ask whether Claude had been used in the raid where “kinetic fire” occurred. The U.S. government captured a foreign adversary, and Anthropic’s response was to call Palantir and ask questions. That call did not go over well at the Pentagon. Defense Secretary Pete Hegseth is reportedly “close” to designating Anthropic a supply chain risk, a label normally reserved for foreign adversaries like Huawei.
Two days before the Axios story dropped, Dario Amodei sat for a long interview on the Dwarkesh Podcast and disclosed Anthropic’s revenue run rate has hit $14 billion, up from zero three years ago. He said Anthropic could be profitable in 2026. He also said: “We’re under an incredible amount of commercial pressure and make it even harder for ourselves because we have all this safety stuff we do.” Commentators flagged that the interview contained almost no discussion of alignment at all.
OpenAI, Google, and xAI aren’t drawing the same lines with the Pentagon. If Anthropic loses the contract, those three are positioned to absorb every classified AI workload. And the competitive gap is narrowing. On February 17, Anthropic released Sonnet 4.6, which matches Opus 4.6’s performance at one-fifth the cost. Last week’s Super Bowl ads pushed Claude to No. 7 on the App Store with an 11% jump in daily active users. Anthropic just closed $30 billion in funding. The product is strong, and the business is growing. But strong products and growing businesses don’t help if the U.S. government decides you’re a risk.
The $200 million contract is less than 2% of Anthropic’s revenue. They can absorb that loss. What they can’t absorb is the label. If “supply chain risk” sticks, it follows Anthropic into every enterprise sales cycle, every government RFP, and every boardroom where a CIO is choosing between Claude and a competitor that doesn’t come with a Pentagon warning attached.
OpenAI Running Codex on Cerebras Could Be a Cost Play
OpenAI launched GPT-5.3-Codex-Spark over February 12 and 13, an ultra-fast coding model that generates over 1,000 tokens per second. The speed comes from Cerebras wafer-scale chips, and this is OpenAI’s first production deployment on non-Nvidia hardware.
Codex now has over one million weekly active users, and at 1,000+ tokens per second the model generates code faster than most developers can read it. That makes real-time AI pair programming viable in a way that 100-200 token-per-second GPU inference never was.
But Codex Spark isn’t a frontier model. OpenAI built a smaller, less capable model optimized for speed. The trade-off works because coding autocomplete needs to be fast, not brilliant.
And fast on cheap hardware is exactly what OpenAI needs right now. The company is projecting $14 billion in losses for 2026, with cumulative losses approaching $44 billion by 2028. Cerebras is 32% cheaper than Nvidia’s Blackwell GPUs and uses a third of the power.
When Nvidia acquired Groq for $20 billion in December, we wrote that Cerebras was “the last independent SRAM-based inference company.” Nvidia saw the value in the inference market and moved to consolidate it. We’ve tracked Nvidia’s strategy of commoditizing their complements since October, following how Jensen invests in competing AI labs to ensure none of them has a dominant position. OpenAI partnered with Broadcom in October to design custom inference chips. Now they’re running production coding workloads on Cerebras.
None of this threatens Nvidia’s training monopoly. Frontier models will still train on Nvidia because training requires the massive interconnect bandwidth that only NVLink provides. And not everyone is shopping for alternatives. On the same day Codex-Spark launched on Cerebras, Meta signed a multiyear deal with Nvidia for millions of Blackwell GPUs, Grace CPUs, and Spectrum-X switches, a deal worth tens of billions. Nvidia’s data center business is on pace for $170 billion this fiscal year.
Cerebras got a flagship reference customer right before a planned IPO at a $23 billion valuation. For OpenAI, this is less about breaking Nvidia’s grip and more about a company that needs to cut costs somewhere.
Every AI CEO Showed Up in New Delhi
India is hosting the AI Impact Summit this week, the first major international AI summit in the Global South. Sundar Pichai, Sam Altman, Dario Amodei, Demis Hassabis, and heads of state from over 60 countries are in attendance.
In November, we wrote about the battle for India’s 1.4 billion users, covering how Google partnered with Reliance Jio (505 million subscribers), Perplexity partnered with Airtel (360 million subscribers), and OpenAI went direct-to-consumer with ChatGPT Go. We questioned whether Indian consumers would pay $20/month for AI subscriptions when their entire mobile plans cost $3-5.
The distribution question has an answer. Altman disclosed at the summit that India has 100 million weekly active ChatGPT users, second only to the United States. Anthropic said India is now its second-largest market by revenue.
And now the capital is following the users. Blackstone is leading a $600 million round in Neysa, an Indian AI cloud startup with 20,000+ GPUs. Adani committed $100 billion to AI data centers by 2035. India earmarked $1.1 billion for an AI and advanced manufacturing venture fund. Qualcomm is deploying $150 million into Indian AI startups.
Five months ago the story was which carrier would bundle which AI chatbot. Now every major AI CEO flew to New Delhi personally. India didn’t wait for Silicon Valley to build its AI infrastructure. The 100 million users were already there, and the investment is catching up.