Yann LeCun Leaves Meta: The Godfather Bet Against LLMs and Lost the Room
Meta's chief AI scientist and Turing Award winner spent years arguing LLMs are a dead end. He might be right, but Meta's survival depends on them.
The AI community is treating this like a funeral.
Yann LeCun, Meta’s chief AI scientist and founder of FAIR, announced that he is leaving Meta after 12 years. He’s one of the most influential figures in modern AI, a Turing Award winner who helped invent deep learning.
The 65-year-old will launch a startup focused on “world models”, a fundamentally different approach from the large language models that are dominating tech.
“For Meta, the loss is both symbolic and strategic,” wrote Tech Startups. “FAIR was once the intellectual engine of its AI ambitions. Without LeCun at the helm, Meta risks losing one of its strongest academic anchors.”
Analysts at Seeking Alpha warned that LeCun’s exit “signals a shift away from longer-term, fundamental AI research at Meta, potentially slowing innovation in advanced AI concepts.” They flagged concerns about “strategic disruption and talent loss.”
The brain drain narrative writes itself. Only 3 of the 14 original Llama researchers remain at Meta. The 600 AI layoffs in October hit FAIR directly. One analysis called it a “spectacular own goal”: losing one of the inventors of deep learning’s foundational techniques while betting $600 billion on AI infrastructure.
Make no mistake, losing one of the godfathers of AI is a big deal. But his departure, frankly, makes a ton of sense.
It’s the natural conclusion of a philosophical divide that’s been widening since ChatGPT launched three years ago.
LeCun has spent years publicly arguing that LLMs are a “dead end” for achieving human-level intelligence.
“A cat can remember, can understand the physical world, can plan complex actions, can do some level of reasoning—actually much better than the biggest LLMs,” LeCun testified to Congress.
And so far, he’s been proven right.
The Research Backs LeCun Up
LeCun’s skepticism sounds contrarian against ChatGPT mania. But the research keeps piling up in his favor.
Apple researchers showed how even frontier reasoning models suffer “complete accuracy collapse” beyond certain complexity thresholds. The models fail to use explicit algorithms consistently and can’t generalize narrow conclusions to broader contexts, which is a fundamental component of human-like intelligence.
The scaling story isn’t as clean as it once looked either. Performance improvements from larger models are flattening along power-law curves, meaning each doubling of compute yields smaller gains than the last. One analysis found that going from 100 billion to 200 billion parameters might deliver only a 1-2% improvement, compared with 10-15% gains at smaller scales.
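To make the diminishing-returns point concrete, here is a toy sketch of a saturating power-law scaling curve. The constants are purely illustrative (not Meta’s or anyone’s measured numbers), chosen only to mimic the pattern of shrinking per-doubling gains described above:

```python
# Toy scaling curve: loss(N) = floor + a * N**-alpha.
# All constants are hypothetical, for illustration only.
def loss(n_params: float, floor: float = 1.0, a: float = 2000.0, alpha: float = 0.4) -> float:
    """Modeled loss for a model with n_params parameters."""
    return floor + a * n_params ** -alpha

def gain_from_doubling(n_params: float) -> float:
    """Relative loss reduction from doubling the parameter count."""
    before, after = loss(n_params), loss(2 * n_params)
    return (before - after) / before

for n in (1e9, 10e9, 100e9):
    print(f"{n / 1e9:>5.0f}B -> {2 * n / 1e9:>5.0f}B params: "
          f"{gain_from_doubling(n):.1%} lower loss")
```

Because the curve approaches a floor, each doubling buys a smaller relative improvement than the one before it, which is the dynamic the analysis above describes.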
Sam Altman himself acknowledged that GPT-5 has “saturated the chat use case”, a striking admission from someone who had earlier said he didn’t expect to be smarter than GPT-5.
Meta’s $60 Billion Reason to Ignore Him
But even if LeCun is completely right about LLMs being a dead end for superintelligence, Meta can still make a fortune from them.
The company’s Generative Ads Model (GEM) is the largest foundation model for recommendation systems in the industry. It was trained at LLM scale across thousands of GPUs, and is already driving significant ad conversion increases across Instagram and Facebook.
Meta’s AI-powered ad solutions generate over $60 billion in annual recurring revenue (ARR), more than 30% of total revenue. Over 4 million advertisers now use Meta’s generative AI tools, up from 1 million just six months ago. Advantage+ shopping campaigns grew 70% year-over-year and crossed $20 billion in ARR.
Meta’s vision is simple: brands input a product image, budget cap, and timeline. AI handles the rest: copywriting, image generation, targeting, and optimization.
And LLMs are the engine. They can generate endless ad copy permutations, A/B test thousands of image variations, and optimize click-through in real time.
Meta expects its GenAI products to generate between $460 billion and $1.4 trillion in total revenue by 2035. But it needs a frontier model of its own to pull that off; otherwise it stays dependent on other vendors.
And Llama now trails just about every other LLM you can imagine: it has disappeared from OpenRouter’s top 10 by market share.
Which would matter less if Meta wasn’t also losing ground on the product side.
Every Minute on Sora Is a Minute Off Instagram
Zuckerberg can’t afford philosophical debates about world models versus LLMs, because the existential threat is already here.
OpenAI’s Sora app hit over a million iOS downloads within the first week of launch, despite being invite-only. It spent three weeks at #1 on the App Store.
Meta’s answer, Vibes, is still behind.
Aside from being behind, Vibes doesn’t even run on Meta’s own models. It relies on third-party tools from Midjourney and Black Forest Labs while Meta develops its own technology. Meta is even paying creators, under NDA, to populate the Vibes feed, hoping this will bootstrap engagement.
And then you have Google. Their Veo 3 now powers YouTube Shorts with free AI video generation for millions of creators. Over 275 million videos have been generated in Google’s Flow filmmaking tool. YouTube already owns the creator ecosystem. If generative AI video becomes the default content format, Google has the distribution locked up.
If Sora has staying power, it could eat away at the time people spend scrolling Instagram and Facebook. That attention is worth $192 billion in ad revenue this year alone. Every minute someone spends generating videos on Sora or watching AI content on YouTube is a minute they’re not on Reels.
Which is why Meta is moving so quickly.
$2 Million Packages Weren’t Enough
But moving quickly means hiring quickly, and hiring became a problem.
How do you recruit top AI researchers when your chief scientist is publicly calling their life’s work “basically an off-ramp, a distraction, a dead end?”
Meta lost 4.3% of its AI talent in 2024, the second-highest attrition rate behind Google. The company was offering $2 million-plus annual packages and still losing candidates to OpenAI and Anthropic.
Zuckerberg responded by recruiting Scale AI CEO Alexandr Wang, plus Safe Superintelligence (SSI) CEO Daniel Gross and former GitHub CEO Nat Friedman.
All of this new talent drowned out LeCun. The writing was on the wall.
The Amicable Divorce
Given all this, LeCun’s departure makes sense for both sides.
LeCun will focus on the capabilities he believes are necessary for genuine intelligence: AI systems that understand the physical world, maintain persistent memory, and plan complex action sequences. Meta will partner with LeCun’s new venture, supporting projects aligned with their interests while letting other work remain independent.
Meta gets a cleaner organizational structure, removing the tension between research-first and product-first factions.
LeCun might be right that world models are necessary for AGI. Meta might be right that LLMs can unlock trillions in ad revenue. Both things can be true.
The godfather of deep learning bet against LLMs. He lost the room. Now he gets to prove them wrong.