OpenAI's Content Bets, Disney and Amazon Partnerships, & Gemini 3 Flash Makes Frontier Reasoning Cheap.
Why Disney's IP solves AI's obscurity problem, how OpenAI accelerated its image model, what Amazon's $10B buys beyond compute, and how Google made frontier reasoning cheap
Disney’s $1B OpenAI Deal Shows AI Video Needs Recognizable Content
Disney announced a $1 billion equity investment in OpenAI, becoming the first major content licensing partner on Sora, OpenAI’s AI video generation platform.
The three-year deal gives Sora users access to more than 200 animated characters from Disney, Marvel, Pixar, and Star Wars. Think Darth Vader, Ariel, Iron Man, and Mickey Mouse. Users can generate short social videos using these characters starting in early 2026. Curated selections will also stream on Disney+.
What the deal doesn’t include: talent likenesses or voices. A video could feature Woody from Toy Story but without Tom Hanks. The agreement also excludes long-form content. These are 30-second social clips, not feature films.
This is how Disney operates. The company has always been built on IP that doesn’t depend on individual humans. Fittingly, its one major asset that does depend on individual humans, ESPN, is a joint venture between Disney and Hearst.
Walt Disney passed away in 1966, but Mickey Mouse kept printing money. This deal extends that principle. The extent to which this initiative succeeds is the extent to which consumers spend more time with Disney’s assets than with work from actual humans. Iger can talk about respecting creators, but the scarce resource is attention, and every minute spent generating Darth Vader videos is a minute not spent elsewhere.
Disney’s IP also solves a fundamental problem with AI-generated content: obscurity. The most viral Sora videos involved someone most people knew, OpenAI CEO Sam Altman, including a widely shared clip of him stealing Studio Ghibli art.
Now imagine how many more people know who Darth Vader is. Using him in an AI video is far more interesting than using some random person that most people don’t recognize.
The deal also highlights how differently OpenAI and Google approach content owners. On the same day Disney announced the partnership, it sent a cease-and-desist letter to Google alleging “massive” copyright infringement.
Google Search wouldn’t be dominant if it had to negotiate with everyone it wanted to index. It’s easy to imagine Google deciding it won’t make deals for anything unless absolutely forced to.
That’s OpenAI’s opening. The company doesn’t have anything close to Google’s infrastructure. So it’s very much in OpenAI’s interest to set the precedent that deals are part of doing business. Deals are an area where OpenAI isn’t structurally disadvantaged. Disney is a proof of concept, and Altman hinted more partnerships are coming.
OpenAI Bets on Creative Relationships with GPT-Image-1.5 Release
OpenAI released GPT-Image-1.5 on Tuesday, promising better instruction-following, more precise editing, and up to 4x faster image generation. The model is available to all ChatGPT users and via the API.
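For developers, the upgrade arrives through the same Images API surface as earlier OpenAI image models. A minimal sketch, assuming the model is exposed under the identifier "gpt-image-1.5" (not confirmed here) and returns base64-encoded image data the way gpt-image-1 does:

```python
# Minimal sketch: generate an image with the OpenAI Python SDK.
# "gpt-image-1.5" is an assumed model identifier; check OpenAI's model list
# for the exact string before running.
import base64

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1.5",   # assumption: exact API name may differ
    prompt="A watercolor lighthouse at dusk, soft warm light",
    size="1024x1024",
)

# Assumes the response carries base64 image data, as gpt-image-1 does.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("lighthouse.png", "wb") as f:
    f.write(image_bytes)
```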
OpenAI originally planned a January release but accelerated after CEO Sam Altman issued an internal “code red” memo in early December, pausing non-core projects to counter Google’s Gemini 3 momentum. Google’s Nano Banana Pro has been running circles around DALL·E since its November launch.
The upgrade focuses on consistency, which matters more than benchmarks suggest. Most AI image tools are bad at iteration. Ask for a specific change like “adjust the facial expression” or “make lighting colder,” and models will often reinterpret the entire image. GPT-Image-1.5 promises more granular controls to maintain visual consistency across edits.
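That iteration loop is where an API-level sketch helps. Here is a hedged example of a single targeted edit through the SDK’s edit endpoint, again assuming the "gpt-image-1.5" identifier and gpt-image-1-style base64 responses, reusing the file generated above:

```python
# Hedged sketch: request one targeted change to an existing image and
# nothing else. "gpt-image-1.5" is an assumed model identifier.
import base64

from openai import OpenAI

client = OpenAI()

with open("lighthouse.png", "rb") as source:
    edited = client.images.edit(
        model="gpt-image-1.5",   # assumption: exact API name may differ
        image=source,
        prompt="Make the lighting colder; keep composition and subject unchanged",
    )

with open("lighthouse_cold.png", "wb") as f:
    f.write(base64.b64decode(edited.data[0].b64_json))
```

The test of the release is whether edits like this change only what was asked for.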
But consistency is about more than technical capability. It’s about whether users come back. ChatGPT’s approach makes you want to continue the adventures of characters who persist across a thread. Gemini generates perfectly fine images, but by not incorporating context from previous generations, it feels like a tool for specific needs rather than a creative playground. Users never explicitly asked for character consistency, but they like it when they get it.
This is the product insight OpenAI is betting on: one role of a product is to show you what you can do; another is to inspire you to come up with more ideas. The new dedicated image entry point in ChatGPT’s sidebar, which works “more like a creative studio” according to Fidji Simo, OpenAI’s CEO of Applications, pushes in this direction.
Google maintains its lead on benchmarks. Gemini 3 and Nano Banana Pro still top the LMArena leaderboard. Altman told CNBC that “Gemini 3 has had less of an impact on our metrics than we feared.” But if OpenAI can make image generation feel like an ongoing creative relationship rather than a one-shot tool, the benchmark gap may matter less than it appears.
Google’s Gemini 3 Flash Brings Frontier Intelligence to Budget Pricing
Google released Gemini 3 Flash on Tuesday, a faster and cheaper model that now powers the Gemini app and AI Mode in Google Search by default.
The model combines Gemini 3 Pro’s reasoning capabilities with Flash-level latency and cost. Pricing sits at $0.50 per million input tokens and $3 per million output tokens, less than a quarter of Gemini 3 Pro’s cost. Google claims the model outperforms Gemini 2.5 Pro across most benchmarks while running three times faster.
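A back-of-envelope comparison shows what that pricing means for a real workload. The Flash rates below are the ones quoted above; the Gemini 3 Pro rates are assumptions used only for illustration:

```python
# Rough cost comparison for 1M requests, each ~2,000 input and ~500 output tokens.
# Flash prices come from Google's announcement; the Pro prices are assumed.
PRICES = {                       # $ per 1M tokens: (input, output)
    "gemini-3-flash": (0.50, 3.00),
    "gemini-3-pro": (2.00, 12.00),   # assumption, for illustration only
}

def workload_cost(model, requests, in_tokens, out_tokens):
    in_price, out_price = PRICES[model]
    return requests * (in_tokens * in_price + out_tokens * out_price) / 1_000_000

for model in PRICES:
    cost = workload_cost(model, requests=1_000_000, in_tokens=2_000, out_tokens=500)
    print(f"{model}: ${cost:,.0f}")
# gemini-3-flash: $2,500
# gemini-3-pro: $10,000
```

At those assumed rates, the same traffic runs roughly 4x cheaper on Flash.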
Enterprise adoption is already strong. Harvey, the AI platform for law firms, reported a 7% improvement on their internal legal benchmark. Resemble AI found it processes forensic data for deepfake detection 4x faster than Gemini 2.5 Pro. Box reported 15% higher accuracy on extraction tasks like handwriting, contracts, and financial data. JetBrains, Figma, Cursor, Bridgewater Associates, and Replit are all using the model in production.
The benchmarks are solid: Gemini 3 Flash scored 33.7% on Humanity’s Last Exam without tool use, compared to 37.5% for Gemini 3 Pro and 34.5% for GPT-5.2. On MMMU-Pro, the multimodality benchmark, it outscored all competitors at 81.2%.
Independent benchmarking from Artificial Analysis adds nuance. Gemini 3 Flash recorded 218 output tokens per second, 22% slower than Gemini 2.5 Flash but significantly faster than GPT-5.1 (125 t/s) and DeepSeek V3.2 reasoning (30 t/s). Artificial Analysis also crowned it the new leader in their knowledge accuracy benchmark.
Google’s combination of speed, cost, and capability makes it ideal for high-frequency agentic workflows where response time matters. Google has processed over one trillion tokens per day through the API since Gemini 3’s launch.
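For teams wiring Flash into those loops, the call itself is simple. A minimal sketch using the google-genai Python SDK, with the model string "gemini-3-flash" assumed rather than confirmed, and streaming enabled so downstream steps can start before the full response arrives:

```python
# Minimal latency-minded sketch with the google-genai SDK.
# "gemini-3-flash" is an assumed model string; check Google's model catalog.
from google import genai

client = genai.Client()  # reads the API key from the environment

# Stream output tokens as they arrive instead of waiting for the full reply.
for chunk in client.models.generate_content_stream(
    model="gemini-3-flash",
    contents="Extract the invoice number and total from: 'INV-4821, due $1,240'",
):
    print(chunk.text or "", end="", flush=True)
```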
Amazon’s $10 Billion OpenAI Bet Could Finally Validate Trainium
Amazon is in early discussions to invest up to $10 billion in OpenAI. The deal would value OpenAI at over $500 billion and require OpenAI to use Amazon’s Trainium chips.
The Trainium angle is the real story here. AWS has a massive chicken-and-egg problem with its custom silicon. No major frontier model trains on Trainium, so developers don’t optimize for it. Developers don’t optimize for it, so no major frontier model trains on it. Meanwhile, Google proved with Gemini 3 that you can train state-of-the-art models entirely on custom TPUs, without any Nvidia dependency.
AWS is light years behind Google’s TPU infrastructure. Google’s vertical integration produced the best model in the industry at the lowest cost structure. Anthropic trains on Google TPUs. AWS announced Trainium2 with impressive specs, but specs don’t matter without proof that the chips can produce frontier intelligence.
OpenAI training on Trainium would break the cycle overnight. If the company behind ChatGPT validates your silicon, every enterprise customer paying attention reconsiders their chip strategy.
Amazon already has $8 billion invested in Anthropic, which trains on Google TPUs and AWS infrastructure. Adding OpenAI diversifies Amazon’s exposure to frontier model providers while creating a second proof point for Trainium. If both Anthropic and OpenAI run workloads on Amazon silicon, the chip graduates from “interesting alternative” to “proven infrastructure.”
The deal would also let Amazon offer OpenAI capabilities to its marketplace customers, similar to partnerships OpenAI has signed with Etsy, Shopify, and Instacart. Microsoft still holds exclusive rights to OpenAI’s most advanced models on its cloud platform until the 2030s, but Amazon would get chips in the game.
For OpenAI, this is about compute diversification. The company has made over $1.4 trillion in infrastructure commitments since September across Nvidia, AMD, Broadcom, Oracle, and now potentially AWS. Spreading workloads across multiple chip architectures reduces dependency on any single supplier.
For AWS, it’s about proving Trainium belongs in the conversation. That could reshape a market worth trillions.