
OpenAI fires back at Google with GPT-5.2 after ‘code red’ memo

OpenAI launched its latest frontier model, GPT-5.2, on Thursday amid increasing competition from Google, pitching it as its most advanced model yet and one designed for developers and everyday professional use. 

OpenAI’s GPT-5.2 is coming to ChatGPT paid users and developers via the API in three flavors: Instant, a speed-optimized model for routine queries like information-seeking, writing, and translation; Thinking, which excels at complex structured work like coding, analyzing long documents, math, and planning; and Pro, the top-end model aimed at delivering maximum accuracy and reliability for difficult problems. 
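The article doesn't give the API identifiers for these tiers, but the split maps naturally onto a per-request model choice. As an illustration only, a developer-side request for each tier might be assembled like this; the model names (`gpt-5.2-instant`, etc.) are hypothetical placeholders, not confirmed API strings.

```python
# Hypothetical sketch: building chat-style request payloads for the three
# GPT-5.2 tiers described above. The model identifiers are placeholders,
# not confirmed OpenAI API names.

VARIANTS = {
    "instant": "gpt-5.2-instant",    # speed-optimized: lookup, writing, translation
    "thinking": "gpt-5.2-thinking",  # structured work: coding, long docs, math, planning
    "pro": "gpt-5.2-pro",            # maximum accuracy/reliability for hard problems
}

def build_request(tier: str, prompt: str) -> dict:
    """Assemble a chat-style request body for the chosen tier."""
    if tier not in VARIANTS:
        raise ValueError(f"unknown tier: {tier!r}")
    return {
        "model": VARIANTS[tier],
        "messages": [{"role": "user", "content": prompt}],
    }

req = build_request("thinking", "Refactor this function and explain the change.")
print(req["model"])
```

The point of the sketch is that tier selection is a per-call decision in the request body, so an application can route routine queries to the cheap fast tier and reserve the heavier tiers for complex work.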

“We designed 5.2 to unlock even more economic value for people,” Fidji Simo, OpenAI’s chief product officer, said Thursday during a briefing with journalists. “It’s better at creating spreadsheets, building presentations, writing code, perceiving images, understanding long context, using tools and then linking complex, multi-step projects.”

GPT-5.2 lands in the middle of an arms race with Google’s Gemini 3, which is topping LMArena’s leaderboard across most benchmarks (apart from coding — which Anthropic’s Claude Opus 4.5 still has on lock).

Earlier this month, The Information reported that CEO Sam Altman sent an internal “code red” memo to staff amid declining ChatGPT traffic and concerns that the company is losing consumer market share to Google. The memo called for a shift in priorities, including pausing planned initiatives like introducing ads and instead focusing on creating a better ChatGPT experience. 

GPT-5.2 is OpenAI’s push to reclaim leadership, even as some employees reportedly asked for the model release to be pushed back so the company could have more time to improve it. And despite indications that OpenAI would focus its attention on consumer use cases by adding more personalization and customization to ChatGPT, the launch of GPT-5.2 looks to beef up its enterprise opportunities. 

The company is specifically targeting developers and the tooling ecosystem, aiming to become the default foundation for building AI-powered applications. Earlier this week, OpenAI released new data showing enterprise usage of its AI tools has surged dramatically over the past year. 


This comes as Gemini 3 has become tightly integrated into Google’s product and cloud ecosystem for multimodal and agentic workflows. Google this week launched managed MCP servers that make Google and Google Cloud services like Maps and BigQuery easier for agents to plug into. (MCP, the Model Context Protocol, is the open standard that connects AI systems to external data and tools.)

OpenAI says GPT-5.2 sets new benchmark scores in coding, math, science, vision, long-context reasoning, and tool use, which the company claims could lead to “more reliable agentic workflows, production-grade code, and complex systems that operate across large contexts and real-world data.”

Those capabilities put it in direct competition with Gemini 3’s Deep Think mode, which has been touted as a major reasoning advancement targeting math, logic, and science. On OpenAI’s own benchmark chart, GPT-5.2 Thinking edges out Gemini 3 and Anthropic’s Claude Opus 4.5 in nearly every listed reasoning test, from real-world software engineering tasks (SWE-Bench Pro) and doctoral-level science knowledge (GPQA Diamond) to abstract reasoning and pattern discovery (ARC-AGI suites). 

Research lead Aidan Clark said that stronger math scores aren’t just about solving equations. Mathematical reasoning, he explained, is a proxy for whether a model can follow multi-step logic, keep numbers consistent across many steps, and avoid subtle errors that compound over time. 

“These are all properties that really matter across a wide range of different workloads,” Clark said. “Things like financial modeling, forecasting, doing an analysis of data.”

During the briefing, OpenAI product lead Max Schwarzer said GPT-5.2 “makes substantial improvements to code generation and debugging” and can walk through complex math and logic step by step. Coding startups like Windsurf and CharlieCode, he added, report “state-of-the-art agent coding performance” and measurable gains on complex multi-step workflows.

Beyond coding, Schwarzer said that GPT-5.2 Thinking responses contain 38% fewer errors than its predecessor, making the model more dependable for day-to-day decision-making, research, and writing. 

GPT-5.2 appears to be less a reinvention and more of a consolidation of OpenAI’s last two upgrades. GPT-5, which dropped in August, was a reset that laid the groundwork for a unified system with a router to toggle the model between a fast default model and a deeper “Thinking” mode. November’s GPT-5.1 focused on making that system warmer, more conversational, and better suited to agentic and coding tasks. The latest model, GPT-5.2, seems to turn up the dial on all of those advancements, making it a more reliable foundation for production use. 
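The router described above is essentially a classifier sitting in front of two models. As an illustration only (the heuristic and mode names below are invented, not OpenAI's actual routing logic), the idea can be sketched as a function that sends quick queries to a fast default and multi-step work to the deeper mode:

```python
# Illustrative-only sketch of the routing idea described above: quick
# queries go to a fast default model, harder multi-step work goes to a
# deeper "Thinking" mode. The keyword heuristic and mode names are
# invented for illustration, not OpenAI's implementation.

HARD_HINTS = ("prove", "debug", "refactor", "analyze", "plan", "step by step")

def route(query: str) -> str:
    """Pick a mode with a crude complexity heuristic."""
    q = query.lower()
    if any(hint in q for hint in HARD_HINTS) or len(q.split()) > 40:
        return "thinking"
    return "fast"

print(route("Translate 'hello' to French"))              # fast path
print(route("Debug this race condition step by step"))   # thinking path
```

In a production system the routing signal would come from a learned classifier rather than keywords, but the architectural shape — one entry point, multiple backing models — is the same.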

For OpenAI, the stakes have never been higher. The company has made commitments to the tune of $1.4 trillion for AI infrastructure buildouts over the next few years to support its growth — commitments it made when it still had the first-mover advantage among AI companies. But now that Google, which lagged behind at the start, is pushing ahead, that bet might be what’s driving Altman’s “code red.” 

OpenAI’s renewed focus on reasoning models is also a risky flex. The systems behind its Thinking and Deep Research modes are more expensive to run than standard chatbots because they chew through more compute. By doubling down on that kind of model with GPT-5.2, OpenAI may be setting up a vicious cycle: spend more on compute to win the leaderboard, then spend even more to keep those high-cost models running at scale.

OpenAI is already reportedly spending more on compute than it had previously let on. As TechCrunch reported recently, most of OpenAI’s inference spend — the money it spends on compute to run a trained AI model — is being paid in cash rather than through cloud credits, suggesting the company’s compute costs have grown beyond what partnerships and credits can subsidize.

During the call, Simo suggested that as OpenAI scales, it is able to offer more products and services to generate more revenue to pay for additional compute.

“But I think it’s important to place that in the grand arc of efficiency,” Simo said. “You are getting, today, a lot more intelligence for the same amount of compute and the same amount of dollars as you were a year ago.”

For all its focus on reasoning, one thing that’s absent from today’s launch is a new image generator. Altman reportedly said in his code red memo that image generation would be a key priority moving forward, particularly after Google’s Nano Banana (the nickname for Google’s Gemini 2.5 Flash Image model) had a viral moment following its August release.

Last month, Google launched Nano Banana Pro (aka Gemini 3 Pro Image), an upgraded version with even better text rendering, world knowledge, and an eerie, real-life, unedited vibe to its photos. It also integrates better across Google’s products, as demonstrated over the past week as it has popped up in tools and workflows like Google Labs’ Mixboard for automated presentation generation.

OpenAI reportedly plans to release another new model in January with better images, improved speed, and better personality, though the company didn’t confirm these plans Thursday.

OpenAI also said Thursday it’s rolling out new safety measures around mental health use and age verification for teens, but didn’t spend much of the launch pitching those changes.

This article has been updated with more information about OpenAI’s compute efficiency status.

Got a sensitive tip or confidential documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people impacted by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com or Russell Brandom at russell.brandom@techcrunch.com. For secure communication, you can contact them via Signal at @rebeccabellan.491 and russellbrandom.49.



TIME names ‘Architects of AI’ its Person of the Year

Each December, TIME Magazine names a person of the year — someone who has most influenced the news and the world, for good or ill. Last year, TIME chose President Donald Trump for the second time. The year before that, it was Taylor Swift, who many claimed saved the economy from a recession with her Eras Tour. In 1938, the magazine chose Adolf Hitler.

This year, TIME has chosen to bestow its award on not just one person, but a group of people: the so-called “Architects of AI,” comprising the CEOs shaping the global AI race from the U.S. With AI on everyone’s minds, embodying hope for a small minority and economic anxiety for a majority, per recent Edelman data, this tracks.

“For decades, humankind steeled itself for the rise of thinking machines,” the article reads. “Leaders striving to develop the technology, including Sam Altman and Elon Musk, warned that the pursuit of its powers could create unforeseen catastrophe […] This year, the debate about how to wield AI responsibly gave way to a sprint to deploy it as fast as possible.”

Based on one of TIME’s two cover photos, some of those people appear to be Nvidia’s Jensen Huang, Tesla’s Elon Musk, OpenAI’s Sam Altman, Meta’s Mark Zuckerberg, AMD’s Lisa Su, Anthropic’s Dario Amodei, Google DeepMind’s Demis Hassabis, and World Labs’ Fei-Fei Li — all individuals who raced “both beside and against each other.” 

TIME writes that these individuals, through their multibillion-dollar bets on “one of the biggest physical infrastructure projects of all time,” have reshaped government policy, turned up the heat on geopolitical competition, and pushed AI adoption forward. 

“This is the story of how AI changed our world in 2025, in new and exciting and sometimes frightening ways. It is the story of how Huang and other tech titans grabbed the wheel of history, developing technology and making decisions that are reshaping the information landscape, the climate, and our livelihoods… AI emerged as arguably the most consequential tool in great-power competition since the advent of nuclear weapons.”

TIME only announced the news on Thursday morning, but images of the cover photo were leaked on prediction market Polymarket on Wednesday evening.
