Nathan Delacretaz

AI release cadence is compressing fast

12 Dec 2025

ai · thinking

The GPT-5.2 release made me wonder about the acceleration we're seeing at the end of this year. What happens when fierce competition, commoditization pressures, and economic fundamentals all converge? Are we seeing a real shift, or is this just recency bias?

I wanted data. So I opened Claude and asked Opus 4.5 to build metrics around AI release timelines. A few messages later, I had elegant charts showing the patterns.

Then I wanted to test GPT-5.2 in Codex. I pasted the dashboard code and asked it to integrate it into this website. A few more messages, and it was live with the right design, on a site that was never built for this kind of functionality.

That's what impresses me most about these tools. You can explore complex data questions and get production-ready visualizations in just a handful of messages. No setup, no boilerplate, just conversation and results.

I work with Claude Code, Codex, and Gemini CLI every day, but the speed still surprises me. This dashboard would have taken me days to build alone, if I'd even bothered to try.

Below is the interactive dashboard examining release cadence and competitive response times across OpenAI, Anthropic, and Google.

If you disagree with the dataset, that's the point: switch to the Data tab and edit it. The charts update instantly.

LLM Release Dynamics

[Interactive dashboard: 20 releases · 3 vendors · Nov 2022 – Dec 2025]

Release intervals are compressing. The chart plots a rolling average of days between consecutive releases (3-release window) per vendor: OpenAI, Anthropic, and Google. Key figures: an average of 74 days between releases in 2023–2024 versus 41 days in 2025, with 12 of the 20 releases being thinking models.

Data covers major frontier model releases from OpenAI, Anthropic, and Google (Nov 2022 – Dec 2025). Thinking models include those with explicit reasoning capabilities.
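The metric behind the chart is straightforward: sort the release dates, take the day gaps between consecutive releases, and smooth them with a rolling window of three. A minimal sketch in Python, using made-up dates for illustration (not the dashboard's actual dataset):

```python
from datetime import date

def rolling_release_intervals(dates, window=3):
    """Rolling average of day gaps between consecutive release dates."""
    dates = sorted(dates)
    # Day gap between each pair of consecutive releases.
    gaps = [(b - a).days for a, b in zip(dates, dates[1:])]
    # Average each trailing window of `window` gaps.
    return [
        sum(gaps[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(gaps))
    ]

# Illustrative dates only -- not the real release timeline.
releases = [date(2025, 1, 1), date(2025, 2, 10), date(2025, 3, 2),
            date(2025, 3, 22), date(2025, 4, 6)]
print(rolling_release_intervals(releases))
```

A shrinking rolling average over time is exactly the "compression" the chart shows; the window smooths out one-off gaps so a single delayed release doesn't dominate the trend.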