OpenAI Slashes o3 API Prices by 80% Without Hurting Performance
OpenAI is making waves again, but this time it’s not a new model or feature. It’s a major price drop—one that could shift how developers build and scale AI apps overnight.
On Wednesday, OpenAI announced it’s reducing prices for its most capable reasoning model, o3, by a staggering 80%. And here’s the kicker: the model’s performance hasn’t changed one bit. That’s not just a marketing line. Independent testers backed it up.
What’s Actually Changed With o3?
The price cut is massive. We’re talking about a reduction from $10 to $2 per million input tokens, and $40 to $8 per million output tokens.
That alone is enough to turn heads in the dev community. But it’s not just the numbers that matter—it’s what didn’t change that’s more impressive.
The o3 model you get through the API today is the exact same one you got before the price drop. OpenAI spelled this out on X, saying they simply “optimized [their] inference stack.” Basically, they found a more efficient way to run the same model—nothing more, nothing less.
Performance? Still Rock Solid, Says ARC
There were plenty of raised eyebrows after the announcement. A price drop that steep often smells like a downgrade. Not this time.
ARC Prize, a benchmark group that independently tests AI models, confirmed the performance of the o3-2025-04-16 model hasn’t budged.
Just one sentence from their statement summed it all up:
“We compared the retest results with the original results and observed no difference in performance.”
Let’s pause there.
That’s a critical point. No performance hit. No sneaky model swap. Just better backend work.
Why This Actually Matters for Developers
For devs building with OpenAI’s tools, pricing isn’t just a budgeting line item—it’s make or break.
Here’s what this means for developers (a quick cost sketch follows the list):

- Input now costs $2 per million tokens
- Output now costs $8 per million tokens
- Same model, same accuracy, lower burn rate
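To see what that does to a real bill, here’s a minimal Python sketch. The per-token rates are the published ones; the monthly token volumes are hypothetical numbers picked purely for illustration.

```python
# Estimate a monthly o3 API bill at the new rates.
# Rates are the published prices; token counts below are made-up examples.

O3_INPUT_PER_MILLION = 2.00   # USD per 1M input tokens (was $10)
O3_OUTPUT_PER_MILLION = 8.00  # USD per 1M output tokens (was $40)

def o3_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated o3 API cost in USD for a given token volume."""
    return (
        input_tokens / 1_000_000 * O3_INPUT_PER_MILLION
        + output_tokens / 1_000_000 * O3_OUTPUT_PER_MILLION
    )

# Example: an app that burns 50M input and 10M output tokens a month.
# Old bill: 50 * $10 + 10 * $40 = $900. New bill:
print(f"${o3_cost(50_000_000, 10_000_000):.2f}")  # -> $180.00
```

Same workload, $900 down to $180: the 80% cut applies evenly to input and output, so total spend drops by 80% regardless of your input/output mix.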
Apps like Cursor and Windsurf, which are built directly on the API, instantly become more cost-effective to run. That trickles down into cheaper tools for users—or better margins for startups.
And if you’re an indie developer? This move could bring enterprise-grade AI into your weekend side project.
A Look at the Numbers
Let’s put it in perspective with a table. Here’s how the old and new prices stack up:
| Token Type | Old Price (per million) | New Price (per million) | % Decrease |
|---|---|---|---|
| Input | $10 | $2 | 80% |
| Output | $40 | $8 | 80% |
That’s not a small dip. That’s OpenAI essentially opening the floodgates.
The “o3-pro” Surprise That Followed
While all eyes were on the price cut, OpenAI quietly added something else to the API: o3-pro.
This new variant of the model is built for users who want even better output quality. It uses more compute, which likely means it’ll cost more—but also means it can give stronger responses in complex scenarios.
The timing wasn’t random either. OpenAI knows that not every user wants to save money. Some want better answers, even at a higher price. o3-pro fills that gap.
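If you’re already calling o3 through the API, trying the pricier variant is just a model-ID change. Here’s a minimal sketch using OpenAI’s official `openai` Python SDK and its Responses API; it assumes `OPENAI_API_KEY` is set in your environment and that your account has access to both model IDs.

```python
# Minimal sketch with the official `openai` Python SDK (pip install openai).
# Assumes OPENAI_API_KEY is set and your account can access both model IDs.
from openai import OpenAI

client = OpenAI()

# Cheaper default: the repriced o3 model.
response = client.responses.create(
    model="o3",  # swap in "o3-pro" when answer quality matters more than cost
    input="Summarize the trade-offs between B-trees and LSM-trees.",
)

print(response.output_text)
```

Because the two models share an interface, you can route routine requests to o3 and reserve o3-pro for the handful of queries where extra compute is worth the extra spend.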
API Users Win Big, Regular Users… Not So Much
If you’re using ChatGPT through the regular app, this change doesn’t directly affect you. Prices there haven’t moved.
But under the hood, the ripple effect is real. Lower API costs mean third-party tools built on OpenAI’s API might become faster, smarter, or cheaper.
You might not see it on your bill, but you might feel it in the apps you use every day.