OpenAI Slashes ChatGPT o3 API Prices by 80% Without Hurting Performance
<p data-start="250" data-end="421">OpenAI is making waves again, but this time it’s not a new model or feature. It’s a major price drop—one that could shift how developers build and scale AI apps overnight.</p>
<p data-start="423" data-end="676">On Wednesday, OpenAI announced it’s reducing prices for its most capable reasoning model, o3, by a staggering 80%. And here&#8217;s the kicker: the model’s performance hasn&#8217;t changed one bit. That’s not just a marketing line. Independent testers backed it up.</p>
<h2 data-start="678" data-end="713">What’s Actually Changed With o3?</h2>
<p data-start="715" data-end="854">The price cut is massive. We&#8217;re talking about a reduction from $10 to $2 per million input tokens, and $40 to $8 per million output tokens.</p>
<p data-start="856" data-end="1002">That alone is enough to turn heads in the dev community. But it’s not just the numbers that matter—it’s what didn’t change that’s more impressive.</p>
<p data-start="1004" data-end="1290">The o3 model you get through the API today is the exact same one you got before the price drop. OpenAI clarified this point clearly on X, saying they simply “optimized [their] inference stack.” Basically, they found a more efficient way to run the same model—nothing more, nothing less.</p>
<p data-start="1004" data-end="1290"><a href="https://www.theibulletin.com/wp-content/uploads/2025/06/OpenAI-o3-model-pricing-2025-chart.jpg"><img class="aligncenter size-full wp-image-57661" src="https://www.theibulletin.com/wp-content/uploads/2025/06/OpenAI-o3-model-pricing-2025-chart.jpg" alt="OpenAI o3 model pricing 2025 chart" width="1486" height="817" /></a></p>
<h2 data-start="1292" data-end="1334">Performance? Still Rock Solid, Says ARC</h2>
<p data-start="1336" data-end="1466">There were plenty of raised eyebrows after the announcement. A price drop that steep often smells like a downgrade. Not this time.</p>
<p data-start="1468" data-end="1600">ARC Prize, a benchmark group that independently tests AI models, confirmed the performance of the o3-2025-04-16 model hasn’t budged.</p>
<p data-start="1602" data-end="1762">Just one sentence from their statement summed it all up:<br data-start="1658" data-end="1661" />“We compared the retest results with the original results and observed no difference in performance.”</p>
<p data-start="1764" data-end="1782">Let’s pause there.</p>
<p data-start="1784" data-end="1876">That&#8217;s a critical point. No performance hit. No sneaky model swap. Just better backend work.</p>
<h2 data-start="1878" data-end="1921">Why This Actually Matters for Developers</h2>
<p data-start="1923" data-end="2022">For devs building with OpenAI’s tools, pricing isn’t just a budgeting line item—it’s make or break.</p>
<p data-start="2024" data-end="2062">Here’s what this means for developers:</p>
<ul>
<li>Input now costs $2 per million tokens</li>
<li>Output now costs $8 per million tokens</li>
<li>Same model, same accuracy, lower burn rate</li>
</ul>
<p data-start="2195" data-end="2387">Apps like Cursor and Windsurf, which are built directly on the API, instantly become more cost-effective to run. That trickles down into cheaper tools for users—or better margins for startups.</p>
<p data-start="2389" data-end="2496">And if you’re an indie developer? This move could bring enterprise-grade AI into your weekend side project.</p>
<h2 data-start="2498" data-end="2522">A Look at the Numbers</h2>
<p data-start="2524" data-end="2610">Let’s put it in perspective with a table. Here&#8217;s how the old and new pricing stack up:</p>
<div class="_tableContainer_16hzy_1">
<div class="_tableWrapper_16hzy_14 group flex w-fit flex-col-reverse" tabindex="-1">
<table class="w-fit min-w-(--thread-content-width)" data-start="2612" data-end="2940">
<thead data-start="2612" data-end="2691">
<tr data-start="2612" data-end="2691">
<th data-start="2612" data-end="2625" data-col-size="sm">Token Type</th>
<th data-start="2625" data-end="2651" data-col-size="sm">Old Price (per million)</th>
<th data-start="2651" data-end="2677" data-col-size="sm">New Price (per million)</th>
<th data-start="2677" data-end="2691" data-col-size="sm">% Decrease</th>
</tr>
</thead>
<tbody data-start="2775" data-end="2940">
<tr data-start="2775" data-end="2857">
<td data-start="2775" data-end="2788" data-col-size="sm">Input</td>
<td data-start="2788" data-end="2815" data-col-size="sm">$10</td>
<td data-start="2815" data-end="2842" data-col-size="sm">$2</td>
<td data-start="2842" data-end="2857" data-col-size="sm">80%</td>
</tr>
<tr data-start="2858" data-end="2940">
<td data-start="2858" data-end="2871" data-col-size="sm">Output</td>
<td data-start="2871" data-end="2898" data-col-size="sm">$40</td>
<td data-start="2898" data-end="2925" data-col-size="sm">$8</td>
<td data-start="2925" data-end="2940" data-col-size="sm">80%</td>
</tr>
</tbody>
</table>
<div class="sticky end-(--thread-content-margin) h-0 self-end select-none">
<div class="absolute end-0 flex items-end"></div>
</div>
</div>
</div>
<p data-start="2942" data-end="3015">That’s not a small dip. That’s OpenAI essentially opening the floodgates.</p>
<h2 data-start="3017" data-end="3055">The “o3-pro” Surprise That Followed</h2>
<p data-start="3057" data-end="3150">While all eyes were on the price cut, OpenAI quietly added something else to the API: o3-pro.</p>
<p data-start="3152" data-end="3362">This new variant of the model is built for users who want even better output quality. It uses more compute, which likely means it’ll cost more—but also means it can give stronger responses in complex scenarios.</p>
<p data-start="3364" data-end="3523">The timing wasn’t random either. OpenAI knows that not every user wants to save money. Some want better answers, even at a higher price. o3-pro fills that gap.</p>
<h2 data-start="3606" data-end="3654">API Users Win Big, Regular Users… Not So Much</h2>
<p data-start="3656" data-end="3773">If you’re using ChatGPT through the regular app, this change doesn’t directly affect you. Prices there haven’t moved.</p>
<p data-start="3775" data-end="3921">But under the hood, the ripple effect is real. Lower API costs mean third-party tools powered by ChatGPT might become faster, smarter, or cheaper.</p>
<p data-start="3923" data-end="4010">You might not see it on your bill, but you might feel it in the apps you use every day.</p>
