Back in December, OpenAI launched a reasoning model known as o3.
But there's just one problem.
When the model was first launched, OpenAI teamed up with the Arc Prize Foundation.
This group runs ARC-AGI, a benchmark exam used to test highly capable AI models. Its tasks are designed to be straightforward for humans to solve, but that's not the case for AI.
The foundation originally believed it cost around $3,000 for o3 to solve a complex problem.
That’s no longer the case, with re-estimates putting it nearer to $30,000 per task.
So, while o3 scored highly on the test, it did so at a far higher cost than first thought.
In fact, it’s an astronomical cost that shows just how expensive cutting-edge AI models are to run.
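To put those figures in perspective, here's a rough back-of-the-envelope calculation in Python. The per-task costs come from the estimates above, but the number of tasks in the evaluation set is an assumption for illustration, so the totals are indicative rather than official figures.

# Rough, illustrative estimate of what a full ARC-AGI run might cost o3 (high).
# The per-task figures are the estimates quoted above; the task count is assumed.
TASKS_IN_EVAL_SET = 100                # assumption, not an official number
ORIGINAL_ESTIMATE_PER_TASK = 3_000     # USD, the foundation's initial estimate
REVISED_ESTIMATE_PER_TASK = 30_000     # USD, the revised estimate

original_total = TASKS_IN_EVAL_SET * ORIGINAL_ESTIMATE_PER_TASK
revised_total = TASKS_IN_EVAL_SET * REVISED_ESTIMATE_PER_TASK

print(f"Original estimate for a full run: ${original_total:,}")   # $300,000
print(f"Revised estimate for a full run:  ${revised_total:,}")    # $3,000,000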
What does this all mean?
OpenAI’s o3 is separate from the models that powerChatGPT.
Specifically, o3 is a reasoning model that can be run at different levels of reasoning effort.
This is essentially asking the model to think harder before it answers.
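o3 itself isn't available to developers yet, but OpenAI's API already exposes a similar dial for its smaller reasoning models. As a minimal sketch, assuming the OpenAI Python SDK and its reasoning_effort parameter for o3-mini, the same question can be asked at different effort levels; the prompt here is just a placeholder.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the same question at two effort levels. "high" lets the model spend more
# time (and more tokens, and therefore more money) thinking before it answers.
for effort in ("low", "high"):
    response = client.chat.completions.create(
        model="o3-mini",             # o3 itself has no public API pricing yet
        reasoning_effort=effort,     # "low", "medium", or "high"
        messages=[{"role": "user", "content": "Solve this logic puzzle step by step: ..."}],
    )
    print(effort, response.choices[0].message.content)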
In the ARC-AGI test, it was o3 (high) that came with such a massive price tag.
Arguably, in a test like this, the model overdid it; the o3-high attempt borders on the preposterous.
While the low-effort version didn't score as highly, it could end up being the more cost-effective option.
For now, OpenAI hasn't made any announcements about how much o3 will cost for developers to use.
A more cost-efficient version, known as o3-mini, is available, though it's nowhere near as powerful.
For the average person, this pricing will have no effect.