San Francisco, California – OpenAI has released o1-pro, a more powerful version of its o1 “reasoning” AI model, in its developer API. The new model uses more computing power than the original o1, which OpenAI says allows it to deliver more accurate and consistent responses.
Access is limited to developers who have spent at least $5 on OpenAI API services, and it comes at a steep price: OpenAI charges $150 per million input tokens fed into the model and $600 per million output tokens it generates. That makes o1-pro more expensive than OpenAI’s GPT-4.5 for input and far more expensive than the standard o1 model.
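To put those rates in concrete terms, here is a minimal sketch of a per-request cost estimate based on the published prices ($150 per million input tokens, $600 per million output tokens). The helper function name and the example token counts are illustrative, not part of any OpenAI API:

```python
# Hypothetical cost estimator for an o1-pro API request,
# using the rates reported above (not an official OpenAI tool).

INPUT_RATE = 150 / 1_000_000    # dollars per input token
OUTPUT_RATE = 600 / 1_000_000   # dollars per output token

def o1_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of a single o1-pro request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 10,000-token prompt that produces a 5,000-token response
# costs 10,000 * $0.00015 + 5,000 * $0.0006 = $1.50 + $3.00 = $4.50.
print(f"${o1_pro_cost(10_000, 5_000):.2f}")
```

Even a single moderately sized request can cost several dollars, which helps explain why the pricing has drawn attention.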
Despite the expense, OpenAI is betting that o1-pro’s improved performance will justify the premium. By devoting more computational resources to each request, the model aims to give more dependable answers to complex problems.
According to an OpenAI spokesperson speaking to TechCrunch, o1-pro in the API is designed to think more deeply and provide better answers to the most challenging problems. The release follows numerous requests from the developer community, underscoring demand for more reliable AI responses.
Early impressions of o1-pro, which has been available on the ChatGPT platform to ChatGPT Pro subscribers since December, have been mixed. Users reported that the model struggled with Sudoku puzzles and was tripped up by simple optical-illusion jokes.
OpenAI’s internal benchmarks from late last year showed that o1-pro performed only slightly better than the standard o1 on coding and math problems, though it answered those problems more reliably. For now, the model’s overall reception among users and developers remains mixed.