OpenAI has released o3-mini, first previewed in December 2024, as the “most cost-efficient model” in its reasoning series. The model’s strengths lie in science, math, and coding, and it is optimized for STEM reasoning.
OpenAI says that o3-mini provides a specialized alternative for technical domains that require speed and precision, and that with medium reasoning effort it matches o1’s performance in science, math, and coding while delivering faster responses. The company also said that o3-mini was 24% faster than o1-mini in A/B testing, with an average response time of 7.7s compared to o1-mini’s 10.16s.
The o3-mini is OpenAI’s first small reasoning model to support highly requested developer features, including function calling, developer messages, and Structured Outputs. It also supports streaming, and developers can choose from three reasoning effort levels: low, medium, and high. o3-mini will additionally work with search to find up-to-date answers with links to relevant sources on the web.
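For developers, a minimal sketch of how those features surface in the OpenAI Python SDK might look like the following; the prompt is illustrative, and the exact parameter names and defaults may differ from OpenAI’s documentation:

```python
from openai import OpenAI

# Sketch only: assumes OPENAI_API_KEY is set in the environment.
client = OpenAI()

stream = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="medium",   # one of "low", "medium", "high"
    messages=[
        # "developer" messages take the place of the classic system prompt
        # for o-series reasoning models.
        {"role": "developer", "content": "You are a concise math tutor."},
        {"role": "user", "content": "Factor x^2 - 5x + 6."},
    ],
    stream=True,                 # streaming is supported on o3-mini
)

# Print tokens as they arrive from the streamed response.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```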
The o3-mini is already available to ChatGPT Plus, Team, and Pro subscribers. It can be selected from the model picker, where it replaces o1-mini. All paid users will also have the option to choose “o3-mini-high” in the model picker if they are willing to trade response time for higher intelligence.
ChatGPT Pro users will have unlimited access to both o3-mini and o3-mini-high, while Plus and Team users will get a higher rate limit: up from 50 messages/day with o1-mini to 150 messages/day with o3-mini.
ChatGPT users without a subscription can try o3-mini by selecting “Reason” in the message composer or by regenerating a response, making it OpenAI’s first reasoning model available to free users in ChatGPT. It is also available in Microsoft’s Azure OpenAI Service.