
OpenAI has just released new o3 and o4-mini reasoning models



Following the recent launch of a new family of GPT-4.1 models, OpenAI released o3 and o4-mini on Wednesday, the latest additions to its existing line of reasoning models. The o3 model, previewed in December, is OpenAI’s most advanced reasoning model to date, while o4-mini is a smaller, cheaper, and faster model.

Meet o3 and o4-mini

Simply put, reasoning models are trained to “think before they speak,” which means they take longer to process a prompt but return higher-quality responses. As a result, like OpenAI’s earlier reasoning models, o3 and o4-mini show strong performance in coding, math, and science tasks. They also bring an important new addition: visual understanding.

Also: How to use ChatGPT: A beginner’s guide to the most popular AI chatbot

OpenAI o3 and o4-mini are the company’s first models to “think with images.” OpenAI explains that this means the models don’t just see an image; they can actually use the visual information in their reasoning process. Users can also now upload images that are low quality or blurry, and the models will still be able to understand them.
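
For developers who want to try image reasoning outside of ChatGPT, here is a hedged sketch of sending an image alongside a text prompt, not official sample code: it assumes the standard openai v1.x Python SDK, that these models accept image_url content parts on the chat completions endpoint, and that the image URL is a placeholder you would replace with your own.

    # Hedged sketch: sending an image plus a text prompt to o4-mini via the
    # OpenAI Python SDK (v1.x). Assumes the chat completions endpoint accepts
    # image_url content parts for these models; the URL below is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o4-mini",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is sketched on this whiteboard?"},
                    {"type": "image_url", "image_url": {"url": "https://example.com/whiteboard.jpg"}},
                ],
            }
        ],
    )

    print(response.choices[0].message.content)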

Another major first is that o3 and o4-mini can independently use all ChatGPT tools, including web browsing, Python, image understanding, and image generation, to better resolve complex, multi-step problems. OpenAI says this ability allows the new models to take “real steps toward acting independently.”

Also: The top 20 AI tools of 2025 – and the #1 thing to remember when you use them

A recent report from The Information claimed that the new models would synthesize data from different fields and areas of subject expertise, then use that knowledge to suggest new and innovative experiments. The report cites insider sources who have tested the models as saying these experiments could cover complex topics such as nuclear fission and pathogen detection. OpenAI has not confirmed this.

Accessing the new models

ChatGPT Plus, Pro, and Team users can access OpenAI o3 and o4-mini today. The models are listed in the model selector as o3, o4-mini, and o4-mini-high, replacing o1, o3-mini, and o3-mini-high. Pro users will get access to o3-pro within a few weeks; until then, they can still use o1-pro. Both models are also available to developers via the API.
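
As a quick illustration of the API route, the sketch below shows one way to call o3 from Python. It is an assumption-laden example rather than official documentation: it presumes the openai v1.x SDK, an OPENAI_API_KEY environment variable, and that the API model identifier matches the model selector name ("o3").

    # Hypothetical sketch of calling o3 through the OpenAI Python SDK (v1.x).
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
    # environment; the "o3" model identifier mirrors the model selector name above.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    response = client.chat.completions.create(
        model="o3",
        messages=[
            {"role": "user", "content": "Outline a proof that the square root of 2 is irrational."}
        ],
    )

    print(response.choices[0].message.content)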

Also: ChatGPT just made it easier to find and edit all the AI images you’ve ever created

To ease concerns over model safety, OpenAI shared that both new releases have been stress-tested and evaluated under its updated Preparedness Framework.

Bonus: Codex CLI

OpenAI also launched Codex CLI, an open-source coding agent that runs locally in users’ terminals. It’s designed to give users a clear and simple way to connect AI models such as o3 and GPT-4.1 (with support coming soon) to their own code and tasks running on their computers.

OpenAI also announced the launch of a $1 million initiative designed to support early projects, with API credits awarded in grants of $25,000 increments.

Want more stories about AI? Subscribe to Innovation, our weekly newsletter.
