OpenAI has launched GPT-4.1, a successor to the GPT-4o multimodal AI model the company shipped last year. OpenAI announced during a livestream on Monday that GPT-4.1 is better than GPT-4o “in just about every dimension” and has a larger context window, with big improvements in coding and instruction following.
GPT-4.1 is now available to developers, along with two smaller models. GPT-4.1 Mini is a more affordable option, while GPT-4.1 Nano is a lighter, more compact version that OpenAI calls its “smallest, fastest and cheapest” model yet.
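For developers, a minimal sketch of calling the new models through the OpenAI Python SDK might look like the following; the model identifiers ("gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano") are assumed here to match the names in the announcement, so check OpenAI's documentation for the exact strings.

```python
# Minimal sketch: calling GPT-4.1 via the OpenAI Python SDK.
# Model names are assumed from the announcement, not confirmed strings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4.1-mini",  # or "gpt-4.1" / "gpt-4.1-nano"
    messages=[
        {"role": "user", "content": "Summarize this changelog in one sentence."},
    ],
)
print(response.choices[0].message.content)
```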
The three models can each process up to one million tokens of contextual information (the text or images included in a prompt), far more than GPT-4o’s limit of 128,000 tokens. OpenAI says it trained the GPT-4.1 models to reliably attend to information across the full 1 million-token context. “We’ve trained it to be much more reliable than GPT-4o in noticing relevant texts, and ignoring distractions across long and brief context lengths,” the company said.
GPT-4.1 is also 26 per cent cheaper than GPT-4o, a metric that has become more significant following the debut of DeepSeek’s AI model.
The launch coincides with OpenAI’s plans to phase out the two-year-old GPT-4 from ChatGPT by April 30th. In a changelog, the company said recent upgrades to GPT-4o make it a “natural replacement” for GPT-4. OpenAI also plans to retire the GPT-4.5 preview in the API on July 14th, as “GPT 4.1 offers better or similar performance in many key capabilities with much lower costs and latency.”
GPT-4o was updated last month with new image-generation features for the ChatGPT chatbot. The update proved so popular that OpenAI was forced to limit requests and suspend access for free ChatGPT accounts to keep its GPUs from “melting.”
The GPT-4.1 launch confirms our report last week that OpenAI was preparing to release new models, and it marks a shift in the company’s release schedule. CEO Sam Altman announced on X on April 4th that the GPT-5 launch would be delayed and should now arrive “in a couple of months,” rather than in May as previously expected. Altman attributed the delay to OpenAI “finding it harder than we expected to smoothly integrate everything.”