Richard Lawler is a senior journalist who follows news in tech, culture, politics, and entertainment. He joined The Verge in 2021, after several years of covering news for Engadget.
After delivering an “open” AI model with better performance on a single GPU, Google has now introduced an update to its AI models with Gemini 2.5, which combines “a significantly enhanced base model with improved post-training” for better overall performance. The company claims that the first version, Gemini 2.5 Pro Experimental, is ahead of OpenAI, Anthropic, xAI, and DeepSeek on common AI benchmarks measuring reasoning, mathematics, coding, and other capabilities. The new model is available now in Google AI Studio and to Gemini Advanced subscribers via the model dropdown menu in the app.
Gemini’s native multimodality is also a big advantage for the company, as it can interpret not only text but also audio, still images, and video. A 2 million token context window is also “coming soon” to help it process even more data. Demis Hassabis, CEO of Google DeepMind, called Gemini 2.5 Pro a “state-of-the-art model” that ranked No. 1 on LMArena with a +39 ELO gain, with significant improvements across multimodal reasoning, coding, and STEM.
Google claims it has improved the quality of its Gemini models because they are now “reasoning models” that work through tasks step by step and make more informed choices, producing better responses and answers to complex prompts. The blog post states, “…we are building these thinking capabilities directly into all of our models, so they can handle even more complex problems and support more capable, context-aware agents.”