The benchmarks for Claude 4 show improvements but the context is still 200K.

OpenAI rival Anthropic today announced its Claude 4 models, which perform significantly better than Claude 3 on benchmarks. However, we are disappointed to see the same 200,000-token context window limit. Anthropic stated in a blog post that Claude Opus 4 is the company’s strongest model and the best coding model in the industry.

For example, Claude Opus 4 scored 72.5 percent on SWE-bench (short for Software Engineering benchmark) and 43.2 percent on Terminal-bench.

“It delivers sustained performance on long-running tasks that require focused effort and thousands of steps, with the ability to work continuously for several hours, dramatically outperforming all Sonnet models and significantly expanding what AI agents can accomplish,” Anthropic noted.

Although benchmarks show Claude 4 Sonnet and Opus ahead of both their predecessors and competitors such as Gemini 2.5 Pro in coding, we’re still concerned about the models’ 200,000-token context window limit.

This could even be part of the reason the Claude 4 models excel at coding and complex problem-solving tasks in these benchmarks: the models are not being tested against a large context.

For comparison, Google’s Gemini 2.5 Pro ships with a 1 million token context window, and support for a 2 million token context window is in the works.

OpenAI’s GPT-4.1 models also offer up to a 1 million token context window.
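To make the gap concrete, here is a minimal sketch of a back-of-the-envelope check: would a given codebase fit in each model’s context window? The window sizes are the figures cited above; the 4-characters-per-token ratio, the file extension list, and the repository path are all rough assumptions for illustration, not exact tokenizer behavior.

```python
# Rough estimate of whether a codebase fits in a model's context window.
# Assumption: ~4 characters per token, a common heuristic; real tokenizers
# vary with language and content.
from pathlib import Path

CHARS_PER_TOKEN = 4  # heuristic, not an exact tokenizer

# Context window sizes as cited in this article.
WINDOWS = {
    "Claude Opus 4 / Sonnet 4": 200_000,
    "Gemini 2.5 Pro": 1_000_000,
    "GPT-4.1": 1_000_000,
}

def estimate_tokens(repo: Path, exts=(".py", ".js", ".ts", ".md")) -> int:
    """Estimate the token count of source files under `repo` (extensions assumed)."""
    chars = sum(
        len(p.read_text(errors="ignore"))
        for p in repo.rglob("*")
        if p.is_file() and p.suffix in exts
    )
    return chars // CHARS_PER_TOKEN

if __name__ == "__main__":
    tokens = estimate_tokens(Path("."))  # current directory as a stand-in repo
    print(f"Estimated repo size: ~{tokens:,} tokens")
    for model, window in WINDOWS.items():
        verdict = "fits" if tokens <= window else "does NOT fit"
        print(f"  {model} ({window:,} tokens): {verdict}")
```

By this estimate, a mid-sized repository of a few hundred thousand lines can easily overflow a 200,000-token window while still fitting in a 1 million token one, which is exactly the gap the figures above describe.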

Claude’s context window simply falls short of the competition’s.
