What can we do (and what will come next) with the most powerful AI yet?




The latest AI large language models (LLMs), such as Anthropic's Claude 3.7 and xAI's Grok 3, often perform at a PhD level, at least according to certain benchmarks. This achievement marks the next step toward what former Google CEO Eric Schmidt envisions: a world in which everyone has access to a "great polymath," an AI capable of drawing on vast amounts of knowledge to solve problems across disciplines.

Wharton Business School professor Ethan Mollick noted on his One Useful Thing blog that these latest models were trained with significantly more computing power than GPT-4 had at its launch two years ago, with Grok 3 trained on up to 10 times as much compute. He said Grok 3 would be the first "gen 3" AI model, emphasizing that "this new technology is smarter and the leap in capabilities is striking." According to Anthropic, Claude 3.7 is the first hybrid reasoning model, combining a traditional LLM for fast responses with advanced reasoning capabilities for solving complex problems. Mollick attributed these advances to two converging trends: the rapid expansion of computing power for training LLMs and AI's growing ability to solve complex problems (often described as reasoning or thinking). He concluded that the two trends "supercharge AI abilities."

What can we do now that AI is supercharged?

OpenAI took a major step in early February when it launched its "deep research" AI agent. Reviewing it in Platformer, Casey Newton called deep research "impressively competent," noting that it and similar tools could accelerate research, analysis and other forms of knowledge work, though their reliability in complex domains remains an open question.

Deep research is based on a variant of the still-unreleased o3 reasoning model. It can engage in extended reasoning over long periods of time, using chain-of-thought (CoT) logic to break complex tasks into multiple logical steps, much as a human researcher might refine their approach. It can also search the web, giving it access to more current information than the model's original training data.

Timothy Lee, writing in Understanding AI, described several tests experts conducted of deep research. One asked for instructions on how to build an electrolysis plant. A mechanical engineer who assessed the quality of the output estimated that it would take a professional a week to create something similar; deep research generated its 4,000-word report in just four minutes.

Google DeepMind also recently released AI co-scientist, a multi-agent AI system built on its Gemini 2.0 LLM and designed to help scientists generate new hypotheses and research ideas. A team at Imperial College London has already demonstrated the system's value. According to Professor José R. Penadés, his team had spent years trying to understand why certain superbugs are resistant to antibiotics; the AI replicated their findings in just 48 hours. While the AI dramatically accelerated hypothesis generation, human scientists were still needed to confirm the findings. Nevertheless, Penadés said the new AI application "has the potential to supercharge science."

What would it mean to supercharge science?

Last October, Anthropic CEO Dario Amodei wrote in his "Machines of Loving Grace" blog post that he expected "powerful AI" — his term for what most call artificial general intelligence (AGI) — would lead to "the next 50 to 100 years of biological [research] progress in 5 to 10 years." Four months ago, the idea of compressing up to a century of scientific progress into a single decade seemed extremely optimistic. With the recent advances in AI models, including Anthropic's Claude 3.7, OpenAI's deep research and Google's AI co-scientist, what Amodei referred to as a near-term "radical transformation" is starting to look much more plausible.

However, while AI may fast-track scientific discovery, biology, at least, is still bound by real-world constraints — experimental validation, regulatory approval and clinical trials. The question is no longer whether AI will transform science (as it certainly will), but rather how quickly its full impact will be realized.

In a February 9 blog post, OpenAI CEO Sam Altman claimed that “systems that start to point to AGI are coming into view.” He described AGI as “a system that can tackle increasingly complex problems, at human level, in many fields.”

Altman believes achieving this milestone could unlock a near-utopian future in which the “economic growth in front of us looks astonishing, and we can now imagine a world where we cure all diseases, have much more time to enjoy with our families and can fully realize our creative potential.”

A dose of humility

These AI advances are hugely significant and portend a very different future arriving in a brief period of time. Yet, AI's meteoric rise has not been without stumbles. Consider the recent downfall of the Humane AI Pin — a device hyped as a smartphone replacement after a buzzworthy TED Talk. Barely a year later, the company collapsed, and its remnants were sold off for a fraction of their once-lofty valuation.

Real-world AI applications often face significant obstacles for many reasons, from lack of relevant expertise to infrastructure limitations. This has certainly been the experience of Sensei Ag, a startup backed by one of the world’s wealthiest investors. The company set out to apply AI to agriculture by breeding improved crop varieties and using robots for harvesting but has met major hurdles. According to the Wall Street Journal, the startup has faced many setbacks, from technical challenges to unexpected logistical difficulties, highlighting the gap between AI’s potential and its practical implementation.

What comes next?

As we look to the near future, science is on the cusp of a new golden age of discovery, with AI becoming an increasingly capable partner in research. Deep-learning algorithms working in tandem with human curiosity could unravel complex problems at record speed as AI systems sift vast troves of data, spot patterns invisible to humans and suggest cross-disciplinary hypotheses.

Already, scientists are using AI to compress research timelines — predicting protein structures, scanning literature and reducing years of work to months or even days — unlocking opportunities across fields from climate science to medicine.

Yet, as the potential for radical transformation becomes clearer, so too do the looming risks of disruption and instability. Altman himself acknowledged in his blog that “the balance of power between capital and labor could easily get messed up,” a subtle but significant warning that AI’s economic impact could be destabilizing.

This concern is already materializing: Hong Kong recently cut 10,000 civil service jobs while simultaneously ramping up AI investments. If such trends continue and become more widespread, we could see broad workforce upheaval, heightening social unrest and placing intense pressure on institutions and governments worldwide.

Adapting to an AI-powered world

AI’s growing capabilities in scientific discovery, reasoning and decision-making mark a profound shift that presents both extraordinary promise and formidable challenges. While the path forward may be marked by economic disruptions and institutional strains, history has shown that societies can adapt to technological revolutions, albeit not always easily or without consequence.

To navigate this transformation successfully, societies must invest in governance, education and workforce adaptation to ensure that AI’s benefits are equitably distributed. Even as AI regulation faces political resistance, scientists, policymakers and business leaders must collaborate to build ethical frameworks, enforce transparency standards and craft policies that mitigate risks while amplifying AI’s transformative impact. If we rise to this challenge with foresight and responsibility, people and AI can tackle the world’s greatest challenges, ushering in a new age with breakthroughs that once seemed impossible.

