According to computer scientists at UC Berkeley, AI models are a promising way to discover and improve algorithms.
In a paper titled “Barbarians at the Gate: How AI is Upending Systems Research,” published as a preprint, UC Berkeley researchers describe their use of OpenEvolve, an open-source evolutionary coding agent.
The authors claim to have used OpenEvolve to achieve a five-fold speedup of an Expert Parallelism Load Balancer (EPLB) algorithm, which is used to route tokens across GPUs during large language model inference.
According to the authors, AI-Driven Research for Systems (ADRS), in which an AI model iteratively generates and evaluates solutions before refining them, will transform systems research.
“As AI assumes a central role in algorithm design, we argue that human researchers will increasingly focus on problem formulation and strategic guidance,” they write. “Our results highlight both the disruptive potential and the urgent need to adapt systems research practices in the age of AI.”
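The generate-evaluate-refine loop the paper describes can be sketched roughly as follows. This is an illustrative toy, not the actual OpenEvolve API; all function names here (`adrs_loop`, `evaluate`, `mutate`) are hypothetical, and a real system would ask an LLM to rewrite candidate code rather than apply a simple mutation function.

```python
import random

def adrs_loop(initial_solution, evaluate, mutate, generations=100, population_size=8):
    """Minimal sketch of an ADRS-style evolutionary loop:
    generate candidate solutions, score them with an evaluator,
    and carry the best forward into the next generation."""
    population = [initial_solution]
    for _ in range(generations):
        # Generate: propose variants of existing candidates
        # (in a real system, an LLM rewrites the code here).
        candidates = [mutate(random.choice(population)) for _ in range(population_size)]
        # Evaluate and refine: keep only the highest-scoring solutions.
        scored = sorted(population + candidates, key=evaluate, reverse=True)
        population = scored[:population_size]
    return population[0]
```

Because surviving candidates are re-scored alongside new ones, the best solution never regresses; the human's role, as the authors argue, shifts to defining `evaluate`, i.e. formulating the problem.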
Google discussed AlphaEvolve in May. This “evolutionary coding agent” improved the efficiency of Google’s data center orchestration and optimized matrix multiplication operations on its Tensor Processing Units.
To further highlight the potential of machine-learning as an algorithmic discovery tool, a paper from Google DeepMind researchers published this week in Nature describes “an autonomous method for discovering [reinforcement learning] rules solely through the experience of many generations of agents interacting with various environments.”
The UC Berkeley team has now demonstrated the value of AI-based optimization work by having OpenEvolve develop a more efficient method to load balance across GPUs handling LLM inference.
They started with DeepSeek’s open-source EPLB implementation, which is slow because its Python code relies on a for loop to perform a linear search. The DeepSeek version of the EPLB took an average of 540 ms to rebalance the expert modules between GPUs.
The UC Berkeley paper also describes a case study in which the authors used OpenEvolve to speed up relational analytics, where SQL queries invoke LLM inference over each row, by a factor of three.
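To illustrate the kind of bottleneck at stake (this is a toy reconstruction, not DeepSeek's actual EPLB code), consider finding the least-loaded GPU: a pure-Python for loop walks the load array one element at a time, while the same search pushed into a compiled NumPy call runs orders of magnitude faster on large arrays.

```python
import numpy as np

def least_loaded_gpu_loop(loads):
    """Pure-Python linear search for the least-loaded GPU (slow):
    the interpreter executes one comparison per iteration."""
    best, best_load = 0, loads[0]
    for i, load in enumerate(loads):
        if load < best_load:
            best, best_load = i, load
    return best

def least_loaded_gpu_vectorized(loads):
    """The same search performed by NumPy's compiled argmin (fast)."""
    return int(np.argmin(loads))
```

Both functions return the same index; the difference is purely where the iteration happens, which is the sort of restructuring an evolutionary search over implementations can discover automatically.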
When asked whether OpenEvolve’s “reasoning” consists only of connecting dots that people overlooked in available data, or shows evidence of a genuinely new approach, coauthor Audrey Cheng said that LLMs benefit from having been trained on a larger corpus than any human researcher can comprehend, which gives them an advantage in discovering ways to apply ideas from different domains.
“Currently in systems/database performance research, we consider algorithms as ‘novel’ if they show significant improvements in some way, even if they borrow ideas from other fields (as an example, see my paper applying fair sharing ideas from networking/operating systems to databases),” she said. “By that criterion, this development would be considered novel according to research standards.”
When asked whether OpenEvolve simply brute-forces novelty from known data, or is being “creative,” Cheng replied that this too is a difficult question. She believes ADRS will have a huge impact: “Performance problems are generally easier to verify, and we’ve already seen some initial adoption in industry (see Datadog’s recent blog post as an example). I expect that the majority of companies who run systems at scale will eventually use ADRS in some form for performance tuning.”
Cheng expects ADRS will be able to provide more novel solutions once researchers learn how to verify other problems, such as security and fault tolerance. “If that is in place, I imagine ADRS can apply widely to all kinds of systems problems (and also beyond computer science).”

