AWS distinguished scientist Byron Cook makes the case for “automated reasoning.”
The term “reasoning” is a familiar metaphor in today’s artificial intelligence (AI) technology, often used to describe the verbose outputs generated by so-called reasoning AI models such as OpenAI’s o1 or DeepSeek AI’s R1.
Another kind of reasoning, one perhaps closer to actual reasoning, is quietly taking root in the most advanced applications.
Also: Will AI think like humans? We’re not even close – and we’re asking the wrong question
Recently, Amazon AWS distinguished scientist Byron Cook made the case for what is called “automated reasoning,” also known as “symbolic AI” or, more abstrusely, “formal verification.”
It is an area of study as old as the artificial intelligence field, and, said Cook, it is rapidly merging with generative AI to form an exciting new hybrid, sometimes termed “neuro-symbolic AI,” which combines the best of automated reasoning and large language models.
Cook gave a talk about automated reasoning at the AWS Financial Services Symposium, held in New York this May.
Automated reasoning refers to algorithms that search for statements about the world that can be verified by logic. The idea is that knowledge is backed up by what can be logically asserted.
Cook gave a short code snippet as an example to demonstrate how automated reasoning achieves this rigorous validation. He explained that a loop in computer code can be predicted to stop running based on its statements alone; the question “Can this loop run forever?” is answered by logical analysis.
In Cook’s example, two variables, X and Y, are integers; Y is positive, and X is greater than Y. Y is repeatedly subtracted from X, reducing the value of X. Eventually, subtracting Y from X will make X smaller than Y. At that point, the conditions of the code loop have been violated, and the loop will terminate.
The simple fact — that eventually X will be smaller than Y — can be inferred logically without exhaustively running the code loop itself. That’s perhaps the most important element of automated reasoning, a principle that Cook returned to repeatedly: Automated reasoning can answer fundamental questions about something with logic rather than with exhaustive trial and error.
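Cook’s loop can be sketched in a few lines of Python. The variable names follow his example; the point is that the iteration bound comes from the logical argument (X strictly decreases and is bounded below by Y), not from running the loop:

```python
def loop_iterations(x: int, y: int) -> int:
    """Run Cook's example loop and count iterations.

    Running it is for illustration only -- the termination argument
    does not require executing the loop at all.
    """
    assert y > 0 and x > y
    count = 0
    while x > y:
        x -= y          # X strictly decreases by a positive amount Y
        count += 1
    return count

def iteration_bound(x: int, y: int) -> int:
    """The logical argument: X decreases by Y each step and the loop
    stops once X <= Y, so exactly ceil((x - y) / y) iterations occur."""
    return -(-(x - y) // y)   # ceiling division
```

Because the bound is derived symbolically, it answers “can this loop run forever?” (no) without any trial and error.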
“That’s what symbolic AI is,” said Cook. “We find arguments, step by step, and we can check them mechanically using the foundations of mathematical logic to make sure each statement is true. And then automated reasoning is the algorithmic search for arguments of that form.”
Such step-by-step solutions go back to the dawn of AI in the late 1950s, said Cook. In fact, in 1959, a top-of-the-line IBM machine, the 704, ran a form of automated reasoning to prove all of the theorems of Whitehead and Russell’s famous Principia Mathematica.
Cook told the audience that there have been many advances since then: “The tools keep getting remarkably better,” driven by new algorithms.
Also: What is DeepSeek AI? Is it safe to use? Here’s what you need to know
AWS uses automated reasoning to accomplish real-world tasks, such as ensuring delivery of AWS Services according to SLAs or verifying network safety.
All that is needed is to translate a problem into a logically evaluable step-by-step process, such as the code loop.
Cook explained that network security often involves statements that can be either true or false. This means that they can also be tested the same way the code loop is to determine whether conditions are met.
“When you look at the questions [AWS] customers ask, they use lots of words like, ‘for all,’ and ‘always,’ and ‘never’,” said Cook, such as “Is my data always encrypted at rest and in transit?”
“These are universal statements; they range over very large, if not intractably large, if not infinite sets,” said Cook. It is not possible to exhaustively test any policy to know such absolutes, he said: “The number of lifetimes of the sun it would take to exhaustively test all possible authorization requests would take 92,686 digits to write down.” Not practical, in other words.
Using automated reasoning, AWS’s Identity and Access Management tool, the IAM Analyzer, which has been available for free for four years, “can solve the same problem in seconds,” Cook said. “That’s the value proposition of reasoning and mathematical logic as opposed to exhaustive testing.” The power of automated reasoning, he said, will increasingly make it “a form of artificial super-intelligence.”
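The contrast between symbolic checking and exhaustive testing can be illustrated with a toy policy checker. This is an invented sketch of the idea, not AWS’s Zelkova or the IAM Analyzer: the policy’s allow-condition is encoded as constraints, and a universal question (“is every allowed request encrypted?”) is answered by one symbolic comparison instead of enumerating an astronomically large request space.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed integer interval of allowed values for one field."""
    lo: int
    hi: int

    def implies(self, other: "Interval") -> bool:
        # Every value satisfying this constraint also satisfies `other`.
        return other.lo <= self.lo and self.hi <= other.hi

# Toy policy: allow a request when key_id is any 128-bit value
# and the encrypted flag is set (1 == True).
policy = {
    "key_id": Interval(0, 2**128 - 1),
    "encrypted": Interval(1, 1),
}

# Universal question: for ALL requests the policy allows, is
# encrypted == True?  Symbolically this is a single implication
# check -- not 2**128 test cases.
always_encrypted = policy["encrypted"].implies(Interval(1, 1))
```

The key design point mirrors Cook’s argument: the answer ranges over an intractably large set of requests, yet the logical check is a constant-time comparison of constraints.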
Also: OpenAI’s o1 lies more than any other major AI model
AWS also uses automated reasoning to “solve open math conjectures,” the stuff that “grabs headlines,” he said. “We are solving in milliseconds or seconds or hours what humans could never solve in, like, a hundred lifetimes.”
Cook said all of these applications — the IAM Analyzer, the code proving, the AWS access authorization, and numerous other tools and services — draw upon an internal automated reasoning infrastructure at AWS called Zelkova, which can translate policies into mathematical formulas.
A lot of the momentum for automated reasoning and Zelkova has come from the financial services industry, said Cook.
“We’ve had really nice partnerships with folks like Goldman, Bridgewater,” said Cook, citing the investment bank and the hedge fund. The technology has helped those clients’ teams “deploy faster, and, actually, save a lot of money.”
Also: AI has grown beyond human knowledge, says Google’s DeepMind unit
(John Kain, who is head of market development efforts in financial services for AWS, recently spoke to ZDNET about the use of automated reasoning for financial clients.)
The future of automated reasoning is melding it with generative AI, a synthesis referred to as neuro-symbolic.
On the most basic level, it’s possible to translate natural-language terms into formulas that Zelkova can rigorously analyze using logic.
In that way, Gen AI can be a way for a non-technical individual to frame their goal in informal, natural language terms, and then have automated reasoning take that and implement it rigorously. The two disciplines can be combined to give non-logicians access to formal proofs, in other words.
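A minimal sketch of that division of labor, with entirely hypothetical rule names and configuration fields (this is an illustration of the neuro-symbolic idea, not the Zelkova API): an informal, natural-language rule is mapped to a formal predicate, which can then be checked mechanically against a concrete configuration.

```python
# Each natural-language rule (as a non-technical user might phrase it)
# maps to a formal predicate over a configuration.  In a real
# neuro-symbolic pipeline, an LLM would produce this mapping.
RULES = {
    "data is always encrypted at rest":
        lambda cfg: all(vol["encrypted"] for vol in cfg["volumes"]),
    "no bucket is publicly readable":
        lambda cfg: not any(b["public"] for b in cfg["buckets"]),
}

# A hypothetical configuration to check the rules against.
config = {
    "volumes": [{"encrypted": True}, {"encrypted": True}],
    "buckets": [{"public": False}],
}

# The symbolic side: evaluate every formalized rule mechanically.
results = {rule: check(config) for rule, check in RULES.items()}
```

The user states the goal informally; the checking step is exact and auditable, which is the point of handing it to automated reasoning rather than to the LLM itself.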
Also: What Apple’s controversial research paper really tells us about LLMs
“You’re an expert in financial services, in immigration law,” said Cook. “With automated reasoning checks, we give an individual the ability to encode that, and here are the rules derived.”
The other reason for a hybrid is to deal with the limitations of generative AI that have become apparent, especially what are called hallucinations or confabulations, the tendency for large language models (LLMs) to produce false assertions, sometimes wildly so.
“People got super excited about them [LLMs], and now they’re beginning to realize that, oh, wait, some of these things have limitations,” said Cook. “You can’t just force infinite data into these things, and they’ll just always get better.”
Scholars, especially critics of the current generative AI approach, have long discussed the idea of a hybrid neuro-symbolic approach. Noted gen AI skeptic Gary Marcus has suggested that gen AI needs something like formal logic to ground it in truth.
Also: With AI models clobbering every benchmark, it’s time for human evaluation
There is even a venture-backed startup named Symbolica whose stated mission implies that it will surpass the limitations it perceives in LLMs. Cook gave a practical example of the hybrid approach: verifying the veracity of chatbots. “In a chatbot, you have questions and answers, and you want to know, is it true?” he said. Automated reasoning allows for the evaluation of statements based on formal logic.
One example is a service from AWS, currently in preview and announced at AWS re:Invent, called Automated Reasoning checks. The program can convert a chatbot’s natural-language output into formal logic, which can then be checked. Cook gave the example of a chat with a chatbot for a bank loan. A person asks for an estimate of how long it will take to approve their loan application. The chatbot replies with a series of statements, such as “1 business day for approval.”
Automated reasoning checks whether the answers provided by the bot are correct.
Explained Cook, “In the background, what we’re doing is we’re taking the natural language text, we’re mapping it into mathematical logic, we’re proving or disproving the correctness of the statements, and then we’re providing witnesses so you can, as a customer, pull on that, the log of the argument, that the property is true, but in a way that could be audited.”
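A toy version of that check, with an invented policy (this is not the actual Automated Reasoning checks service): the chatbot’s claim is mapped to a formal statement about approval time, proved or disproved against the bank’s documented policy, and the result carries a witness that can be audited.

```python
# Hypothetical documented policy: loan approval takes 2-5 business days.
POLICY_DAYS = range(2, 6)

def check_claim(claimed_days: int):
    """Prove or disprove the claim 'approval takes `claimed_days`
    business days' against the policy, returning a verdict plus an
    auditable witness for the argument."""
    if claimed_days in POLICY_DAYS:
        return ("consistent",
                f"{claimed_days} days is within the policy range of 2-5")
    return ("contradicted",
            f"policy guarantees 2-5 business days, not {claimed_days}")

# The chatbot in Cook's example claimed "1 business day".
verdict, witness = check_claim(1)
```

Here the verdict is not a statistical guess about the chatbot’s answer; it is a checkable argument against a stated rule, which is what makes it auditable.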
Cook said automated reasoning will become even more important in an age of agentic AI. “Where things are headed is, we’re hearing more and more about agents; on the hype curve, this is sort of the new, new entry,” he said.
Also: Google’s new AI tool Opal turns prompts into apps, no coding required
“If you are going to allow natural language to be converted into action that makes one-way-door decisions on your behalf with your money, with your reputation, with your career, with your code, that correctness is going to be absolutely paramount. With agentic AI, we’re allowing mere mortals to essentially write and execute distributed systems.”
Agentic AI consists of many AI systems operating in parallel, and its correctness should be established the way automated reasoning has handled other distributed-systems work at AWS, he argued.
For example, in the case of AWS’s S3 storage system, the internal tool, Zelkova, was used to “prove the correctness of the distributed systems design,” he said.
“S3 [Amazon’s object storage] under the hood is hundreds of protocols,” Cook explained. “Assuming all the machines are speaking the protocols correctly, then you will get strong consistency — collectively, we will get the correct outcome.”
He explained that the same group voting approach, a kind of wisdom of the crowd, can be harnessed to verify agents’ actions.
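The voting idea rests on a classic quorum-intersection argument from distributed systems (a generic sketch, not S3’s actual protocols): if every decision requires a strict majority of N participants, any two deciding groups must share a member, so two conflicting decisions cannot both be ratified. The general proof is pigeonhole arithmetic (|Q1| + |Q2| > N forces an overlap); the code below checks it exhaustively for small N only as a sanity check.

```python
from itertools import combinations

def majorities_always_intersect(n: int) -> bool:
    """Exhaustively verify, for small n, that every pair of distinct
    majority quorums of n nodes shares at least one node.  (The general
    case follows from |Q1| + |Q2| > n, no enumeration needed.)"""
    nodes = range(n)
    quorum_size = n // 2 + 1          # strict majority
    quorums = list(combinations(nodes, quorum_size))
    return all(set(q1) & set(q2)      # non-empty intersection is truthy
               for q1, q2 in combinations(quorums, 2))
```

This is exactly the kind of property Cook describes as quick to show with automated reasoning: the pigeonhole argument settles it for all N at once, where testing can only sample.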
Also: Hacker slips malicious ‘wiping’ command into Amazon’s Q AI coding assistant – and devs are worried
“That’s the sort of thing we can show very quickly and very easily with automated reasoning.”
Cook expressed optimism that the merger of automated reasoning and gen AI will continue to make progress.
“I’m glad to be alive and I’m glad to be a practitioner in this field right now,” he said. “Because these branches are really very quickly actually coming back together now.”
Those wishing to explore the topic further may want to start with Cook’s 2021 introductory post on automated reasoning.
