According to a panel of hundreds of artificial intelligence researchers, we are currently pursuing artificial general intelligence in the wrong way. This insight comes from the Association for the Advancement of Artificial Intelligence’s (AAAI) 2025 Presidential Panel on the Future of AI Research. The report is a lengthy document compiled by 24 AI researchers whose expertise ranges from the state of AI infrastructure to the social aspects of AI.
Each section of the report has a key takeaway, as well as a community opinion section where respondents were asked to share their thoughts.
The section “AI Perception vs. Reality,” chaired by MIT computer scientist Rodney Brooks, referred to the Gartner hype cycle, a five-stage cycle that is common for technology hype. According to the report, Gartner estimated in November 2024 that the hype for generative AI had just about peaked and was poised to decline. 79% of respondents to the community opinion section said that current public perceptions of AI’s capabilities don’t match the reality of AI development and research. 90% of those respondents said that the mismatch hinders AI research, and 74% of them stated that “the directions for AI research are dictated by the hype.”

Artificial general intelligence (AGI), also known as human-level intelligence, is a hypothetical machine that can interpret information and learn from it the way a human would. AGI is the holy grail of the field, with implications across countless domains. Think of any mundane task you don’t want to spend a lot of time on, such as planning a vacation or filing your taxes. AGI could be used to automate repetitive tasks like these, and also to catalyze advancements in other areas, such as transportation, education, and technology.
A surprising majority of the 475 respondents, 76%, said that scaling up current AI approaches will not suffice to produce AGI. “Overall, responses indicate a cautious but forward-moving strategy: AI researchers prioritize safety and ethical governance, benefit sharing, and gradual innovation. They advocate for collaborative and responsible development, rather than a rush to AGI,” the report stated.
Despite the hype distorting perceptions of the field, and despite current approaches that may not put researchers on the optimal path toward AGI, the technology has made great strides.
“Five years ago, this conversation would not have been possible – AI was only used in applications where a large percentage of errors was acceptable, like product recommendations, or where domains of knowledge were strictly defined, like classifying scientific images,” explained Henry Kautz, a computer scientist at the University of Virginia and chair of the report’s section on Factuality & Trustworthiness, in an email to Gizmodo. ChatGPT and other chatbots brought general AI to the public’s attention, and new training methods and new ways of organizing AI systems can improve their performance. Kautz continued, “I believe that the next step in improving trustworthiness is the replacement of individual AI agents by cooperating teams of agents who constantly fact-check each other and try to keep each other honest.” He added, “Most of the general public as well as the scientific community–including the community of AI researchers–underestimates the quality of the best AI systems today; the perception of AI lags about a year or two behind the technology.”
AI is not going anywhere; after all, the Gartner hype cycle doesn’t end with “fade into oblivion” but with a “plateau of productivity.” Different arenas of AI use carry different levels of hype, but amid all the clamor about AI–from the private sector, from government officials, heck, from our own families–the report is a refreshing reminder that AI researchers are thinking very critically about the state of their field. There is always room for improvement and innovation, from the way AI systems work to how they are used in the real world. We can’t go back to a period without AI. The only way forward is forward.