A controversy is brewing over AI-generated studies submitted to this year’s ICLR, a long-running academic conference focused on AI.
Three AI labs — Sakana, Intology, and Autoscience — claim to have used AI to generate studies that were accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review studies for publication in the workshop track.
Sakana notified ICLR leaders of its AI-generated papers before submitting them and obtained peer reviewers’ permission. An ICLR spokesperson confirmed that the other two labs, Intology and Autoscience, did not.
AI academics have taken to social media to criticize Intology’s and Autoscience’s stunts, claiming that they are co-opting the scientific peer review system.
“All these AI scientist papers are using peer-reviewed venues as their human evaluators, but no one consented to providing this free labor,” wrote Prithviraj Amanabrolu, an assistant professor of computer science at UC San Diego, in an X post. “It makes me lose respect for everyone involved, regardless of how impressive the system is. Please let the editors know.”
Critics have noted that peer review is a labor-intensive, time-consuming process. According to a recent Nature survey, 40% of academics spend two to four hours reviewing a single study. And that workload is growing: the number of papers submitted to NeurIPS, the largest AI conference, grew 41% to 17,491, up from 12,345 the year before.
Academia already had an AI-generated copy problem: one analysis found that between 6.5% and 16.9% of papers presented at AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to benchmark and advertise their technology is a relatively recent phenomenon.
“[Intology’s paper] received unanimously positive reviews,” Intology wrote in a post on X touting its ICLR results. The company claimed that workshop reviewers had praised one of its AI-generated studies as a “clever idea.”
This was not well received by academics.
Ashwinee Panda, a postdoctoral scholar at the University of Maryland, said in an X post that submitting AI-generated papers to a venue without contacting the [reviewers] was bad practice.
Many researchers are also skeptical that AI-generated papers are worth the effort of peer review.
Sakana itself admitted that its AI had made “embarrassing citation errors” and that only one of the three AI-generated papers it chose to submit would have met the bar for conference acceptance. Sakana retracted its ICLR paper in the interest of transparency and out of respect for ICLR conventions, the company said.
Alexander Doria, the co-founder and CEO of AI startup Pleias, said the raft of surreptitious synthetic ICLR papers pointed to the need for a “regulated agency/company” to perform “high quality” assessments of AI-generated studies for a fee.
“Evaluations [should be] performed by researchers fully compensated for their time,” Doria said in a series of posts on X. “Academia does not exist to outsource [AI] evaluations.”