Despite its widespread use, AI is still not considered a normal technology. AI systems, we are told, will soon be referred to as “superintelligence,” and former Google CEO Eric Schmidt has suggested that we should control AI models the same way we control uranium and other nuclear-weapons materials. Anthropic is devoting time and money to studying “AI welfare,” including what rights AI models might be entitled to. These models are also moving into disciplines that feel distinctly human, like making music or providing therapy.
It’s no wonder that those pondering AI’s future tend to fall into either the utopian or the dystopian camp. OpenAI’s Sam Altman speculates that AI’s impact will feel more like the Renaissance than the Industrial Revolution. Yet more than half of Americans are more concerned than excited about AI’s future. That half includes some of my friends who, at a recent party, speculated about whether AI-resistant groups might emerge: modern-day Mennonites carving out spaces where AI is limited by choice, not necessity.
In this context, a recent article written by two AI researchers at Princeton felt quite provocative. Arvind Narayanan, director of the university’s Center for Information Technology Policy, and doctoral student Sayash Kapoor wrote a 40-page plea for everyone not to panic and to think of AI as just a normal technology. This, the researchers say, runs contrary to the “common tendency” to treat AI as a separate species or a highly intelligent, autonomous entity.
They believe AI is a technology with a wide range of applications, and that its adoption could be compared more to the gradual spread of electricity or the internet than to nuclear weapons, though they admit the analogy is flawed in some respects.
Kapoor’s core point is that we must differentiate between the rapid development of AI methods (the flashy, impressive demonstrations of what AI can achieve in the laboratory) and the actual application of AI, which in historical examples lags behind by decades. “Much of the discussion of AI ignores this adoption process,” Kapoor told us, “and expects the societal impact to occur at the pace of technological development.” In his view, the adoption of useful artificial intelligence will be more of a trickle than a tsunami. In their essay, the two make some other strong arguments: AI won’t automate all tasks, but it will create a new category of human work that monitors and verifies AI. And we should focus more on the likelihood that AI will worsen existing problems in society than on the possibility that it could create new ones.
“AI supercharges capitalism,” Narayanan says. Depending on how it is deployed, he says, it can either help or harm inequality, labor markets, the free press, and democratic backsliding.
The authors do leave out one alarming application of AI: its use by militaries. That use is growing quickly, and it raises alarms about AI being relied on for life-and-death decisions. The authors excluded it from their essay because it is difficult to analyze without access to classified information, but they say their research on the topic is forthcoming.
Treating AI as “normal” would upend the position taken by the Biden administration and now the Trump White House: that building the best AI is a national priority, and that the federal government should take a range of actions to make that happen, from limiting which chips can be exported to China to dedicating more energy to data centers. The two authors describe the “AI arms race” rhetoric between the US and China as “shrill.” The knowledge needed to build powerful AI models, they say, spreads quickly among researchers all over the world. “It is not possible to keep secrets on such a scale,” they add. The plan Kapoor describes is based not on sci-fi fears but on “strengthening democracy, increasing technical expertise within government, improving AI literacy and incentivizing defense to adopt AI.” Next to talk of superintelligence, that agenda may sound unglamorous. That’s sort of the point.
Originally published in The Algorithm, our weekly newsletter on AI.