The US saw its first AI candidate this spring. In a short-lived campaign for mayor in Wyoming, VIC (Virtual Integrated Citizen), a ChatGPT-based bot created by a real human, Victor Miller, pledged to govern exclusively by AI.
At the start of the year, it was widely believed that generative AI, even if it never won office, would play an outsized role in democratic elections: more than 2 billion people were set to vote in more than 60 countries. Experts and analysts have since changed their tune, saying generative AI had little to no effect.
So were all those predictions that 2024 would be the year of the AI election wrong? Not really. WIRED spoke with experts who said 2024 may well have been the year of the "AI elections," just not in the way most people expected.
To begin with, much of the hype around generative AI focused on the threat of deepfakes, which experts and pundits feared could flood an already murky information space and fool the public.
Scott Brennen, director of the Center for Technology Policy at New York University, says concern over misleading deepfakes dominated the conversation about AI. But many campaigns, Brennen says, were hesitant to use generative AI to create deepfakes, especially of opponents. Some in the US worried about running afoul of a new patchwork of state-level laws that restrict "deceptive deepfake" ads or require disclosure when AI is used in political advertising. "I don't believe any campaign, politician, or advertiser would want to be a test case, especially because the way these laws are written, it's unclear what 'deceptive' means," Brennen says.
Earlier this year, WIRED launched its AI Elections Project to track instances of AI being used in elections around the globe. An analysis of the project's data by the Knight First Amendment Institute at Columbia University found that about half of the deepfakes it catalogued were not intended to mislead. That echoes a report from The Washington Post, which found that deepfakes did not necessarily mislead people or change their minds, but did deepen partisan divides.
Much of the AI-generated content was used to express support for, or fandom of, certain candidates. One AI-generated video of Donald Trump and Elon Musk dancing to the Bee Gees' "Stayin' Alive" was shared millions of times on social media, including by Senator Mike Lee of Utah, a Republican.
"It's all about social signaling. It's all about the reasons people share this stuff. It's not AI," says Bruce Schneier, a public-interest technologist and lecturer at the Harvard Kennedy School. "It's not as if we've had perfect elections throughout our history and now suddenly there's AI and it's all misinformation."

That doesn't mean deceptive content didn't circulate. In the days leading up to Bangladesh's election, deepfakes spread online encouraging supporters of one of the country's political parties to boycott the vote. Sam Gregory, program director of Witness, a nonprofit that helps people use technology in support of human rights and runs a rapid-response detection program for journalists and civil society organizations, says his team saw an increase in deepfake cases this year.
"In multiple election contexts," he says, "there were examples of deceptive and confusing uses of synthetic audio, video, or images that puzzled journalists or that they could not fully verify or challenge." Detection tools, he adds, are even less reliable outside the US and Western Europe.
"Fortunately, AI was not used in deceptive or pivotal ways in most elections, but it is very clear that the detection tools are lacking, and the people with the greatest need do not have access to them," says Gregory. "This is not a time for complacency." In Gregory's analysis of the reports to Witness's rapid-response program, around a third of cases involved politicians using AI to deny evidence of a real event, many of them involving leaked conversations.
The most significant uses of AI this election year, Brennen says, happened behind the scenes and in subtler ways. While there were fewer deepfakes than many had feared, a lot of AI was at work backstage, and because these uses of generative AI are not consumer-facing the way deepfakes are, it is difficult to gauge how widespread they were. Schneier says AI played a major role in elections through "language translation, canvassing, and assisting in strategy." In India, Prime Minister Narendra Modi used AI translation software to render his speeches in real time into several of the many languages spoken across the country. Schneier believes these applications of AI can be broadly good for democracy: they let more people feel included in the political process, and they give small campaigns access to resources that would otherwise be out of reach.
"I think it will have the greatest impact on local candidates," he says. "Most campaigns in this country are small." AI tools, Schneier says, could help those candidates connect with voters or file paperwork. And earlier this year, Belarusian dissidents living in exile ran an AI candidate to protest Alexander Lukashenko, often called Europe's last dictator, whose government has arrested dissidents and journalists as well as their relatives.
For their part, generative AI companies have also gotten involved in US campaigns this year: Microsoft and Google both offered trainings to several campaigns on how to use their products during the election.
These tools are so new, Schneier says, that this year may not have been the time of the AI election after all. "But they are just starting."