
The threat of terrorists from generative AI is ‘purely hypothetical’


The UK’s independent reviewer of terrorism legislation takes stock of the potential for generative AI systems to be adopted by terrorists, particularly for propaganda and attack planning, but acknowledges that the impact may be limited.

Published: 17 Jul 2025 13:30

Generative artificial intelligence (GenAI) could help terrorists disseminate propaganda and prepare for attacks, according to the UK’s terrorism adviser, but the threat remains “purely hypothetical” in the absence of evidence that it is being used in practice.

In his latest annual report, Jonathan Hall, the government’s independent reviewer of terrorism legislation, stated that GenAI systems are capable of being exploited by terrorists, although it is unclear how effective the technology will be in practice or what can be done about it.

Hall explained that GenAI could be put in service of a terrorist group’s propaganda, speeding up production and widening dissemination, which would allow terrorists to create easily shareable images, narratives and forms of messaging with far fewer resources. He also pointed out that terrorists “flooding” the information environment with AI-generated content is not a certainty, and that groups may be hesitant to adopt it because of its potential impact on their messaging.

Depending on how much a group values authenticity, the mere fact that text or images are AI-generated can undermine the message, he said, and spam-like output could be a turn-off. Some terror groups, such as al-Qaeda, which “place a high value on authentic messages by senior leaders”, might avoid it, and may also be reluctant to delegate their propaganda functions to a machine.

“Conversely, it may be boom-time for conspiracy theorists, anti-Semites, and extreme right-wing groups who enjoy creative nastiness,” he said. Hall added that GenAI could be used to research key events or locations for targeting, suggest ways of circumventing security, and provide tradecraft on using or adapting terrorist cell structures or weapons.

A chatbot could draw on online attack instructions and make them more accessible, while GenAI could provide technical advice on avoiding surveillance or making knife attacks more lethal. Hall added that GenAI could also be used to “extend the attack methodology” by identifying and synthesising harmful biological or chemical agents, although this would require the attacker to possess prior expertise, skills, and access to labs and equipment. “GenAI’s efficacy here has been questioned,” he said.

A similar point was made in the first International AI Safety Report, produced by a global group of almost 100 artificial intelligence experts following the first AI Safety Summit, hosted by the UK government at Bletchley Park in 2023. That report stated that although new AI models can create step-by-step guides to creating pathogens or toxins that go beyond PhD-level expertise, potentially lowering the barriers to developing chemical or biological weapons, the process is “technically complicated”, meaning that the “practical utility for beginners remains uncertain”.

Hall also identified the risk of AI being used in online radicalisation via chatbots. He said that one-to-one interactions between a human and a machine can create “a closed loop of terrorist radicalisation”, especially for lonely or unhappy individuals who are already inclined towards nihilism, are looking for extreme answers, and lack real-world or online counterbalance.

He noted that, even if a model lacks guardrails and has been trained on data “sympathetic” to terrorist narratives, its outputs depend largely on what the user asks.

Possible solutions?

In terms of legal solutions to prevent GenAI from being misused for terrorism, Hall highlighted the difficulty of doing so. He noted that “upstream responsibility” for those who develop these systems is limited, since models can be used for so many different and unpredictable purposes. He suggested instead that “tools-based liabilities” be introduced, targeting AI programs specifically designed to assist terrorist activities, and said the government should consider legislation against the creation or ownership of computer programs designed to stir up racial and religious hatred, although he acknowledged it would be difficult to prove that a program had been created specifically for that purpose.

He said that while developers could be prosecuted under UK terror laws if they created a terrorism-specific AI or chatbot, it was unlikely that GenAI tools would be specifically designed to generate novel forms of terrorist propaganda; it is more likely that powerful general models would be harnessed.

“I can foresee enormous difficulties in proving a chatbot’s [or GenAI model’s] design to produce narrow terrorism material,” he said. “It would be better to make it a crime to create… a computer programme specifically designed to incite hatred on the basis of race, religion, or sexual orientation.”

Hall acknowledged, however, that it is still unclear how AI will be utilized by terrorists.

Some will say there is nothing new to see here: GenAI is simply another technology that terrorists will use to attack people, just like vans. “Without proof that the current legal framework is inadequate,” he said, “it is not possible to adapt or extend it to deal with theoretical use cases.” Indeed, the lack of any GenAI-enabled attack to date could suggest that the whole issue is exaggerated.

Hall said that even if some form of regulation is necessary to avoid future harms, it could be argued that criminal liability is not the best option, especially given the political imperative to harness AI for economic growth and public benefit. “Alternatives to criminal responsibility include transparency reporting, voluntary industry standards, third-party auditing, suspicious activity reporting, licensing, bespoke AI-watermarking solutions, restrictions on advertising, and civil liability and regulatory obligations,” he said.

Hall, while expressing uncertainty about the extent to which terrorist groups would adopt generative artificial intelligence, concluded that the technology’s most likely impact was a “social degradation” caused by the spread of disinformation online.

“Poisonous misrepresentations of government motives or targeting demographics can lead to polarisation, hostility, and real-world terrorist violence, even though they are far removed from bombs, shootings, or blunt-force assaults,” he said. “But terrorism legislation has no role here because any linkage between GenAI-related material and eventual terrorism is too indirect.”

Although not covered in the report, Hall acknowledged that GenAI could have further “indirect effects” on terrorism, as it could create widespread unemployment and an unstable social climate “more conducive for terrorism”.

Alex Scroxton and Bill Goodwin