
AI in national security raises privacy and proportionality concerns


AI can enable investigations to cover a much larger number of individuals than previously possible, which is why oversight is necessary.


Published on: April 30, 2025 12:00

A study published in conjunction with the Centre for Emerging Technology and Security's (CETaS) annual Showcase 2025 has highlighted some of the public's concerns about automated data processing for national security.

The research, UK public attitudes towards national security data processing – Assessing human and machine intrusion, reported that UK public awareness of the work of national security agencies is low.

In a panel discussion to present the research, Brian Leveson, the Investigatory Powers Commissioner, who chaired the session, discussed the challenges posed by the new technology. "We face new and growing challenges," said Leveson. "Rapid technological advances, especially in AI [artificial intelligence], are transforming our government authorities."

Leveson pointed out that these technological advancements are changing the way information is gathered in the intelligence world, and said AI could soon be the backbone of the investigative cycle. He believes this shift is not without risk. "AI could allow investigations to cover more people than ever before, which raises questions about privacy, proportionality and collateral intrusion," he said.

According to the CETaS study, which is based on a Savanta survey of 3,554 adults and a 33-person citizens' panel conducted by Hopkins Van Mil, there is more public support than opposition for national security agencies processing data, even when it comes to sensitive datasets such as identifiable medical data. The study also found that data use by the police is generally well supported, though support for regional police forces is slightly lower than for national security agencies.

While the public supports national security agencies processing personal information for operational purposes, it opposes that information being shared with political parties or commercial organizations.

Marion Oswald, a senior visiting fellow at CETaS and co-author of the report, noted that data collection is always intrusive, even if the analysis is automated and nobody sees the data.

According to her, the study shows that the public is wary of national security agencies gathering data for predictive tools, with only one in ten people supporting their use.

Oswald said that panel members had concerns about the accuracy and fairness of such tools and wanted to see safeguards in place. She added that there are clear expectations regarding oversight and regulation of the technology.

Despite the efforts of national security agencies to engage with the public more directly in recent years, a significant gap still exists in public understanding. The majority of respondents (61%) said they understood the work of national security agencies only "slightly" or "not at all", and just 7% felt they understood it "a lot". Rosamund Powell, a research associate at CETaS and co-author of the report, said previous studies had suggested that people's perceptions of national security were influenced by James Bond-style novels.

Powell said people become more concerned when they are made aware of specific activities, such as the collection of facial recognition data. "There is more support for agencies analyzing data in the public domain, such as posts on social media, compared to private information like messages or medical records," she added.

