Police forces and intelligence agencies are using AI to sift data to identify security threats, potential suspects and individuals who could pose a risk to security.
Agencies like GCHQ and MI5 rely on AI techniques to collect data from multiple sources, identify connections between them and triage the results for human analysts.
Their use of automated systems to analyse large volumes of data has raised concerns about privacy and human rights.
Just how much use of AI is appropriate, and how much is too much, is a question for the body that oversees the intelligence services: the Investigatory Powers Commissioner’s Office (IPCO).
Muffy Calder, of the University of Glasgow, chairs IPCO’s Technical Advisory Panel (TAP), a small group of experts drawn from academia, the UK intelligence community and the defence industry. The panel advises the Investigatory Powers Commissioner (IPC) and IPCO’s judges, who are responsible for approving or refusing applications for surveillance warrants.
Members of the panel accompany IPCO inspectors on visits to police forces, intelligence agencies and other government bodies that have surveillance powers under the Investigatory Powers Act. In IPCO’s first interview about the work of the TAP, Calder says one of its key functions is to advise the Investigatory Powers Commissioner on future technology trends. “It was obvious that we would be doing something about AI,” she says.
How AI could be used for surveillance
Calder says she is unable to comment on the impact AI has had on the police, intelligence agencies and other government bodies that IPCO oversees; that, she says, is a question best left to the agencies using it.
But a publicly available research report from the Royal United Services Institute (RUSI), commissioned by GCHQ, suggests possible uses. These include identifying people by their voice, their writing style and the way they use a computer keyboard.
The most compelling use case is to triage and analyse the vast quantities of data collected by intelligence agencies.
Augmented intelligence systems can present analysts with the most relevant data from a sea of data, helping them make assessments and informed decisions.

Calder says the computer scientists and mathematicians who make up the TAP are familiar with AI and have studied it for many years. They understand that using AI to analyse personal data raises ethical issues.
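To make the triage idea above concrete, here is a minimal, hypothetical sketch: items of collected data are given a relevance score and only the highest-scoring few are surfaced for a human analyst. The records, fields and scoring rule are invented for illustration and do not represent any agency’s system.

```python
# Hypothetical triage sketch: rank collected records by a relevance score
# and surface only the top few for human review. All data is invented.

def triage(records, score_fn, top_n=3):
    """Rank records by relevance and return the top_n for human review."""
    return sorted(records, key=score_fn, reverse=True)[:top_n]

def relevance(record):
    # Toy scoring rule: keyword matches plus a bonus if the record is
    # already linked to an existing case.
    return record["keyword_hits"] + (5 if record["linked_to_case"] else 0)

records = [
    {"id": 1, "keyword_hits": 0, "linked_to_case": False},
    {"id": 2, "keyword_hits": 4, "linked_to_case": True},
    {"id": 3, "keyword_hits": 1, "linked_to_case": False},
    {"id": 4, "keyword_hits": 2, "linked_to_case": True},
]

# The analyst reviews only the few highest-scoring records, not the whole sea of data.
for record in triage(records, relevance):
    print(record["id"], relevance(record))
```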
People raise issues of fairness and transparency, but don’t always ask what these mean in a technical context, says Calder.
Balance between privacy and intrusion
The framework the TAP has developed, its AI Proportionality Assessment Aid, is designed to give organisations the tools they need to assess how intrusive a use of AI is and how to minimise that intrusion. It does not provide answers, but rather a series of questions to help organisations consider the risks of AI.
“I think everyone’s goal in investigations is to minimise privacy intrusion,” she says. A balance must be maintained between the purpose of an investigation and the intrusion on people, including collateral intrusion on those who are not its subjects.
It is a framework that allows organisations to start asking: are we doing the right thing? Is AI a suitable tool for the situation? “It’s not a question of ‘can I do it?’ but more about ‘should I?’,” she says.
Is AI the right tool for the job?
The first question is whether AI is the best tool for the job. In some cases, such as facial recognition, AI is the only practical way to solve the problem: recognising a face is hard to specify mathematically, so learning from examples makes sense.
In other cases, where people understand what Calder calls the “physics” of a problem – such as calculating tax – a conventional mathematical algorithm may be more appropriate.
AI is most useful when the analytical solution to a problem is either too complex or simply not known. She says it is important to ask, “Do I really need AI here?” right from the start.
A second issue to consider is how frequently AI models are retrained, to ensure they make decisions based on the most accurate and relevant data for the application in which they are used.
A common mistake is to train AI models on data that does not match their intended use. “That’s probably a classic,” she says, giving the example of an AI trained on images of cars that is then used to identify tanks.
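The “cars and tanks” pitfall can be illustrated with a hedged sketch of a pre-deployment check: compare what the model was trained to recognise against what it will be asked to recognise in operation. The class labels below are invented for illustration.

```python
# Hypothetical sanity check before deployment: does the intended use include
# classes the model never saw during training? All labels are invented.

def coverage_gaps(training_labels, deployment_labels):
    """Return deployment classes that never appeared in the training data."""
    return sorted(set(deployment_labels) - set(training_labels))

training_labels = ["car", "van", "lorry", "car", "van"]    # what the model learned from
deployment_labels = ["car", "tank", "armoured vehicle"]    # what it will be asked to identify

missing = coverage_gaps(training_labels, deployment_labels)
if missing:
    # Calder's "classic" mistake: the intended use covers classes the training
    # data does not, so the model needs retraining or a narrower scope.
    print("Never seen in training:", ", ".join(missing))
```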
Another critical question is whether the AI model strikes the right balance between false positives and false negatives for the specific application.
If police use AI-based facial recognition to identify individuals, too many false positives can lead to innocent people being wrongly arrested and questioned, while too many false negatives mean suspects may not be identified.
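A small, hypothetical example shows what that balance means in practice: moving the decision threshold of a matching model trades false positives against false negatives. The similarity scores and ground-truth labels below are invented.

```python
# Hypothetical sketch: how a decision threshold trades false positives
# against false negatives for a face-matching model. All data is invented.

def confusion_counts(scores, labels, threshold):
    """Count false positives and false negatives at a given decision threshold."""
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    return fp, fn

# Similarity scores from a (hypothetical) matcher, with ground truth:
# 1 = genuine match, 0 = different person.
scores = [0.95, 0.91, 0.72, 0.68, 0.55, 0.40, 0.35, 0.20]
labels = [1,    1,    1,    0,    1,    0,    0,    0]

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = confusion_counts(scores, labels, threshold)
    # A low threshold flags more innocent people (false positives); a high
    # threshold lets more genuine matches slip through (false negatives).
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```

Where that threshold should sit is exactly the kind of application-specific judgement the framework asks organisations to make explicit.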
When AI makes mistakes
What would happen, then, if someone were wrongly placed under electronic monitoring as a result of an automated decision? Calder agrees that it is a critical question.
The framework asks organisations to consider how they will respond when AI makes mistakes or hallucinates.
One answer could be to retrain the model with more accurate or up-to-date data, but there are many possible answers. The key is whether you recognise that there is a problem and whether you have a plan to deal with it. Was the error caused by user input? Was it the way the human operator produced and processed the result?
You might also want to ask whether the error was a result of the way the tool was optimised. She adds: “For example, was the tool optimised to minimise false positives and not false negatives? And was what it gave you actually a true positive?”
Intrusion during training
It may sometimes be acceptable to tolerate a greater level of privacy intrusion during the training phase if it means less intrusion once the AI is deployed. By training a model on the personal data of a large number of people, the resulting model can be more targeted and less likely to cause “collateral” intrusion when it is used.
“The end result is an instrument that you can use to pursue criminal activity, for example,” she says. Because the tool is targeted, only a few people’s privacy will be affected.
The human in the loop

While having a human in the loop of an AI system can reduce the risk of errors, it comes with dangers of its own.

Computer systems in hospitals, for instance, allow clinicians to dispense drugs more efficiently, as they can choose from a list of relevant drugs and quantities rather than writing out prescriptions manually.
On the downside, it is easier for clinicians to become “desensitised” and make mistakes by selecting the wrong drug or dose, or by failing to consider a better drug that is not on the preselected list.
AI systems can lead to a similar desensitisation. People can become disengaged if they have to check a large number of outputs constantly, and it is easy for a tired or distracted human reviewer to make a mistake when the task becomes a checklist exercise.
Calder says: “I think there are many parallels between the use of AI in medicine and the use of sensitive data. Both have a direct impact on people’s lives.”
Chief information officers and chief digital officers who are considering deploying AI in their organisations will find the TAP’s AI Proportionality Assessment Aid essential reading. “I believe the vast majority [of its questions] are applicable outside an investigative context,” says Calder.
“Almost every organisation that uses technology has to consider its reputation and efficacy,” she says. She does not believe organisations deliberately make mistakes or set out to do bad things; rather, they aim to help people in a way that is appropriate.