Ciaran Martin was the first CEO of the UK’s National Cyber Security Centre (NCSC), from 2016 to 2020. A distinguished civil servant, he has held senior positions in HM Treasury, the Cabinet Office and GCHQ. He is now a professor of government at Oxford University and a fellow of Hertford College, Oxford, where he studied history as an undergraduate. He is also chair of CyberCX UK, managing director at Paladin Capital and head of the SANS CISO Institute, and an adviser to Garrison Technology and Red Sift.
He gave a sneak preview of a new paper at the Infosecurity Europe show this month.
The paper, published by the Blavatnik School of Government at Oxford, examines the extent to which artificial intelligence (AI) could disrupt the rough balance between attackers and defenders in cyber security. Martin maintains that, historically, this balance has been governed by three principles. First, computer systems that put human safety at risk, such as air traffic control systems, tend to have failsafes. Second, the most dangerous capabilities are in the hands of the most capable actors, who have a sense of rationality and of escalation risk, much like the leaders of the US and USSR during the Cold War. Third, if advanced code can be used to do harm, it can usually also be used for good. His contention is that AI puts at least the second and third of these into question.
In the summary of his paper, he concludes that “the digital security equilibrium is a useful concept to understand why cyberspace remains a place where harm, contestation and catastrophe have not yet occurred. It is possible to maintain this status, but it will take sustained effort over many years and smart policymaking. The most concerning aspect is the increasing accessibility of powerful cyber capabilities to new actors.”
In a conversation with Computer Weekly, he went into greater detail on this and other issues. What follows is an edited version of that conversation.
Wouldn’t you say that the biggest threat to security is companies that are unwilling to invest in cyber resilience?
I am beginning to come around to that view, but I am not going to slam companies. I believe that most companies try to act rationally.
I would say that in the past there was a lot of hype about impending catastrophe. In one sense, this means that people are paying attention, especially big businesses. On the other hand, I think it was accidentally a little infantilising. Those of us who were young during the Cold War might have worried about nuclear Armageddon.
I don’t think AI gives you any magic new tools. But in terms of the capability battle, I’m optimistic. I think there’s huge potential for AI in cyber security to make things better.
Ciaran Martin, Blavatnik School of Government, Oxford University
But also, we knew there wasn’t a thing we could do about it. And if you’re being told there’s this huge cyber risk and so forth, you think, “Hang on, what can I do about it? That’s why I pay taxes to the government”.
I think the second thing was – while personal data is really important and its theft and misuse can lead to serious harm – we have to balance things. We live in a country where companies, by and large, obey the law, and for some years the legal balance has been very onerous on data protection and very light on service disruption and resilience.
I think we do have to incentivise resilience more as well. Marks and Spencer is a good example. They are a well-run company that had been doing really well until the cyber attack. They’re not suddenly stupid or negligent when it comes to cyber. You have to look a bit deeper. What are their incentives? What have they been told to do? What are they legally mandated to prioritise? And now we’re thinking: resilience is king.
I got the impression from your presentation that AI means there is no certainty that the ‘security balance’ you refer to will hold. Is this correct?
AI doesn’t give you any new magic tools. There’s a lot of hype about big red buttons that can bring planes down and all that. It doesn’t work that way. AI won’t get you there, but it will lower the costs and other barriers to entry for doing something disruptive and bad.
But I’m optimistic about the capability battle. I think AI has a lot of potential to improve cyber security. Baddies scan for vulnerabilities to exploit them, while goodies scan to patch them. That works in our favour.
Does this not come down to people? About one-third of government cyber security professionals are contractors, because it has been difficult to recruit and pay civil servants at the level they could earn in the private sector.
I have a luxurious perspective on this question, based on my past, because GCHQ did a great job of retaining staff. It didn’t pay Microsoft or CrowdStrike salaries, but it did pay a little more, and the mission was good. Incentivising [a cyber security professional] in a major payments department like Work and Pensions, or HMRC, will be a little different.
I do think that people are very important, users first and foremost. We should give them meaningful and sensible things to control, rather than expecting them to take on the Russians alone.
I think there is a tendency to be Cassandra-like when it comes to skills. When I set up the NCSC, I was warned that it wouldn’t work because the organisation and the economy lacked the necessary skills. There are many great people, and people who can be retrained. You don’t really need that many ninjas. You need layers. You need elite defence units in government and in some of the largest companies. We need good corporate cyber defences. And we need a workforce that is cyber-savvy and knows how to do the basics.
It is often said that the NCSC represented a fundamental shift. What was that shift?
If you want to get highfalutin and look back in history, computing and computer security were, on both sides, the preserve of major global powers and governments, and that was it. All of that changed.
GCHQ’s security mission dates back to 1919, and it was all about protecting Britain’s intelligence and military secrets. With mass digitisation, there was a shift to the open. You can’t defend an economy from behind barbed wire, in a building you can’t take mobile phones into. You can’t talk to people, give them advice or respond to an incident.
Second, it meant being a little more activist. There was a lot of passivity in public-private partnerships and information sharing. So it went from secret to open, and from passive to proactive.
I heard Jeremy Fleming [the former director of GCHQ] speak at Palo Alto Networks Ignite London in March. When he conducted a straw poll of the cyber security professionals in the audience, he was surprised to find they believed AI was an advantage for the attacker. He himself was ‘broadly hopeful’ that the advantage lies with the defender, as long as the pace of technology deployment and agility are maintained. What do you make of that? Was his surprise perhaps due to his national security background?
I broadly agree with him. There’s a tendency to pessimism in this subject. Objectively, who has the advantage? It’s too early to tell, as [the Chinese premier] Zhou Enlai is reputed to have said [about the French Revolution].
But secondly, it doesn’t have to be like this. What advantages do the baddies have? Fundamentally, recklessness and a lack of ethics. They are prepared to do things we might not be prepared to do, and they want to cause harm. So it’s a different calculus for them. But what are our advantages? Well, firstly, the stability of the rule of law, and the market economies that turbocharge innovation. They didn’t build any of this tech. They are just cheating with other people’s tech.
A lot of this is about economics and the business climate. And regulation, and the posture of the country. Do you incentivise people to take security seriously? If you do, then a major British corporate will say: “We’re well off, we’re booming, we’re a bit worried about this security business, so we’re going to buy.” If there’s a whole suite of really innovative stuff out there that there’s a market for, then we’re going to win. If none of that works, then they’re going to win.
And one thing we have in the UK, which I would share in common with Jeremy, is the poacher-and-gamekeeper model at GCHQ. It is common in the Five Eyes, but not in continental Europe: having the attackers and the defenders in the same place, so they can learn from each other and so forth. GCHQ is primarily a foreign intelligence digital espionage agency, but many of the people who worked for me in the NCSC, and in its predecessor body, the CESG, are focused on protection.
By the same token, the people who build tech are the ones best placed to secure it, as with Microsoft. And [at US defence level], secure by design is being kept on by this administration, and I am pleased about that.