Leading AI developers such as OpenAI and Anthropic are trying to sell the Pentagon software that will make it more efficient, without letting their AI kill people. In a recent phone interview with TechCrunch, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, said its AI tools are not being used as weapons today, but that AI is giving the Department of Defense a "significant advantage" in identifying and tracking threats.
Plumb said, "We are increasing the ways in which we can speed up the execution of the kill chain so that our commanders can respond in the right time to protect our forces."
The "kill chain" is the military's process for identifying, tracking, and eliminating threats, a complex system of sensors, platforms, and weapons. According to Plumb, generative AI is proving helpful during the planning and strategizing phases of the kill chain.
According to Plumb, the relationship between the Pentagon and AI developers is still relatively new. OpenAI, Anthropic, and Meta changed their usage policies in 2024 to allow U.S. intelligence and defense agencies to use their AI systems. However, they still do not allow their AI to be used to harm humans.
When asked how the Pentagon works with AI model providers, Plumb replied: "We have been very clear about what we will and will not use their technology for."
This has kicked off a round of speed dating between AI companies and defense contractors.
In November, Meta partnered with Lockheed Martin, Booz Allen, and others to bring its Llama AI models to defense agencies. Anthropic teamed up with Palantir in the same month. In December, OpenAI struck a similar deal with Anduril. More quietly, Cohere has also been deploying its models with Palantir.
As the Pentagon proves generative AI's usefulness, it may push Silicon Valley to loosen its AI usage policies and allow more military applications.
Plumb said generative AI is useful for playing through different scenarios. It allows commanders to take advantage of the full range of tools available to them, while also thinking creatively about alternative response options and possible trade-offs in an environment where there is a potential threat, or series of threats, that needs to be prosecuted.
In response to our questions, Anthropic directed TechCrunch to comments from its CEO, Dario Amodei, who defended his company's military work in a recent interview with the Financial Times:
The position that we should never use AI in defense and intelligence doesn't make sense to me. The position that we should use AI to make anything we want, up to and including doomsday weapons, is just as absurd. We're looking for the middle ground, to do things responsibly.
OpenAI, Meta, and Cohere did not respond to TechCrunch's requests for comment.
AI weapons and life-and-death decisions
In recent months, a debate has erupted in the defense tech world over whether AI weapons should be allowed to make life-and-death decisions. Some argue the U.S. military already has weapons that do.
Anduril founder Palmer Luckey recently noted on X that the U.S. military has a long history of purchasing and using autonomous weapons systems, such as a CIWS turret.
"The DoD has been purchasing and using autonomous weapons systems for decades now," Luckey said, adding that the rules governing their use (and their export) are clearly defined, strict, and not at all optional.
When TechCrunch asked whether the Pentagon buys and operates fully autonomous weapons, ones with no human in the loop, Plumb rejected the idea on principle.
"The short answer is no," Plumb said. "As a matter of both reliability and ethics, humans will always be involved in the decision to use force, and that includes for our weapon systems."
The word "autonomy" is somewhat ambiguous, and has sparked debate across the tech industry about when automated systems, such as AI coding agents, self-driving cars, or self-firing weapons, become truly independent.
Plumb said the idea of automated systems independently making life-and-death decisions was "too binary," and that the reality is less "science-fiction-y." Instead, she suggested the Pentagon's use of AI systems is a collaboration between humans and machines, where senior leaders make active decisions throughout the process. "People tend to think that there are robots somewhere, and then the gonculator spits out a piece of paper, and humans simply check a box," Plumb said. "That's just not how human-machine teaming works, and it's also not an effective way to use these AI systems."
AI safety in the Pentagon
Military partnerships have not always been popular with Silicon Valley employees.
Last year, Amazon and Google employees were fired and arrested for protesting their companies' military contracts with Israel, cloud deals that fell under the codename "Project Nimbus."
Some AI researchers, however, such as Anthropic's Evan Hubinger, believe that military use of AI is inevitable, and that it is important to work with the military directly to ensure it gets done right. "If you take catastrophic AI risks seriously, the U.S. government is an extremely important actor to engage with, and trying to just block the U.S. government out of using AI is not a viable strategy," Hubinger wrote in a November post to the LessWrong forum. "It's not enough to just focus on catastrophic risks; you also have to prevent any way the government could misuse your models."