The Pentagon is focused on using generative artificial intelligence systems for productivity before it asks those tools to offer battlefield decisions, according to a Defense Department AI official.
Developments in private-sector and commercial AI are likely to shape how DOD offices adopt emerging technologies such as large language models in business and military processes.
Eventually, generative AI systems may be asked to analyze data and provide the US military with “recommended courses of action” in combat, according to Kimberly Sablon, the DOD research and engineering office’s principal director for trusted AI and autonomy.
“Even then, I’d be very careful in saying that’s something that we’re going to do,” she said in an interview. “There’s certainly opportunity there, but there are a lot of challenges.”
In August, the Pentagon announced Task Force Lima, led by its chief AI office, to analyze and integrate generative AI tools and large language models across DOD.
The task force is holding sessions with officials from the defense secretary’s office, DOD’s Joint Staff, service branches, combatant commands, other defense agencies, and the intelligence community.
The task force will recommend DOD-wide use cases for large language models and develop safeguards, responsible-use criteria, and performance metrics, according to the Pentagon.
Sablon said that before AI-based machines drive wartime strategy, more research is needed into how large language models actually work. DOD needs to ensure it is fielding “responsible” AI to prevent adversarial or unintentional misuse, and the Pentagon must weigh the risks of unleashing the technology, including its use by adversaries, she added.
“There’s adversarial failure mode where there’s malicious third parties that we have to be concerned about, where you can manipulate some inputs to induce targeted mispredictions,” Sablon said.
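The attack Sablon describes can be illustrated with a toy example. The sketch below is purely hypothetical (the model, numbers, and function names are illustrative, not anything DOD or C3.ai uses): a malicious party nudges an input just enough to flip a simple classifier’s prediction, the “targeted misprediction” in her quote.

```python
# Toy illustration of an adversarial failure mode: a small, deliberate
# perturbation to an input flips a model's predicted label.
# All names and numbers here are hypothetical.

def predict(weights, bias, x):
    """Toy linear classifier: returns 1 if the score is positive, else 0."""
    score = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if score > 0 else 0

def adversarial_perturb(weights, bias, x, step=0.01, max_steps=1000):
    """Nudge x along the weight vector until the predicted label flips."""
    original = predict(weights, bias, x)
    x = list(x)
    for _ in range(max_steps):
        if predict(weights, bias, x) != original:
            return x
        # Move each feature in the direction that pushes the score
        # toward the opposite class.
        direction = -1 if original == 1 else 1
        x = [xi + direction * step * w for xi, w in zip(x, weights)]
    return x

weights, bias = [2.0, -1.0], 0.1
x = [1.0, 0.5]                       # classified as 1 (score = 1.6)
x_adv = adversarial_perturb(weights, bias, x)
print(predict(weights, bias, x), predict(weights, bias, x_adv))  # 1 0
```

Real attacks against large language models work on text rather than numeric features, but the principle is the same: inputs crafted to look benign while steering the model to a wrong output.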
One company involved in the Pentagon’s exploration of AI is C3.ai, based in Silicon Valley. The company sells AI software to federal agencies and commercial customers and holds contracts for generative AI models with several Pentagon departments, including the Missile Defense Agency, as well as nonpublic work with a US intelligence agency.
“We’ve done numerous what the industry would classify as generative AI projects, even before ChatGPT created a bunch of noise in the market,” said Ed Abbo, C3.ai president and chief technology officer.
C3 Generative AI lets users quickly find the information they need to make decisions. Unlike a public ChatGPT-style system, information C3.ai’s programs receive is traceable and stays private to the user. “It’s a one-way feed,” Abbo said.
The tools are domain-specific, tailored to industries such as aerospace, defense, and intelligence as well as to the commercial arena. Using generative AI models for defense and intelligence purposes requires a setup with access controls, according to Abbo: the DOD models must be blocked from the internet, ensuring that answers are derived only from the dataset the agency provides.
For a defense scenario, users could ask: “How many aircraft are operationally ready in Central Command?” and follow-ups like, “How many of these are bombers?”
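The closed-off setup Abbo describes can be sketched in miniature. The code below is a hypothetical illustration, not C3.ai’s actual software or API: it answers questions only from a supplied document set, and refuses when the corpus has no match, mirroring the rule that answers come solely from the agency’s own data.

```python
# Hypothetical sketch of answering questions only from an agency-supplied
# dataset, with no outside knowledge source. Names and data are illustrative.

def build_index(documents):
    """Map each document id to its lowercase word set for keyword matching."""
    return {doc_id: set(text.lower().split()) for doc_id, text in documents.items()}

def answer_from_corpus(index, documents, question):
    """Return the best-matching document, or an explicit refusal.

    Because the only knowledge source is `documents`, a question the
    corpus cannot answer gets a refusal instead of a guess.
    """
    q_words = set(question.lower().split())
    best_id, best_overlap = None, 0
    for doc_id, words in index.items():
        overlap = len(q_words & words)
        if overlap > best_overlap:
            best_id, best_overlap = doc_id, overlap
    if best_id is None:
        return "No answer in the provided dataset."
    return documents[best_id]

documents = {
    "readiness": "12 aircraft operationally ready in central command",
    "maintenance": "radar unit offline pending sensor replacement",
}
index = build_index(documents)
print(answer_from_corpus(index, documents, "How many aircraft are ready?"))
```

A production system would use a language model over a vetted document store rather than keyword overlap, but the design principle is the same: the answer space is bounded by the data the agency put in.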
Analysts at the Missile Defense Agency are using C3.ai’s software to analyze, summarize, and compare flight test data without manually reviewing hundreds of data points for every test, Abbo said.
Another defense use case is troubleshooting system maintenance. If users ask the generative AI why a defense system stopped working, the C3 tool navigates sensor data and standard operating procedure guides to diagnose the failure.
Beyond the projects C3.ai has underway with the Pentagon, Sablon said she’s met with federally funded research centers and military services to get a better view of how the Defense Department is using AI in other areas and planning for future applications.
DOD is interested in using AI to assist with writing contracts, responding to emails, generating or tracking tasks, and creating presentations, she said.