To go faster on AI, start with existing gaps

Small is fast, fast is big.

For faster integration of AI across the national security enterprise, government and innovation leaders should keep in mind the advice of retired Marine General Jim Mattis: “Operations only move at the speed of trust.” 

To build the trust needed for faster adoption of AI – and development of more exquisite tools – the Defense Department and the innovation community should resist the hype and pressure to ‘think big.’ Instead, they should ‘think small’ – at least, at first. 

They should focus on practical applications of reliable AI, embracing known technologies to bridge known gaps in the military and the defense industrial base. This will not only deliver significant near-term advantages at a fraught moment for global security, it will also help them get the reps in with AI. Focused collaboration on discrete problems paves the way for greater access between builders and users – essential for iterating and refining these tools – all of which builds trust. 

When it comes to integrating AI into DoD, small is fast, fast is big. By demonstrating trustworthiness in small matters, AI can prove itself worthy of greater ones. 

The root of mistrust: hallucination

Everyone in the defense enterprise feels the need for speed in incorporating AI into national security. Various important projects are underway, from Senate Majority Leader Chuck Schumer’s SAFE Innovation Framework for AI Policy to the military services’ efforts, like Air Force Secretary Frank Kendall’s directive that his service’s scientific advisory board explore applications of generative AI.

But there is equally broad – and valid – concern about reliability, especially regarding generative AI like ChatGPT and similar tools. On these, Kendall cautioned that he saw only “limited utility in that type of AI for the military” because too often such tools are “not reliable in terms of the truthfulness.” 

Large Language Model (LLM) hallucinations – instances where a model confidently fabricates information to fill gaps in what it knows – are a key challenge and the biggest obstacle to building trust at the macro level.

Grounding as an antidote to hallucination

Industry is working on solutions to hallucination, and the incentives are such that I have no doubt a number will be available soon. At Primer, we tackle it by “grounding” our models in customer data. In essence, a user’s query returns only relevant information from their source of truth. 

Instead of generating answers from whatever the model absorbed in training, a grounded system first retrieves relevant information from a trusted system of record, then builds a prompt for the generative model. That prompt instructs the model to answer the user’s question based only on the retrieved information, and only if that information suffices. Otherwise, the model answers “not enough information.”
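
To make the mechanics concrete, here is a minimal sketch of that retrieve-then-generate loop in Python. The toy corpus, the keyword retriever, and the llm_generate stub are illustrative assumptions on my part, not Primer’s implementation; a production system would use vector search over a vetted document store and a real model API.

```python
# Minimal retrieve-then-generate sketch. Everything here is a
# hypothetical stand-in, not Primer's implementation.

# Toy "system of record"; in practice, a vetted document store.
SOURCE_OF_TRUTH = [
    "Vessel ANNA K departed port on 12 May with unregistered cargo.",
    "Report 4471 links vessel ANNA K to a 2021 interdiction.",
]

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query.
    A real system would use vector search over embeddings."""
    terms = set(query.lower().split())
    matches = [d for d in corpus if terms & set(d.lower().split())]
    matches.sort(key=lambda d: len(terms & set(d.lower().split())), reverse=True)
    return matches[:top_k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Constrain the model to the retrieved passages only."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. If the context is "
        "insufficient, reply exactly: not enough information.\n"
        f"Context:\n{context}\nQuestion: {query}\n"
    )

def llm_generate(prompt: str) -> str:
    """Stub standing in for a call to any generative model API."""
    return "[model answer constrained to retrieved context]"

def grounded_answer(query: str) -> str:
    passages = retrieve(query, SOURCE_OF_TRUTH)
    if not passages:
        # Nothing retrieved: refuse rather than let the model guess.
        return "not enough information"
    return llm_generate(build_prompt(query, passages))

print(grounded_answer("Which vessel is tied to a prior interdiction?"))
```

The key design choice is the refusal path: when retrieval comes back empty, the system declines to answer rather than letting the model improvise.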

(Grounding helps us act in accord with DoD’s ethical principles for AI, especially reliability.)

Grounding is no silver bullet – the quality of the model’s answers is only as good as the data provided – but data quality is a far better understood engineering problem. Further, grounding works especially well when leveraging LLMs against highly focused, discrete problem sets. 

And here is the opening for both greater capability faster and trust-building. 

So what does practical, trusted application of AI look like? 

For a better sense of building trust at the micro level, the case of one of our partners, Joint Interagency Task Force South, is instructive. Its mission: interdicting drug and human trafficking. Its challenge: an overwhelming volume of messy, disparate data. 

To augment human decision-making, we helped install advanced AI tools on classified systems within hours, letting watch-floor analysts rapidly extract information and discover connections among potential drug- and human-trafficking incidents, all captured in a self-generating knowledge base. Analysts and sailors no longer have to sift through thousands of reports manually; they can home in on the handful most likely to yield results. 
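
As a toy illustration of that pattern – not the deployed system – the sketch below pulls entity mentions out of free-text reports and accumulates their co-occurrences into a simple graph, the kind of growing structure an analyst can query instead of rereading every report. The reports and watchlist are invented, and real extraction would use trained models rather than string matching.

```python
from collections import defaultdict
from itertools import combinations

# Invented reports; real inputs would be classified message traffic.
REPORTS = [
    "Sighting: vessel ANNA K met skiff CORMORANT near grid 14R.",
    "Intercept: CORMORANT offloaded cargo to truck TX-221 at the pier.",
]

# Toy watchlist; a real system would use trained named-entity and
# relation-extraction models, not exact string matching.
ENTITIES = ["ANNA K", "CORMORANT", "TX-221"]

def extract_entities(text: str) -> list[str]:
    """Return watchlist entities mentioned in a report."""
    return [e for e in ENTITIES if e in text]

# The "self-generating knowledge base": a co-occurrence graph that
# grows as each new report arrives.
graph: dict[str, set[str]] = defaultdict(set)
for report in REPORTS:
    for a, b in combinations(extract_entities(report), 2):
        graph[a].add(b)
        graph[b].add(a)

# An analyst's query: what is CORMORANT connected to?
print(sorted(graph["CORMORANT"]))  # ['ANNA K', 'TX-221']
```

The payoff is cumulative: each new report enriches the graph, so the analyst queries accumulated structure rather than raw text.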

There are far wider applications, however. Consider: 

  • In maintenance, delivering cost savings by ensuring warehouses are stocked with the precise number and type of parts at the optimum time; a simple sketch of the underlying math follows this list. The benefits are obvious – for airmen at Hickam Air Force Base in Hawaii or a plant manager at a manufacturing facility in Troy, Alabama. 
  • In mission planning, buying back thousands of hours for pilots and flight crews by automating flight planning. Time bought back compounds like interest.
  • In human performance, automating functions to deliver ground truth on the health and readiness of the force, from the front line to the production line, down to the level of individual infantrymen and welders. 
  • In intelligence, summarizing data and reducing time to insight from hours to minutes, again using grounding to assist in generating reports.
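
For the maintenance case, here is the kind of classical calculation such a tool might automate once an AI forecast supplies the demand figures: a reorder point with safety stock. The part, demand statistics, and service level below are hypothetical.

```python
import math
from statistics import NormalDist

def reorder_point(daily_demand: float, demand_std: float,
                  lead_time_days: float, service_level: float = 0.95) -> float:
    """Classical reorder point: expected demand over the resupply lead
    time, plus safety stock sized to a target service level."""
    z = NormalDist().inv_cdf(service_level)  # ~1.645 at 95%
    safety_stock = z * demand_std * math.sqrt(lead_time_days)
    return daily_demand * lead_time_days + safety_stock

# Hypothetical part: 4 units/day average demand (std dev 1.5),
# 10-day resupply lead time, 95% target service level.
print(math.ceil(reorder_point(4.0, 1.5, 10.0)))  # reorder at 48 on hand
```

An AI forecasting layer would replace the fixed demand figures with learned, part-specific estimates; the decision rule itself stays simple and auditable.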

These are just a handful of use cases, but they can happen right now. Achieving efficiencies like these at scale across DoD can deliver greater capability and the opportunity for trust-building that begins a virtuous cycle: more exquisite tools, applied and iterated for faster capability, and more trust. Repeat.

The centrality of iteration

I have spent a career rolling out emerging tech in legacy environments and building intuitive AIs for risk-averse customers – nearly 30 years of it. 

My number one takeaway is this: you simply must be able to access and iterate with your client. The innovation community operates in commercial iterative cycles, engaging thousands of customers to rapidly gather the data points needed to refine a product and go to market. Controlled trials, test cases, scaling – it all happens fast.

This simply isn’t the case with DoD. Conducting valid, low-risk experiments is ponderously slow. The audience is incredibly narrow. Too often, the access isn’t there, which means no iteration. 

On the commercial side, we get plenty of chances to iterate and improve. On the defense side, we get too few. As with any competition, including war, the more reps you can get in before the event begins, the better. 

Changing America’s trajectory 

Integration of AI isn’t happening fast enough; there’s too little trust. The consequence is that America’s pace of adoption still lags far behind its pace of innovation – and behind our adversaries.

It is not for me to warn the defense community of the consequences of losing the AI arms race. It is very much for me, and other innovators like me, to offer avenues to a future different from the one our current trajectory points toward. 

We don’t need magic right now. We need trusted, practical AI that enables DoD and the intelligence community to harness every available efficiency and advantage. By starting small and gaining trust through proven capabilities, we can go bigger, faster.

But we can only move at the speed of trust. 

Small is fast, fast is big. 

Sean Moriarty is CEO of Primer Technologies, an AI defense company. From 2005 to 2009 he served as President and Chief Executive Officer of Ticketmaster.