Our CEO, Sean Moriarty, recently joined Emerj CEO Dan Faggella on the AI in Business Podcast to discuss building trust and utility in AI systems for defense.
“… the technology allows you to do very, very powerful things … but what it can’t and won’t solve for is human judgment. And the reason for that, quite simply, is that, ultimately, the human is responsible for the decision that is made, it needs to be understood in human terms.”
– Sean Moriarty, Primer CEO
Demystifying AI: beyond hype and fear
In the rapidly evolving landscape of AI and defense, the reliability of AI capabilities is key. Sean talked about how AI often falls victim either to “magic pixie dust” hype or to fear-inducing narratives, especially around issues like hallucinations in large language models (LLMs). He suggested that we need a mental shift in our approach to AI, one that focuses on what it enables as a practical tool.
To make a shift like this, it’s important to weigh the technology’s risks against its benefits and understand what it truly offers. AI is software designed to enhance human capabilities when used correctly. By substituting “AI” with “software that helps me to be more effective in my job,” people can better grasp the power and potential of these technologies.
Human-centric design
AI can massively increase the productivity and efficiency of operators and analysts inundated with vast amounts of data of varying reliability. As LLMs and generative AI enter the defense sector, the importance of utility and trust must be underscored.
The conversation then delved into the role of humans in the loop, with Sean highlighting Primer’s “always human” design principle. AI tools are designed to serve and enhance human decision-making rather than supplant it. Human judgment in evaluating the options AI presents remains critical: technology should augment human capabilities, not replace them.
He highlighted two of Primer’s products built to enhance human decision making:
- Primer Command: Users can ingest millions of documents, including news and social media, from thousands of sources simultaneously, with translation from over 100 languages, to understand rapidly changing events.
- Primer Delta: The next generation of Primer’s flagship AI platform operates at a different level of power, speed, and accuracy. It delivers powerful semantic search for analysts and operators in a user-friendly interface. Sean explained that it will allow analysts to express queries naturally and refine searches easily, enhancing ease of use for operators and leaders.
Building confidence and transparency
Sean and Dan continued their discussion about ways to build confidence in AI, homing in on two in particular:
- Grounding models in customer data: Grounding AI models in customer data enables the model to return relevant information from trusted sources. This aligns with the DoD’s ethical principles for AI¹ and addresses the challenge of ungrounded predictions, keeping the focus on accuracy and reliability (a minimal sketch of this pattern follows the list).
- Confidence intervals in AI interfaces: Because AI outputs carry inherent uncertainty, interfaces should show users where a result sits on a spectrum of probability so they can make informed decisions (see the second sketch below). This deliberate approach builds user trust and fosters a nuanced understanding of the technology’s limitations.
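Primer has not published its implementation details, but the grounding pattern Sean describes is commonly built as retrieval-augmented prompting: retrieve the most relevant trusted documents, then constrain the model to answer only from them. The sketch below is a minimal, illustrative version under those assumptions; the keyword scoring stands in for real semantic search, and `llm_generate` is a hypothetical placeholder for whatever model endpoint is actually used.

```python
# Minimal sketch of grounding a model's answer in trusted customer documents.
from dataclasses import dataclass


@dataclass
class Document:
    source: str  # e.g. the trusted system or feed the text came from
    text: str


def retrieve(query: str, docs: list[Document], k: int = 3) -> list[Document]:
    """Rank documents by naive keyword overlap (stand-in for semantic search)."""
    terms = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(terms & set(d.text.lower().split())),
        reverse=True,
    )[:k]


def grounded_prompt(query: str, docs: list[Document]) -> str:
    """Build a prompt that restricts the model to the retrieved, trusted sources."""
    context = "\n\n".join(f"[{d.source}]\n{d.text}" for d in docs)
    return (
        "Answer using ONLY the sources below. Cite the source name for every claim, "
        "and reply 'not found in sources' if the answer is not present.\n\n"
        f"{context}\n\nQuestion: {query}"
    )


def llm_generate(prompt: str) -> str:
    """Hypothetical model call; a real system would plug in its own endpoint here."""
    raise NotImplementedError


if __name__ == "__main__":
    corpus = [
        Document("field-report-042", "Convoy activity observed near the northern crossing on Tuesday."),
        Document("open-source-feed", "Local media report road closures near the northern crossing."),
    ]
    query = "What activity was observed near the northern crossing?"
    print(grounded_prompt(query, retrieve(query, corpus)))
```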
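The “spectrum of probability” idea can be made concrete in the same spirit. The thresholds and labels below are illustrative assumptions, not Primer’s actual bands; the point is simply that a raw confidence score gets translated into something the end user can reason about before acting.

```python
# Minimal sketch of surfacing model confidence to the end user
# instead of presenting every output as certain.

def confidence_band(score: float) -> str:
    """Map a raw model confidence in [0, 1] to a user-facing label (illustrative thresholds)."""
    if score >= 0.9:
        return "high confidence"
    if score >= 0.6:
        return "moderate confidence"
    if score >= 0.3:
        return "low confidence"
    return "insufficient evidence"


def present(claim: str, score: float) -> str:
    """Attach the confidence band so the analyst sees where the output sits on the spectrum."""
    return f"{claim} [{confidence_band(score)}, p~{score:.2f}]"


print(present("Increased activity observed at the northern crossing", 0.72))
# Increased activity observed at the northern crossing [moderate confidence, p~0.72]
```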
Tailoring trust levels for varied use cases
The discussion then expanded to the diverse applications of AI in defense, emphasizing the need to tailor trust levels based on specific use cases. The degree of precision required in decision-making varies across scenarios, and understanding these nuances is crucial for effective AI deployment in mission-critical domains.
“It’s our goal to be as accurate as possible, but there are going to be situations where you just don’t know what’s going to happen. And so what you want to be able to do is make sure that the end user understands where they’re sitting on a spectrum of probability to make an informed decision.”
– Sean Moriarty, Primer CEO
In navigating both the game-changing utility and the challenges of AI in defense, Sean and Dan’s conversation highlights the importance of transparency, reliability, and user-centric design. The conscious effort to build trust by grounding models in reliable data, providing confidence indicators, and tailoring trust levels for diverse applications underscores the commitment to responsible and effective AI implementation.
To learn more about Primer’s work with the defense and intelligence communities, visit https://primer.ai/get-mission-ready-with-primer-ai/.
¹ https://www.ai.mil/docs/Ethical_Principles_for_Artificial_Intelligence.pdf