Practical, reliable AI will grow trust as it puts AI at the service of people, not the other way around.
As AI travels along the Gartner Hype Cycle, it’s important for consequential decision-makers, especially those in national security, to adopt AI at the right pace. It’s natural to question AI’s role in high-stakes decisions; blindly trusting unproven tools can put your mission at risk.
AI has a long way to go before full autonomy, and we are not ready for AI to make decisions reserved for humans – the decision to commit lethal force foremost among them. But it’s already delivering faster, more accurate analysis that drives better decisions and improves security outcomes.
So if AI shouldn’t automate high-stakes decisions, then where and how should it be deployed to safely and effectively support the mission?
When AI is properly deployed – that is, when it truly serves mission requirements and complements, supports, and supercharges the human analyst – national security decision-makers should have greater confidence in it.
When it comes to emerging tech and our highest-stakes decisions, we need to think about an adoption maturity model that’s married to the maturity of the technology.
At Primer, we’ve thought a lot about how to deploy AI in a safe, secure, practical way so it can be harnessed to support national security professionals in those high-stakes environments. More than just thinking – we’ve actually done it.
Practical AI: how Primer views the world
We provide trusted, decision-ready AI to the world’s most critical organizations, enabling leaders, researchers, operators, and analysts to better understand the changing world around us.
This work stems from our philosophical approach, which is centered on our principle of “Always Human.” Always Human means just what it sounds like: put human considerations first. More than a vague aspiration, it is an operational principle built into our products for practical, trusted AI.
To meet the mission requirements noted above, AI needs to be trustworthy, deployable in mission-critical environments, and cost-effective.
Trust AI as much as you trust your data
Primer goes beyond standard AI implementations, prioritizing accuracy and reliability by enhancing Retrieval Augmented Generation (RAG). To ensure that generated answers to queries are based strictly on trusted customer data, Primer executes multi-step validation, minimizing hallucinations and improving the depth of coverage.
We want to make sure our models work well against customers’ data, so we tune retrieval to widen or narrow the aperture of search results to match how broad or precise a customer wants those results to be.
Our approach tailors AI to our customers’ data. We work closely with our customers to understand their workflow and analytical needs. Whether our customers seek to accelerate intelligence cycles, enhance situational awareness, or reduce time to insight for decision-makers, Primer delivers not only relevant, but comprehensive results and provides citations and portion-marking to the data sources used to generate that output. The result is greater accuracy, explainability, and more trust.
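To make the grounding idea concrete, here is a minimal sketch of citation-backed retrieval. The function names, the term-overlap scoring, and the refusal behavior are illustrative assumptions for exposition, not Primer’s actual implementation:

```python
# Hypothetical sketch: answer queries only from trusted source documents,
# attaching a citation for every source used. Not Primer's real pipeline.

def retrieve(query, corpus, top_k=2):
    """Rank documents by simple term overlap with the query (toy scoring)."""
    q_terms = set(query.lower().split())
    scored = []
    for doc_id, text in corpus.items():
        overlap = len(q_terms & set(text.lower().split()))
        if overlap:
            scored.append((overlap, doc_id))
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:top_k]]

def grounded_answer(query, corpus):
    """Compose an answer strictly from retrieved sources, with citations."""
    sources = retrieve(query, corpus)
    if not sources:
        # Refuse rather than hallucinate when no trusted source matches.
        return "No trusted source found.", []
    snippets = [f"{corpus[s]} [{s}]" for s in sources]
    return " ".join(snippets), sources
```

The key design choice is the refusal path: when no trusted source matches the query, the system declines to answer rather than inventing one, which is what keeps generated output traceable to customer data.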
Another key element needed for practical, trusted AI is the ability to deploy it in mission-critical environments.
Here, most competitors are still offering SaaS-oriented deployments – essentially, wrappers on ChatGPT. Very few have the experience to deploy in IL5/IL6 environments or on air-gapped networks. Primer does.
Another example: consider users of LLM-powered software who operate in denied, degraded, intermittent, or limited (DDIL) communications environments. For SIGINT analysts on reconnaissance planes, special forces commanders in the field, and engineers on submarines, cloud-hosted LLMs are not reliably available. These users need AI tools just as much as any other military personnel, and they require robust solutions built for their conditions.
Where others run from complexity, we run towards it, ensuring our systems work in the environments our customers need to operate in.
Lastly, practical AI requires delivering performance for the lowest possible cost.
LLMs are big, expensive power tools, but they are just one tool in the toolbox, and the right tool should be used to optimize cost and performance.
A helpful analogy is that of a carpenter. There are sledgehammers for big jobs and ball-peen hammers for smaller ones. Many current AI deployments use a sledgehammer for every kind of job; Primer knows when to deploy a lightweight hammer for the penny-nail equivalent. Using the right tool for the right job lets customers optimize both cost and performance.
How?
Primer’s algorithmic framework BabyBear dramatically reduces cost by automatically triaging documents in real time and minimizing the workload sent to LLMs. By tasking the best-fit AI model for each job, BabyBear cuts unnecessary LLM processing, delivering far faster speeds at lower cost than current industry standards.
In essence, deploying fit-for-purpose models delivers good results faster for sustainable, cost-efficient enterprise AI.
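To illustrate the triage pattern, here is a minimal sketch of a confidence-based model cascade in the spirit described above. The stand-in models, keywords, and threshold are hypothetical for exposition, not BabyBear’s actual logic:

```python
# Hypothetical sketch of confidence-based triage: a cheap model handles
# what it can, and only uncertain items escalate to the expensive LLM.

def cheap_model(doc):
    """Fast, lightweight stand-in classifier: returns (label, confidence)."""
    relevant = "missile" in doc.lower()
    return ("relevant" if relevant else "irrelevant", 0.9 if relevant else 0.6)

def expensive_model(doc):
    """Stand-in for a costly large-model call, used only on escalation."""
    return ("relevant" if "launch" in doc.lower() else "irrelevant", 0.99)

def triage(docs, threshold=0.8):
    """Keep confident cheap answers; escalate low-confidence docs."""
    results, escalated = {}, 0
    for doc in docs:
        label, conf = cheap_model(doc)
        if conf < threshold:
            label, _ = expensive_model(doc)
            escalated += 1
        results[doc] = label
    return results, escalated
```

Only the low-confidence items pay the cost of the large model, which is where the speed and cost savings in a cascade like this come from.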
AI adoption should marry the maturity of the tech
The concern about deploying LLMs in combat and other critical scenarios is understandable. But many of the underlying assumptions come from putting AI to purposes it ought not serve. Just as we shouldn’t fall for the hype around AI, neither should we fall for the fears surrounding this new technology.
By focusing on practical, reliable AI – tools that are grounded in customer data, capable of operating in the most secure environments, and functional at cost – national security leaders can deliver enormous advantages, at speed and with greater trust in the answers they’re getting.
They should – and they will, if they can have confidence in the products available to them.
# # #
Leonard Law is CPO for Primer AI. Prior to joining Primer, Law held roles leading developer products at Coinbase, heading the Financial Services vertical at Google Cloud, and leading technology as the CIO/CTO at W. P. Carey. Leonard holds Bachelor’s degrees in Computer Science and Economics from Yale, a Master’s in Computer Science with a concentration in AI from Yale, and an MBA from the Wharton School at the University of Pennsylvania.