AI in the public sector: New approaches and modalities

How public sector agencies are using generative, multimodal, and agentic AI today.

With the success of ChatGPT, artificial intelligence has rapidly evolved into a valuable tool for the public sector. Generative AI, Multimodal AI, and Agentic AI are being used in various combinations to achieve mission-focused goals.

Last week, FedInsider hosted a roundtable, "AI Branches Out Into New Solutions," with experts from public sector organizations across the nation, including AI providers, state and local governments, higher education, and the Library of Congress. Primer's Chief Technology Officer & SVP of Customer Solutions Engineering, Matt Macnak, was on the panel as one of the industry experts driving the advancement of AI.

The discussion centered on how AI is being used across public sector domains, with an emphasis on real-world use cases. Matt Macnak led the way in laying out the different tools available today.

The three primary AI modalities driving missions today.

Three AI modalities are widely adopted in the marketplace because they complement each other and can be combined in ways that meet precise requirements. 

  • Generative AI is perhaps the best-known form of AI, creating new content based on prompts or inputs. It is good at making plain-language sense of unstructured data and at summarizing, drafting, and translating the input it is given.
  • Multimodal AI works with multiple media inputs, such as text, video, images, and audio, to create outputs that answer your query in context and give you deeper insights. This is especially relevant for environmental monitoring and document digitization.
  • Agentic AI is an emerging modality designed to operate autonomously, making decisions and taking action with little human intervention. It can act as a mini research assistant, automating tasks and running analyses continuously.

Matt stressed that there is no one-size-fits-all solution; each use case will require a different modality or blend of modalities. Further, large foundation models may be impractical at the edge, particularly for sensitive or remote deployments, so Matt's team helps clients fine-tune smaller models that can operate locally or in restricted environments.
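To make the idea concrete, below is a minimal sketch of a small language model running entirely on local hardware with the open-source Hugging Face transformers library. The model name is just an example of a compact instruction-tuned checkpoint, not Primer's product or a specific recommendation.

```python
# Minimal sketch: a compact language model running entirely on local hardware.
# Illustrative only -- the model name is an example checkpoint; any small
# instruction-tuned model whose weights are cached locally would work.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example ~0.5B-parameter model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Summarize in one sentence: The field report describes flooding along the coastal access road."
inputs = tokenizer(prompt, return_tensors="pt")
# Once the weights are cached, generation makes no network call -- the
# property that matters in restricted or disconnected environments.
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```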

Humans and trust: two critical considerations when working with AI.

The first consideration Matt pointed out is that, with AI, having a human in the loop is critical. You need a human to monitor and validate AI outputs because results can contain errors. This is especially important when using Generative and Agentic models.

All outputs should be linked back to source documents for traceability and trust, and a human should validate them. "Primer's philosophy is that we always want to be relating predictions and outputs back to source documents and have a human in the loop," says Macnak.
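To illustrate what that linkage can look like in practice, here is a minimal sketch of a record structure that ties each generated claim to its supporting sources and gates it behind human review. The field names are hypothetical, not Primer's actual schema.

```python
# Minimal sketch of output-to-source traceability with a human-in-the-loop
# gate. The structure is illustrative, not Primer's actual schema.
from dataclasses import dataclass, field

@dataclass
class Citation:
    document_id: str   # identifier of the source document
    excerpt: str       # the passage that supports the claim

@dataclass
class GeneratedClaim:
    text: str
    citations: list[Citation] = field(default_factory=list)
    human_validated: bool = False  # flipped only after analyst review

def releasable(claims: list[GeneratedClaim]) -> list[GeneratedClaim]:
    """Pass through only claims that have source links and human sign-off."""
    return [c for c in claims if c.citations and c.human_validated]
```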

Trust was also a big issue for everyone on the panel. The primary concern was hallucinations, or inaccurate AI outputs. Natalie Buda Smith, Artificial Intelligence & Digital Strategy Director at the Library of Congress, put it perfectly: "The hallucination is now what the bug used to be." No single output is guaranteed to contain a glitch, but at this point in AI's development you should treat hallucinations as a likelihood, not a remote possibility.

Matt Macnak shared that Primer mitigates hallucinations through Retrieval-Augmented Generation (RAG), which grounds and verifies generated statements against external knowledge sources. Primer's RAG solutions decrease hallucinations and bring more trust to the process, which is critical in agencies where immediacy and accuracy are top concerns.
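Conceptually, a RAG pipeline retrieves relevant passages first and then constrains generation to them. The sketch below shows the shape of that flow; search_index and llm are placeholders for whatever vector store and model an agency runs, not a specific Primer API.

```python
# Minimal RAG sketch: retrieve supporting passages, then generate an answer
# constrained to cite them. search_index and llm are placeholders.
def answer_with_sources(question: str, search_index, llm, k: int = 3) -> dict:
    # 1. Retrieval: pull the k passages most relevant to the question.
    passages = search_index.search(question, top_k=k)
    context = "\n\n".join(f"[{i}] {p.text}" for i, p in enumerate(passages))
    # 2. Generation: instruct the model to answer only from the passages
    #    and to tag each statement with the passage it came from.
    prompt = (
        "Answer using ONLY the numbered passages below, citing each "
        f"statement like [0].\n\nPassages:\n{context}\n\nQuestion: {question}"
    )
    answer = llm.generate(prompt)
    # 3. Traceability: return the sources so a reviewer can verify claims.
    return {"answer": answer, "sources": [p.doc_id for p in passages]}
```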

He also stressed that, in terms of governance, agencies should have a plan for how they will address and mitigate hallucinations, because hallucinations will happen when you use LLMs. The rise of adversarial AI, such as data poisoning, only compounds the issue. Building your response to hallucinations and inaccurate outputs into your governance framework is essential.

Use cases that illustrate the breadth of how agencies are using AI. 

From simplifying expense reporting to using synthetic data so hospitals and universities can share information without sacrificing privacy, AI is being used in the public sector to solve specific, everyday problems. Success comes from aligning AI tools with mission needs rather than fitting needs to tools.

Here are a few of the interesting and diverse ways AI is enhancing the strategies at public institutions:

  • Library of Congress. AI is more than just a technology; it's a cultural change. The Library manages close to 200 PB of data and uses AI for digital accessibility, for example by creating metadata for use in assistive devices. It is helping the Library be more effective in its mission and do more with its limited staff.
  • City of San Jose, California. Albert Gehami, City AI & Privacy Officer, says, "All the talk around AI that we have, it's really still about people. We set up our AI framework and built up responsible AI governance guardrails to make sure our AI benefits the public good." The City co-created a 10-week AI training program with San Jose State University to teach employees how to use AI for grant writing, memo writing, digesting large volumes of public comment, and drawing insights from data. The program brought people from zero AI knowledge to proficiency, yielding a 10%-20% efficiency gain across the employees trained. One employee who knew nothing about AI before the program is now training clerks how to process large amounts of data.
  • The State of North Carolina. I-Sah Hsieh, Deputy Secretary for AI and Policy at the North Carolina Department of Information Technology, has been using AI for a while for forecasting and building fraud models. But he also described an interesting use of multimodal AI in the state's environmental quality division: managing marine fisheries more effectively. Biologists need to know not just the population of fish but also their ages, and fish ear bones (otoliths) are like trees: count the rings and you can tell the age. At the usual 1-2 minutes per sample, examining their 9,000 samples would have taken roughly 150-300 hours of manual work; the multimodal imaging model cataloged all 9,000 in two minutes. (A sketch of the ring-counting idea follows this list.)
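To give a flavor of how ring counting might be automated, here is a heavily simplified sketch that counts bright bands along a ray from the otolith's core. It assumes a clean grayscale image and a known center, which real samples rarely offer; the state's actual pipeline was not described in that detail.

```python
# Heavily simplified sketch of otolith ring counting -- NOT North Carolina's
# actual pipeline. Assumes a grayscale image where annual growth rings appear
# as alternating light/dark bands radiating from a known core point.
import numpy as np
from PIL import Image
from scipy.signal import find_peaks

def estimate_age(image_path: str, center: tuple[int, int]) -> int:
    """Count intensity peaks along a horizontal ray from the core outward."""
    img = np.asarray(Image.open(image_path).convert("L"), dtype=float)
    cy, cx = center
    profile = img[cy, cx:]  # pixel intensities along the ray
    smooth = np.convolve(profile, np.ones(5) / 5, mode="valid")  # denoise
    peaks, _ = find_peaks(smooth, prominence=10)  # each bright band = one ring
    return len(peaks)

# Hypothetical usage on one sample image:
# age = estimate_age("otolith_0001.png", center=(512, 512))
```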

Six recommendations for public sector organizations.

Together, Matt Macnak and the panel distilled their lessons learned in the AI space into six recommendations:

  • Focus on real, everyday problems. Don’t fit your problem to the technology, but fit your technology to the problem. 
  • Encourage experimentation and exploration. Create safe sandboxes for your people to play in. 
  • Empower staff through hands-on learning. Give your people the benefit of training. 
  • Build cross-agency collaboration networks. Share your discoveries and lessons learned. 
  • Bake governance and cybersecurity into the AI lifecycle. Create a formal plan for how you'll address AI-aided decision-making, hallucinations, and other aspects of AI usage.
  • Implement a Zero Trust framework with AI. Trust nothing, verify everything.

As a leader in decision-support AI, Primer understands both AI and the public sector, from traditional uses to some of the most sensitive and breakthrough applications. Matt's team is happy to discuss the different modalities and help you tailor a solution to the specific challenges your agency faces every day. Contact us at primer.ai.