What Exactly Is AI’s Role in Disinformation?

“Disinformation is one of the most significant challenges facing our country today. How can we fight it? Let’s get to it, with Primer CEO Sean Gourley.”
Jason Stoughton, Host, Pulse of AI

In the second of a two-part interview series (here’s the first), Pulse of AI host Jason Stoughton invited Primer CEO Sean Gourley back to the podcast to discuss one of the most complex, disruptive – even existential – issues of our time: AI-powered disinformation.

In “Understanding, Identifying, and Combating Disinformation,” Stoughton cited Gourley’s deep experience, personal passion, and Primer’s work in this space as giving him a “unique perch” from which to view the full landscape. 

Disinformation can “permeate politics, impact elections, and distort discussions,” Stoughton said in his introduction. AI can make the problem worse, and AI can also help fight it, he added. “So, the question is, how do we fight it?”

Enter Primer CEO Sean Gourley. He started with the concept of truth.

“One of the reasons disinformation is fast becoming an existential threat is that Western democracies are built upon a sense of shared ground truth. If you can’t agree on how the world is, you can’t run a process to make decisions about how the world should be,” Gourley said. 

However, “absolute truth is a fallacy,” Gourley asserted, “since the whole scientific process involves discovering new things we didn’t previously know to be true.”

The manipulation of ideas, Gourley said, is a new, distinct threat. “It’s less important if an issue is true or not, and more important if people believe it to be true, or not.” 

“Instead of trying to assess ‘what is truth,’ a more tractable and solvable problem is determining if there are active campaigns to influence the way people think,” Gourley continued. “That’s very much in line with how we at Primer think about bringing our technology to play in this game.”

AI’s Role: For Better and Worse

“We’re on the cusp of entering a world where machines are able to generate content that is indistinguishable from what humans could generate themselves, and they can do it very cheaply, at a scale that is superhuman,” Gourley said. 

We’ve already seen this with deepfake videos, and we’re now seeing it in the world of language as well.

“If you think it’s hard to determine if a photo has been generated by a machine, or not, it’s a whole lot harder to know if comments on a social media post have been generated by a bot or a human,” said Gourley. 

“Technology exists today that can create hyper-personalized, computer-generated content to sway your opinions about a key issue in the world, or sow confusion for you about a key issue in the world. That is the technological advancement that has me most worried when we think about operating a coherent democracy,” Gourley said.

There’s an extreme asymmetry, Gourley added, between the speed, ease, and low cost of disseminating disinformation at scale, and the time and expense associated with traditional, often manual, efforts to assess and verify information emerging in real time. 

While technology has created this asymmetry, machines can also help bridge the gap. 

We can train machines to act as an early warning system for identifying disinformation campaigns by continuously monitoring the information landscape and automatically assessing novel claims — where they originate, who is making them, and how they are disseminated and amplified. 
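To make the idea concrete, here is a minimal, purely illustrative sketch in Python of the kind of bookkeeping such an early warning system implies: recording where a claim first surfaces, who repeats it, and flagging claims whose mention volume suddenly spikes. This is not a description of Primer’s product, and every name in it (ClaimActivity, EarlyWarningMonitor, spike_threshold) is hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta


@dataclass
class ClaimActivity:
    """Bookkeeping for a single claim: where it started and who repeats it (hypothetical)."""
    claim_id: str
    text: str
    first_seen: datetime
    origin_account: str
    mentions: list[tuple[datetime, str]] = field(default_factory=list)


class EarlyWarningMonitor:
    """Toy monitor: ingest mentions of claims and flag sudden amplification."""

    def __init__(self, spike_threshold: int = 50, window: timedelta = timedelta(hours=1)):
        self.claims: dict[str, ClaimActivity] = {}
        self.spike_threshold = spike_threshold  # mentions per window that trigger an alert
        self.window = window

    def ingest(self, claim_id: str, text: str, account: str, timestamp: datetime) -> None:
        """Record a mention, creating the claim record the first time it appears."""
        if claim_id not in self.claims:
            self.claims[claim_id] = ClaimActivity(
                claim_id=claim_id, text=text, first_seen=timestamp, origin_account=account
            )
        self.claims[claim_id].mentions.append((timestamp, account))

    def flag_spikes(self, now: datetime) -> list[ClaimActivity]:
        """Return claims whose recent mention volume crosses the alert threshold."""
        flagged = []
        for claim in self.claims.values():
            recent = [m for m in claim.mentions if now - m[0] <= self.window]
            if len(recent) >= self.spike_threshold:
                flagged.append(claim)
        return flagged


# Example: a claim that picks up hundreds of mentions within an hour
monitor = EarlyWarningMonitor(spike_threshold=100)
start = datetime(2022, 1, 1, 12, 0)
for i in range(250):
    monitor.ingest(
        claim_id="claim-42",
        text="Example novel claim circulating online",
        account=f"account_{i % 80}",  # a small pool of accounts doing the amplifying
        timestamp=start + timedelta(seconds=10 * i),
    )
for claim in monitor.flag_spikes(now=start + timedelta(minutes=45)):
    print(claim.claim_id, claim.origin_account, len(claim.mentions))
```

A real system would of course layer on natural language processing to cluster paraphrased versions of the same claim and network analysis to spot coordinated amplification; the sketch only captures the monitoring skeleton described above.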

“We’re a long way from being able to actually detect disinformation campaigns early enough to start thinking about defense strategies,” according to Gourley. The critical first step, he believes, is early detection. “We need to detect the emergence of disinformation campaigns at a time when we can actually react and respond. This kind of early detection and defense requires a long-term investment in technology and infrastructure. Primer is proud to be working on this.”

“We have to get very serious about investing in technology that can support this mission of early detection and enable us to react in time to be effective.”
Sean Gourley, CEO, Primer

Gourley discussed a number of related issues throughout the interview, including the underestimated risks that disinformation poses to commercial markets.

It’s increasingly clear that we need to pay significantly more attention to protecting essential underpinnings of the global economy, such as financial system stability, corporate reputations, and employee and stakeholder confidence.

Gourley also believes that citizens in democratic societies have a responsibility to maintain “healthy information diets.”

“If we’re successful with that, it’s going to be a lot harder for disinformation attacks to achieve their objectives,” Gourley concluded. 

Listen to the full Pulse of AI podcast here: Understanding, Identifying, and Combating AI-Powered Disinformation

For more information about Primer, its experience with disinformation, and to access product demos, contact Primer here.