• An emerging narrative focusing on harmful chemicals found in food packaging was threatening the brand reputation of one of the world’s largest fast food operators.

• The Yonder by Primer project revealed the influential factions pushing the narrative, and the potential impact on the brand.

• The brand responded by announcing a multi-million dollar policy change, avoiding reputation damage.

Primer technology empowers smart decisions for crisis communications

One of the world’s largest fast food chains faced a serious challenge in 2020. A brand-damaging media narrative was emerging on social media around the use of PFAS, a class of harmful fluorinated “forever chemicals” often found in food packaging, nonstick cookware, and bottled water. To decide how to respond, the company’s internal communications team needed media intelligence on the broader narrative and the influential online groups driving it.

The Yonder by Primer solution

The Yonder product identifies which groups are engaging with an emerging narrative and then surfaces historical links between those groups – the key differentiator. In the PFAS case, the fast food chain needed to know whether the troublesome narrative came from a random collection of people online or from a group with a history of pushing narratives that could damage the brand. Before the PFAS narrative ever went viral, the Yonder product predicted that, unless the company took action to address the issue, the brand would become the main target that online groups organized against.

The Yonder product was able to quickly answer both real-time and historical questions:

  • Who’s engaging with the narrative?
  • What links does the peripheral group have to other networks? 
  • Do those other networks have factions that are historically successful in introducing new and potentially damaging narratives to the mainstream? 

Yonder then analyzes the historical behavior of how the group distributes content. In the PFAS case, the product delivered insights on: 

  • The origins of the narrative
  • The trajectory of post volume
  • Involvement by influential groups online

Results

The analysis resulted in a recommendation that potentially saved millions in reputation damage. The fast food giant announced a U.S. $6.4M policy change, pledging to stop the use of PFAS in its food packaging globally. The brand quickly aligned its teams around a proactive policy change decision, saving months of time-consuming, back-and-forth debate on the potential impact of the narrative on the brand and how to respond, if at all. This decision thwarted a growing reputation crisis, built trust among consumers, and was recognized by activist groups. 

Brand monitoring 

Primer’s brand and reputation management solution gives marketing and PR teams a real-time view of their brand in the global marketplace. With AI-powered analytics capabilities, Primer separates signal from noise to surface actionable insights that help teams understand their market, identify risks, plan marketing activities, and protect their brand’s reputation. Yonder discovers the hidden groups who control and amplify online narratives. The product analyzes the historical influence of these groups to predict how they will impact narratives in the future. 

Learn more about this Primer solution. Better yet, contact Primer and let’s discuss how our NLP technology can help protect your brand and keep your organization ahead of threats.

“As everyone is aware, misinformation and disinformation are being sown by many of our competitors, and the problem is only growing. We have to be able to see that in real time. But we also have to be able to counter with all elements of statecraft.” – General Richard D. Clarke, Commander of U.S. Special Operations Command, April 2022 

Primer has been deploying core Natural Language Processing (NLP) infrastructure and Engines to help people detect, understand, and respond to disinformation campaigns. This has been a critical aspect of Primer’s work with the U.S. Air Force and U.S. Special Operations Command (SOCOM) and drives us to bring the best Artificial Intelligence (AI) tools to operators who make mission-critical decisions. We are committed to accelerating our work in information operations and today we are excited to announce our acquisition of Yonder, a pioneering company in disinformation analysis. 

Disinformation is changing the course of the world

From the war in Ukraine to market manipulation of cryptocurrency, disinformation changes the course of conflict, impacts elections, degrades brands, and distorts discussions. These narratives develop across the Internet at a pace and scale that humans simply can’t detect or keep up with. A single narrative, even if it’s false or misleading, may impact an organization’s competitive or strategic position in the world – seemingly overnight. Early intelligence about emerging narratives is critical, so organizations can be proactive and avoid an expensive and exhausting game of whack-a-mole. 

But detecting and mitigating the impact of influence operations is hard. Organizations have a small window of time to address narratives before they go viral. Getting in front of these messages is essential, because they can have lasting effects on perceptions, beliefs, and behavior. We saw this play out in the war in Ukraine when false messages circulated claiming the U.S. was operating a chemical weapons facility in Ukraine. By detecting and proactively refuting these claims, the U.S. government got in front of the narrative before opinions were formed and solidified. 

Yet even if an organization detects the emerging narrative in time, understanding the risk is often less straightforward. Questions need to be answered. Which groups started or are promoting the narrative, what is their agenda, who are they connected with, how influential are they, what will they do next … is this a momentary blip or something more organized and persistent?

These are exactly the types of questions that Yonder helps customers answer – the who, what, where, why, and how of a disinformation campaign. Fortune 1000 companies, including the world’s largest retailer, a fast food brand, and a telecommunications company, use Yonder to proactively monitor and take action to avoid situations that could harm their brands and customers. Yonder provides contextual intelligence about the narrative and the factions promulgating or amplifying it. Users can configure their settings in an easy-to-use User Interface (UI) and receive alerts via email if something in their world is changing. Yonder is able to understand evolving narratives and, importantly, does so across multiple languages.

A robust and scalable NLP information operations suite 

Detecting information manipulation operations is mission-critical for the U.S. Department of Defense, the Intelligence Community, and strategic international partner organizations. Identifying influence campaigns is increasingly challenging because of advances in synthetic text generation: machine-generated text now lacks the predictable “tells,” making it largely impossible for most tools to detect. But this is Primer’s strength. Primer NLP Engines automatically detect content that has been algorithmically amplified by bots or even generated outright by advanced AI programs. 

Recognizing our customers’ diverse needs for understanding fast-breaking events in real time, we launched Primer Command. With Command, organizations can easily monitor and cut through noisy and high-volume social and news media, so they can understand risk and take action. Our advanced NLP algorithms work around the clock, at human-level accuracy and machine scale and speed, to automatically detect and display the origin and source of disputed information – all accessible within a single and intuitive interface. 

By pairing Yonder’s contextual narrative intelligence capabilities with Primer Command and our state-of-the-art NLP/ML models, we can provide our customers with advanced early warning about emerging narratives and agenda-based online networks associated with propaganda. Our combined capabilities also allow us to draw out the factions pushing disinformation on social media for our customers. 

The need to track early signals has become increasingly critical as influence operations have become a key element of modern warfare. Spreading disinformation has long been part of the Russian military arsenal, and Russia used the tactic before invading Ukraine, portraying Ukraine as an aggressor preparing to attack Russia-backed forces on the eastern border. Russia also promoted a narrative that the U.S. was abandoning Ukraine, drawing comparisons to the U.S. withdrawal from Afghanistan. 

What the war in Ukraine has shown is that it is no longer enough for the U.S. military to track intelligence about adversaries’ actions. The same caliber of signals is needed today about competitor influence operations. Having these signals at scale is exactly what USSOCOM Commander General Clarke called for in his SOFIC keynote in May 2022. He suggested that the U.S. military needs sentiment analysis capabilities similar to those major companies use to track influence operations against their brands, and that the military requires alerts warning about information manipulation campaigns so it can respond to them. 

Creating a “monitor” by combining brand terms, sources, exclusionary terms, and geofencing in Primer Command.

In Primer Command, users can get a quick snapshot of the sentiment around their brands.

What’s next

We believe that government and civil society organizations need to have the tools to be successful in their missions. Analysts and operators need early warning capabilities and contextual information from a multitude of sources to help them inform countermeasures and make high-stakes decisions. Every Fortune 1000 company should have the capabilities in place such that when subjected to an information attack, they will be alerted and have the ability to respond quickly to reduce the impact. 

Effective today, Yonder customers can expect the same great insights and services they are used to, boosted by Primer’s resources, infrastructure, and expertise. They will also have access to Primer’s world-class pre-trained NLP models, including more advanced bot detection algorithms, synthetic text detection capabilities, and claim detection and extraction algorithms, to further enhance their ability to identify and understand emerging risks. 

Primer customers who are using Command, our NLP infrastructure, and our suite of pre-trained models to track and understand fast-moving crisis situations will have the ability to integrate Yonder into their workflows. They can get a high-resolution picture of the disinformation space they are operating within, easily understand what information they should trust, and identify what is likely to be an information operations campaign.

Primer offers a warm welcome to the Yonder team. These are the people that have defined and shaped the disinformation detection space and we are thrilled to work with them to accelerate our work on combating this growing and critical challenge.

For more information about Primer and how to request a Yonder demo, contact us here.

Imagine logging into your work computer one Wednesday morning and seeing untrue social media posts claiming that the company you work for is a fraud. Simultaneously, you and your colleagues receive a report from an unknown source, presented as comprehensive research, warning investors away from the company because it is a fraud. Several weeks pass before it is discovered that the company that published the report is a shell with no discernible employees, operating from an unknown address on the other side of the world. But why would they write this report? Was the entire company created just to spread false information about your employer? 


Unfortunately, the story above is not made up. It is also becoming less of an anomaly, especially in the crypto industry. Spreading disinformation in the crypto industry is prevalent and persistent, and it often intermingles with real investment concerns. The promulgation of disinformation seeded with fear, uncertainty, and doubt (FUD) is intended to confuse investors and potential investors. Questions around hot-button issues can be raised intentionally to elicit FUD in an effort to affect the associated token’s price and popularity. The concept has become so pervasive that crypto sector social media users now use “FUD” as a label to call attention to any post that portrays a crypto project negatively.

AI/Machine Learning tools can help detect disinformation campaigns

New advancements in AI/Machine Learning, specifically Natural Language Processing (NLP), can help detect disinformation and synthetic text as well as partition the claim and the counterclaim of a disinformation campaign. This allows crypto projects to quickly see what is being said on each side of a dispute. 

With Command, crypto companies can see the disputed information flagged for each report in the feeds column. They can also get perspective on the amount of FUD they are facing compared to others in the space. Additionally, Command displays FUD trends over time and categorizes the organizations and people discussed in the posts, which helps in investigating the targets of the posts and who is behind the disinformation campaign.

How pervasive is FUD?

FUD around crypto projects tends to focus on what governments will do about them. This has largely stemmed from China’s decision to ban crypto transactions and mining, and the FUD gets recirculated whenever China reaffirms its decision or cracks down on underground mining, citing concerns about energy use. A recent spike in FUD claims has been driven by intensifying scrutiny of blockchain assets by the Securities and Exchange Commission and other U.S. regulators.


Disinformation peddlers, in the form of bots or paid influencers, tend to pile on top of these fears with statements like those in the image below. This social media influencer is known by many in the crypto sector for consistently posting negative information about Tether and Bitcoin. He used the press release to support his campaign against both projects. Notably, the statements referenced in the post never mentioned Bitcoin or Tether; they focused on the impact mass adoption of stablecoins would have on traditional financial markets.

Disinformation in the crypto sector tends to skyrocket with any downturn in the token price. Take Ethereum (token: ETH) as an example. The first chart below shows ETH price in December 2021. The second chart shows a spike in FUD statements at the end of December when the price of ETH had its most severe decline.

A basic Twitter search for the term “FUD” together with any of the top 20 crypto companies over the month of December returned 254 hits; the same search on Reddit returned 71. While these numbers might not seem alarming, they only scratch the surface: when social media users post FUD they don’t usually label it as such, so this search mostly captures other users calling out FUD in someone else’s post. It also doesn’t cover discussions in the threads beneath posts.
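The search described above amounts to counting posts that mention “FUD” alongside a tracked project name. A minimal sketch of that counting logic, with an invented post list and project set (not Primer’s actual pipeline or the real top-20 list):

```python
# Hypothetical FUD mention counter. The project names, sample posts, and
# matching rules are illustrative assumptions, not the actual search used
# in the analysis above.

TRACKED_PROJECTS = {"bitcoin", "ethereum", "cardano", "ripple", "dogecoin"}

def count_fud_mentions(posts):
    """Count posts containing 'FUD' plus at least one tracked project name."""
    hits = 0
    for post in posts:
        words = set(post.lower().split())
        if "fud" in words and words & TRACKED_PROJECTS:
            hits += 1
    return hits

posts = [
    "Pure FUD about ethereum again, ignore it",
    "Bitcoin hit a new high today",
    "Don't spread FUD",
]
print(count_fud_mentions(posts))  # 1
```

As the paragraph notes, simple keyword matching like this undercounts badly: it misses FUD that is never labeled “FUD” and conflates people spreading FUD with people calling it out, which is why contextual analysis is needed.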

FUD contributes to market volatility, brand bullying

One oft-cited reason for not investing in crypto is volatility. In November 2021, for example, Beijing reiterated its stance against Bitcoin miners, which likely contributed to a crypto selloff over the next several days. The price of Bitcoin dipped 2.9%, while Ethereum and Solana dropped 4.6% and 6.7%, respectively, following the statements.

The crypto industry is largely unregulated, and the federal government, for the most part, appears to still be figuring out how it all works. Couple that lack of oversight with the fact that most people interested in this sector shy away from central authorities, and the result is that many victims of FUD do not see legal recourse as an option.

Instead of court battles, they have taken to relying on community advocates to counter the messaging. These are paid and unpaid influencers who are supposed to support the brand and raise awareness about new developments through social media and educational meet-ups. Ripple has its XRPArmy, Chainlink has LINKMarines, and Dogecoin has the DOGEArmy, just to name a few. 

Yet, more often, these advocates are needed to identify and squash false information directed at the brand. Because they are financially invested in the project, they can take it too far, attacking anyone who questions the project and contributing to the very brand degradation they are meant to prevent – putting them directly at odds with their original purpose. 

The XRP Army, for example, is known for its scale and organization. If someone posts FUD about Ripple/XRP, a foot soldier will spot the tweet and rally the troops by tagging the #XRPArmy. Next, a flood of accounts will “brigade” the alleged FUD-monger, posting dozens or hundreds of comments. The target is inundated with thousands of angry notifications, and the attack can last for days.

Originators of FUD campaigns are difficult to identify

FUD campaigns are often hard to trace back to their originator because they use fake companies and bots to amplify their message, and synthetic amplification is relatively cheap: The New York Times found in 2018 that 1,000 high-quality, English-language bots with photos cost a little more than a dollar. See the possible bots intermixed with human posts below, intensifying questions about whether it is time to sell Cardano’s ADA token.

New synthetic text capabilities will make FUD campaigns even harder to trace

Bots are often detectable because they post the same message over and over. A bot’s profile also tends to have “tells,” such as imperfect use of language, a singular theme to its posts, and numerous bot followers. 

But these “tells” will become increasingly difficult to identify with recent advancements in synthetic text generation. Last March, researchers in the U.S. open sourced GPT-Neo, making a next-generation language model available to the public for the first time. With these new language models, a FUD campaign launched to drag down a competitor’s brand or to support a short campaign will be even harder to detect. In fact, last summer a team of disinformation experts demonstrated how effectively these algorithms could be used to mislead and misinform. The results, detailed in this WIRED article, suggest that such models could amplify forms of deception that are especially difficult to spot.

Primer’s NLP Engines can help detect synthetic text and disinformation

Rather than continuing to invest in defensive armies or influencers to detect and flag FUD peddlers, the crypto space could benefit from an automated solution leveraging AI. Primer Command does all of this. Command ingests news and social media feeds and automatically detects, flags, and displays the origin and source of disputed information. This enables users to understand its provenance and evaluate its accuracy. This additional context also provides early warning and a means to constantly monitor the information landscape.

Command can also classify text as likely to have been written by a machine, and it automatically evaluates 20 different user signals to flag automated or inauthentic social media accounts. Signals include how quickly an account is gaining followers, how many accounts it follows, the age of the account, and even the composition of the account name. This information allows crypto companies to evaluate a post’s likely accuracy.

These tools hold more promise than manual efforts because they are impartial, within the parameters of how they are designed. It is an algorithm that identifies the FUD instead of someone with a stake in the project’s success. This is critical to assist in neutralizing the adversary without stoking the flames with numerous negative posts. By automating the identification of FUD campaigns, the project’s community can get back to focusing on brand promotion and education.  

Learn More

For a free trial of Primer Command, or to learn more about Primer’s technology, request a demo or contact sales to discuss your specific needs. You can also stay connected on LinkedIn and Twitter.


Natural language processing (NLP) automates the work that would previously have required hundreds of researchers. These machines read and write – and they’re changing the future of business.

With NLP, we can now analyze information at machine speed, but with human-level precision. As reading and writing is automated through machine learning, companies are freed up to focus on the challenges that are uniquely human.

Read more here