AI Is Needed to Detect Disinformation in the Crypto Sector

Imagine logging into your work computer one Wednesday morning and seeing social media posts falsely claiming that the company you work for is a fraud. At the same time, you and your colleagues receive a report from an unknown source, presented as comprehensive research, warning investors away from the company. Several weeks pass before anyone discovers that the company that published the report is a shell with no discernible employees, operating from an unverifiable address on the other side of the world. But why would they write this report? Was the entire company created just to spread false information about your employer?


Unfortunately, the story above is not made up, and it is becoming less of an anomaly, especially in the crypto industry. Disinformation in this sector is prevalent and persistent, and it often intermingles with legitimate investment concerns. The promulgation of disinformation built on fear, uncertainty, and doubt (FUD) is intended to confuse investors and potential investors. Questions around hot-button issues can be raised deliberately to elicit FUD and affect the associated token’s price and popularity. The concept has become so pervasive that crypto social media users now use “FUD” as shorthand to call attention to any post that portrays a crypto project negatively.

AI/machine learning tools can help detect disinformation campaigns

New advancements in AI/machine learning, specifically Natural Language Processing (NLP), can help detect disinformation and synthetic text, and can separate the claim from the counterclaim in a disinformation campaign. This allows crypto projects to quickly see what is being said on each side of a dispute.
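As an illustration only (this is not Primer’s implementation), an off-the-shelf zero-shot classifier such as facebook/bart-large-mnli, run through the Hugging Face transformers library, can sort posts onto the two sides of a dispute. The project name, posts, and labels below are assumptions invented for the sketch.

```python
# Illustrative sketch only: sort posts into claim vs. counterclaim sides of a
# dispute about a hypothetical project ("ExampleCoin") with zero-shot NLI.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

posts = [
    "ExampleCoin is a fraud and its reserves are fake.",
    "Independent audits have verified ExampleCoin's reserves.",
]
labels = [
    "claims the project is fraudulent",
    "disputes claims that the project is fraudulent",
]

for post in posts:
    result = classifier(post, candidate_labels=labels)
    # result["labels"] is sorted by score, highest first
    print(f"{result['labels'][0]}  <--  {post}")
```

A production system would need far more than two labels and some notion of which claim started the dispute, but the basic move of scoring each post against both sides is the same.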

With Command, crypto companies can see the disputed information flagged for each report in the feeds column. They can also get perspective on how much FUD they are facing compared to others in the space. Additionally, Command displays FUD trends over time and categorizes the organizations and people discussed in the posts. This helps in investigating both the targets of the posts and who is behind the disinformation campaign.

How pervasive is FUD?

FUD around crypto projects tends to focus on what governments will do about crypto. Much of it has stemmed from China’s decision to ban crypto transactions and mining, and it gets recirculated each time China reaffirms the ban or cracks down on underground mining, citing concerns about energy use. More recently, a spike in FUD claims has been driven by the intensifying scrutiny of blockchain assets by the Securities and Exchange Commission and other U.S. regulators.


Disinformation peddlers, in the form of bots or paid influencers, tend to pile on top of these fears with statements like those in the image below. This social media influencer is known by many in the crypto sector for consistently posting negative information about Tether and Bitcoin. He used the press release to support his campaign against both projects. Notably, the statements referenced in the post never mentioned Bitcoin or Tether; they focused on the impact mass adoption of stablecoins would have on traditional financial markets.

Disinformation in the crypto sector tends to skyrocket with any downturn in the token price. Take Ethereum (token: ETH) as an example. The first chart below shows ETH price in December 2021. The second chart shows a spike in FUD statements at the end of December when the price of ETH had its most severe decline.

A basic Twitter search for the term “FUD” alongside any of the top 20 crypto companies over the month of December returns 254 hits; the same search on Reddit returns 71. While these numbers might not seem alarming, they only scratch the surface, because users who post FUD rarely label it as such. The search mostly captures other users calling out FUD in someone else’s post, and it does not cover discussions in reply threads.
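For a rough sense of how such a search can be approximated programmatically, the sketch below counts December 2021 posts that mention “FUD” alongside a tracked project. The project list and the sample feed are hypothetical; the counts above came from actual Twitter and Reddit searches.

```python
# Rough approximation of the manual keyword search described above.
from datetime import date

# Hypothetical subset of the "top 20" project names used in the search.
PROJECTS = ["bitcoin", "ethereum", "tether", "cardano", "solana"]

def is_fud_mention(post_text: str, posted_on: date) -> bool:
    """Flag December 2021 posts that mention 'FUD' alongside a tracked project."""
    text = post_text.lower()
    in_window = date(2021, 12, 1) <= posted_on <= date(2021, 12, 31)
    return in_window and "fud" in text and any(p in text for p in PROJECTS)

# Example usage on a small, made-up feed of (text, date) pairs:
feed = [
    ("Don't fall for the FUD, Ethereum fundamentals are fine.", date(2021, 12, 28)),
    ("Solana network update shipped today.", date(2021, 12, 15)),
]
hits = sum(is_fud_mention(text, day) for text, day in feed)
print(f"FUD co-mentions found: {hits}")
```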

FUD contributes to market volatility, brand bullying

One of the oft-cited reasons for not investing in crypto is volatility. In November 2021, for example, Beijing reiterated its stance against Bitcoin miners, which likely contributed to a crypto selloff over the following days: Bitcoin dipped 2.9%, while Ethereum and Solana dropped 4.6% and 6.7%, respectively, after the statements.

The crypto industry is largely unregulated, and the federal government, for the most part, still appears to be figuring out how it all works. Couple that lack of oversight with the fact that many people drawn to the sector shy away from central authorities, and the result is that many victims of FUD do not see legal recourse as an option.

Instead of court battles, they have taken to relying on community advocates to counter the messaging. These are paid and unpaid influencers who are supposed to support the brand and raise awareness about new developments through social media and educational meet-ups. Ripple has its XRPArmy, Chainlink has LINKMarines, and Dogecoin has the DOGEArmy, just to name a few. 

Yet more often these advocates find themselves focused on identifying and squashing false information directed at the brand. Because they are financially invested in the project, they can take it too far, contributing to brand degradation by attacking anyone who questions the project and putting themselves directly at odds with their original purpose.

The XRP Army, for example, is known for its scale and organization. If someone posts FUD about Ripple/XRP, a foot soldier will spot the tweet and rally the troops by tagging #XRPArmy. A flood of accounts then “brigades” the alleged FUD-monger, posting dozens or hundreds of comments, and the target is buried under thousands of angry notifications for days.

Originators of FUD campaigns are difficult to identify

FUD campaigns are often hard to trace back to their originators because they use fake companies and bots to amplify the message, and synthetic amplification is cheap: The New York Times found in 2018 that 1,000 high-quality, English-language bot accounts with photos cost little more than a dollar. See the possible bots intermixed with human posts below, intensifying questions about whether it is time to sell Cardano’s ADA token.

New synthetic text capabilities will make FUD campaigns even harder to trace

Bots are often detectable because they post the same message over and over. A bot’s profile also tends to have ‘tells’: imperfect use of language, a singular theme across its posts, and numerous bot followers.
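One simple, illustrative way to surface the first of these tells, copy-paste amplification, is to normalize post text and group identical copies by hash. This is a minimal sketch under those assumptions, not Primer’s detection logic.

```python
# Minimal sketch: find clusters of near-identical posts spread across many accounts.
import hashlib
import re
from collections import defaultdict

def normalize(text: str) -> str:
    """Lowercase and strip URLs, handles, and punctuation so trivially edited copies collide."""
    text = re.sub(r"https?://\S+|@\w+", "", text.lower())
    return re.sub(r"[^a-z0-9 ]+", "", text).strip()

def copy_paste_clusters(posts: list[tuple[str, str]], min_accounts: int = 3) -> list[set[str]]:
    """posts is a list of (account, text); return groups of accounts posting the same message."""
    clusters = defaultdict(set)
    for account, text in posts:
        key = hashlib.sha1(normalize(text).encode()).hexdigest()
        clusters[key].add(account)
    return [accounts for accounts in clusters.values() if len(accounts) >= min_accounts]
```

Real amplification networks paraphrase as well as copy, which is exactly why the synthetic-text problem described next makes this kind of exact-match check insufficient on its own.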

But these ‘tells’ are going to become increasingly difficult to identify with recent advancements in synthetic text generation. Last March, researchers in the U.S. open sourced GPT-Neo, making a next-generation language model available to the public for the first time. With these new-generation language models, a FUD campaign launched to drag down a competitor’s brand or to support a short campaign will be even harder to detect. In fact, last summer a team of disinformation experts demonstrated how effectively these algorithms can be used to mislead and misinform; the results, detailed in a WIRED article, suggest that such models could amplify forms of deception that are especially difficult to spot.
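To illustrate how low the barrier has become, the sketch below assumes the Hugging Face transformers library and the public GPT-Neo 125M checkpoint; a few lines are enough to produce fluent, human-sounding text from a prompt (the prompt here is invented for the example).

```python
# Illustrative only: generating synthetic social-media-style text with an
# open-source model takes a handful of lines.
from transformers import pipeline

generator = pipeline("text-generation", model="EleutherAI/gpt-neo-125M")
prompt = "Investors are growing worried about the token because"
result = generator(prompt, max_length=60, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```

Larger checkpoints in the same family produce noticeably more coherent output, and none of it carries the repeated-message or broken-grammar tells described above.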

Primer’s NLP Engines can help detect synthetic text and disinformation

Rather than continuing to invest in defensive armies or influencers to detect and flag FUD peddlers, the crypto space could benefit from an automated solution leveraging AI. Primer Command provides exactly that. Command ingests news and social media feeds and automatically detects, flags, and displays the origin and source of disputed information. This enables users to understand its provenance and evaluate its accuracy. This additional context also provides early warning and a means to constantly monitor the information landscape.

Command can also classify text as likely to have been written by a machine, automatically evaluating 20 different user signals to flag automated or inauthentic social media accounts. Signals include how quickly an account is gaining followers, how many accounts it follows, the age of the account, and even the composition of the account name. This information allows crypto companies to evaluate the accuracy of the posts.
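As a purely hypothetical sketch of how a handful of such signals could be combined into a rough bot-likelihood score (this is not Command’s actual model, and every threshold below is invented for illustration):

```python
# Hypothetical account-signal scoring; thresholds and weights are made up.
from dataclasses import dataclass
from datetime import date

@dataclass
class Account:
    name: str
    created: date
    followers: int
    following: int
    followers_gained_last_week: int

def bot_likelihood(acct: Account, today: date) -> float:
    """Combine a few account signals into a crude 0-1 score (illustrative only)."""
    score = 0.0
    if (today - acct.created).days < 30:                    # very new account
        score += 0.3
    if acct.followers_gained_last_week > max(50, acct.followers // 2):
        score += 0.2                                        # suspiciously fast follower growth
    if acct.following > 5 * max(acct.followers, 1):
        score += 0.2                                        # follows far more accounts than follow back
    if sum(ch.isdigit() for ch in acct.name) >= 5:
        score += 0.3                                        # auto-generated-looking handle
    return round(min(score, 1.0), 2)

# Example usage with a made-up account:
suspect = Account("crypto_truth_83749201", date(2022, 1, 10), 12, 1900, 9)
print(bot_likelihood(suspect, today=date(2022, 1, 31)))    # -> 0.8
```

A real system would weigh many more signals, learn the weights from labeled data, and score posts as well as accounts.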

These tools hold more promise than manual efforts because, within the parameters of their design, they are impartial: an algorithm identifies the FUD rather than someone with a stake in the project’s success. That impartiality is critical to neutralizing the adversary without stoking the flames with a barrage of negative posts. By automating the identification of FUD campaigns, the project’s community can get back to focusing on brand promotion and education.

Learn More

For a free trial of Primer Command, or to learn more about Primer’s technology, request a demo or contact sales to discuss your specific needs. You can also stay connected on LinkedIn and Twitter.
