An overview of the Climate Finance Tracker, an initiative launched with OneEarth. Each circle or node on the map represents one private company in the U.S. Users can sort by themes, sectors, investors, and more.

The emerging threat of climate change to our national security

As early as 2008, the U.S. Department of Defense warned of increased risk to national security if climate change was not urgently addressed by the federal government.

“We judge global climate change will have wide-ranging implications for U.S. national security interests over the next 20 years,” wrote the National Intelligence Council during President George W. Bush’s tenure. The political will to address climate change as a national security issue is slowly growing. Recently the U.S. Army released its first-ever climate strategy, outlining its goals to cut emissions in half by 2030, produce electric combat vehicles, and advance officer training for a warmer climate.

At Primer, we build AI technology that helps subject matter experts and decision-makers identify, understand, and act on emerging threats. We believe the intersection of climate change and national security must be addressed proactively.

“Climate change will affect the Department of Defense’s ability to defend the nation and poses immediate risks to U.S. national security.”

-U.S. Department of Defense, 2014 Climate Change Adaptation Roadmap

“Increasing migration and urbanization of populations are also further straining the capacities of governments around the world and are likely to result in further fracturing of societies, potentially creating breeding grounds for radicalization.”

-U.S. National Intelligence Strategy, 2019

Our partnership with OneEarth.org

On the cusp of Climate Week in New York starting Sept. 19, Primer is proud to announce our partnership with OneEarth.org and the public release of the Climate Finance Tracker that we worked on together this year. Leveraging Primer’s machine learning technology on publicly available data, the tracker “follows the money,” surfacing, in one unified view, where U.S. private funding for combating climate change is going. In short, following the money helps everyone collaborate more effectively to tackle the climate crisis.

For the first time, the Climate Finance Tracker is live and open source, using AI to accelerate intelligence and decision cycles and, ultimately, improve outcomes on a major global challenge.

The Climate Finance Tracker allows anyone to search the funding landscape of all private companies and nonprofits in the U.S. that are addressing climate-related topics. By surfacing topics from organization and project descriptions provided by Crunchbase and Candid, the tracker allows us to better understand the shape of the current response, and with that, its potential gaps. This understanding means funders can better direct their resources, ensuring the highest possible impact of their investment.

The value of following the funding

Eric Berlow, a leading social impact data scientist and researcher, led the project with his nonprofit Vibrant Data Labs and OneEarth.org. Berlow explained how using Primer’s AI technology allows researchers to find the answer to questions they may not even know to ask. 

“We searched for companies working on known climate-related topics, and then the machine pulled out of their language that some also mention national security-related topics,” Berlow explained. “That’s novel. I wouldn’t have even known to look… And the machine found it.”

Text-based machine learning doesn’t just uncover answers – it also helps surface questions researchers may not even know to ask.

A snapshot of 32 organizations tagged with ‘national security,’ ‘unrest, conflicts and war,’ or ‘peace and security.’ Explore the full Climate Finance Tracker here.

“We know climate intersects with national security, and now we can easily see about 25 private investors funding companies in that space,” Berlow explained. “It’s only 6 companies out of 6,000. That’s an important gap. Let’s talk to them and see what’s going on.”

Climate Week begins

With this week’s launch of the Climate Finance Tracker, alongside Climate Week in New York, we expect to see increased conversation around adopting machine learning solutions to help solve global problems such as the climate crisis and make us more resilient to emerging threats.

At Primer, we believe that when we empower the ecosystem with better information, intelligence, and decision making, we can solve truly complex problems together.

See the power of text-based AI and NLP. 

Request a demo of Primer’s NLP technology. 

“As everyone is aware, misinformation and disinformation are being sown by many of our competitors, and the problem is only growing. We have to be able to see that in real time. But we also have to be able to counter with all elements of statecraft.” – General Richard D. Clarke, Commander of U.S. Special Operations Command, April 2022 

Primer has been deploying core Natural Language Processing (NLP) infrastructure and Engines to help people detect, understand, and respond to disinformation campaigns. This has been a critical aspect of Primer’s work with the U.S. Air Force and U.S. Special Operations Command (SOCOM) and drives us to bring the best Artificial Intelligence (AI) tools to operators who make mission-critical decisions. We are committed to accelerating our work in information operations and today we are excited to announce our acquisition of Yonder, a pioneering company in disinformation analysis. 

Disinformation is changing the course of the world

From the war in Ukraine to market manipulation of cryptocurrency, disinformation changes the course of conflict, impacts elections, degrades brands, and distorts discussions. These narratives develop across the Internet at a pace and scale that humans simply can’t detect or keep up with. A single narrative, even if it’s false or misleading, may impact an organization’s competitive or strategic position in the world – seemingly overnight. Early intelligence about emerging narratives is critical, so organizations can be proactive and avoid an expensive and exhausting game of whack-a-mole. 

But detecting and mitigating the impact of influence operations is hard. Organizations have a small window of time to address narratives before they go viral. Getting in front of these messages is essential, because they can have lasting effects on perceptions, beliefs, and behavior. We saw this play out in the war in Ukraine, when false messages circulated claiming the U.S. was operating a chemical weapons facility in Ukraine. By detecting and proactively refuting these claims, the U.S. government got in front of the narrative before opinions were formed and solidified.

Yet even if an organization detects the emerging narrative in time, understanding the risk is often less straightforward. Questions need to be answered. Which groups started or are promoting the narrative, what is their agenda, who are they connected with, how influential are they, what will they do next … is this a momentary blip or something more organized and persistent?

These are exactly the types of questions that Yonder helps customers answer – the who, what, where, why, and how of a disinformation campaign. Fortune 1000 companies, including the world’s largest retailer, a fast-food brand, and a telecom company, use Yonder to proactively monitor and take action to avoid situations that could harm their brands and customers. Yonder provides contextual intelligence about a narrative and the factions promulgating or amplifying it. Users can configure their settings in an easy-to-use user interface (UI) and receive email alerts if something in their world is changing. Yonder understands evolving narratives and, importantly, does so across multiple languages.

A robust and scalable NLP information operations suite 

Detecting information manipulation operations is mission-critical for the U.S. Department of Defense, Intelligence Community, and strategic international partner organizations. Identifying influence campaigns is increasingly challenging because of advances in synthetic text generation: synthetic text has become largely impossible for most tools to detect because it lacks the predictable “tells” that older detectors rely on. But this is Primer’s strength. Our Primer NLP Engines automatically detect content that has been algorithmically amplified by bots or even automatically generated by advanced AI programs.
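
Primer has not published its detection methods here, but one well-known public approach gives the flavor of the problem: machine-generated text is often unusually predictable under a reference language model, so low perplexity can serve as a weak signal. Below is a toy sketch of that idea using GPT-2 as the reference model; this is an illustrative stand-in, not Primer's Engines.

```python
# Toy illustration of one public approach to synthetic-text detection:
# score how predictable a passage is under a reference language model.
# Machine-generated text often has unusually low perplexity. This is NOT
# Primer's method, which is not described in this post.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Passing labels makes the model return the mean cross-entropy loss.
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

# Lower perplexity = more "model-like" text; a real system would tune a
# threshold on labeled data and combine many such signals.
print(perplexity("The quick brown fox jumps over the lazy dog."))
```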

Recognizing our customers’ diverse needs for understanding fast-breaking events in real time, we launched Primer Command. With Command, organizations can easily monitor and cut through noisy and high-volume social and news media, so they can understand risk and take action. Our advanced NLP algorithms work around the clock, at human-level accuracy and machine scale and speed, to automatically detect and display the origin and source of disputed information – all accessible within a single and intuitive interface. 

By pairing Yonder’s contextual narrative intelligence capabilities with Primer Command and our state-of-the-art NLP/ML models, we can provide our customers with advanced early warning about emerging narratives and agenda-based online networks associated with propaganda. Our combined capabilities also allow us to draw out the factions pushing disinformation on social media for our customers. 

The need to track early signals has become increasingly critical as influence operations have become a key element of modern warfare. Russia used the tactic before invading Ukraine, portraying Ukraine as an aggressor preparing to attack Russia-backed forces on the eastern border. Russia also promoted a narrative that the US was abandoning Ukraine, drawing comparisons to the US withdrawal from Afghanistan. Spreading disinformation has long been part of the Russian military arsenal.

What the war in Ukraine has shown is that it is no longer enough for the U.S. military to track intelligence about adversaries’ actions. The same caliber of signals is needed today about competitor influence operations. Having these signals at scale is exactly what USSOCOM Commander General Clarke called for in his SOFIC keynote in May 2022. He suggested that the U.S. military needs sentiment analysis capabilities similar to those major companies use to track influence operations against their brands, and that it must be able to generate alerts warning about information manipulation campaigns so it can respond.

Creating a “monitor” by combining brand terms, sources, exclusionary terms, and geofencing in Primer Command.

In Primer Command, users can get a quick snapshot of the sentiment around their brands.

What’s next

We believe that government and civil society organizations need the tools to be successful in their missions. Analysts and operators need early warning capabilities and contextual information from a multitude of sources to help them inform countermeasures and make high-stakes decisions. Every Fortune 1000 company should have capabilities in place so that, when subjected to an information attack, it is alerted and able to respond quickly to reduce the impact.

Effective today, Yonder customers can expect the same great insights and services they are used to, boosted by Primer’s resources, infrastructure, and expertise. They will also have access to Primer’s world-class pre-trained NLP models, including more advanced bot detection algorithms, synthetic text detection capabilities, and claim detection and extraction algorithms, to further enhance their ability to identify and understand emerging risks. 

Primer customers who are using Command, our NLP infrastructure, and our suite of pre-trained models to track and understand fast-moving crisis situations will have the ability to integrate Yonder into their workflows. They can get a high-resolution picture of the disinformation space they are operating within, easily understand what information they should trust, and identify what is likely to be an information operations campaign.

Primer offers a warm welcome to the Yonder team. These are the people who have defined and shaped the disinformation detection space, and we are thrilled to work with them to accelerate our work on combating this growing and critical challenge.

For more information about Primer and how to request a Yonder demo, contact us here.

Primer partnered with the nonprofit Vibrant Data Labs to create the first-ever comprehensive map of climate change funding. Our interview with Eric Berlow of Vibrant Data Labs shows how ‘following the money’ reveals both the biggest opportunities and the biggest threats in turning climate change around.

It’s no secret that the future of our planet is in trouble. The recent IPCC report concluded that countries aren’t doing nearly enough to protect against the significant disasters to come.

But in order to solve a big problem like climate change, and to understand if our current response is working, we need to see where private funding in the sector is going. What issues are getting money, and which organizations are getting that funding? What other trends might emerge? 

Applying NLP to climate change

That’s where natural language processing comes in. Using Primer’s NLP technology, we partnered with Eric Berlow at Vibrant Data Labs to produce the first-ever climate change funding map. Primer’s Engines analyzed data on over 12,000 companies and nonprofits funded in the last five years. Using organization descriptions provided by Crunchbase and grant applications provided by Candid, and working in partnership with the Cisco Foundation, we generated one of the first data-driven hierarchies of climate topics to better understand our current response, alongside any potential gaps. Using this topic hierarchy, we can see what projects organizations are working on – and where. That helps us see what’s missing in the bigger picture. And to solve a problem like climate change, a big-picture view is what’s needed.
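
The exact pipeline isn't described in this post, but a minimal sketch of the general technique, embedding organization descriptions and clustering them into a nested hierarchy of topics, might look like the following. The embedding model and sample descriptions are illustrative assumptions, not Primer's production system.

```python
# Minimal sketch: derive a data-driven topic hierarchy from organization
# descriptions by embedding them and clustering hierarchically.
from sentence_transformers import SentenceTransformer
from scipy.cluster.hierarchy import linkage, fcluster

descriptions = [
    "Grid-scale battery storage for renewable energy",
    "Satellite monitoring of deforestation and wildfires",
    "Plant-based protein alternatives to industrial meat",
    "Climate risk analytics for national security planning",
]

# Embed each description into a dense vector.
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(descriptions)

# Agglomerative clustering yields a tree: cutting it at different depths
# gives coarse themes at the top and finer topics underneath.
tree = linkage(embeddings, method="ward")
coarse = fcluster(tree, t=2, criterion="maxclust")  # e.g., 2 broad themes
fine = fcluster(tree, t=4, criterion="maxclust")    # finer-grained topics

for desc, c, f in zip(descriptions, coarse, fine):
    print(f"theme {c} / topic {f}: {desc}")
```

At real scale (12,000+ organizations), the same cut-the-tree-at-multiple-depths idea is what lets a map show broad themes that drill down into specific topics.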

Watch our interview with Eric Berlow on why following the funding is crucial for the climate’s future. 



“The Coronavirus pandemic was like the trailer to the climate change movie, where if the have-nots are really impacted, everybody gets affected by it. Climate change is one of those problems that is an all-hands-on-deck problem. You cannot just solve a little piece of it.” – Eric Berlow



Learn more about Primer’s partnership with Vibrant Data Labs here, and read about the technical work behind it here and here.

For more information about Primer and to access product demos, contact Primer here.

The NLP revolution kicked off in 2018 when BERT dropped. In 2019 and 2020, it was one mind-bending breakthrough after another. Generative language models began writing long-form text on demand with human-level fluency. Our benchmarks of machine intelligence — SNLI, SQuAD, GLUE — got clobbered. Many wondered if we were measuring the wrong things with the wrong data. (Spoiler: yes.) Suddenly, we no longer needed tens of thousands of training examples to get decent performance on an NLP task. We could pull it off with mere hundreds. And for some NLP tasks, no training data was needed at all. Zero-shot was suddenly a thing.
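
To make that last point concrete: a modern model can classify text against labels it was never trained on, with no labeled examples at all. Here is a minimal sketch using the Hugging Face zero-shot pipeline; the model choice is an illustrative assumption, not anything Primer ships.

```python
# Zero-shot classification: no task-specific training data at all.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # NLI model repurposed for zero-shot
)

result = classifier(
    "The central bank raised interest rates by 50 basis points.",
    candidate_labels=["economics", "sports", "climate"],
)
print(result["labels"][0])  # expected: "economics"
```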

Over the past year, NLP has cooled down. Transformer architecture, self-supervised pre-training on internet text — it seems like this paradigm is here to stay. Welcome to the normal science phase of NLP. So, what now?

Time to industrialize

Efficiency, efficiency, efficiency, and more efficiency. The cost of training and serving giant language models is outstripping Moore’s law. So far, the bigger the model, the better the performance. But if you can’t fine-tune these models on fresh data for new NLP tasks, then they are only as good as your prompt engineering. (And trust me, prompt “engineering” is more art than science.) The focus on efficiency is all about industrializing NLP.

“The revolution, in the style of Kazimir Malevich” by John Bohannon using CLIP+VQGAN

The NLP benchmarks crisis is another sign that we’re in the normal science phase. Here’s an open dirty secret: The purpose of today’s NLP benchmarks is to publish papers, not to make progress toward practical NLP. We are getting newer, better benchmarks — BIG-bench, MMLU — but we likely need an entirely different approach. The Dynabench effort is an interesting twist. But until we stop using performance metrics known to be broken — looking at you, ROUGE — industrial NLP will be harder to achieve.
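
To see why ROUGE in particular is broken: it scores n-gram overlap rather than meaning, so a summary that inverts the reference can still score highly. A toy illustration using the open-source rouge-score package, with invented example sentences:

```python
# ROUGE rewards n-gram overlap, not meaning. A summary that flips the
# meaning of the reference still scores well. Toy illustration only.
from rouge_score import rouge_scorer

reference = "the company reported strong growth in revenue this quarter"
bad_summary = "the company reported no growth in revenue this quarter"

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=True)
scores = scorer.score(reference, bad_summary)

# High scores despite the inverted meaning:
print(scores["rouge1"].fmeasure)  # ~0.89
print(scores["rouge2"].fmeasure)  # ~0.75
```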

Normal science is applied science. We are moving from “Wow, look at the impressively fluent garbage text my model generates!” to “Wow, I just solved a million-dollar NLP problem for a customer in 2 sprints!”

NLP is eating ML

But meanwhile, another revolution is brewing on the edges of NLP. The phrase “Software is eating the world” became “ML is eating software.” Now it looks like “NLP is eating ML,” starting with computer vision.

The first nibble was DALL-E. Remember when you first saw those AI-generated avocado armchairs? That was cool. But then along came CLIP+VQGAN. And just last month, GLIDE made it possible to iteratively edit images generated by text prompts with yet more text prompts. This could be the future of Photoshop.

The trend is the merging of cognitive modalities into a single latent space. Today, it’s language and photos. Soon, it will be language and video. Why not joint representation of language, vision, and bodily movement? Imagine generating a dancing robot from a text prompt. That’s coming.

Go beyond English

Bilingual language models lost the throne in 2021. For the first time, a multilingual model beat them all at WMT. That was a big moment that surprised no one. Machine translation has caught up with the multi-task trend seen across the rest of NLP. Learning one task often helps a model do others, because no task in NLP is truly orthogonal to the rest.

The bigger deal is the explosion in non-English NLP. Chinese is on top — no surprise given the massive investment. There are now multiple Chinese language models as big as or bigger than GPT-3, including Yuan and WuDao. But we also have a Korean language model, along with a Korean version of GLUE (benchmark problems notwithstanding). And most exciting of all is an entirely new approach to multilingual NLP called XLS-R, trained on a massive collection of speech recordings in 128 languages. This could become the foundation NLP model for low-resource languages.

Build responsibly

If 2020 was the year that NLP ethics blew up, then 2021 was the year that ethics considerations became not just normal but expected. NeurIPS led the way with ethics guidelines and a call on authors to include a “broader impact” section in their papers. 

But like the rest of the AI industry, NLP is still self-regulating. That will change. The most significant action in AI self-regulation last year was Facebook’s voluntary deletion of its facial recognition data. That move was surely made in anticipation of government regulation and liability. (They also have more than enough surveillance data on their customers to power their targeted advertising.)

Where will the rubber meet the road for NLP regulation? AI companies that work closely with governments will lead the way, by necessity. Here at Primer, we build NLP systems for the US, UK, and Australian governments, helping with tasks as diverse as briefing report generation and disinformation detection. This front-row seat to the NLP revolution is humbling.

One thing is now clear. The machine learning tools that we are building today to process our language will transform our society. An intense research effort is underway to spot potential biases, harms, and mitigations. But every voice of hope and concern for this technology must be heard. Building NLP responsibly is our collective duty.

Read more: The Business Implications of Machines that Read and Write

I have a strange new pandemic ritual. Every Monday, I gather my team of machine learning engineers in a video meeting to kick off the week. But before we get down to business, I use a giant neural network to generate questions that help us go deeper than small talk. We vote on our favorite question and then take turns answering it.

  • What would you do if you knew you could not fail?
  • If you were guaranteed that you would live forever, would you still be afraid of death?
  • What is the number one thing you wish you could change about how people see you?

Those are actual questions the model generated. They indeed lead to deep talk.

We’ve been working remotely at Primer since the earliest days of the pandemic. This ritual is one of our solutions to the problem of feeling disconnected from each other. I can tell you: It really works.

Here’s how I do it. 

First, I create a prompt consisting of numbered questions. For example:

  1. What is the meaning of life?
  2. If you were the last person on Earth, who would you choose to join you?
  3. What animal would you be for a day?

You can find lists of such questions all over the internet, probably inspired originally by the 2015 New York Times article on the “36 questions that lead to love.” I find that a list of at least 20 questions works best.

Then I feed this prompt to a big autoregressive language model. I use GPT-J-6B, the open-source model created by EleutherAI. I just copy-paste the prompt and run the model in Primer’s developer interface. The model’s inference takes about 5 seconds.
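
The post runs the model through Primer's developer interface, but you can reproduce the same exchange with the public GPT-J-6B weights. Here is a minimal sketch using the Hugging Face transformers library, assuming hardware with enough memory for a 6B-parameter model:

```python
# Generate new icebreaker questions by continuing a numbered list.
# Uses the public EleutherAI GPT-J-6B checkpoint (large download).
from transformers import AutoModelForCausalLM, AutoTokenizer

prompt = """1. What is the meaning of life?
2. If you were the last person on Earth, who would you choose to join you?
3. What animal would you be for a day?
4."""

tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=60,        # room for a few more questions
    do_sample=True,           # sample so each run yields fresh questions
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```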

Photo credit: John Bohannon, generated with VQGAN+CLIP with the prompt “A beautiful rainbow in the style of van Gogh.”

What comes back is machine-written text that continues where mine left off:

  4. What thing would you never let anyone see you do?
  5. What color are your dreams?
  6. If you could have one superpower, what would it be?

Yes, these neural networks know how to count. My team uses the numbers to vote for their favorite.

I admit it may seem bizarre to use a machine to help people move beyond small talk and connect. But this is artificial intelligence at its best: Working together with humans to help them do what they do best. 

I generated 365 Deep Talk questions so you can try it yourself.

Natural language processing (NLP) automates the work that would previously have required hundreds of researchers. These machines read and write – and they’re changing the future of business.

With NLP, we can now analyze information at machine speed, but with human-level precision. As reading and writing are automated through machine learning, companies are freed up to focus on the challenges that are uniquely human.

Read more here