8-15-18 17:37 UTC
Building Software for Any Infrastructure

Enterprise software has typically been a challenge to get into the hands of customers, especially when it involves integrating with their own infrastructure. By leveraging cloud-native technologies, Primer has seen implementation time and effort with its customers decrease.

8-03-18 11:00 UTC
Machine-Generated Knowledge Bases

Human-generated knowledge bases like Wikipedia have excellent precision but poor recall. To help the humans, we created a self-updating knowledge base that can describe itself in natural language. We call it Quicksilver.

6-21-18 06:00 UTC
Primer Selected as a WEF 2018 Technology Pioneer

We’re thrilled to be recognized by the World Economic Forum today as a 2018 Technology Pioneer. Primer is one of the 61 companies selected from around the world building technologies that are having a significant impact on both business and society.

5-22-18 22:14 UTC
Russian Natural Language Processing

It has become standard practice in the Natural Language Processing (NLP) community: release a well-optimized model trained on an English corpus, then procedurally apply it to dozens (or even hundreds) of additional languages. These secondary language models are usually trained in a fully unsupervised manner, published on arXiv a few months after the initial English version, and the whole thing makes a big splash in the tech press. But can English-trained models be naively extended to non-English languages, or is some native-level understanding of the language required first?

1-16-18 19:00 UTC
Building seq-to-seq models in TensorFlow (and training them quickly)

Let's get our hands dirty and train a state-of-the-art deep learning model to summarize news articles. We'll introduce TensorFlow, discuss how to set up the training task, and present some tips for implementing a seq-to-seq model using RNNs. Once we have a working model, we'll dive into some insights for how to train the model much more quickly (decreasing time from three days to under half a day).
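
The post walks through the full model; purely as a minimal sketch (not Primer's actual code), an encoder-decoder built with the TensorFlow Keras API might look like the following, where the vocabulary size and layer dimensions are hypothetical placeholders.

    import tensorflow as tf
    from tensorflow.keras import layers

    VOCAB_SIZE = 50000   # hypothetical vocabulary size
    EMBED_DIM = 128      # hypothetical embedding width
    HIDDEN_DIM = 256     # hypothetical RNN state size

    # Encoder: embed the article tokens and run them through an LSTM,
    # keeping only the final hidden and cell states.
    encoder_inputs = layers.Input(shape=(None,), dtype="int32")
    enc_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(encoder_inputs)
    _, state_h, state_c = layers.LSTM(HIDDEN_DIM, return_state=True)(enc_emb)

    # Decoder: generate summary tokens, conditioned on the encoder's final state.
    decoder_inputs = layers.Input(shape=(None,), dtype="int32")
    dec_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM)(decoder_inputs)
    dec_out, _, _ = layers.LSTM(HIDDEN_DIM, return_sequences=True,
                                return_state=True)(dec_emb,
                                                   initial_state=[state_h, state_c])
    logits = layers.Dense(VOCAB_SIZE)(dec_out)

    model = tf.keras.Model([encoder_inputs, decoder_inputs], logits)
    model.compile(optimizer="adam",
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))

Training would feed the article tokens together with the summary shifted right (teacher forcing), using the unshifted summary as the target.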

12-22-17 18:28 UTC
2017: The Year in Color

We analyzed the things journalists described with color in 33 million English-language news articles published in 2017. This was the year of black holes, white supremacists, and pink hats.

12-13-17 19:00 UTC
TLDR: Re-imagining automatic text summarization

We love getting more information: from news, social media, and even blog posts. Given the bottleneck of reading long-form text, wouldn't it be amazing if we could immediately grasp its main ideas?

Algorithmically generating human-level summaries is a challenging task: it requires identifying entities, concepts, and relationships, and converting the learned information into grammatical sentences. In this post, we'll look at how the latest deep learning methods, using recurrent neural networks and attention mechanisms, tackle these tasks to bring you smarter summaries.
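
As a toy illustration of what the attention step computes (not the post's implementation), here is a NumPy sketch of additive attention over a sequence of encoder states; W1, W2, and v stand in for hypothetical learned parameters, and the sizes are made up.

    import numpy as np

    def additive_attention(decoder_state, encoder_states, W1, W2, v):
        # Score each source position against the current decoder state,
        # then take a weighted average of encoder states as the context vector.
        scores = v @ np.tanh(W1 @ encoder_states.T + (W2 @ decoder_state)[:, None])
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()              # softmax over source positions
        context = weights @ encoder_states    # context vector for this decoding step
        return context, weights

    # Toy usage: 6 source positions, hidden size 4, attention size 3.
    rng = np.random.default_rng(0)
    enc = rng.normal(size=(6, 4))
    ctx, w = additive_attention(rng.normal(size=4), enc,
                                rng.normal(size=(3, 4)),
                                rng.normal(size=(3, 4)),
                                rng.normal(size=3))

At each decoding step the weights shift, letting the model focus on whichever part of the article is most relevant to the word it is about to produce.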

11-20-17 20:40 UTC
Chinese Word Vectors

At Primer we deploy natural language processing (NLP) pipelines that need to support many different languages, including English, Russian, and Chinese. One powerful NLP approach is to apply machine learning techniques to raw text by representing words as vectors. In this post, we look at how to encode Chinese words as vectors. But are algorithms developed for English NLP effective on Chinese text? And how can we take advantage of the unique linguistic features of the Chinese language?
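
Since Chinese text has no whitespace between words, a typical pipeline segments it first and then trains the vectors. Purely as a hypothetical sketch (using jieba for segmentation and gensim 4.x word2vec, neither of which the post necessarily uses):

    import jieba
    from gensim.models import Word2Vec

    corpus = ["今天天气很好", "自然语言处理很有趣"]      # toy documents
    sentences = [jieba.lcut(doc) for doc in corpus]    # segment into word lists

    # Train skip-gram word2vec on the segmented text (hypothetical hyperparameters).
    model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)
    vector = model.wv["天气"]                           # 100-dimensional vector for "weather"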

10-25-17 00:14 UTC
Mapping the world's attention

What would a map of the world's attention look like? What if you compared Russian vs. English speakers? We chose the topic of terrorism to test out our first prototype of this visualization. We call it a diff map.

10-24-17 00:00 UTC
Primer: Reducing the Cost of Curiosity

We are Primer, a machine intelligence company based in San Francisco.