AI is on course to transform society. Recently, attention has focused on generative models: neural networks trained on internet-scale text corpora. These models are powerful; some researchers describe them as approaching human-level intelligence.
But can you trust what a generative model generates? If an AI system answers a question—about a real-time situation, about available intelligence, or even about a single document—how can the user verify that the answer is credible? One solution, called grounding, is fast becoming best practice for deploying generative models.
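To make the idea concrete, here is a minimal toy sketch of grounding: the system answers only when it can point to a supporting source passage, and it attaches a citation so the user can check the claim. Everything here is hypothetical (the `grounded_answer` function and the keyword-overlap matching are illustration only); real systems pair a retriever with a generative model.

```python
def grounded_answer(question: str, sources: dict[str, str]) -> str:
    """Answer only from the supplied sources, citing the passage used.

    Toy example: matches by keyword overlap rather than a real retriever.
    """
    keywords = {w.lower().strip("?.,") for w in question.split()}
    best_id, best_overlap = None, 0
    for doc_id, text in sources.items():
        overlap = len(keywords & {w.lower().strip("?.,") for w in text.split()})
        if overlap > best_overlap:
            best_id, best_overlap = doc_id, overlap
    if best_id is None:
        # Grounding means refusing rather than guessing without support.
        return "No supporting source found; declining to answer."
    # The citation lets the user validate the answer against its source.
    return f"{sources[best_id]} [source: {best_id}]"
```

The key property is that every answer either carries a citation or is a refusal, which is what lets a user validate credibility rather than take the model's word for it.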