
Finally, AI can fact-check itself. One chatbot built on a large language model (LLM) can now trace its outputs to the exact original data sources that informed them.
Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.
OLMoTrace identifies the exact pre-training document behind a response — including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called “exact-match search” or “string matching.”
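In principle, the idea is simple: check whether spans of a model's response appear verbatim in its training documents. Here is a toy illustration in Python (the corpus and helper function are invented for this example; the real system works over indexes spanning trillions of tokens):

```python
# Minimal illustration of exact-match search ("string matching"):
# check whether a span of model output appears verbatim in a corpus.
# The documents here are made up for illustration.

corpus = {
    "doc-001": "The Allen Institute for Artificial Intelligence was founded in 2014.",
    "doc-002": "OLMo is a family of open language models.",
}

def find_exact_matches(span: str, documents: dict[str, str]) -> list[str]:
    """Return IDs of documents that contain the span verbatim."""
    return [doc_id for doc_id, text in documents.items() if span in text]

print(find_exact_matches("open language models", corpus))  # ['doc-002']
```

A linear scan like this is hopeless at the scale of real pretraining corpora; making it fast requires precomputed indexes, as described below.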
“We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data,” Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.
“By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them,” he added.
To date, no other chatbot on the market can trace a model’s response back to the specific sources in its training data, making the feature a significant stride for AI visibility and transparency.
Secret Sauce: The Infini-gram Engine
OLMoTrace’s breakthrough is made possible by infini-gram, an academic project and engine developed by Liu for efficiently running queries against the massive text corpora used to train language models.
Available on Hugging Face, infini-gram conducts an exact-match search over indexes compiled from eight major corpora, representing nearly 5 billion documents or 4 trillion tokens.
Infini-gram doesn’t intervene in the generation process; it conducts its referencing after a model has issued its response. This makes it appealing as a plug-and-play add-on for other LLM-based chatbots.
What’s especially compelling is that the engine is queryable via an API that processes queries in just tens of milliseconds. Developers can send HTTP POST requests to the endpoint https://api.infini-gram.io/ with a JSON payload defining the search query and other parameters, and it’ll spit back the matching source texts.
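As a rough sketch, a request might look like the following Python snippet. The endpoint is the one cited above; the payload fields and index name follow the project’s public API documentation, but treat them as illustrative rather than authoritative:

```python
import requests

# Sketch of querying the infini-gram API directly. The payload fields and
# index name follow the project's public API docs; treat them as examples.
payload = {
    "index": "v4_rpj_llama_s2",     # one of the pre-built corpus indexes
    "query_type": "count",          # count verbatim occurrences of the query
    "query": "retrieval-augmented generation",
}

resp = requests.post("https://api.infini-gram.io/", json=payload, timeout=10)
resp.raise_for_status()
print(resp.json())  # a JSON object reporting the match count
```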
Although the API doesn’t have a guaranteed uptime, it’s an exciting proof of concept for making LLM outputs more verifiable and trustworthy at scale. It’s well-tested, Liu said, having served 500 million API queries to date.
OLMoTrace uses infini-gram at its core but applies its own algorithmic spice on top: it enumerates all possible substrings of the model output and makes parallel queries for each, as the sketch below illustrates. It also calls its own version of the API from a dedicated server built for higher scalability.
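A toy version of that span-enumeration step, written as my own illustration rather than Ai2’s code, might look like this:

```python
import concurrent.futures
import requests

API_URL = "https://api.infini-gram.io/"

def count_matches(span: str) -> tuple[str, int]:
    """Ask the corpus index how often a span occurs verbatim (fields illustrative)."""
    resp = requests.post(
        API_URL,
        json={"index": "v4_rpj_llama_s2", "query_type": "count", "query": span},
        timeout=10,
    )
    return span, resp.json().get("count", 0)

def traceable_spans(output: str, min_words: int = 6) -> list[tuple[str, int]]:
    """Enumerate word-level substrings of a model response and query them in parallel."""
    words = output.split()
    spans = [
        " ".join(words[i:i + n])
        for n in range(min_words, len(words) + 1)
        for i in range(len(words) - n + 1)
    ]
    with concurrent.futures.ThreadPoolExecutor(max_workers=8) as pool:
        results = list(pool.map(count_matches, spans))
    return [(span, count) for span, count in results if count > 0]
```

Because the number of substrings grows quadratically with response length, the parallel queries and the index’s tens-of-milliseconds latency are what make this approach practical.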
Why LLM Traceability Has Been Hard
Historically, LLM-based chatbots have struggled to trace outputs to exact, referenceable sources.
The problem arises from the fundamental design of language models. Unlike a database, a model isn’t indexed and easily searchable; it encodes patterns and statistical relationships in its weights, which makes the origins of any given output hard to decompose.
The sheer scale of LLM pretraining data is the culprit, Liu said. “Modern LLMs are trained on trillions of tokens,” he said. “Existing tracing methods are difficult to scale to this level.”
Another problem is the reluctance of chatbots to expose their training data.
“Traceability tools like OLMoTrace do require chatbot providers to expose at least a fraction of their model’s training data,” Liu said. Since training data is a highly guarded business secret in the AI world, this could be a hard needle to move, he added.
String Matching: A Complement to RAG
Over the past year, chatbots such as ChatGPT, DeepSeek, Claude, Microsoft Copilot, Perplexity, and others have been armed with retrieval-augmented generation (RAG) to point to external sources.
Whereas RAG typically retrieves and uses external sources, OLMoTrace adds another layer of verification that looks at the training data of the LLM itself.
This is important since, with RAG, models can still produce unique (or erroneous) claims based on their original training. “There’s really no way to know if the model is really relying on in-context documents versus parameterized knowledge learned from training data,” Liu said.
String matching is complementary to recent improvements like RAG and context size increases. But will Ai2’s OLMoTrace set a precedent for other chatbots to follow suit with training data citations? Time will tell.
“We would love to see more chatbots to support traceability and embrace transparency of data,” Liu said. “This is a gap in many existing chatbot systems, and it is hindering their deployment in more high-stake scenarios.”
Increasing Faith in AI Across Domains
For businesses and developers, better traceability and verifiability for model responses could greatly aid in model debugging.
“OLMoTrace would be particularly useful for businesses, where they often customize models on their domain-specific data and want to make sure the models stay faithful to such data,” Liu said.
For researchers, it could surface useful insights, such as why a model privileges one mathematical theorem over another, Liu added.
On the consumer side, tools like OLMoTrace could increase user confidence in the accuracy of generative AI, a goal that has proved elusive since the technology’s debut. That said, OLMoTrace’s overall use case is quite academic.
“The (Ai2) Playground definitely looks and functions like a tool built by researchers for researchers,” Brent Phillips, producer of the Humanitarian AI Today podcast, told The New Stack. “For staff from humanitarian organizations, I’m looking forward to being able to use it to interface with repositories like the Humanitarian Data Exchange.”
Closing Current Gaps
While OLMoTrace offers unprecedented checks and balances on generative AI, it’s no panacea for LLM risks, such as training data poisoning, model errors or hallucinations.
Furthermore, traceability has its own limits. For instance, although OLMoTrace can provide exact text matches for simple facts, it’s not possible to trace sources for creative generation, such as poems or stories.
Another downside is that OLMoTrace surfaces some sources with 404-error links. This reflects link rot and the historical moment at which web data was collected to build training datasets, Liu said.
The hope, he said, is that by open sourcing infini-gram, the engine can be extended and refined: “By releasing source code and packages, we can allow other people to build indexes on their own training data so they can build for this gap.”