TrustifAI’s MetricContext accepts documents in several formats out of the box. You do not need an adapter layer or a specific framework installed — the library normalizes whatever you pass into its internal representation. This means you can drop TrustifAI into an existing LangChain or LlamaIndex pipeline, or use it with plain Python strings and dicts, without changing how you retrieve documents.
The most common LangChain pattern is to retrieve documents with a vector store retriever and pass them directly into MetricContext. LangChain Document objects carry page_content and metadata, both of which TrustifAI reads automatically.
```python
from trustifai import Trustifai, MetricContext
from langchain_core.documents import Document

# 1. Define your RAG context
context = MetricContext(
    query="What is the capital of India?",
    answer="The capital is New Delhi.",
    documents=[
        Document(
            page_content="New Delhi is the capital of India.",
            metadata={"source": "wiki.txt"},
        )
    ],
)

# 2. Initialize the engine
trust_engine = Trustifai("config_file.yaml")

# 3. Score the response
result = trust_engine.get_trust_score(context)
print(f"Trust Score: {result['score']} | Decision: {result['label']}")

# 4. Visualize the reasoning graph (saves reasoning_graph.html)
graph = trust_engine.build_reasoning_graph(result)
trust_engine.visualize(graph, graph_type="pyvis")
```
TrustifAI understands NodeWithScore objects returned by LlamaIndex query engines and retrievers. Pass the source_nodes list from a query result directly into MetricContext:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from trustifai import Trustifai, MetricContext

# Build your LlamaIndex index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

query = "What is the boiling point of water?"
response = query_engine.query(query)

context = MetricContext(
    query=query,
    answer=str(response),
    documents=response.source_nodes,  # list[NodeWithScore] — passed directly
)

trust_engine = Trustifai("config_file.yaml")
result = trust_engine.get_trust_score(context)
print(f"Score: {result['score']} Label: {result['label']}")
```
You do not need LangChain or LlamaIndex. Plain strings work for the simplest integration:
```python
from trustifai import Trustifai, MetricContext

context = MetricContext(
    query="Who invented the telephone?",
    answer="Alexander Graham Bell invented the telephone in 1876.",
    documents=[
        "Alexander Graham Bell is credited with inventing the telephone in 1876.",
        "Bell received the first patent for the telephone on March 7, 1876.",
    ],
)

trust_engine = Trustifai("config_file.yaml")
result = trust_engine.get_trust_score(context)
print(result)
```
Dicts are supported when your documents carry structured metadata:
```python
context = MetricContext(
    query="What is the speed of light?",
    answer="The speed of light in a vacuum is approximately 299,792 km/s.",
    documents=[
        {
            "page_content": "Light travels at 299,792 kilometres per second in a vacuum.",
            "metadata": {"source": "physics_textbook.pdf", "page": 14},
        },
        {
            "page_content": "The speed of light is a fundamental constant denoted by c.",
            "metadata": {"source": "nist.gov"},
        },
    ],
)
```
TrustifAI routes all LLM and embedding calls through LiteLLM, which means it works with any provider LiteLLM supports. Configure the provider in config_file.yaml and export the corresponding API key:
| Provider | `llm.type` in config | Required env variable |
| --- | --- | --- |
| OpenAI | `openai` | `OPENAI_API_KEY` |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` |
| Google Gemini | `gemini` | `GEMINI_API_KEY` |
| Azure OpenAI | `azure_ai` | `AZURE_API_KEY` |
| Mistral | `mistral` | `MISTRAL_API_KEY` |
| Ollama (local) | `ollama` | — (no key needed) |
| NVIDIA NIM | `nvidia_nim` | `NVIDIA_API_KEY` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` |
| HuggingFace | `huggingface` | `HF_TOKEN` |
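As a rough sketch, a provider entry in `config_file.yaml` might look like the fragment below. Only the `llm.type` values are confirmed by the table; the other keys (`model`, the `embedding` section) are assumptions, so check TrustifAI's configuration reference for the real schema:

```yaml
# Hypothetical config_file.yaml fragment — only llm.type is documented above;
# the remaining keys are illustrative assumptions.
llm:
  type: anthropic          # one of the llm.type values from the table
  model: claude-sonnet-4   # assumed key name and model identifier
embedding:
  type: openai             # assumed: embedding calls are also routed via LiteLLM
```

With a config like this, you would export the matching variable from the table (here `ANTHROPIC_API_KEY`) before running the engine.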
LangChain and LlamaIndex are not required by TrustifAI. The library detects their document types at runtime if the packages are installed. You can evaluate RAG responses from any retrieval system by passing plain strings or dicts.
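Optional-dependency detection of this kind is commonly implemented with a guarded import. The sketch below illustrates the general pattern only; it is not TrustifAI's actual source, and `extract_text` is a hypothetical helper:

```python
# Illustrative sketch of runtime document-type detection (not TrustifAI's code).
# If LangChain is installed, its Document type is recognized; otherwise inputs
# are treated as plain dicts or strings.
try:
    from langchain_core.documents import Document as LCDocument
except ImportError:
    LCDocument = None

def extract_text(doc):
    """Normalize a document of unknown type to plain text."""
    if LCDocument is not None and isinstance(doc, LCDocument):
        return doc.page_content
    if isinstance(doc, dict):
        return doc.get("page_content", "")
    return str(doc)

print(extract_text({"page_content": "New Delhi is the capital of India."}))
```

The same try/except guard extends naturally to LlamaIndex's `NodeWithScore`, which is why neither package needs to be a hard dependency.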
Source diversity scoring relies on source identifiers extracted from document metadata. Populate metadata fields such as source, url, document_id, or filename on your documents so TrustifAI can distinguish between sources and reward synthesis across multiple independent documents.
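As a minimal sketch using the dict format shown earlier, the documents below (their contents are made up) each carry a distinct `source` identifier; the set-count at the end just illustrates how distinct identifiers make sources distinguishable:

```python
# Made-up documents, each tagged with a distinct source identifier so a
# diversity metric can tell them apart (field names follow the dict format above).
documents = [
    {
        "page_content": "The Amazon rainforest spans nine South American countries.",
        "metadata": {"source": "geography_encyclopedia.pdf"},
    },
    {
        "page_content": "Roughly 60% of the Amazon rainforest lies within Brazil.",
        "metadata": {"source": "https://example.org/amazon-facts"},
    },
]

# Distinct identifiers are what allow synthesis across sources to be rewarded.
distinct_sources = {doc["metadata"]["source"] for doc in documents}
print(len(distinct_sources))  # 2
```

If every document carried the same `source` value (or none at all), they would collapse into a single apparent source and cross-document synthesis could not be detected.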
- **Configuration**: Configure your LLM provider, embedding model, and API keys in `config_file.yaml`.
- **Batch evaluation**: Scale evaluations across entire datasets with `AsyncTrustifai` and `evaluate_dataset`.