
Documentation Index

Fetch the complete documentation index at: https://mintlify.com/TrustifAI/trustifai/llms.txt

Use this file to discover all available pages before exploring further.

TrustifAI’s MetricContext accepts documents in several formats out of the box. You do not need an adapter layer or a specific framework installed — the library normalizes whatever you pass into its internal representation. This means you can drop TrustifAI into an existing LangChain or LlamaIndex pipeline, or use it with plain Python strings and dicts, without changing how you retrieve documents.

Supported document formats

The documents field of MetricContext accepts any combination of the following:
| Format | Example |
| --- | --- |
| Plain string | `"Paris is the capital of France."` |
| Dict with a `page_content` field | `{"page_content": "...", "metadata": {...}}` |
| `langchain_core.documents.Document` | `Document(page_content="...", metadata={...})` |
| LlamaIndex `NodeWithScore` | `NodeWithScore(node=TextNode(...), score=0.9)` |
LangChain and LlamaIndex are not required dependencies. If you have neither installed, pass strings or dicts and TrustifAI works identically.
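TrustifAI's internal normalization is not part of its public API, but the format table above implies simple duck typing. The following sketch (the `to_text_and_metadata` helper is illustrative, not TrustifAI code) shows how a mixed list of supported shapes can be coerced to a common `(text, metadata)` representation:

```python
from typing import Any

def to_text_and_metadata(doc: Any) -> tuple[str, dict]:
    """Coerce one document of any supported shape to (text, metadata).

    Illustrative only: mirrors the duck typing implied by the format
    table, not TrustifAI's actual internal normalization.
    """
    if isinstance(doc, str):                     # plain string
        return doc, {}
    if isinstance(doc, dict):                    # {"page_content": ..., "metadata": ...}
        return doc["page_content"], doc.get("metadata", {})
    if hasattr(doc, "node"):                     # LlamaIndex NodeWithScore wraps a node
        return doc.node.get_content(), dict(getattr(doc.node, "metadata", {}) or {})
    # LangChain Document exposes page_content / metadata attributes
    return doc.page_content, dict(getattr(doc, "metadata", {}) or {})

# A single documents list may freely mix formats
docs = [
    "Paris is the capital of France.",
    {"page_content": "France is in Europe.", "metadata": {"source": "wiki"}},
]
normalized = [to_text_and_metadata(d) for d in docs]
```

Because `MetricContext.documents` accepts any combination of these shapes, you never need to run a conversion like this yourself; it only shows why no adapter layer is required.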

LangChain integration

The most common LangChain pattern is to retrieve documents with a vector store retriever and pass them directly into MetricContext. LangChain Document objects carry page_content and metadata, both of which TrustifAI reads automatically.
```python
from trustifai import Trustifai, MetricContext
from langchain_core.documents import Document

# 1. Define your RAG context
context = MetricContext(
    query="What is the capital of India?",
    answer="The capital is New Delhi.",
    documents=[
        Document(
            page_content="New Delhi is the capital of India.",
            metadata={"source": "wiki.txt"},
        )
    ],
)

# 2. Initialize the engine
trust_engine = Trustifai("config_file.yaml")

# 3. Score the response
result = trust_engine.get_trust_score(context)
print(f"Trust Score: {result['score']} | Decision: {result['label']}")

# 4. Visualize the reasoning graph (saves reasoning_graph.html)
graph = trust_engine.build_reasoning_graph(result)
trust_engine.visualize(graph, graph_type="pyvis")
```

Connecting to a LangChain retriever

If you are using a LangChain retriever (FAISS, Chroma, Pinecone, etc.), the retrieved Document list can be passed directly without modification:
```python
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings
from trustifai import Trustifai, MetricContext

# Assume vectorstore is already populated
retriever = vectorstore.as_retriever(search_kwargs={"k": 4})

query = "What causes photosynthesis?"
answer = your_llm.invoke(query)          # your existing generation step
docs = retriever.invoke(query)           # list[langchain_core.documents.Document]

context = MetricContext(
    query=query,
    answer=answer,
    documents=docs,                      # pass directly — no conversion needed
)

trust_engine = Trustifai("config_file.yaml")
result = trust_engine.get_trust_score(context)
print(result["label"])
```

LlamaIndex integration

TrustifAI understands NodeWithScore objects returned by LlamaIndex query engines and retrievers. Pass the source_nodes list from a query result directly into MetricContext:
```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader
from trustifai import Trustifai, MetricContext

# Build your LlamaIndex index
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()

query = "What is the boiling point of water?"
response = query_engine.query(query)

context = MetricContext(
    query=query,
    answer=str(response),
    documents=response.source_nodes,     # list[NodeWithScore] — passed directly
)

trust_engine = Trustifai("config_file.yaml")
result = trust_engine.get_trust_score(context)
print(f"Score: {result['score']}  Label: {result['label']}")
```

Plain strings and dicts

You do not need LangChain or LlamaIndex. Plain strings work for the simplest integration:
```python
from trustifai import Trustifai, MetricContext

context = MetricContext(
    query="Who invented the telephone?",
    answer="Alexander Graham Bell invented the telephone in 1876.",
    documents=[
        "Alexander Graham Bell is credited with inventing the telephone in 1876.",
        "Bell received the first patent for the telephone on March 7, 1876.",
    ],
)

trust_engine = Trustifai("config_file.yaml")
result = trust_engine.get_trust_score(context)
print(result)
```
Dicts are supported when your documents carry structured metadata:
```python
context = MetricContext(
    query="What is the speed of light?",
    answer="The speed of light in a vacuum is approximately 299,792 km/s.",
    documents=[
        {
            "page_content": "Light travels at 299,792 kilometres per second in a vacuum.",
            "metadata": {"source": "physics_textbook.pdf", "page": 14},
        },
        {
            "page_content": "The speed of light is a fundamental constant denoted by c.",
            "metadata": {"source": "nist.gov"},
        },
    ],
)
```

Supported LLM providers

TrustifAI routes all LLM and embedding calls through LiteLLM, which means it works with any provider LiteLLM supports. Configure the provider in config_file.yaml and export the corresponding API key:
| Provider | `llm.type` in config | Required env variable |
| --- | --- | --- |
| OpenAI | `openai` | `OPENAI_API_KEY` |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` |
| Google Gemini | `gemini` | `GEMINI_API_KEY` |
| Azure OpenAI | `azure_ai` | `AZURE_API_KEY` |
| Mistral | `mistral` | `MISTRAL_API_KEY` |
| Ollama (local) | `ollama` | — (no key needed) |
| NVIDIA NIM | `nvidia_nim` | `NVIDIA_API_KEY` |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` |
| HuggingFace | `huggingface` | `HF_TOKEN` |
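As a sketch, a minimal `config_file.yaml` for OpenAI might look like the following. Only the `llm.type` value is taken from the table above; the other field names (`model`, `embedding`) are assumptions and should be checked against the configuration reference:

```yaml
# config_file.yaml — field names other than llm.type are illustrative
llm:
  type: openai                        # one of the provider values in the table above
  model: gpt-4o-mini                  # assumed field; verify in the configuration docs
embedding:
  model: text-embedding-3-small      # assumed field; verify in the configuration docs
```

Before running, export the matching key for your provider, e.g. `export OPENAI_API_KEY=...` for OpenAI.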
LangChain and LlamaIndex are not required by TrustifAI. The library detects their document types at runtime if the packages are installed. You can evaluate RAG responses from any retrieval system by passing plain strings or dicts.
Source diversity scoring relies on source identifiers extracted from document metadata. Populate metadata fields such as source, url, document_id, or filename on your documents so TrustifAI can distinguish between sources and reward synthesis across multiple independent documents.
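To check whether your metadata will let diversity scoring tell documents apart, you can probe the identifier fields the note above lists. The `source_id` helper and its fallback order below are hypothetical, not TrustifAI's actual extraction code:

```python
from typing import Optional

# Metadata fields the docs suggest populating for source diversity
ID_FIELDS = ("source", "url", "document_id", "filename")

def source_id(metadata: dict) -> Optional[str]:
    """Return the first identifier found among the suggested fields.

    Hypothetical helper: shows how distinct metadata lets a diversity
    metric distinguish sources; not TrustifAI's real extraction logic.
    """
    for field in ID_FIELDS:
        if metadata.get(field):
            return str(metadata[field])
    return None

metas = [
    {"source": "physics_textbook.pdf", "page": 14},
    {"source": "nist.gov"},
    {"page": 3},                      # no identifier: indistinguishable
]
distinct = {s for m in metas if (s := source_id(m)) is not None}
```

If every document yields the same identifier (or none), synthesis across sources cannot be rewarded, so make sure at least one of these fields varies per source.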

Configuration

Configure your LLM provider, embedding model, and API keys in config_file.yaml.

Batch evaluation

Scale evaluations across entire datasets with AsyncTrustifai and evaluate_dataset.
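The exact `AsyncTrustifai` and `evaluate_dataset` signatures are documented on the batch evaluation page; as a pattern sketch only, concurrent scoring with a bounded semaphore might look like this (the `score_one` stub stands in for the real async scoring call):

```python
import asyncio

async def score_one(context: dict) -> dict:
    """Stand-in for an async trust-scoring call; the real API is
    AsyncTrustifai / evaluate_dataset, this stub only shows the pattern."""
    await asyncio.sleep(0)                     # placeholder for network I/O
    return {"query": context["query"], "score": 0.9, "label": "trusted"}

async def evaluate_all(contexts: list[dict], limit: int = 8) -> list[dict]:
    sem = asyncio.Semaphore(limit)             # cap concurrent LLM calls
    async def bounded(ctx: dict) -> dict:
        async with sem:
            return await score_one(ctx)
    return await asyncio.gather(*(bounded(c) for c in contexts))

results = asyncio.run(evaluate_all([{"query": "q1"}, {"query": "q2"}]))
```

The semaphore keeps the number of in-flight provider calls bounded, which matters when scoring large datasets against rate-limited LLM APIs.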
