

MetricContext is the single input type accepted by every TrustifAI evaluation call. You populate it with a query, the LLM’s answer, and the documents retrieved by your RAG pipeline. All four offline metrics and the async batch pipeline operate on this structure. Embeddings are optional: if you omit them, TrustifAI computes them automatically using the embedding model defined in your config file.

Constructor

from trustifai import MetricContext

context = MetricContext(
    query=query,
    answer=answer,
    documents=documents,
)
query (str, required)
The user’s original question or prompt. Used as the reference point for semantic drift and epistemic consistency scoring.
answer (str, required)
The LLM-generated response to evaluate. Evidence coverage and semantic drift are measured against this text.
documents (List[Any], required)
The retrieved context documents passed to the LLM. TrustifAI accepts four formats interchangeably:
  • LangChain Document — text extracted from .page_content, metadata from .metadata
  • LlamaIndex NodeWithScore — text extracted from .node.text, metadata from .node.metadata
  • Plain strings — used as-is, no metadata available
  • Dicts — text extracted from the content, text, or page_content key
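As an illustration only (this is not TrustifAI’s actual implementation), the extraction rules above can be sketched as a small normalizer. The attribute and key names come from the list above; the function name extract_text is hypothetical:

```python
from typing import Any

def extract_text(doc: Any) -> str:
    """Illustrative sketch of the document-format rules listed above."""
    # LangChain Document: text lives on .page_content
    if hasattr(doc, "page_content"):
        return doc.page_content
    # LlamaIndex NodeWithScore: text lives on .node.text
    if hasattr(doc, "node"):
        return doc.node.text
    # Plain string: used as-is
    if isinstance(doc, str):
        return doc
    # Dict: first matching key among content / text / page_content
    if isinstance(doc, dict):
        for key in ("content", "text", "page_content"):
            if key in doc:
                return doc[key]
    raise TypeError(f"Unsupported document type: {type(doc)!r}")
```

Because all four formats reduce to plain text in this way, you can mix them freely within a single documents list.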
query_embeddings (np.ndarray, default: None)
Pre-computed embedding vector for the query. When None, TrustifAI embeds the query on the first evaluation call. Pass pre-computed embeddings to avoid redundant API calls in batch workloads.
answer_embeddings (np.ndarray, default: None)
Pre-computed embedding vector for the answer. Used by SemanticDriftMetric and EpistemicConsistencyMetric.
document_embeddings (np.ndarray, default: None)
Pre-computed embedding vectors for each document in documents. Expected shape: (n_docs, embedding_dim). Used by SourceDiversityMetric to compute per-document relevance.
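For batch workloads, the expected array shapes can be checked with plain NumPy before constructing the context. The values below are random placeholders, and embedding_dim is an assumption — it depends on the embedding model configured for your project:

```python
import numpy as np

embedding_dim = 384  # placeholder; determined by your configured embedding model
docs = ["doc one", "doc two", "doc three"]

# One vector for the query, one for the answer, one row per document.
query_embeddings = np.random.rand(embedding_dim)
answer_embeddings = np.random.rand(embedding_dim)
document_embeddings = np.random.rand(len(docs), embedding_dim)

# document_embeddings must be 2-D: (n_docs, embedding_dim)
assert document_embeddings.shape == (len(docs), embedding_dim)
```

Pass these arrays as the corresponding keyword arguments when constructing MetricContext to skip the automatic embedding step.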

Construction examples

from langchain_core.documents import Document
from trustifai import MetricContext

docs = [
    Document(
        page_content="The Eiffel Tower is located in Paris, France.",
        metadata={"source": "travel-guide.pdf", "page": 12},
    ),
    Document(
        page_content="It was completed in 1889 and stands 330 metres tall.",
        metadata={"source": "travel-guide.pdf", "page": 12},
    ),
]

context = MetricContext(
    query="Where is the Eiffel Tower and how tall is it?",
    answer="The Eiffel Tower is in Paris. It is 330 metres tall and was finished in 1889.",
    documents=docs,
)
Embeddings are computed on the first get_trust_score call and written back onto the MetricContext object. If you reuse the same context instance for a second call, embeddings will already be populated and no additional embedding API calls are made.
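This write-back behaviour amounts to lazy, compute-once caching on the context object. A minimal sketch of that pattern (illustrative only — the class, method names, and _embed stand-in below are hypothetical, not TrustifAI’s internals):

```python
class LazyContext:
    """Sketch of the compute-once embedding caching described above."""

    def __init__(self, query: str):
        self.query = query
        self.query_embeddings = None  # populated on first use
        self.embed_calls = 0          # instrumentation for this sketch

    def _embed(self, text: str) -> list[float]:
        # Stand-in for a real embedding API call
        self.embed_calls += 1
        return [float(ord(c)) for c in text]

    def get_query_embeddings(self) -> list[float]:
        if self.query_embeddings is None:      # first call: compute and write back
            self.query_embeddings = self._embed(self.query)
        return self.query_embeddings           # later calls: reuse the cached value

ctx = LazyContext("Where is the Eiffel Tower?")
ctx.get_query_embeddings()
ctx.get_query_embeddings()
assert ctx.embed_calls == 1  # embedded exactly once despite two calls
```

Reusing one context instance across multiple evaluations therefore pays the embedding cost only once.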
