MetricContext is the single input type accepted by every TrustifAI evaluation call. You populate it with a query, the LLM’s answer, and the documents retrieved by your RAG pipeline. All four offline metrics and the async batch pipeline operate on this structure. Embeddings are optional: if you omit them, TrustifAI computes them automatically using the embedding model defined in your config file.
Constructor
The user’s original question or prompt. Used as the reference point for semantic drift and epistemic consistency scoring.
The LLM-generated response to evaluate. Evidence coverage and semantic drift are measured against this text.
The retrieved context documents passed to the LLM. TrustifAI accepts four formats interchangeably:
- LangChain `Document` — text extracted from `.page_content`, metadata from `.metadata`
- LlamaIndex `NodeWithScore` — text extracted from `.node.text`, metadata from `.node.metadata`
- Plain strings — used as-is, no metadata available
- Dicts — text extracted from the `content`, `text`, or `page_content` key
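The precedence among these formats can be sketched as a single dispatch function. This is an illustrative sketch only, not TrustifAI's actual internals; the helper name `extract_text` and the key-lookup order are assumptions based on the list above.

```python
def extract_text(doc):
    """Sketch of normalizing one retrieved document to plain text.

    Hypothetical helper; TrustifAI's real extraction logic may differ.
    """
    # Plain string: used as-is.
    if isinstance(doc, str):
        return doc
    # Dict: take the first matching key (order is an assumption).
    if isinstance(doc, dict):
        for key in ("content", "text", "page_content"):
            if key in doc:
                return doc[key]
        raise ValueError("dict document has no content/text/page_content key")
    # LangChain Document exposes .page_content.
    if hasattr(doc, "page_content"):
        return doc.page_content
    # LlamaIndex NodeWithScore wraps the node, which exposes .text.
    if hasattr(doc, "node"):
        return doc.node.text
    raise TypeError(f"unsupported document type: {type(doc)!r}")

texts = [extract_text(d) for d in ["plain string", {"content": "from dict"}]]
```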
Pre-computed embedding vector for the query. When `None`, TrustifAI embeds the query on the first evaluation call. Pass pre-computed embeddings to avoid redundant API calls in batch workloads.

Pre-computed embedding vector for the answer. Used by `SemanticDriftMetric` and `EpistemicConsistencyMetric`.

Pre-computed embedding vectors for each document in `documents`. Expected shape: `(n_docs, embedding_dim)`. Used by `SourceDiversityMetric` to compute per-document relevance.

Construction examples
Embeddings are computed on the first `get_trust_score` call and written back onto the `MetricContext` object. If you reuse the same context instance for a second call, embeddings will already be populated and no additional embedding API calls are made.
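The write-back caching behavior described above can be simulated with a counting fake embedder. This is a behavioral sketch under assumed names (`get_trust_score`, `query_embedding`), not TrustifAI's implementation.

```python
class FakeEmbedder:
    """Counts embedding calls so the caching effect is observable."""
    def __init__(self):
        self.calls = 0

    def embed(self, text):
        self.calls += 1
        return [0.0]  # placeholder vector

class Context:
    """Stand-in for MetricContext with a single lazily-filled embedding."""
    def __init__(self, query):
        self.query = query
        self.query_embedding = None

def get_trust_score(ctx, embedder):
    # Embed only when the context does not already carry an embedding,
    # then write the vector back onto the context object.
    if ctx.query_embedding is None:
        ctx.query_embedding = embedder.embed(ctx.query)
    return 1.0  # placeholder score

embedder = FakeEmbedder()
ctx = Context("What is the capital of France?")
get_trust_score(ctx, embedder)  # first call embeds and caches
get_trust_score(ctx, embedder)  # second call reuses the cached embedding
assert embedder.calls == 1      # only one embedding API call was made
```

This is why reusing one context instance across metrics is cheaper than rebuilding it per call.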