A Trust Score is only as useful as your ability to understand and explain it. TrustifAI’s Reasoning Graph makes the entire evaluation pipeline visible — it turns the abstract weighted aggregation into a directed acyclic graph (DAG) that shows which metrics fired, what each one found, and how the final decision was reached. You can render it as an interactive HTML visualization or export it as Mermaid syntax for embedding in documentation.
The graph contains a node for each active metric, the central aggregation node, and the final RELIABLE / ACCEPTABLE / UNRELIABLE label.
Only active metrics (those with a non-zero weight in your config) appear in the graph. Disabled or zero-weight metrics are silently excluded from both computation and visualization.
Node and edge colors communicate trust level at a glance:

| Color | Threshold | Meaning |
|---|---|---|
| Green (#2ecc71) | Score ≥ 0.85 | High trust |
| Orange (#f39c12) | 0.60 ≤ Score < 0.85 | Medium trust |
| Red (#e74c3c) | Score < 0.60 | Low trust |
The thresholds used for coloring are pulled from each metric’s own config — for example, STRONG_GROUNDING and PARTIAL_GROUNDING for the Evidence Coverage node — so the colors reflect the same thresholds you configured for the labels.
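As a rough illustration of the coloring rule using the default thresholds from the table above (the `node_color` helper is hypothetical and not part of TrustifAI's API; the library itself derives the thresholds from each metric's config as noted above):

```python
# Hypothetical helper: maps a metric score to the node color used in the
# reasoning graph, following the default thresholds shown in the table.
def node_color(score: float,
               high_threshold: float = 0.85,
               medium_threshold: float = 0.60) -> str:
    if score >= high_threshold:
        return "#2ecc71"   # green: high trust
    if score >= medium_threshold:
        return "#f39c12"   # orange: medium trust
    return "#e74c3c"       # red: low trust
```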
Call build_reasoning_graph() on the result returned by get_trust_score(). The graph is a pure data structure (ReasoningGraph) — no rendering happens yet.
```python
from trustifai import Trustifai, MetricContext
from langchain_core.documents import Document

context = MetricContext(
    query="What is the capital of India?",
    answer="The capital is New Delhi.",
    documents=[
        Document(
            page_content="New Delhi is the capital of India.",
            metadata={"source": "wiki.txt"}
        )
    ]
)

trust_engine = Trustifai(config_path="config_file.yaml")
result = trust_engine.get_trust_score(context)
graph = trust_engine.build_reasoning_graph(result)
```
The PyVis renderer produces a self-contained HTML file (reasoning_graph.html by default) with a physics-based interactive layout. Metric nodes are arranged in a circle around the central aggregation diamond. You can drag nodes, zoom, and hover over any node to see its score, label, and diagnostic explanation in a tooltip.

PyVis requires the optional pyvis package:
```bash
pip install pyvis
```
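Once the package is installed, rendering is a single call on the graph you built above (this mirrors the full workflow shown later on this page):

```python
# Render the reasoning graph to an interactive HTML file
# (saved as reasoning_graph.html by default).
trust_engine.visualize(graph, graph_type="pyvis")
```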
Serve reasoning_graph.html directly from your evaluation pipeline to give stakeholders a self-explanatory audit trail for every scored response, without requiring them to read code or JSON.
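One lightweight way to do that is to point a static file server at the output directory. A minimal sketch using only the Python standard library (the port is arbitrary; any static file server or existing web app works just as well):

```python
# Serve the directory containing reasoning_graph.html so stakeholders
# can open the interactive graph in a browser.
import http.server
import socketserver

PORT = 8000  # hypothetical port; pick whatever suits your pipeline

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving at http://localhost:{PORT}/reasoning_graph.html")
    httpd.serve_forever()
```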
The Mermaid renderer returns a fenced code block ready for embedding in GitHub, Notion, or any documentation site that renders Mermaid diagrams. Each node is styled with the same color coding as the PyVis graph.
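Example output (a purely illustrative sketch; the actual node names, IDs, scores, and syntax depend on your configured metrics and the library version):

```mermaid
%% Hypothetical sketch of the exported diagram's general shape.
graph TD
    M1["Evidence Coverage: 0.92"] --> AGG{Weighted Aggregation}
    M2["Another Metric: 0.74"] --> AGG
    AGG --> OUT["Final: RELIABLE (0.86)"]
    style M1 fill:#2ecc71
    style M2 fill:#f39c12
    style OUT fill:#2ecc71
```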
The following snippet shows the full evaluate-then-visualize workflow in both output formats:
```python
from trustifai import Trustifai, MetricContext
from langchain_core.documents import Document

# 1. Define context
context = MetricContext(
    query="What is the capital of India?",
    answer="The capital is New Delhi.",
    documents=[
        Document(
            page_content="New Delhi is the capital of India.",
            metadata={"source": "wiki.txt"}
        )
    ]
)

# 2. Score and build graph
trust_engine = Trustifai(config_path="config_file.yaml")
result = trust_engine.get_trust_score(context)
graph = trust_engine.build_reasoning_graph(result)

# 3a. Interactive HTML
trust_engine.visualize(graph, graph_type="pyvis")
# → Saves reasoning_graph.html

# 3b. Mermaid for documentation
mermaid = trust_engine.visualize(graph, graph_type="mermaid")
print(mermaid)
```
Each graph is assigned a unique trace_id (UUID4) so you can correlate graphs with specific evaluation runs in logs or tracing systems.
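For example, you might attach the trace id to your pipeline's logs so a rendered graph can be matched to a specific run (a sketch; it assumes the ReasoningGraph object exposes the identifier as a `trace_id` attribute, so check the attribute name in your version):

```python
import logging

logger = logging.getLogger("trust_evaluation")

# Assumption: the ReasoningGraph exposes its UUID4 identifier as `trace_id`.
logger.info("Reasoning graph built for evaluation run (trace_id=%s)", graph.trace_id)
```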
The Reasoning Graph is built purely from the get_trust_score() result dict — it does not make any additional API calls. You can safely rebuild or re-render it as many times as needed from the same result object.