

AsyncTrustifai is a thin, thread-safe wrapper around the synchronous Trustifai engine. It offloads every evaluation to a thread-pool worker via asyncio.to_thread, meaning your event loop never blocks. Thread isolation is provided by threading.local() — each OS thread that calls the wrapper gets its own lazily-constructed Trustifai instance, which eliminates the shared-state race conditions that would occur if a single engine were shared across concurrent requests.

Why thread isolation matters

Trustifai.get_trust_score mutates instance state during a call: it initialises metrics from the registry and writes computed embeddings back onto the MetricContext. If multiple coroutines ran the same Trustifai object in parallel threads, these mutations would race. AsyncTrustifai avoids this by letting each worker thread own its own engine. Config loading (a YAML file read) is the only overhead, and it happens at most once per thread.
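The pattern described above can be sketched as follows. This is a minimal illustration, not the library's implementation: SyncEngine, AsyncWrapper, and the stubbed get_trust_score body are hypothetical stand-ins so the sketch runs without trustifai installed.

```python
import asyncio
import threading

class SyncEngine:
    """Hypothetical stand-in for the synchronous Trustifai engine."""
    def __init__(self, config_path: str):
        self.config_path = config_path      # the YAML read happens here, once per thread
        self.owner = threading.get_ident()  # the OS thread that owns this engine

    def get_trust_score(self, context):
        # The real engine mutates instance state during a call; that is safe
        # here because only one thread ever touches this instance.
        return {"score": 0.9, "owner": self.owner}

class AsyncWrapper:
    """Sketch of the thread-local wrapper pattern described above."""
    def __init__(self, config_path: str):
        self._config_path = config_path
        self._local = threading.local()     # per-thread storage, no locks needed

    @property
    def sync(self) -> SyncEngine:
        # Lazy, per-thread construction on first access.
        if not hasattr(self._local, "engine"):
            self._local.engine = SyncEngine(self._config_path)
        return self._local.engine

    async def get_trust_score(self, context):
        # Dispatch to a worker thread; .sync there resolves to that thread's engine.
        return await asyncio.to_thread(lambda: self.sync.get_trust_score(context))

async def main():
    engine = AsyncWrapper("config_file.yaml")
    return await asyncio.gather(*(engine.get_trust_score({}) for _ in range(3)))

results = asyncio.run(main())
```

Because each engine lives in threading.local storage, concurrent coroutines never share mutable engine state, and no locking is required.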

Constructor

from trustifai import AsyncTrustifai

engine = AsyncTrustifai(config_path="config_file.yaml")
config_path (string, required): Path to the YAML configuration file. It is passed through to each thread-local Trustifai instance when that instance is first constructed. The file must exist and be readable when the first evaluation is requested on any given thread.

Methods

get_trust_score

Evaluate a single MetricContext. Non-blocking: the call is dispatched to asyncio.to_thread and awaited, so the event loop remains free to handle other requests.
result = await engine.get_trust_score(context)
context (MetricContext, required): The RAG context to evaluate. Requires query, answer, and documents. See MetricContext.
Returns a Dict with the same shape as Trustifai.get_trust_score:

score (float): Weighted aggregate trust score, 0.0–1.0.
label (string): "RELIABLE", "ACCEPTABLE (WITH CAUTION)", or "UNRELIABLE".
details (object): Per-metric results keyed by metric name. Each value is a dict with score, label, details, and optionally execution_metadata.
execution_metadata (object)
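A result of this shape might look like the dict below. The metric name "faithfulness" and all values are illustrative, not taken from the library:

```python
# Illustrative only: the metric name and every value here are made up.
result = {
    "score": 0.87,
    "label": "RELIABLE",
    "details": {
        "faithfulness": {"score": 0.91, "label": "RELIABLE", "details": {}},
    },
    "execution_metadata": {},
}

aggregate = result["score"]  # weighted aggregate, 0.0-1.0
per_metric = {name: entry["score"] for name, entry in result["details"].items()}
```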

build_reasoning_graph

Async-safe reasoning graph builder. Dispatches Trustifai.build_reasoning_graph to a thread-pool worker.
graph = await engine.build_reasoning_graph(result)
result (Dict, required): A trust score dict returned by get_trust_score.
Returns a ReasoningGraph: a dataclass with trace_id, nodes, and edges. Pass it directly to visualize.

visualize

Async-safe visualizer. Dispatches Trustifai.visualize to a thread-pool worker.
await engine.visualize(graph, graph_type="pyvis")
mermaid_str = await engine.visualize(graph, graph_type="mermaid")
graph (ReasoningGraph, required): A ReasoningGraph returned by build_reasoning_graph.
graph_type (string, default "pyvis"): "pyvis" saves reasoning_graph.html to disk; "mermaid" returns a Mermaid diagram string.
Returns Any: None for "pyvis", or a str for "mermaid".

.sync property

Returns the Trustifai engine for the current thread. The engine is created lazily on first access. Use this when you are already running inside a thread-pool worker and want to call synchronous engine methods directly without a redundant asyncio.to_thread hop.
sync_engine = engine.sync
result = sync_engine.get_trust_score(context)
Returns Trustifai: the thread-local engine instance.
Accessing .sync from the event-loop thread gives you a Trustifai instance, but calling blocking methods on it from that thread will still block the loop. Only use .sync from within a function already dispatched to a thread via asyncio.to_thread or a ThreadPoolExecutor.
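For example, a batch helper that is itself dispatched to a worker thread can use .sync to score many contexts with a single thread hop. The stub classes below are hypothetical stand-ins so the sketch runs standalone:

```python
import asyncio

class _StubEngine:
    """Hypothetical stand-in for the thread-local Trustifai engine."""
    def get_trust_score(self, context):
        return {"score": 1.0, "label": "RELIABLE"}

class _StubAsync:
    """Hypothetical stand-in for AsyncTrustifai; the real .sync is per-thread."""
    sync = _StubEngine()

def score_batch(engine, contexts):
    # Already inside a worker thread: fetch the thread-local engine once
    # and call it synchronously, avoiding a to_thread hop per item.
    local = engine.sync
    return [local.get_trust_score(c) for c in contexts]

async def evaluate_batch(engine, contexts):
    # One dispatch to the thread pool covers the whole batch.
    return await asyncio.to_thread(score_batch, engine, contexts)

results = asyncio.run(evaluate_batch(_StubAsync(), [{}, {}]))
```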

Thread-safety model

Three concurrent FastAPI requests running on different worker threads, for example, each get their own isolated engine: each thread constructs its own Trustifai instance on first use and caches it for subsequent requests assigned to the same thread. There is no locking and no shared mutable state between threads.
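The per-thread caching behavior can be demonstrated with plain threading.local; the engine here is a placeholder object, not Trustifai:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

local = threading.local()
constructed = []  # records each first-time construction, one entry per thread

def get_engine():
    # First access on a thread constructs and caches; later calls on the
    # same thread reuse the cached instance, with no locking required.
    if not hasattr(local, "engine"):
        local.engine = object()  # placeholder for Trustifai(config_path=...)
        constructed.append(threading.get_ident())
    return local.engine

with ThreadPoolExecutor(max_workers=3) as pool:
    # Six calls spread across at most three threads: at most three constructions.
    engine_ids = list(pool.map(lambda _: id(get_engine()), range(6)))
```

Every call on a given thread returns the same object, so the number of distinct engines equals the number of threads that actually ran work.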

Usage in FastAPI

from fastapi import FastAPI
from trustifai import AsyncTrustifai, MetricContext

app    = FastAPI()
engine = AsyncTrustifai("config_file.yaml")

@app.post("/evaluate")
async def evaluate(query: str, answer: str, documents: list[str]):
    context = MetricContext(query=query, answer=answer, documents=documents)
    return await engine.get_trust_score(context)
For large-scale batch workloads, use evaluate_dataset instead of calling get_trust_score in a manual loop. It adds a concurrency semaphore, an optional token-bucket rate limiter, automatic retry with exponential backoff, and an ordered BatchResult with aggregate statistics.
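For reference, the manual pattern that evaluate_dataset replaces roughly amounts to a concurrency-capped gather loop. This is a sketch only: the stub engine is hypothetical, and evaluate_dataset additionally handles rate limiting, retries, and aggregate statistics.

```python
import asyncio

async def manual_batch(engine, contexts, max_concurrency=8):
    # Cap the number of in-flight evaluations with a semaphore;
    # asyncio.gather preserves input order in its result list.
    sem = asyncio.Semaphore(max_concurrency)

    async def one(ctx):
        async with sem:
            return await engine.get_trust_score(ctx)

    return await asyncio.gather(*(one(c) for c in contexts))

class _StubAsync:
    """Hypothetical stand-in for AsyncTrustifai used to exercise the sketch."""
    async def get_trust_score(self, context):
        await asyncio.sleep(0)
        return {"score": 0.5, "context": context}

ordered = asyncio.run(manual_batch(_StubAsync(), [1, 2, 3]))
```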
