TrustifAI reads all of its runtime settings from a single YAML file, by default
config_file.yaml in your working directory. You pass the path to this file when you instantiate Trustifai or AsyncTrustifai, so different environments (development, staging, production) can each carry their own config without touching application code.
File structure overview
config_file.yaml has seven top-level sections. Each section is described below, followed by a complete reference example.
env_file
Points to a .env file containing API keys and secrets. This keeps credentials out of your YAML and out of source control.
The .env file follows standard KEY=value syntax (for example, OPENAI_API_KEY=...). TrustifAI uses LiteLLM under the hood, so any key LiteLLM recognises is valid here.
env_file is optional. If your keys are already exported as shell environment variables, you can omit this field entirely.
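
A minimal sketch of this field, assuming the .env file sits in your working directory:

```yaml
# Load API keys and secrets from a local .env file (kept out of source control)
env_file: ".env"
```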
tracing
Controls MLflow experiment tracking. Tracing is disabled by default and is an optional feature — install trustifai[trace] to enable it.
| Field | Description |
|---|---|
| enabled | Set to true to activate MLflow logging |
| tracking_uri | URI of your MLflow tracking server. Leave null to use the local ./mlruns default |
| experiment_name | Name of the MLflow experiment that runs are grouped under |
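
A sketch of the tracing section using the fields above; the tracking server URL and experiment name are placeholders:

```yaml
tracing:
  enabled: true                             # activate MLflow logging
  tracking_uri: "http://localhost:5000"     # null falls back to the local ./mlruns store
  experiment_name: "trustifai-evaluations"  # experiment that runs are grouped under
```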
llm
Configures the language model used for metric evaluation (evidence coverage uses LLM-based NLI by default, and epistemic consistency samples multiple generations).
The type field is the LiteLLM provider prefix. TrustifAI supports any model that LiteLLM can route, including:
| Provider | type value | Example model_name |
|---|---|---|
| OpenAI | openai | gpt-4o, gpt-4o-mini |
| Anthropic | anthropic | claude-3-5-sonnet-20241022 |
| Google Gemini | gemini | gemini/gemini-1.5-pro |
| Mistral | mistral | mistral/mistral-large-latest |
| Ollama (local) | ollama | ollama/llama3 |
| Azure AI | azure_ai | azure_ai/gpt-4o |
| NVIDIA NIM | nvidia_nim | nvidia_nim/meta/llama-3.1-8b-instruct |
| HuggingFace | huggingface | huggingface/mistralai/Mistral-7B-v0.1 |
| OpenRouter | openrouter | openrouter/anthropic/claude-3.5-sonnet |
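
For instance, an OpenAI-backed evaluator could be sketched like this; type, model_name, and kwargs are the fields described on this page, while the specific values and the temperature entry are placeholders:

```yaml
llm:
  type: "openai"              # LiteLLM provider prefix from the table above
  model_name: "gpt-4o-mini"   # any model LiteLLM can route for that provider
  kwargs:
    temperature: 0.0          # placeholder pass-through generation argument
    logprobs: true            # needed for the online Confidence Score metric
```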
Set logprobs: true in kwargs if you want the online Confidence Score metric. Models that do not support log probabilities will return a zeroed confidence result.
embeddings
Configures the embedding model used to compute query, answer, and document vector representations.
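
No field reference is given for this section, so the sketch below assumes the same type/model_name shape as the llm section:

```yaml
embeddings:
  type: "openai"                        # assumed: LiteLLM provider prefix, as in the llm section
  model_name: "text-embedding-3-small"  # assumed: any embedding model LiteLLM can route
```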
reranker
An optional reranker model used by the evidence coverage metric when strategy: "reranker" is set. Omit this section entirely if you use the default "llm" strategy.
Supported reranker providers include cohere, together_ai, azure_ai, fireworks_ai, and voyage.
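
Assuming the same shape as the other model sections, a Cohere-backed reranker might be configured like this:

```yaml
reranker:
  type: "cohere"                     # assumed: one of the supported rerank providers
  model_name: "rerank-english-v3.0"  # assumed: provider-specific rerank model
```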
metrics
A list of metric configurations. Each entry controls whether the metric is active and sets its classification thresholds.
To turn a metric off, set enabled: false. Its weight is automatically zeroed out and the remaining weights are re-normalized.
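
The exact metric keys and threshold field names are not listed on this page, so the entries below are purely illustrative:

```yaml
metrics:
  - name: "evidence_coverage"   # assumed key for the evidence coverage metric
    enabled: true
    thresholds:                 # assumed field names for the classification cut-offs
      high: 0.8
      low: 0.4
  - name: "confidence_score"    # assumed key; enabled: false drops the metric from the score
    enabled: false
```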
score_weights
Controls how much each metric contributes to the final Trust Score, which is computed as a weighted sum of the individual metric scores. The library validates that the weights do not exceed 1.0 and re-normalizes them automatically after disabled metrics are removed.
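
An illustrative weighting; the metric keys are assumptions, and the values here sum to 1.0:

```yaml
score_weights:
  evidence_coverage: 0.4        # assumed metric keys
  epistemic_consistency: 0.35
  confidence_score: 0.25
```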
Complete example
The default config_file.yaml shipped with TrustifAI combines all seven sections above into one file.
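
Its exact contents are not reproduced here; the sketch below simply assembles the illustrative snippets from the previous sections, so model names, metric keys, and numeric values remain placeholders:

```yaml
env_file: ".env"

tracing:
  enabled: false
  tracking_uri: null                      # local ./mlruns store
  experiment_name: "trustifai-evaluations"

llm:
  type: "openai"
  model_name: "gpt-4o-mini"
  kwargs:
    logprobs: true

embeddings:
  type: "openai"
  model_name: "text-embedding-3-small"

reranker:                                 # optional; omit when evidence coverage uses the default "llm" strategy
  type: "cohere"
  model_name: "rerank-english-v3.0"

metrics:
  - name: "evidence_coverage"
    enabled: true
    thresholds:
      high: 0.8
      low: 0.4
  - name: "epistemic_consistency"
    enabled: true
    thresholds:
      high: 0.7
      low: 0.3
  - name: "confidence_score"
    enabled: true
    thresholds:
      high: 0.9
      low: 0.5

score_weights:
  evidence_coverage: 0.4
  epistemic_consistency: 0.35
  confidence_score: 0.25
```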
Loading the config
Pass the path to Trustifai or AsyncTrustifai at instantiation time:
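
A minimal sketch, assuming the package exposes both classes at the top level and that the constructor accepts the config path directly (the import path, example paths, and exact signature may differ):

```python
from trustifai import Trustifai, AsyncTrustifai  # assumed import path

# Synchronous client using an environment-specific config file
client = Trustifai("configs/production/config_file.yaml")

# Async variant, e.g. for concurrent batch evaluation
async_client = AsyncTrustifai("configs/staging/config_file.yaml")
```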
Config.from_yaml parses the YAML, flattens all metric thresholds into a single MetricThresholds object, and normalizes the weights — all before your first evaluation call.
Batch evaluation
Run concurrent evaluations over large datasets with AsyncTrustifai.
Custom metrics
Add custom metric types and configure their weights in this file.
MLflow tracing
Enable and configure the tracing section for experiment tracking.
Integrations
Connect TrustifAI to LangChain, LlamaIndex, and other frameworks.