

TrustifAI is distributed as a Python package on PyPI. You can install the base package, opt in to tracing support via MLflow, or clone the repository to run from source. After installation, you configure your LLM provider by setting API keys in a .env file and pointing TrustifAI at a config_file.yaml that controls model selection, metric thresholds, and score weights.
TrustifAI requires Python 3.10 or later and is tested against Python 3.10, 3.11, 3.12, and 3.13.

Install the package

pip install trustifai
pip install "trustifai[trace]"
pip install "trustifai[test]"
The trace extra adds MLflow for experiment tracking. The test extra adds pytest, langchain-core, and llama-index for running the test suite.

Install from source

If you want to run the latest unreleased code or contribute to TrustifAI, clone the repository and install dependencies directly:
git clone https://github.com/Trustifai/trustifai.git
cd trustifai
pip install -r requirements.txt

Set up environment variables

TrustifAI uses LiteLLM under the hood, which means it works with any provider that LiteLLM supports — OpenAI, Anthropic, Gemini, Azure, Mistral, Groq, Ollama, OpenRouter, Cohere, and more. You configure access by setting the appropriate API keys as environment variables. Create a .env file in your project root (or export the keys in your shell):
.env
OPENAI_API_KEY=<your-api-key>
ANTHROPIC_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
AZURE_API_KEY=<your-api-key>
MISTRAL_API_KEY=<your-api-key>
GROQ_API_KEY=<your-api-key>
OPENROUTER_API_KEY=<your-api-key>
COHERE_API_KEY=<your-api-key>
HF_TOKEN=<your-api-key>
# Any other API key supported by LiteLLM
You only need keys for the providers you actually use. TrustifAI also accepts an env_file path directly in config_file.yaml if you want to keep credentials in a separate file:
config_file.yaml
env_file: "creds.env"
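Whether you point TrustifAI at an env_file or export keys in your shell, the end state is the same: the keys land in the process environment, where LiteLLM picks them up. As a rough sketch of what an env-file loader does (the load_env_file helper below is hypothetical and for illustration only; real loaders such as python-dotenv handle quoting and edge cases properly):

```python
import os

def load_env_file(path: str) -> None:
    """Minimal .env loader: copies KEY=VALUE lines into os.environ.

    Illustrative only -- skips blank lines and comments, does not
    handle quoting or variable expansion."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue  # skip blanks, comments, and malformed lines
            key, _, value = line.partition("=")
            # setdefault: keys already exported in the shell win
            os.environ.setdefault(key.strip(), value.strip())

# Example: write a throwaway env file and load it
with open("demo.env", "w") as fh:
    fh.write("# demo credentials\nTRUSTIFAI_DEMO_KEY=sk-demo\n")
load_env_file("demo.env")
print(os.environ["TRUSTIFAI_DEMO_KEY"])
```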

Configure your models and metrics

TrustifAI is driven by a YAML configuration file. By default it looks for config_file.yaml in the working directory, but you can pass any path to the Trustifai constructor:
from trustifai import Trustifai

trust_engine = Trustifai(config_path="path/to/config_file.yaml")
A complete configuration file covers the LLM, embedding model, reranker, metric thresholds, and score weights:
config_file.yaml
env_file: "creds.env"  # optional — path to your .env file

tracing:
  type: "default"
  params:
    enabled: false
    tracking_uri: null              # set your MLflow tracking URI if available
    experiment_name: "trustifai_experiment"

llm:
  type: "openai"                    # openai, anthropic, gemini, mistral, ollama, azure_ai, etc.
  params:
    model_name: "gpt-4o"
    api_type: "chat_completion"     # "chat_completion" (default) or "responses"
  kwargs:
    temperature: 0.01
    max_tokens: 2048
    top_p: 0.95
    logprobs: true

embeddings:
  type: "openai"                    # any LiteLLM-supported embedding provider
  params:
    model_name: "text-embedding-3-small"

reranker:
  type: "cohere"                    # cohere, together_ai, azure_ai, voyage, etc.
  params:
    model_name: "rerank-v4.0-fast"

metrics:
  - type: "evidence_coverage"
    enabled: true
    params:
      strategy: "llm"               # "llm" or "reranker"
      STRONG_GROUNDING: 0.85        # threshold for "Trusted" label
      PARTIAL_GROUNDING: 0.60

  - type: "consistency"
    enabled: true
    params:
      STABLE_CONSISTENCY: 0.85      # requires 0.85 cosine similarity to be "Stable"
      FRAGILE_CONSISTENCY: 0.60

  - type: "source_diversity"
    enabled: true
    params:
      HIGH_DIVERSITY: 0.85
      MODERATE_DIVERSITY: 0.60

  - type: "semantic_drift"
    enabled: true
    params:
      STRONG_ALIGNMENT: 0.85
      PARTIAL_ALIGNMENT: 0.60

  - type: "trust_score"
    params:
      RELIABLE_TRUST: 0.80          # final score >= 0.80 → RELIABLE
      ACCEPTABLE_TRUST: 0.60        # final score >= 0.60 → ACCEPTABLE (WITH CAUTION)

score_weights:
  - type: "evidence_coverage"
    params:
      weight: 0.40                  # highest priority — factual accuracy
  - type: "consistency"
    params:
      weight: 0.20
  - type: "source_diversity"
    params:
      weight: 0.10
  - type: "semantic_drift"
    params:
      weight: 0.30
Any metric can be disabled by setting enabled: false; disabled metrics are excluded from the weighted aggregation entirely. You do not need to renormalize the remaining weights by hand, and TrustifAI only initializes metrics that have a non-zero weight configured under score_weights.
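To make the aggregation concrete, here is a small sketch of how the configured score_weights and trust_score thresholds combine. This is an assumption based on the description above, not TrustifAI's actual implementation: the renormalized weighted average, the helper names, and the "UNRELIABLE" fallback label are all illustrative.

```python
# Weights as configured under score_weights in config_file.yaml
weights = {
    "evidence_coverage": 0.40,
    "consistency": 0.20,
    "source_diversity": 0.10,
    "semantic_drift": 0.30,
}

def trust_score(metric_scores: dict[str, float]) -> float:
    """Weighted average over the metrics present (i.e. enabled)."""
    active = {m: w for m, w in weights.items() if m in metric_scores}
    total = sum(active.values())  # renormalize if some metrics are disabled
    return sum(metric_scores[m] * w for m, w in active.items()) / total

def trust_label(score: float) -> str:
    # Thresholds from the trust_score metric in the config above;
    # the fallback label below the lower threshold is assumed.
    if score >= 0.80:
        return "RELIABLE"
    if score >= 0.60:
        return "ACCEPTABLE (WITH CAUTION)"
    return "UNRELIABLE"

scores = {"evidence_coverage": 0.9, "consistency": 0.8,
          "source_diversity": 0.7, "semantic_drift": 0.85}
final = trust_score(scores)
print(round(final, 3), trust_label(final))  # 0.845 RELIABLE
```

Note how disabling a metric simply drops its term: with source_diversity removed, the remaining weights (0.40 + 0.20 + 0.30 = 0.90) are renormalized automatically by the division.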

Verify your installation

Run the following snippet to confirm TrustifAI is installed and importable:
from trustifai import Trustifai, MetricContext

print("TrustifAI imported successfully")
If you see no errors, you’re ready to score your first response. Head to the Quickstart to run a complete example.

Next steps

Quickstart

Score a RAG response and visualize the reasoning graph in under five minutes.

Configuration

Learn how to tune thresholds, swap providers, and enable MLflow tracing.
