TrustifAI is distributed as a Python package on PyPI and supports Python 3.10 and later. You can install the base package, opt in to tracing support via MLflow, or clone the repository to run from source. After installation, you configure your LLM provider by setting API keys in a .env file and pointing TrustifAI at a config_file.yaml that controls model selection, metric thresholds, and score weights.
TrustifAI requires Python 3.10 or later. It is tested against Python 3.10, 3.11, 3.12, and 3.13.
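A minimal installation might look like the following sketch. The PyPI distribution name, the MLflow extras name, and the repository URL are assumptions here, so check the project's PyPI page and repository if they differ.

```bash
# Base package (PyPI name assumed to be "trustifai")
pip install trustifai

# Opt in to MLflow tracing support (extras name assumed)
pip install "trustifai[mlflow]"

# Or run from source (repository URL is a placeholder)
git clone https://github.com/<org>/trustifai.git
cd trustifai
pip install -e .
```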
TrustifAI uses LiteLLM under the hood, which means it works with any provider that LiteLLM supports: OpenAI, Anthropic, Gemini, Azure, Mistral, Groq, Ollama, OpenRouter, Cohere, and more. You configure access by setting the appropriate API keys as environment variables. Create a .env file in your project root (or export the keys in your shell):
.env

```
OPENAI_API_KEY=<your-api-key>
ANTHROPIC_API_KEY=<your-api-key>
GEMINI_API_KEY=<your-api-key>
AZURE_API_KEY=<your-api-key>
MISTRAL_API_KEY=<your-api-key>
GROQ_API_KEY=<your-api-key>
OPENROUTER_API_KEY=<your-api-key>
COHERE_API_KEY=<your-api-key>
HF_TOKEN=<your-api-key>
# Any other API key supported by LiteLLM
```
You only need keys for the providers you actually use. TrustifAI also accepts an env_file path directly in config_file.yaml if you want to keep credentials in a separate file:
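For example, a configuration that keeps credentials in a separate file might look like this; only the env_file key comes from the description above, and the path itself is illustrative.

```yaml
# config_file.yaml
env_file: secrets/.env   # TrustifAI loads API keys from this file instead of the project root .env
```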
TrustifAI is driven by a YAML configuration file. By default it looks for config_file.yaml in the working directory, but you can pass any path to the Trustifai constructor:
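A minimal sketch of both behaviors is shown below; the import path and the way the path argument is passed are assumptions, so adapt them to the actual API.

```python
from trustifai import Trustifai  # import path assumed

# Default: looks for config_file.yaml in the working directory
evaluator = Trustifai()

# Explicit: pass the path to any YAML configuration file (argument style assumed)
evaluator = Trustifai("configs/production.yaml")
```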
Any metric can be disabled by setting enabled: false. Disabled metrics are excluded from the weighted aggregation entirely, and the remaining weights do not need to be renormalized manually. TrustifAI also only initializes metrics that have a non-zero weight configured under score_weights.
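An illustrative excerpt combining both mechanisms follows; the metric names and the exact nesting are placeholders, and only the enabled flag and score_weights section come from the behavior described above.

```yaml
metrics:
  answer_relevance:
    enabled: true
  context_recall:
    enabled: false        # excluded from the weighted aggregation entirely

score_weights:
  answer_relevance: 0.7
  faithfulness: 0.3
  context_recall: 0.0     # zero-weight metrics are never initialized
```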