Drako is configured via a .drako.yaml file in your project root. Generate an initial config from your scan results with drako init.

Generating the config

1. Run a scan

Drako uses your scan results to pre-populate the config with your real agents, tools, and recommended policies.
drako scan .
2. Run drako init

Choose a governance level. All levels start from the same autopilot base — --balanced and --strict apply progressively stricter overrides.
drako init                     # autopilot (default) — audit-first
drako init --balanced          # enforcement active with escape hatches
drako init --strict            # maximum governance for enterprise
drako init --manual            # full YAML with all sections
drako init --template fintech  # start from an industry template
3. Add governance to your code

Wrap your agent with one line:
from drako import govern
crew = govern(crew)
See Autopilot mode for how drako init generates smart defaults from your scan, and Policy templates for available industry presets.

Top-level fields

version: "1.0"                     # Config schema version
governance_level: autopilot        # autopilot | balanced | strict | custom
extends: fintech                   # Inherit from a policy template (optional)
tenant_id: your_tenant_id          # Required for runtime enforcement
api_key_env: DRAKO_API_KEY         # Env var name for the API key
endpoint: https://api.getdrako.com # Drako API endpoint
framework: crewai                  # crewai | langgraph | autogen | generic
version
string
required
Config schema version. Use "1.0".
governance_level
string
default: "custom"
Controls the upgrade path when running drako upgrade.
| Value | Behavior |
| --- | --- |
| autopilot | Audit mode. Logs all violations, blocks nothing. Upgrade path: → balanced → strict |
| balanced | DLP enforce, ODD enforce, HITL rejects on timeout |
| strict | + intent verification, cryptographic audit, magnitude enforce |
| custom | No managed upgrade path. You control every field. |
extends
string
Inherit all policy settings from a named template, then override only what you need. Available values: base · startup · fintech · healthcare · eu-ai-act · enterprise.
extends: fintech
governance_level: balanced

# Override just this one setting:
policies:
  hitl:
    approval_timeout_minutes: 60
tenant_id
string
required
Your Drako tenant identifier. Required for runtime enforcement. Automatically populated by drako init.
api_key_env
string
default:"DRAKO_API_KEY"
The name of the environment variable Drako reads for the API key. For CI/CD, set this as a secret and omit the api_key field from the YAML entirely.

Priority order:
  1. Environment variable named by api_key_env
  2. api_key field stored directly in .drako.yaml
endpoint
string
default:"https://api.getdrako.com"
The Drako API endpoint. Override for self-hosted deployments.
framework
string
default:"generic"
The agent framework in use. Drako auto-detects this during drako init. Accepted values: crewai · langgraph · autogen · generic.

agents

Declares the agents in your project. Populated automatically by drako init from scan results.
agents:
  researcher:
    source: agents/researcher.py
    description: "Searches the web and reads documents"
  writer:
    source: agents/writer.py
    description: "Drafts reports and sends emails"
| Field | Type | Description |
| --- | --- | --- |
| source | string | Path to the agent's source file |
| description | string | Human-readable description (optional) |

tools

Declares tools and their access types. Used for ODD enforcement and scan reporting.
tools:
  web_search:
    type: read
  file_reader:
    type: read
  send_email:
    type: write
  code_runner:
    type: execute
  pay_invoice:
    type: payment
| Type | Risk level | Description |
| --- | --- | --- |
| read | Low | Read-only operations |
| write | Medium | Creates or modifies data |
| execute | High | Runs code or shell commands |
| network | Medium | Makes external HTTP calls |
| payment | Critical | Initiates financial transactions |

policies

odd

Restrict which tools each agent can use.
policies:
  odd:
    enforcement_mode: audit      # audit | enforce | off
    default_policy: allow        # allow | deny
    agents:
      researcher:
        permitted_tools: [web_search, file_reader]
        forbidden_tools: [code_runner, send_email]
      writer:
        permitted_tools: [send_email, file_reader]
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enforcement_mode | string | audit | audit logs violations; enforce blocks them |
| default_policy | string | allow | What to do when no agent rule matches |
| agents.<name>.permitted_tools | list[string] | [] | Allowlist: any tool not listed is blocked |
| agents.<name>.forbidden_tools | list[string] | [] | Blocklist: listed tools are always blocked |

When both permitted_tools and forbidden_tools are set for an agent, forbidden_tools takes precedence.
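The precedence rule works out to a decision function like this (an illustrative sketch, not Drako's actual implementation):

```python
def odd_allows(agent_policy: dict, tool: str, default_policy: str = "allow") -> bool:
    """Decide whether one agent may call a tool under the ODD policy."""
    if tool in agent_policy.get("forbidden_tools", []):
        return False                      # blocklist always wins
    permitted = agent_policy.get("permitted_tools", [])
    if permitted:
        return tool in permitted          # allowlist: unlisted tools are blocked
    return default_policy == "allow"      # no agent rule matched

researcher = {"permitted_tools": ["web_search", "file_reader"],
              "forbidden_tools": ["code_runner", "send_email"]}
# odd_allows(researcher, "web_search")  -> True
# odd_allows(researcher, "pay_invoice") -> False (not on the allowlist)
```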
dlp

Scan tool inputs and outputs for PII/PCI data. Detected entity types (Presidio-based): SSN, credit card numbers, email addresses, phone numbers, passport numbers, and more.
policies:
  dlp:
    mode: enforce        # audit | enforce | off
    sensitivity: high    # low | medium | high
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| mode | string | audit | audit logs PII; enforce blocks the call |
| sensitivity | string | medium | DLP sensitivity level: higher values reduce false negatives but increase false positives |
circuit_breaker

Prevents failures in one agent from cascading to the rest of the system.
policies:
  circuit_breaker:
    agent_level:
      failure_threshold: 5        # Open circuit after N failures
      time_window_seconds: 60     # Sliding window
      recovery_timeout_seconds: 30 # Cooldown before half-opening
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| failure_threshold | int | 10 | Number of failures before opening the circuit |
| time_window_seconds | int | 300 | Sliding window for failure counting |
| recovery_timeout_seconds | int | 60 | Cooldown before allowing trial requests |
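The open/half-open lifecycle these three fields describe can be sketched as a minimal sliding-window breaker (the class and method names here are hypothetical, not the Drako SDK):

```python
import time
from collections import deque

class CircuitBreaker:
    """Sliding-window circuit breaker: open after N failures in the window,
    then allow a trial request once the recovery timeout has elapsed."""

    def __init__(self, failure_threshold=10, time_window_seconds=300,
                 recovery_timeout_seconds=60):
        self.failure_threshold = failure_threshold
        self.time_window = time_window_seconds
        self.recovery_timeout = recovery_timeout_seconds
        self.failures = deque()      # timestamps of recent failures
        self.opened_at = None        # None while the circuit is closed

    def record_failure(self, now=None):
        now = now if now is not None else time.monotonic()
        self.failures.append(now)
        # Drop failures that fell out of the sliding window.
        while self.failures and now - self.failures[0] > self.time_window:
            self.failures.popleft()
        if len(self.failures) >= self.failure_threshold:
            self.opened_at = now     # open the circuit

    def allows_request(self, now=None):
        now = now if now is not None else time.monotonic()
        if self.opened_at is None:
            return True
        if now - self.opened_at >= self.recovery_timeout:
            self.opened_at = None    # half-open: permit a trial request
            self.failures.clear()
            return True
        return False
```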
hitl

Pause agent execution and require human approval before proceeding. Implements EU AI Act Article 14.
policies:
  hitl:
    mode: enforce                  # audit | enforce | off
    triggers:
      tool_types: [write, execute, payment]
      tools: [delete_database, send_wire_transfer]
      trust_score_below: 60
      spend_above_usd: 100.00
      records_above: 1000
      first_time_tool: false
      first_time_action: false
    notification:
      webhook_url: https://hooks.slack.com/...
      email: [email protected]
    approval_timeout_minutes: 30
    timeout_action: reject         # reject | allow
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| mode | string | off | enforce pauses execution; audit logs without pausing |
| triggers.tool_types | list[string] | [] | Trigger HITL for any tool of these types |
| triggers.tools | list[string] | [] | Trigger HITL for specific named tools |
| triggers.trust_score_below | float\|null | null | Trigger when agent trust score drops below this value |
| triggers.spend_above_usd | float\|null | null | Trigger when session spend exceeds this amount |
| triggers.records_above | int\|null | null | Trigger when a tool accesses more than N records |
| triggers.first_time_tool | bool | false | Trigger on first-ever use of any tool |
| triggers.first_time_action | bool | false | Trigger on first action in a new session |
| approval_timeout_minutes | int | 30 | How long to wait for human response |
| timeout_action | string | reject | What to do if no response arrives: reject (safe) or allow (permissive) |
Setting timeout_action: allow means unanswered approval requests let the action proceed. Use reject in production environments.
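Triggers are independent: a call pauses if any configured trigger fires. A minimal sketch of that evaluation (hitl_required is a hypothetical helper covering a subset of the triggers):

```python
def hitl_required(triggers: dict, tool: str, tool_type: str,
                  session_spend_usd: float, trust_score: float) -> bool:
    """Return True when any configured HITL trigger fires."""
    if tool_type in triggers.get("tool_types", []):
        return True
    if tool in triggers.get("tools", []):
        return True
    below = triggers.get("trust_score_below")
    if below is not None and trust_score < below:
        return True
    spend = triggers.get("spend_above_usd")
    if spend is not None and session_spend_usd > spend:
        return True
    return False

triggers = {"tool_types": ["payment"], "spend_above_usd": 100.0}
# A cheap read-only call does not pause; any payment call always does,
# and so does any call once session spend passes 100 USD.
```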
magnitude

Cap how much an agent can spend or how many records it can access in a single action or session.
policies:
  magnitude:
    max_spend_per_action_usd: 10.00
    max_spend_per_session_usd: 100.00
    max_records_per_action: 50
    enforcement_mode: enforce       # audit | enforce
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_spend_per_action_usd | float | – | Max cost of a single tool call |
| max_spend_per_session_usd | float | – | Max cumulative session spend |
| max_records_per_action | int | – | Max records returned by a single tool call |
| enforcement_mode | string | audit | enforce blocks calls that exceed limits |
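A sketch of the limit check (illustrative; magnitude_violation is a hypothetical helper, and unset limits are simply skipped):

```python
def magnitude_violation(policy: dict, action_spend_usd: float,
                        session_spend_usd: float, records: int):
    """Return the name of the first exceeded limit, or None."""
    checks = [
        ("max_spend_per_action_usd", action_spend_usd),
        ("max_spend_per_session_usd", session_spend_usd),
        ("max_records_per_action", records),
    ]
    for field, value in checks:
        limit = policy.get(field)
        if limit is not None and value > limit:
            return field
    return None

policy = {"max_spend_per_action_usd": 10.00,
          "max_spend_per_session_usd": 100.00,
          "max_records_per_action": 50}
# A 3 USD call returning 10 records passes; a 12 USD call trips the
# per-action spend cap.
```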
audit

Configure the tamper-evident audit log.
policies:
  audit:
    enabled: true
    cryptographic: true            # SHA-256 hash chain + Ed25519 signatures
    retention_days: 365
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | true | Enable audit logging |
| cryptographic | bool | false | Enable SHA-256 hash chain + Ed25519 digital signatures |
| retention_days | int | 7 | How long to retain audit records |
Enable cryptographic: true for any compliance regime that requires tamper-evident records (SOX, MiFID II, HIPAA, EU AI Act Art. 12).
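The tamper-evidence comes from each record's hash committing to the previous one. A minimal SHA-256 hash-chain sketch (function names are hypothetical, and the Ed25519 signatures are omitted):

```python
import hashlib
import json

def append_record(chain: list, record: dict) -> dict:
    """Append a record whose hash commits to the previous entry."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    entry = {"record": record, "prev_hash": prev_hash, "hash": entry_hash}
    chain.append(entry)
    return entry

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any tampered record breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256((prev_hash + payload).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True
```

Editing any earlier record changes its hash, which no longer matches the prev_hash stored in the next entry, so verification fails for the whole chain.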
intent_verification

Require a signed intent token before allowing high-risk tool calls. Prevents prompt injection from hijacking approved actions.
policies:
  intent_verification:
    mode: enforce                  # audit | enforce | off
    required_for:
      tool_types: [payment, write, execute]
      tools: [delete_record]
    anti_replay: true
    intent_ttl_seconds: 300
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| mode | string | off | enforce blocks calls without a valid intent token |
| required_for.tool_types | list[string] | [payment, write, execute] | Tool types that require intent tokens |
| required_for.tools | list[string] | [] | Specific named tools that require intent tokens |
| anti_replay | bool | true | Reject reused intent tokens |
| intent_ttl_seconds | int | 300 | Token validity window in seconds |
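How the TTL and anti-replay settings interact can be sketched as follows (an illustrative stand-in; the signature check itself is omitted and the class is hypothetical):

```python
import time

class IntentVerifier:
    """TTL and anti-replay checks for intent tokens."""

    def __init__(self, ttl_seconds=300, anti_replay=True):
        self.ttl = ttl_seconds
        self.anti_replay = anti_replay
        self.seen = set()            # token ids already consumed

    def accept(self, token_id: str, issued_at: float, now=None) -> bool:
        now = now if now is not None else time.time()
        if now - issued_at > self.ttl:
            return False             # token expired (outside the TTL window)
        if self.anti_replay and token_id in self.seen:
            return False             # token already used once
        self.seen.add(token_id)
        return True
```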
hooks

Run custom scripts at governance checkpoints.
policies:
  hooks:
    pre_action:
      - name: validate_input
        condition: "tool_type == 'execute'"
        script: scripts/validate.py
        timeout_ms: 5000
        action_on_fail: block   # block | allow
        priority: 0
    post_action:
      - name: log_to_siem
        script: scripts/siem_export.py
    on_error:
      - name: alert_oncall
        script: scripts/alert.py
    on_session_end:
      - name: cost_report
        script: scripts/cost_report.py
Hook entry fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | string | – | Hook identifier |
| condition | string\|null | null | Expression that must be true to trigger the hook |
| script | string\|null | null | Path to the hook script |
| timeout_ms | int | 5000 | Max execution time before the hook is skipped |
| action_on_fail | string | allow | block or allow when the hook fails or times out |
| priority | int | 0 | Execution order when multiple hooks match (lower runs first) |
Hook types: pre_action · post_action · on_error · on_session_end
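Condition matching and priority ordering can be sketched like this (illustrative only: conditions are evaluated with eval() against a context dict purely for demonstration; a real engine would use a safe expression language):

```python
def runnable_hooks(hooks: list, context: dict) -> list:
    """Select hooks whose condition matches, ordered by priority
    (lower runs first; hooks with no condition always match)."""
    matched = [
        h for h in hooks
        if h.get("condition") is None or eval(h["condition"], {}, context)
    ]
    return sorted(matched, key=lambda h: h.get("priority", 0))

hooks = [
    {"name": "log_to_siem", "priority": 5},
    {"name": "validate_input", "condition": "tool_type == 'execute'", "priority": 0},
]
# For an execute call, validate_input (priority 0) runs before log_to_siem;
# for a read call, only the unconditional log_to_siem hook matches.
```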
finops

Track, route, cache, and budget LLM spending.
policies:
  finops:
    tracking:
      enabled: true
      model_costs:
        gpt-4o:
          input: 0.0025
          output: 0.01
    routing:
      enabled: true
      default_model: gpt-4o
      rules:
        - condition: "task_complexity == 'low'"
          model: gpt-4o-mini
          reason: "Use cheaper model for simple tasks"
    cache:
      enabled: true
      similarity_threshold: 0.92
      ttl_hours: 24
    budgets:
      daily_usd: 50.00
      weekly_usd: 250.00
      monthly_usd: 1000.00
      alert_at_percent: [50, 80, 95]
tracking fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | true | Enable cost tracking |
| model_costs | dict | {} | Per-model input/output costs (USD per 1K tokens) |
routing fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable model routing based on rules |
| default_model | string | gpt-4o | Model used when no routing rule matches |
| rules | list | [] | Routing rules: condition, model, reason |
cache fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | false | Enable semantic response caching |
| similarity_threshold | float | 0.92 | Cosine similarity threshold for cache hits |
| ttl_hours | int | 24 | Cache entry expiry |
budgets fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| daily_usd | float\|null | null | Daily spend budget |
| weekly_usd | float\|null | null | Weekly spend budget |
| monthly_usd | float\|null | null | Monthly spend budget |
| alert_at_percent | list[int] | [50, 80, 95] | Trigger alerts at these budget consumption percentages |
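The alert thresholds work as percentages of budget consumed. A sketch of the check (crossed_alerts is a hypothetical helper):

```python
def crossed_alerts(spend_usd: float, budget_usd: float,
                   alert_at_percent=(50, 80, 95)) -> list:
    """Return the alert thresholds the current spend has crossed."""
    used = 100 * spend_usd / budget_usd
    return [p for p in alert_at_percent if used >= p]

# 42 USD of a 50 USD daily budget is 84% used, so the 50% and 80%
# alerts have fired but the 95% alert has not:
# crossed_alerts(42.00, 50.00) -> [50, 80]
```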
a2a

Authenticate and authorize inter-agent message passing. Enterprise feature.
policies:
  a2a:
    mode: enforce                  # audit | enforce | off
    auth:
      method: did_exchange         # did_exchange | mtls | shared_secret
      auto_rotate: true
      rotation_hours: 24
    channels:
      - from: researcher
        to: writer
        allowed_message_types: [task_result, context_update]
        max_payload_size_kb: 500
        require_intent_verification: false
      - from: "*"
        to: payment_agent
        policy: deny               # Explicit deny rule
    worm_detection:
      enabled: true
      scan_inter_agent_messages: true
      max_propagation_depth: 3
      circular_reference_block: true
auth fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| method | string | did_exchange | Authentication method: did_exchange, mtls, or shared_secret |
| auto_rotate | bool | true | Automatically rotate credentials |
| rotation_hours | int | 24 | Credential rotation interval |
channels entry fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| from | string | * | Source agent name, or "*" for any |
| to | string | * | Destination agent name, or "*" for any |
| allowed_message_types | list[string] | [] | Permitted message types |
| max_payload_size_kb | int | 500 | Maximum message payload size |
| require_intent_verification | bool | false | Require intent tokens for this channel |
| policy | string\|null | null | Set to deny for an explicit block rule |
worm_detection fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| enabled | bool | true | Enable worm/cascade detection |
| scan_inter_agent_messages | bool | true | Scan messages for injection payloads |
| max_propagation_depth | int | 3 | Maximum message chain depth before blocking |
| circular_reference_block | bool | true | Block circular agent call chains |
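The two structural checks, depth and circularity, can be sketched as a per-hop gate (illustrative; message_allowed and the path representation are assumptions, and payload scanning is omitted):

```python
def message_allowed(path: list, recipient: str, max_propagation_depth=3,
                    circular_reference_block=True) -> bool:
    """Check one message hop. path is the chain of agents the message
    has already traversed, oldest first."""
    if circular_reference_block and recipient in path:
        return False                 # would close a circular call chain
    if len(path) >= max_propagation_depth:
        return False                 # chain already at maximum depth
    return True

# researcher -> writer is fine; writer sending back to researcher
# closes a cycle and is blocked; a chain of three agents cannot
# propagate further under the default depth of 3.
```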
topology

Detect dangerous interaction patterns between agents. Enterprise feature.
policies:
  topology:
    enabled: true
    conflict_detection:
      resource_contention: true
      contradictory_actions: true
      cascade_amplification: true
      resource_exhaustion: true
    alert_on:
      - circular_dependency
      - resource_contention
conflict_detection fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| resource_contention | bool | true | Detect multiple agents competing for the same resource |
| contradictory_actions | bool | true | Detect agents taking conflicting actions |
| cascade_amplification | bool | true | Detect amplifying cascade patterns |
| resource_exhaustion | bool | true | Detect agents consuming resources to exhaustion |
alert_on accepts a list of pattern names: circular_dependency · resource_contention · cascade_amplification · resource_exhaustion.
fallback

Define what to do when a tool fails or a circuit breaker opens.
policies:
  fallback:
    mode: enforce                  # audit | enforce | off
    tools:
      web_search:
        fallback_agent: researcher_backup
        fallback_action: escalate_human
        triggers: [circuit_breaker_open]
    default:
      fallback_action: escalate_human
      preserve_state: true
      state_ttl_hours: 24
Per-tool fallback fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| fallback_agent | string\|null | null | Delegate to this agent on failure |
| fallback_action | string | escalate_human | Action to take: escalate_human, or a custom script |
| triggers | list[string] | [circuit_breaker_open] | Conditions that activate this fallback |
default fallback fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| fallback_action | string | escalate_human | Default action when no tool-specific rule matches |
| preserve_state | bool | true | Save session state so it can be resumed |
| state_ttl_hours | int | 24 | How long to retain preserved state |
chaos

Inject controlled failures to test fallback and recovery behavior. Enterprise feature.
policies:
  chaos:
    safety:
      max_blast_radius: 1
      auto_rollback_on_failure: true
      require_approval: true
    experiments:
      - name: web_search_latency
        description: "Simulate slow search API"
        target_tool: web_search
        fault_type: latency
        latency_ms: 2000
        duration_seconds: 60
      - name: deny_code_runner
        target_tool: code_runner
        fault_type: tool_deny
        duration_seconds: 120
safety fields:
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| max_blast_radius | int | 1 | Maximum number of simultaneous experiments |
| auto_rollback_on_failure | bool | true | Automatically stop experiments that cause real failures |
| require_approval | bool | true | Require human approval before starting an experiment |
experiments entry fields:
| Field | Type | Description |
| --- | --- | --- |
| name | string | Experiment identifier |
| description | string | Human-readable description |
| target_tool | string\|null | Tool to target |
| target_agent | string\|null | Agent to target |
| fault_type | string | latency, tool_deny, or other fault types |
| latency_ms | int\|null | Injected latency in milliseconds (for the latency fault type) |
| duration_seconds | int | How long the experiment runs |
Run chaos experiments only in non-production environments unless require_approval: true is set and your team has an incident response plan in place.

Complete example

version: "1.0"
governance_level: balanced
tenant_id: ten_abc123
api_key_env: DRAKO_API_KEY
framework: crewai

agents:
  researcher:
    source: agents/researcher.py
  writer:
    source: agents/writer.py

tools:
  web_search:
    type: read
  file_reader:
    type: read
  send_email:
    type: write
  code_runner:
    type: execute

policies:
  odd:
    enforcement_mode: enforce
    default_policy: deny
    agents:
      researcher:
        permitted_tools: [web_search, file_reader]
      writer:
        permitted_tools: [send_email, file_reader]

  dlp:
    mode: enforce

  hitl:
    mode: enforce
    triggers:
      tool_types: [write, execute, payment]
      spend_above_usd: 100.00
    timeout_action: reject
    approval_timeout_minutes: 30

  circuit_breaker:
    agent_level:
      failure_threshold: 5
      time_window_seconds: 60
      recovery_timeout_seconds: 30

  audit:
    enabled: true
    cryptographic: true
    retention_days: 90

  finops:
    tracking:
      enabled: true
    budgets:
      daily_usd: 20.00
      alert_at_percent: [80, 95]
