Drako generates structured compliance gap reports from real scan data. Each report maps findings to regulatory articles, shows your governance score, and includes fix snippets for every failed check.

What a compliance report contains

Every report includes:
  • Governance and determinism scores — letter grade (A–F) and numeric score (0–100)
  • Per-article pass/fail status — EU AI Act Articles 9, 11, 12, and 14
  • Findings — each finding includes the rule ID, severity, file location, article reference, and a fix snippet
  • Compliance summary — total findings by severity (CRITICAL, HIGH, MEDIUM, LOW)
Compliance rules (COM-001 through COM-006) are included in every scan alongside security and governance findings.

Generating reports

1. Run a scan

Run drako scan from your project root. Drako analyzes your Python source files using AST-based static analysis — no network connection required.
drako scan .

2. Review findings

The terminal report surfaces compliance findings alongside security and governance findings. Each COM rule shows the EU AI Act article it maps to and a ready-to-apply fix.
COM-001  No automatic logging                (src/main.py)
         EU AI Act Art. 12 (Record-keeping)
         Fix: from drako import with_compliance

COM-006  No HITL for high-risk actions       (.drako.yaml)
         EU AI Act Art. 14 (Human oversight)
         Fix: policies.hitl.mode: enforce

3. Export for auditors

Export machine-readable reports for your compliance team or regulators:
# JSON — for custom dashboards and automated pipelines
drako scan . --format json > compliance-report.json

# SARIF — for GitHub Code Scanning and security tooling
drako scan . --format sarif > compliance-report.sarif

Compliance rules

COM-001 — No automatic logging

Severity: HIGH | EU AI Act: Article 12 (Record-keeping)

High-risk AI systems must keep logs automatically. Logs must be retained for at least six months unless other law requires longer retention.

What Drako checks: scans Python source files for logging infrastructure patterns — audit_log, audit_trail, with_compliance, drako, GovernanceMiddleware, ComplianceMiddleware, structlog, and logging.getLogger.
Fails when: no logging infrastructure is detected in any Python source file.
Regulatory exposure: fines up to €15M or 3% of worldwide annual revenue.
from drako import with_compliance

# Drako middleware provides EU AI Act compliant audit logging automatically.
crew = with_compliance(my_crew)
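
If you are not using the Drako middleware, the check also recognizes standard logging infrastructure such as logging.getLogger. A minimal sketch using only the Python standard library (the logger name, file path, and log message are illustrative, not part of Drako):

```python
import logging

# Module-level audit logger; COM-001's scan recognizes the
# logging.getLogger pattern as logging infrastructure.
audit_log = logging.getLogger("audit")
audit_log.setLevel(logging.INFO)

# Persist records to a file so they can be retained
# (Art. 12 requires at least six months of retention).
handler = logging.FileHandler("audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
audit_log.addHandler(handler)

# One structured line per agent decision.
audit_log.info("agent_decision tool=search input_hash=abc123")
```

This satisfies the pattern scan, but the Drako middleware additionally structures log records around agent actions, which plain logging does not do for you.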

COM-002 — No human oversight mechanism

Severity: HIGH | EU AI Act: Article 14 (Human oversight)

High-risk AI systems must be designed to allow effective human oversight. Humans must be able to intervene and override decisions.

What Drako checks: scans Python source files for human oversight patterns — human_in_the_loop, hitl, require_approval, human_approval, ask_human, manual_review, review_queue, and supervisor.
Fails when: agents exist in the project but no human oversight mechanism is detected.
# .drako.yaml
policies:
  hitl:
    mode: enforce
    triggers:
      tool_types: [write, execute, payment]
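
If you implement your own checkpoint instead of (or in addition to) the policy config, the scanner detects oversight by name. A minimal sketch using one of the detected names, ask_human (the prompt logic and the delete_record helper are illustrative):

```python
def ask_human(action: str, payload: dict) -> bool:
    """Blocking human checkpoint: an operator must approve
    before the agent proceeds (EU AI Act Art. 14)."""
    print(f"Agent requests: {action} with {payload}")
    answer = input("Approve? [y/N] ").strip().lower()
    return answer == "y"

def delete_record(record_id: str) -> None:
    # Destructive action: require explicit approval first.
    if not ask_human("delete_record", {"id": record_id}):
        raise PermissionError("Human reviewer rejected the action")
    # ... perform the deletion only after approval
```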

COM-003 — No technical documentation

Severity: MEDIUM | EU AI Act: Article 11 (Technical documentation)

Before placing a high-risk AI system on the market, providers must draw up technical documentation demonstrating the system meets requirements.

What Drako checks: looks for a non-empty docs/ directory, a README.md referencing AI components, or ARCHITECTURE.md.
mkdir -p docs
# Add docs/architecture.md, docs/agents.md, docs/risk-assessment.md
drako bom .   # generate agent inventory automatically

COM-004 — No risk management documentation

Severity: MEDIUM | EU AI Act: Article 9 (Risk management system)

Providers of high-risk AI systems must implement a risk management system covering the entire lifecycle.

What Drako checks: looks for RISK_ASSESSMENT.md, docs/risk-assessment.md, docs/risks.md, and config or doc content referencing risk_assessment, risk_management, risk_level, or threat_model.

Create RISK_ASSESSMENT.md covering:
  1. Known and foreseeable risks (misuse, technical failures, safety)
  2. Risk estimation and evaluation
  3. Risk mitigation measures
  4. Residual risk after mitigation
  5. Agent-specific risks (tool access, data handling, autonomous decisions)

COM-005 — No agent BOM / inventory

Severity: MEDIUM | Reference: OWASP LLM Top 10

Without a component inventory, you cannot track which AI models, tools, and permissions your agents use, making vulnerability response impossible.

What Drako checks: looks for .drako.yaml, agent-bom.json, or AGENT_BOM.md.
pip install drako
drako init        # creates .drako.yaml with agent inventory
drako bom .       # standalone BOM in text, JSON, or Markdown

COM-006 — No HITL for high-risk actions

Severity: CRITICAL | EU AI Act: Article 14 (Human oversight)

Humans must retain meaningful control over high-risk AI decisions. Autonomous execution of destructive actions without a checkpoint is a direct Art. 14 violation.

What Drako checks: identifies tools with side-effect names (delete, write, send, pay, execute, deploy, publish, etc.) and checks whether HITL is configured for them.
Regulatory exposure: liability for autonomous AI harm; enforcement actions under the EU AI Act.
# .drako.yaml
policies:
  hitl:
    mode: enforce
    triggers:
      tool_types:
        - write
        - execute
        - payment
      trust_score_below: 60
      spend_above_usd: 100.00
    approval_timeout_minutes: 30
    timeout_action: reject

Compliance scoring

Compliance findings contribute to the overall governance score:
Severity    Score deduction
CRITICAL    −20 points
HIGH        −10 points
MEDIUM      −5 points
LOW         −2 points
Grades: A (90–100) · B (75–89) · C (60–74) · D (40–59) · F (0–39)

A project failing COM-001 (HIGH) and COM-006 (CRITICAL) loses 30 points before counting any security or governance findings.
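
The deduction scheme above can be sketched as a small helper (the severity-list input format is illustrative; Drako computes this internally during a scan):

```python
# Per-finding deductions and grade bands from the scoring table.
DEDUCTIONS = {"CRITICAL": 20, "HIGH": 10, "MEDIUM": 5, "LOW": 2}
GRADES = [(90, "A"), (75, "B"), (60, "C"), (40, "D"), (0, "F")]

def governance_score(severities: list[str]) -> tuple[int, str]:
    """Start at 100, subtract per finding, floor at 0, map to a grade."""
    score = max(0, 100 - sum(DEDUCTIONS[s] for s in severities))
    grade = next(g for floor, g in GRADES if score >= floor)
    return score, grade

# COM-001 (HIGH) + COM-006 (CRITICAL) = 30 points lost.
print(governance_score(["HIGH", "CRITICAL"]))  # (70, 'C')
```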

SARIF format and GitHub Code Scanning

SARIF output is compatible with GitHub Code Scanning. Upload the results file to get inline PR annotations on the exact lines where compliance issues are found.
# .github/workflows/drako.yml
name: Drako Governance
on: [push, pull_request]
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with: { python-version: "3.12" }
      - run: pip install drako
      - run: drako scan . --format sarif > results.sarif
      - run: drako scan . --fail-on critical
      - uses: github/codeql-action/upload-sarif@v3
        with: { sarif_file: results.sarif }
        if: always()
Baselined findings appear with "baselineState": "unchanged" in SARIF output — they won’t block CI but are still visible in Code Scanning.
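
Because SARIF is a standard JSON format, the output can also be post-processed without Drako, for example to list only the findings not already covered by a baseline. A sketch (field names follow the SARIF 2.1.0 specification; the file path is illustrative):

```python
import json

def new_findings(sarif_path: str) -> list[str]:
    """Return rule IDs of results not marked baselineState: unchanged."""
    with open(sarif_path) as f:
        sarif = json.load(f)
    return [
        result["ruleId"]
        for run in sarif.get("runs", [])
        for result in run.get("results", [])
        if result.get("baselineState") != "unchanged"
    ]
```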

CI/CD integration

Gate deployments on compliance status:
# Fail on any critical findings (including COM-006)
drako scan . --fail-on critical

# Fail if determinism score drops below 60
drako scan . --threshold-det 60

# Both — fail on critical findings OR low determinism score
drako scan . --fail-on critical --threshold-det 60

Exporting for auditors and regulators

The JSON output includes a compliance field with per-article status that auditors and regulators can read directly:
{
  "score": 72,
  "grade": "C",
  "compliance": {
    "eu_ai_act": {
      "art_9": "FAIL",
      "art_11": "PASS",
      "art_12": "FAIL",
      "art_14": "FAIL"
    }
  },
  "findings": [...]
}
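
An audit pipeline can gate on the per-article statuses directly. A minimal sketch, assuming only the JSON shape shown above (the file path and helper name are illustrative):

```python
import json

def failed_articles(report_path: str) -> list[str]:
    """Return EU AI Act articles whose status is FAIL."""
    with open(report_path) as f:
        report = json.load(f)
    articles = report.get("compliance", {}).get("eu_ai_act", {})
    return sorted(a for a, status in articles.items() if status == "FAIL")
```

For the example report above, this returns art_9, art_12, and art_14, which a script could use to fail a pipeline or open remediation tickets per article.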
For existing projects with known compliance gaps, use the baseline workflow to focus remediation on new issues:
# Save current state as baseline (known gaps)
drako scan . --baseline

# From now on, only new gaps are flagged in CI
drako scan .

# Governance score still reflects ALL findings — real posture, not filtered
The eu-ai-act template pre-configures all four Article requirements. Run drako init --template eu-ai-act to reach a compliant baseline immediately.
