Drako ships with 25 security advisories in the DRAKO-ABSS format — Agent Behavioral Security Standard. Advisories describe vulnerabilities that emerge from how agents are configured and behave, not just from bugs in code.

What DRAKO-ABSS is

CVE tracks software vulnerabilities. DRAKO-ABSS tracks agent behavioral vulnerabilities — problems that emerge from how an agent uses its tools, processes untrusted input, or interacts with other agents. These risks live in configuration and behavior, not necessarily in a single exploitable line of code. Each advisory includes:
  • Affected frameworks and exploitable conditions
  • IOC patterns with normalized SHA-256 hashes for runtime matching
  • Taint path: source → via → sink
  • References to OWASP, MITRE ATLAS, and CVEs
  • Mapping to Drako scan rules that detect the pattern
  • Remediation guidance with effort estimates
DRAKO-ABSS advisories are published under CC-BY-4.0. To contribute a new advisory, submit a YAML file following the schema to the Drako repository.

Coverage

OWASP Top 10 for LLMs

All 10 categories from the OWASP Top 10 for LLM Applications 2025 — LLM01 (Prompt Injection) through LLM10 (Unbounded Consumption).

MITRE ATLAS

Adversarial ML tactics and techniques including AML.T0051 (Prompt Injection), AML.T0054 (LLM Jailbreak), AML.T0040 (Model Extraction), AML.T0020 (Training Data Poisoning).

Framework CVEs

Real CVEs and documented vulnerabilities in CrewAI, LangChain (CVE-2023-36258, CVE-2023-29374, CVE-2023-36189), and AutoGen.

Prompt injection patterns

Documented injection attack patterns: DAN jailbreaks, system prompt extraction, indirect injection via external data, multi-turn context manipulation.

How advisories appear in scan output

Advisories are linked inline to the relevant scan finding:
SEC-007  Prompt injection vulnerability       (agents/researcher.py)
         Related: DRAKO-ABSS-2026-001 — Prompt Injection via Direct and Indirect Instruction Override
         Ref: OWASP LLM01:2025, MITRE AML.T0051
Each advisory is matched to findings via the drako_rules field in its YAML — when a finding’s rule ID appears in that list, the advisory surfaces in output.
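The lookup can be sketched in a few lines of Python. The advisory dict below is a hypothetical stand-in for a parsed ABSS YAML file, and build_rule_index is an illustrative helper, not part of the Drako API:

```python
def build_rule_index(advisories: list[dict]) -> dict[str, list[str]]:
    """Map each Drako rule ID to the IDs of advisories that reference it."""
    index: dict[str, list[str]] = {}
    for adv in advisories:
        for rule_id in adv["mitigation"]["drako_rules"]:
            index.setdefault(rule_id, []).append(adv["id"])
    return index

# Hypothetical parsed advisory, reduced to the fields the lookup needs.
advisories = [
    {"id": "DRAKO-ABSS-2026-001",
     "mitigation": {"drako_rules": ["SEC-007", "SEC-008", "SEC-010"]}},
]

index = build_rule_index(advisories)
print(index["SEC-007"])  # ['DRAKO-ABSS-2026-001']
```

With the index built once per scan, surfacing advisories for a finding is a constant-time dictionary lookup on the finding's rule ID.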

ABSS format

Each advisory is a YAML file with the following schema:
id: DRAKO-ABSS-2026-001          # Unique advisory ID (year-sequential)
title: "Short descriptive title"
category: owasp-llm               # owasp-llm | mitre-atlas | framework-cve | prompt-injection
severity: 9                       # 1–10 (10 = most severe)
confidence: 0.95                  # 0.0–1.0 match confidence

affected:
  frameworks: [crewai, langchain] # Affected AI agent frameworks
  conditions:                     # Conditions that make the vulnerability exploitable
    - "Condition description"

ioc:
  type: PROMPT_INJECTION          # IOC category
  patterns:                       # Detectable patterns (strings, regexes)
    - "pattern string"
  pattern_hashes:                 # SHA-256 of normalized patterns (lowercase, stripped)
    - "hex_hash_string"

taint_path:
  source: "user_input"            # Where the attack originates
  sink: "system_prompt_disclosure" # What gets compromised
  via: ["step1", "step2"]         # Attack chain steps

references:
  - type: owasp                   # owasp | mitre_atlas | cve | research | github_issue
    id: "LLM01:2025"
    url: "https://..."

mitigation:
  drako_rules: [SEC-007, SEC-008] # Drako scan rules that detect this
  description: "How to remediate"
  remediation_effort: low         # low | moderate | significant

metadata:
  published: "2026-03-20"
  updated: "2026-03-20"
  author: "Drako Security Research"
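A minimal structural check for an advisory might look like the following. It assumes the file has already been loaded into a dict (e.g. with a YAML parser); the validator and the specific checks are an illustrative sketch, not the validation Drako itself performs:

```python
# Top-level keys taken from the ABSS schema above.
REQUIRED_KEYS = {"id", "title", "category", "severity", "confidence",
                 "affected", "ioc", "taint_path", "references",
                 "mitigation", "metadata"}

def validate_advisory(adv: dict) -> list[str]:
    """Return a list of problems; an empty list means the advisory passes."""
    problems = [f"missing key: {k}" for k in sorted(REQUIRED_KEYS - adv.keys())]
    if not 1 <= adv.get("severity", 0) <= 10:
        problems.append("severity must be 1-10")
    if not 0.0 <= adv.get("confidence", -1.0) <= 1.0:
        problems.append("confidence must be 0.0-1.0")
    if not str(adv.get("id", "")).startswith("DRAKO-ABSS-"):
        problems.append("id must start with DRAKO-ABSS-")
    return problems
```

A real validator would also check nested fields (e.g. that mitigation.drako_rules is a non-empty list), but the pattern is the same: accumulate problems rather than failing on the first one.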

IOC types

DRAKO-ABSS defines the following AI-native Indicator of Compromise (IOC) types for runtime matching:
IOC type              Description
PROMPT_INJECTION      Patterns that attempt to override agent instructions or system prompts
JAILBREAK             Patterns designed to remove or bypass model safety constraints
INDIRECT_INJECTION    Adversarial instructions embedded in external content retrieved by tools
CONTEXT_MANIPULATION  Patterns that exploit multi-turn conversation history to drift agent behavior
TOOL_ABUSE            Patterns that manipulate agents into invoking tools with dangerous arguments
DATA_POISONING        Patterns that introduce malicious content into training or knowledge stores
OUTPUT_INJECTION      Malicious content in LLM output targeting downstream rendering or execution
SUPPLY_CHAIN          Indicators of compromised dependencies, plugins, or model artifacts
RESOURCE_EXHAUSTION   Inputs designed to consume unbounded compute, tokens, or cost
API_ABUSE             Systematic query patterns targeting model extraction or inference abuse
TOOL_INJECTION        Adversarial payloads embedded in tool return values
DELEGATION_ABUSE      Patterns that exploit agent delegation to escalate privileges
EXCESSIVE_AGENCY      Configurations granting agents unrestricted autonomous execution
DATA_LEAKAGE          Patterns that trigger accidental exposure of credentials or sensitive data
ADVERSARIAL_INPUT     Unicode and encoding exploits that evade content filters
CODE_EXECUTION        Patterns that trigger arbitrary code execution via deserialization or injection
SQL_INJECTION         Patterns that exploit natural-language-to-SQL conversion
OVERRELIANCE          Configurations where LLM output is used without verification
INSECURE_PLUGIN       Plugin or tool designs enabling arbitrary execution

Advisory catalogue

The advisories below map to all 10 categories of the OWASP Top 10 for LLM Applications 2025.
Severity: 9/10 · Confidence: 0.95 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
User input concatenated directly into system or agent prompts without sanitization. An attacker can override agent instructions by injecting directives into the user-controlled portion of the prompt.
Taint path: user_input → agent_context → llm_completion → system_prompt_disclosure
IOC type: PROMPT_INJECTION
References: OWASP LLM01:2025, MITRE AML.T0051, AML.T0054, ARXIV-2302.12173
Drako rules: SEC-007, SEC-008, SEC-010
Remediation effort: moderate

Severity: 8/10 · Confidence: 0.92 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index
LLM output rendered in a web UI without sanitization, or passed to eval()/exec() without validation. Downstream systems that consume LLM responses as trusted data are vulnerable to script injection and code execution.
Taint path: llm_output → unvalidated_rendering → downstream_system
IOC type: OUTPUT_INJECTION
References: OWASP LLM02:2025, MITRE AML.T0048, CWE-79, CWE-94
Drako rules: SEC-006, BP-002
Remediation effort: moderate

Severity: 7/10 · Confidence: 0.88 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
Fine-tuning datasets include user-generated or web-scraped content without validation. RAG knowledge bases populated from unmoderated sources introduce adversarial samples that shift model behavior.
Taint path: external_training_data → data_ingestion_pipeline → fine_tuning_job → model_weights
IOC type: DATA_POISONING
References: OWASP LLM03:2025, MITRE AML.T0020, AML.T0019, ARXIV-2401.05566
Drako rules: GOV-001, COM-003
Remediation effort: significant

Severity: 7/10 · Confidence: 0.90 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
No input length validation or token counting before LLM API calls. Agent loops lack iteration limits or timeout mechanisms. Recursive prompt patterns consume unbounded compute and cost.
Taint path: user_input → token_expansion → recursive_prompt_loop → llm_api_resource_pool
IOC type: RESOURCE_EXHAUSTION
References: OWASP LLM04:2025, MITRE AML.T0029, CWE-400, CWE-770
Drako rules: MAG-001, MAG-002, GOV-007
Remediation effort: low

Severity: 8/10 · Confidence: 0.91 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
Model artifacts loaded from public hubs without integrity verification. Plugin dependencies use unpinned versions. No software bill of materials exists for the agent pipeline.
Taint path: external_package_registry → dependency_resolution → dynamic_import → agent_runtime
IOC type: SUPPLY_CHAIN
References: OWASP LLM05:2025, MITRE AML.T0010, CWE-829, CWE-1357
Drako rules: COM-005, SEC-001
Remediation effort: moderate

Severity: 9/10 · Confidence: 0.94 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
API keys or credentials embedded in system prompts or agent configurations. PII included in LLM context without redaction. RAG retrieval surfaces confidential documents without access control filtering.
Taint path: sensitive_data_store → prompt_context / rag_retrieval / agent_memory → llm_output
IOC type: DATA_LEAKAGE
References: OWASP LLM06:2025, MITRE AML.T0024, AML.T0044, CWE-200
Drako rules: SEC-001, COM-001, COM-002
Remediation effort: moderate

Severity: 8/10 · Confidence: 0.93 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
Agent tools use exec() or eval() with LLM-generated arguments. File system access tools lack path traversal protections. Tools accept user-controlled input without schema validation.
Taint path: llm_generated_arguments → tool_dispatch → unvalidated_parameter_passing → system_execution_context
IOC type: INSECURE_PLUGIN
References: OWASP LLM07:2025, MITRE AML.T0040, CWE-78, CWE-95
Drako rules: SEC-003, SEC-005, SEC-006, GOV-002
Remediation effort: moderate

Severity: 9/10 · Confidence: 0.94 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
Agent has simultaneous access to filesystem, network, and code execution tools with no human-in-the-loop approval. Wildcard tool permission grants instead of explicit allowlists.
Taint path: agent_autonomy_configuration → tool_orchestration → unsupervised_execution_loop → unrestricted_system_actions
IOC type: EXCESSIVE_AGENCY
References: OWASP LLM08:2025, MITRE AML.T0048, CWE-250, CWE-269
Drako rules: GOV-005, GOV-006, SEC-003, SEC-005
Remediation effort: moderate

Severity: 5/10 · Confidence: 0.85 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
Agent output used for decision-making without validation against ground truth. LLM-generated code executed without review or static analysis. No fact-checking or cross-referencing mechanism in the pipeline.
Taint path: llm_generated_output → unchecked_acceptance → missing_validation_layer → business_decision_or_action
IOC type: OVERRELIANCE
References: OWASP LLM09:2025, MITRE AML.T0048, ARXIV-2309.01219
Drako rules: GOV-001, BP-001
Remediation effort: low

Severity: 7/10 · Confidence: 0.91 · Category: owasp-llm
Affected frameworks: crewai, langchain, autogen, llama-index, semantic-kernel
No maximum token limit configured for LLM API calls. Agent execution loops lack iteration caps or cost budgets. Multi-agent orchestrations have no aggregate cost ceiling.
Taint path: agent_configuration → unmetered_api_calls → runaway_agent_loop → cloud_billing_account
IOC type: UNBOUNDED_CONSUMPTION
References: OWASP LLM10:2025, MITRE AML.T0029, CWE-770, CWE-799
Drako rules: MAG-001, MAG-002, MAG-003, GOV-007
Remediation effort: low

IOC pattern hashes

Advisory IOC patterns are stored as SHA-256 hashes of normalized (lowercase, stripped) strings. This lets Drako perform runtime matching without distributing raw injection patterns:
import hashlib

def compute_pattern_hash(pattern: str) -> str:
    normalized = pattern.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()
You can retrieve all IOC hashes programmatically:
from drako.advisories import get_ioc_hashes

hashes = get_ioc_hashes()  # set[str] — all hashes across all advisories
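Putting the two together, runtime matching reduces to hashing candidate text with the same normalization and testing set membership. This sketch assumes exact-match semantics after normalization; the single known pattern is a hypothetical stand-in for the real hash set returned by get_ioc_hashes():

```python
import hashlib

def compute_pattern_hash(pattern: str) -> str:
    # Same normalization the advisories use: strip whitespace, lowercase.
    normalized = pattern.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# In a live deployment this set would come from drako.advisories.get_ioc_hashes();
# here one hypothetical pattern stands in for it.
known_hashes = {compute_pattern_hash("ignore all previous instructions")}

def matches_ioc(text: str, hashes: set[str]) -> bool:
    """True if the normalized text hashes to a known IOC pattern hash."""
    return compute_pattern_hash(text) in hashes

print(matches_ioc("  IGNORE ALL previous instructions ", known_hashes))  # True
```

Because only hashes are compared, casing and surrounding whitespace do not defeat the match, but any other variation in the text produces a different hash, which is the trade-off of hash-based matching over raw pattern distribution.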

Collective intelligence

When runtime enforcement is enabled, Drako participates in collective IOC sharing across deployments:
  • A detection on one deployment propagates to all connected tenants in under 5 seconds
  • Only normalized pattern hashes are shared — never raw payloads, prompts, or user data
  • Sharing is anonymous and opt-in; configure via collective_intelligence.enabled in .drako.yaml
  • New IOC hashes are validated against the ABSS schema before distribution
Collective intelligence requires a Drako platform connection (api_key_env and endpoint in .drako.yaml). It is not available in offline-only scan mode.
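A .drako.yaml fragment with these settings might look like the following. Only collective_intelligence.enabled, api_key_env, and endpoint are named above; the exact nesting and the DRAKO_API_KEY variable name are assumptions:

```yaml
# Hypothetical sketch: key placement is assumed, not confirmed by the docs above.
api_key_env: DRAKO_API_KEY   # environment variable holding the platform API key
endpoint: https://...        # Drako platform endpoint
collective_intelligence:
  enabled: true              # opt in to anonymous IOC hash sharing
```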

Contributing an advisory

To submit a new advisory:
  1. Fork the Drako repository.
  2. Create a new YAML file in src/drako/data/advisories/ following the ABSS schema.
  3. Include at least one external reference (CVE, paper, GitHub issue, or blog post).
  4. Open a pull request with a brief description of the vulnerability.
All submitted advisories are reviewed by Drako Security Research before inclusion. Published advisories are released under CC-BY-4.0.
