runSkill()
The runSkill() function is the main entry point for executing Warden analysis. It runs a skill definition against an event context, analyzing each code hunk with Claude and aggregating the findings.

Function Signature

async function runSkill(
  skill: SkillDefinition,
  context: EventContext,
  options?: SkillRunnerOptions
): Promise<SkillReport>
skill
SkillDefinition
required
The skill to execute. Load from disk using resolveSkillAsync():
import { resolveSkillAsync } from '@sentry/warden';
const skill = await resolveSkillAsync('security-review', repoPath);
context
EventContext
required
Event context containing repository info and code changes. Build using buildEventContext():
const context = await buildEventContext(
  'pull_request',
  webhookPayload,
  repoPath,
  octokit
);
options
SkillRunnerOptions
Execution options (see below)

SkillRunnerOptions

Control skill execution behavior:
apiKey
string
Anthropic API key. Falls back to WARDEN_ANTHROPIC_API_KEY env var. If not provided, uses Claude Code subscription (requires claude login).
model
string
Model ID to use (e.g., 'claude-sonnet-4-20250514'). Defaults to SDK’s latest Sonnet model.
maxTurns
number
default: 50
Maximum agentic turns (API round-trips) per hunk analysis.
contextLines
number
default: 3
Lines of context to include around each hunk.
parallel
boolean
default: true
Process files in parallel. Set to false for sequential processing.
concurrency
number
default: 5
Max concurrent file analyses when parallel=true.
batchDelayMs
number
default: 0
Delay in milliseconds between batch starts for rate limiting.
pathToClaudeCodeExecutable
string
Path to claude CLI. Required in CI environments where the CLI isn’t in PATH.
abortController
AbortController
Abort controller for cancellation (e.g., on SIGINT).
const controller = new AbortController();
process.on('SIGINT', () => controller.abort());

await runSkill(skill, context, { abortController: controller });
callbacks
SkillRunnerCallbacks
Progress callbacks for UI updates:
callbacks: {
  onFileStart: (file, index, total) => {
    console.log(`Analyzing ${file} (${index+1}/${total})`);
  },
  onHunkComplete: (file, hunkNum, findings, usage) => {
    console.log(`  Hunk ${hunkNum}: ${findings.length} findings`);
  },
}
retry
RetryConfig
Retry configuration for transient API failures:
retry: {
  maxRetries: 3,
  initialDelayMs: 1000,
  backoffMultiplier: 2,
  maxDelayMs: 30000,
}
auxiliaryMaxRetries
number
default: 5
Max retries for auxiliary Haiku calls (extraction repair, merging, fix evaluation).

Return Value

Returns a SkillReport with findings, usage stats, and metadata:
skill
string
Skill name
summary
string
Auto-generated summary (e.g., “security-review: Found 3 issues (2 high, 1 medium)”)
findings
Finding[]
Array of findings. Each finding has:
  • id: Short unique identifier
  • severity: 'high' | 'medium' | 'low'
  • confidence: 'high' | 'medium' | 'low' (optional)
  • title: Short description
  • description: Detailed explanation
  • location: File path and line range (optional)
  • suggestedFix: Diff patch (optional)
  • verification: How the issue was verified (optional)
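As an illustration, findings with these fields can be grouped by severity to build a custom summary. The Finding shape below is a minimal sketch of the documented fields, not the package's full exported type:

```typescript
// Minimal sketch mirroring the documented Finding fields (illustrative only;
// the optional location/suggestedFix/verification fields are omitted here).
type Severity = 'high' | 'medium' | 'low';

interface Finding {
  id: string;
  severity: Severity;
  title: string;
}

// Count findings per severity level for a one-line summary.
function countBySeverity(findings: Finding[]): Record<Severity, number> {
  const counts: Record<Severity, number> = { high: 0, medium: 0, low: 0 };
  for (const finding of findings) {
    counts[finding.severity]++;
  }
  return counts;
}
```

A summary string like "2 high, 1 medium" can then be assembled from the returned counts.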
usage
UsageStats
Token usage and cost:
  • inputTokens: Input tokens (non-cached portion)
  • outputTokens: Generated tokens
  • cacheReadInputTokens: Cache hits
  • cacheCreationInputTokens: Cache writes
  • costUSD: Total cost in USD
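Since inputTokens covers only the non-cached portion, the cache hit rate can be derived by comparing cache reads against all prompt tokens. The helper below is a sketch based on the documented fields:

```typescript
// UsageStats sketch based on the documented fields.
interface UsageStats {
  inputTokens: number;              // non-cached input tokens
  outputTokens: number;             // generated tokens
  cacheReadInputTokens: number;     // cache hits
  cacheCreationInputTokens: number; // cache writes
  costUSD: number;
}

// Fraction of prompt tokens served from the cache (0 when no input was sent).
function cacheHitRate(usage: UsageStats): number {
  const totalInput =
    usage.inputTokens + usage.cacheReadInputTokens + usage.cacheCreationInputTokens;
  return totalInput === 0 ? 0 : usage.cacheReadInputTokens / totalInput;
}
```

A high hit rate indicates prompt caching is working as intended across hunks.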
durationMs
number
Total execution time in milliseconds
model
string
Model used for analysis
files
FileReport[]
Per-file breakdown of findings, timing, and usage
skippedFiles
SkippedFile[]
Files skipped due to chunking patterns
failedHunks
number
Number of hunks that failed to analyze (SDK errors, API errors)
failedExtractions
number
Number of hunks where findings extraction failed
auxiliaryUsage
AuxiliaryUsageMap
Usage from auxiliary Haiku calls, keyed by agent name:
  • extraction: JSON repair
  • merge: Cross-location merging
  • fix_gate: Suggested fix quality checks
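For cost accounting, the auxiliary usage can be summed across agents. This sketch assumes each AuxiliaryUsageMap value carries a costUSD field like UsageStats; check the package's exported types for the exact shape:

```typescript
// Assumed value shape for AuxiliaryUsageMap entries (illustrative only).
interface AuxUsage {
  costUSD: number;
}

// Sum the cost of all auxiliary Haiku calls (extraction, merge, fix_gate).
function totalAuxiliaryCost(aux: Record<string, AuxUsage>): number {
  return Object.values(aux).reduce((sum, usage) => sum + usage.costUSD, 0);
}
```

Adding this to report.usage.costUSD would give a fuller picture of a run's total spend, if auxiliary costs are tracked separately.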

Example: Basic Usage

import { runSkill, resolveSkillAsync, buildEventContext } from '@sentry/warden';
import { Octokit } from '@octokit/rest';

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });
const repoPath = '/path/to/repo';

// Load skill
const skill = await resolveSkillAsync('security-review', repoPath);

// Build context from webhook
const context = await buildEventContext(
  'pull_request',
  webhookPayload,
  repoPath,
  octokit
);

// Run analysis
const report = await runSkill(skill, context, {
  apiKey: process.env.WARDEN_ANTHROPIC_API_KEY,
  model: 'claude-sonnet-4-20250514',
  parallel: true,
  concurrency: 5,
});

console.log(`Found ${report.findings.length} issues`);
console.log(`Cost: $${report.usage.costUSD.toFixed(4)}`);
console.log(`Duration: ${(report.durationMs / 1000).toFixed(1)}s`);

Example: With Progress Callbacks

const report = await runSkill(skill, context, {
  apiKey: process.env.WARDEN_ANTHROPIC_API_KEY,
  callbacks: {
    onFileStart: (file, index, total) => {
      console.log(`[${index+1}/${total}] Analyzing ${file}...`);
    },
    onHunkComplete: (file, hunkNum, findings, usage) => {
      if (findings.length > 0) {
        console.log(`  Found ${findings.length} issue(s)`);
      }
    },
    onFileComplete: (file) => {
      console.log(`  ✓ ${file}`);
    },
    onLargePrompt: (file, lineRange, chars, estTokens) => {
      console.warn(`  ⚠ Large prompt at ${file}:${lineRange} (~${estTokens} tokens)`);
    },
  },
});

Example: With Abort Controller

const controller = new AbortController();

// Cancel on SIGINT
process.on('SIGINT', () => {
  console.log('\nAborting analysis...');
  controller.abort();
});

try {
  const report = await runSkill(skill, context, {
    apiKey: process.env.WARDEN_ANTHROPIC_API_KEY,
    abortController: controller,
  });
} catch (error) {
  if (controller.signal.aborted) {
    console.log('Analysis cancelled');
  } else {
    throw error;
  }
}

analyzeFile()

For finer control, analyze a single file’s hunks:
import { analyzeFile, prepareFiles } from '@sentry/warden';

const { files } = prepareFiles(context, {
  contextLines: 3,
});

for (const file of files) {
  const result = await analyzeFile(
    skill,
    file,
    repoPath,
    { apiKey, model },
    {
      onHunkStart: (hunkNum, totalHunks, lineRange) => {
        console.log(`  Hunk ${hunkNum}/${totalHunks} (${lineRange})`);
      },
    }
  );
  
  console.log(`${file.filename}: ${result.findings.length} findings`);
}

Error Handling

runSkill() throws SkillRunnerError on failures:
import { runSkill, SkillRunnerError } from '@sentry/warden';

try {
  const report = await runSkill(skill, context, options);
} catch (error) {
  if (error instanceof SkillRunnerError) {
    if (error.message.includes('authentication')) {
      console.error('Auth failed. Set WARDEN_ANTHROPIC_API_KEY or run: claude login');
    } else if (error.message.includes('All') && error.message.includes('failed')) {
      console.error('All hunks failed. Check API key and network connectivity.');
    } else {
      console.error('Skill execution failed:', error.message);
    }
  } else {
    throw error;
  }
}

Performance Tips

Parallel Processing

By default, files are analyzed in parallel with concurrency=5:
// Fast: parallel with sliding window
await runSkill(skill, context, {
  parallel: true,    // default
  concurrency: 5,    // default
});

// Slower: sequential
await runSkill(skill, context, {
  parallel: false,
});

Rate Limiting

Add delays between batches to respect API rate limits:
await runSkill(skill, context, {
  parallel: true,
  concurrency: 5,
  batchDelayMs: 1000,  // 1s delay between batches
});

Context Lines

Reduce context lines for faster analysis of large diffs:
await runSkill(skill, context, {
  contextLines: 1,  // minimal context (default: 3)
});
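Putting these tips together, a conservative configuration for a very large diff might combine reduced concurrency, batch delays, and minimal context. The specific numbers below are illustrative, not tested recommendations:

```typescript
// Illustrative settings for a large diff: throttled parallelism with
// minimal per-hunk context to reduce cost and rate-limit pressure.
await runSkill(skill, context, {
  parallel: true,
  concurrency: 3,     // fewer concurrent file analyses
  batchDelayMs: 500,  // brief pause between batch starts
  contextLines: 1,    // minimal surrounding context per hunk
});
```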
