# Vercel AI SDK Integration
Shield provides two integration approaches for the Vercel AI SDK:

- `shieldLanguageModelMiddleware`: automatic protection via AI SDK middleware (recommended)
- `shieldMiddleware`: a manual `wrapParams` + `sanitizeOutput` approach for fine-grained control
## Installation

```sh
npm install @zeroleaks/shield ai
```
## Automatic Protection (Recommended)

Use `shieldLanguageModelMiddleware` with `wrapLanguageModel` for automatic system prompt hardening, injection detection, and output sanitization:
```ts
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
// result.text is automatically sanitized
```
### With Streaming
```ts
import { streamText } from "ai";

const result = streamText({
  model, // wrapped model from above
  prompt: userInput,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk); // chunks are automatically sanitized
}
```
## Manual Approach

For fine-grained control, use `shieldMiddleware` with manual `wrapParams` and `sanitizeOutput` calls:
### generateText

```ts
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = await generateText({
  model: openai("gpt-5.3-codex"),
  ...shield.wrapParams({
    system: "You are a helpful assistant.",
    prompt: userInput,
  }),
});

const safeOutput = shield.sanitizeOutput(result.text);
```
### streamText

```ts
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = streamText({
  model: openai("gpt-5.3-codex"),
  ...shield.wrapParams({
    system: "You are a helpful assistant.",
    prompt: userInput,
  }),
});

// Accumulate the full output before sanitizing
let accumulated = "";
for await (const chunk of result.textStream) {
  accumulated += chunk;
}

const safeOutput = shield.sanitizeOutput(accumulated);
```
### With Messages

```ts
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = await generateText({
  model: openai("gpt-5.3-codex"),
  ...shield.wrapParams({
    system: "You are a helpful assistant.",
    messages: [
      { role: "user", content: "Hello" },
      { role: "assistant", content: "Hi there!" },
      { role: "user", content: userInput },
    ],
  }),
});

const safeOutput = shield.sanitizeOutput(result.text);
```
## Configuration Options

### Basic Options

- `systemPrompt`: the system prompt used for sanitization. When omitted, it is derived from the `system` parameter in the request.
- `onDetection` (`'block' | 'warn'`, default `"block"`):
  - `"block"`: throws `InjectionDetectedError` when an injection is detected
  - `"warn"`: only invokes the `onInjectionDetected` callback, without blocking
- `throwOnLeak`: when `true`, throws `LeakDetectedError` instead of redacting leaked content.
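As a sketch of how these basic options fit together, here is a middleware configured to warn rather than block, using only option names shown on this page:

```ts
import { wrapLanguageModel } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    onDetection: "warn", // log via callback instead of throwing
    throwOnLeak: false,  // redact leaked content rather than throwing
    onInjectionDetected: (result) =>
      console.warn("Injection detected, risk:", result.risk),
  }),
});
```

With this configuration the request proceeds even when an injection is flagged, which is useful while tuning thresholds before enforcing `"block"` in production.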
### Feature Flags

- `harden`: options for system prompt hardening. Set to `false` to disable hardening entirely.
- `detect`: options for injection detection. Set to `false` to disable detection entirely.
- Options for output sanitization. Set to `false` to disable sanitization entirely.
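For example, using the `detect` flag shown in Advanced Usage below, detection can be switched off while hardening stays enabled (a sketch; the full option shapes are not listed on this page):

```ts
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

// Hardening remains enabled (default); injection detection is disabled entirely.
const middleware = shieldLanguageModelMiddleware({
  systemPrompt: "You are helpful.",
  detect: false,
});
```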
### Streaming Options

- `streamingSanitize` (`'buffer' | 'chunked' | 'passthrough'`, default `"buffer"`): controls how streaming responses are sanitized (only for `shieldLanguageModelMiddleware`):
  - `"buffer"`: accumulates the full response, then sanitizes (more accurate)
  - `"chunked"`: sanitizes the stream incrementally in fixed-size chunks
  - `"passthrough"`: skips sanitization for streams
- Chunk size in bytes for `"chunked"` mode.
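A minimal sketch of selecting a streaming mode; the behavior comments are assumptions based on the mode descriptions above, and the chunk-size option's exact name is not given on this page, so it is omitted:

```ts
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

// "chunked" presumably trades accuracy for latency: content spanning
// chunk boundaries may be sanitized less reliably than in "buffer" mode.
const lowLatency = shieldLanguageModelMiddleware({
  systemPrompt: "You are helpful.",
  streamingSanitize: "chunked",
});

// "passthrough" skips output sanitization on streams entirely.
const fastest = shieldLanguageModelMiddleware({
  systemPrompt: "You are helpful.",
  streamingSanitize: "passthrough",
});
```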
### Callbacks

- `onInjectionDetected` (`(result: DetectResult) => void`): invoked when an injection is detected. Receives the detection result with risk level and matched patterns.
- `onLeakDetected` (`(result: SanitizeResult) => void`): invoked when a prompt leak is detected in the output. Receives the sanitization result with a confidence score.
## Multi-Provider Support

The Vercel AI SDK integration works with any provider supported by the AI SDK:
### OpenAI

```ts
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
```
### Anthropic

```ts
import { wrapLanguageModel, generateText } from "ai";
import { createAnthropic } from "@ai-sdk/anthropic";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY });

const model = wrapLanguageModel({
  model: anthropic("claude-sonnet-4-6"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
```
### Google

```ts
import { wrapLanguageModel, generateText } from "ai";
import { createGoogleGenerativeAI } from "@ai-sdk/google";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const google = createGoogleGenerativeAI({ apiKey: process.env.GOOGLE_API_KEY });

const model = wrapLanguageModel({
  model: google("gemini-2.0-flash-exp"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
```
### Mistral

```ts
import { wrapLanguageModel, generateText } from "ai";
import { createMistral } from "@ai-sdk/mistral";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const mistral = createMistral({ apiKey: process.env.MISTRAL_API_KEY });

const model = wrapLanguageModel({
  model: mistral("mistral-large-latest"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
```
## Error Handling
```ts
import {
  shieldLanguageModelMiddleware,
  InjectionDetectedError,
  LeakDetectedError,
} from "@zeroleaks/shield/ai-sdk";
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

try {
  const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });

  const model = wrapLanguageModel({
    model: openai("gpt-5.3-codex"),
    middleware: shieldLanguageModelMiddleware({
      systemPrompt: "You are helpful.",
      throwOnLeak: true,
    }),
  });

  const result = await generateText({ model, prompt: userInput });
} catch (error) {
  if (error instanceof InjectionDetectedError) {
    console.error(`Injection detected: ${error.risk} risk`);
    console.error(`Categories: ${error.categories.join(", ")}`);
  }
  if (error instanceof LeakDetectedError) {
    console.error(`Leak detected: ${error.confidence} confidence`);
    console.error(`Fragments: ${error.fragmentCount}`);
  }
}
```
## Advanced Usage
Add custom detection patterns and raise the detection threshold:

```ts
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    detect: {
      threshold: "high",
      customPatterns: [
        {
          category: "custom_command",
          regex: /execute order \d+/i,
          risk: "high",
        },
      ],
    },
  }),
});
```
Add custom hardening rules for domain-specific policies:

```ts
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are a financial advisor.",
    harden: {
      customRules: [
        "Never share specific investment recommendations.",
        "Always include risk disclaimers.",
      ],
      position: "prepend",
    },
  }),
});
```
Attach callbacks to log injection attempts and prompt leaks:

```ts
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    onInjectionDetected: (result) => {
      console.warn(`Injection attempt blocked:`, {
        risk: result.risk,
        categories: result.matches.map((m) => m.category),
        timestamp: new Date().toISOString(),
      });
    },
    onLeakDetected: (result) => {
      console.warn(`Prompt leak detected:`, {
        confidence: result.confidence,
        fragmentCount: result.fragments.length,
        timestamp: new Date().toISOString(),
      });
    },
  }),
});
```
## Streaming Considerations

### Automatic Middleware

With `shieldLanguageModelMiddleware`, streaming is handled automatically:

- Uses `"buffer"` mode by default (accumulates the full response, then sanitizes)
- Set `streamingSanitize: "passthrough"` to skip sanitization for performance
```ts
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    streamingSanitize: "buffer", // default
  }),
});

const result = streamText({ model, prompt: userInput });

for await (const chunk of result.textStream) {
  process.stdout.write(chunk); // already sanitized
}
```
### Manual Middleware

With `shieldMiddleware`, you must accumulate the stream manually before sanitizing:
```ts
import { streamText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = streamText({
  model: openai("gpt-5.3-codex"),
  ...shield.wrapParams({
    system: "You are a helpful assistant.",
    prompt: userInput,
  }),
});

let accumulated = "";
for await (const chunk of result.textStream) {
  accumulated += chunk;
  // Don't use chunks directly; wait for the full output
}

const safeOutput = shield.sanitizeOutput(accumulated);
console.log(safeOutput); // now safe to use
```