Skip to main content

Vercel AI SDK Integration

Shield provides two integration approaches for the Vercel AI SDK:
  1. shieldLanguageModelMiddleware - Automatic protection with AI SDK middleware (recommended)
  2. shieldMiddleware - Manual wrapParams + sanitizeOutput approach

Installation

npm install @zeroleaks/shield ai
Use shieldLanguageModelMiddleware with wrapLanguageModel for automatic hardening, injection detection, and output sanitization:
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });
// result.text is automatically sanitized

With Streaming

import { streamText } from "ai";

const result = streamText({
  model, // wrapped model from above
  prompt: userInput,
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
  // Automatically sanitized chunks
}

Manual Approach

For fine-grained control, use shieldMiddleware with manual wrapParams and sanitizeOutput calls:
import { generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = await generateText({
model: openai("gpt-5.3-codex"),
...shield.wrapParams({
system: "You are a helpful assistant.",
prompt: userInput,
}),
});

const safeOutput = shield.sanitizeOutput(result.text);

Configuration Options

Basic Options

systemPrompt
string
System prompt for sanitization. When omitted, derived from the system parameter in the request.
onDetection
'block' | 'warn'
default:"block"
  • "block": Throws InjectionDetectedError when injection is detected
  • "warn": Only invokes onInjectionDetected callback without blocking
throwOnLeak
boolean
default:false
When true, throws LeakDetectedError instead of redacting leaked content.

Feature Flags

harden
HardenOptions | false
Options for system prompt hardening. Set to false to disable hardening entirely.
detect
DetectOptions | false
Options for injection detection. Set to false to disable detection entirely.
sanitize
SanitizeOptions | false
Options for output sanitization. Set to false to disable sanitization entirely.

Streaming Options

streamingSanitize
'buffer' | 'chunked' | 'passthrough'
default:"buffer"
Controls how streaming responses are sanitized (only for shieldLanguageModelMiddleware):
  • "buffer": Accumulates full response then sanitizes (more accurate)
  • "passthrough": Skip sanitization for streams
streamingChunkSize
number
default:8192
Chunk size in bytes for "chunked" mode.

Callbacks

onInjectionDetected
(result: DetectResult) => void
Invoked when injection is detected. Receives detection result with risk level and matched patterns.
onLeakDetected
(result: SanitizeResult) => void
Invoked when a prompt leak is detected in the output. Receives sanitization result with confidence score.

Multi-Provider Support

The Vercel AI SDK integration works with any provider supported by the AI SDK:
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";
import { shieldLanguageModelMiddleware } from "@zeroleaks/shield/ai-sdk";

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
  }),
});

const result = await generateText({ model, prompt: "Hi" });

Error Handling

import {
  shieldLanguageModelMiddleware,
  InjectionDetectedError,
  LeakDetectedError,
} from "@zeroleaks/shield/ai-sdk";
import { wrapLanguageModel, generateText } from "ai";
import { createOpenAI } from "@ai-sdk/openai";

try {
  const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY });
  const model = wrapLanguageModel({
    model: openai("gpt-5.3-codex"),
    middleware: shieldLanguageModelMiddleware({
      systemPrompt: "You are helpful.",
      throwOnLeak: true,
    }),
  });

  const result = await generateText({ model, prompt: userInput });
} catch (error) {
  if (error instanceof InjectionDetectedError) {
    console.error(`Injection detected: ${error.risk} risk`);
    console.error(`Categories: ${error.categories.join(", ")}`);
  }
  if (error instanceof LeakDetectedError) {
    console.error(`Leak detected: ${error.confidence} confidence`);
    console.error(`Fragments: ${error.fragmentCount}`);
  }
}

Advanced Usage

const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    detect: {
      threshold: "high",
      customPatterns: [
        {
          category: "custom_command",
          regex: /execute order \d+/i,
          risk: "high",
        },
      ],
    },
  }),
});

Streaming Considerations

Automatic Middleware

With shieldLanguageModelMiddleware, streaming is handled automatically:
  • Uses "buffer" mode by default (accumulates full response, then sanitizes)
  • Set streamingSanitize: "passthrough" to skip sanitization for performance
const model = wrapLanguageModel({
  model: openai("gpt-5.3-codex"),
  middleware: shieldLanguageModelMiddleware({
    systemPrompt: "You are helpful.",
    streamingSanitize: "buffer", // default
  }),
});

const result = streamText({ model, prompt: userInput });
for await (const chunk of result.textStream) {
  process.stdout.write(chunk); // Already sanitized
}

Manual Middleware

With shieldMiddleware, you must accumulate the stream manually before sanitizing:
const shield = shieldMiddleware({ systemPrompt: "You are helpful." });

const result = streamText({
  model: openai("gpt-5.3-codex"),
  ...shield.wrapParams({
    system: "You are a helpful assistant.",
    prompt: userInput,
  }),
});

let accumulated = "";
for await (const chunk of result.textStream) {
  accumulated += chunk;
  // Don't use chunks directly - wait for full output
}

const safeOutput = shield.sanitizeOutput(accumulated);
console.log(safeOutput); // Now safe to use

Build docs developers (and LLMs) love