shieldLanguageModelMiddleware()
Function Signature
wrapLanguageModel. This is the recommended approach for Vercel AI SDK integration.
Why Use This?
WithshieldLanguageModelMiddleware, you get:
- Automatic sanitization: No need to manually call
sanitizeOutput - Both generate and stream: Works with
generateText,streamText, and all AI SDK functions - Clean API: Just wrap your model once and use it everywhere
Examples
Basic Usage with generateText
Streaming with streamText
Custom Detection Callbacks
shieldMiddleware()
Function Signature
wrapParams and sanitizeOutput methods. Use this for legacy code or when you need fine-grained control.
Why Use This?
UseshieldMiddleware when:
- You need to manually control when sanitization happens
- You’re migrating existing code incrementally
- You want to apply different Shield configs per-request
shieldLanguageModelMiddleware instead.
Examples
Manual wrapParams + sanitizeOutput
Streaming with Manual Accumulation
ShieldAISdkOptions
BothshieldLanguageModelMiddleware and shieldMiddleware accept the same options:
System prompt used for sanitization. When omitted, Shield automatically derives it from the
system parameter or prompt array.Options for prompt hardening. Set to
false to disable hardening. See harden() for available options.Options for injection detection. Set to
false to disable detection. See detect() for available options.Options for output sanitization. Set to
false to disable sanitization. See sanitize() for available options.Streaming sanitization strategy (only applies to
shieldLanguageModelMiddleware):"buffer": Accumulate the full stream, then sanitize (higher memory, more accurate)"chunked": Process in 8KB chunks (lower memory for long streams)"passthrough": Skip sanitization entirely (use when you accept the risk)
Chunk size in bytes for
"chunked" mode. Only applies when streamingSanitize is set to "chunked".Behavior when injection is detected:
"block": ThrowInjectionDetectedError(request fails)"warn": Only invokeonInjectionDetectedcallback (request continues)
When
true, throw LeakDetectedError instead of redacting leaked content. Use for strict security policies where any leak should abort the request.Callback invoked when an injection is detected. Receives the full
DetectResult with risk level and matched patterns.Callback invoked when a prompt leak is detected in the output. Receives the full
SanitizeResult with confidence score and leaked fragments.Comparison: Middleware vs. Manual
| Feature | shieldLanguageModelMiddleware | shieldMiddleware |
|---|---|---|
| Automatic sanitization | ✅ Yes | ❌ Manual |
| Streaming support | ✅ Built-in | ⚠️ Manual accumulation |
| API simplicity | ✅ Wrap once, use everywhere | ⚠️ Call per request |
| Recommended for | New code | Legacy code |
Notes
- Auto-derived system prompt: When
systemPromptis not provided, Shield extracts it from thesystemparameter or the first system message in the prompt array. - Multi-part messages: AI SDK supports
contentasstring | MessagePart[]. Shield extracts text from all parts for injection detection. - Streaming: With
shieldLanguageModelMiddleware, streaming is handled automatically. WithshieldMiddleware, you must accumulate the stream and callsanitizeOutputmanually.