Skip to main content
Prism supports 12 different LLM providers, each with unique capabilities and features. This page helps you choose the right provider for your needs.

Supported Providers

Prism integrates with the following providers:

Anthropic

Claude models with extended thinking, prompt caching, and citations

OpenAI

GPT models with reasoning, image generation, and audio processing

Google Gemini

Multimodal models with search grounding and video analysis

Ollama

Local model deployment for privacy and offline use

Mistral

Magistral reasoning models with OCR capabilities

Groq

Ultra-fast LPU inference for real-time applications

DeepSeek

Efficient models optimized for coding tasks

xAI

Grok models with extended thinking capabilities

OpenRouter

Access multiple providers through a single API

ElevenLabs

Speech-to-text with diarization and event tagging

VoyageAI

Specialized embeddings for search and retrieval

Feature Comparison

Compare capabilities across providers to find the best fit for your use case:

Core Features

ProviderText GenerationStreamingStructured OutputTool CallingEmbeddings
Anthropic✅ (Native + Tool)
OpenAI✅ (Strict mode)
Gemini✅ (Native)
Ollama✅ (Prompt-based)
Mistral
Groq
DeepSeek
xAI✅ (Strict mode)
OpenRouter
ElevenLabs
VoyageAI

Multimodal Support

ProviderImagesAudio (STT)Audio (TTS)VideoDocumentsImage Generation
Anthropic✅ (PDF)
OpenAI✅ (Whisper)✅ (DALL-E)
Gemini✅ (YouTube)✅ (Imagen)
Ollama
Mistral✅ (Voxtral)✅ (URL)
Groq✅ (Whisper)✅ (PlayAI)
DeepSeek
xAI
OpenRouter
ElevenLabs✅ (Scribe)
VoyageAI

Advanced Capabilities

ProviderReasoning/ThinkingPrompt CachingCitationsCode ExecutionSearch GroundingModeration
Anthropic✅ (Extended)✅ (Ephemeral)
OpenAI✅ (GPT-5)✅ (Auto)
Gemini✅ (Thinking)✅ (Google)
Ollama
Mistral✅ (Magistral)
Groq
DeepSeek
xAI✅ (Extended)
OpenRouter✅ (Model-dependent)
ElevenLabs
VoyageAI

Configuration

All providers are configured in config/prism.php:
return [
    'providers' => [
        'openai' => [
            'url' => env('OPENAI_URL', 'https://api.openai.com/v1'),
            'api_key' => env('OPENAI_API_KEY', ''),
            'organization' => env('OPENAI_ORGANIZATION', null),
            'project' => env('OPENAI_PROJECT', null),
        ],
        'anthropic' => [
            'api_key' => env('ANTHROPIC_API_KEY', ''),
            'version' => env('ANTHROPIC_API_VERSION', '2023-06-01'),
            'url' => env('ANTHROPIC_URL', 'https://api.anthropic.com/v1'),
            'default_thinking_budget' => env('ANTHROPIC_DEFAULT_THINKING_BUDGET', 1024),
            'anthropic_beta' => env('ANTHROPIC_BETA', null),
        ],
        'gemini' => [
            'api_key' => env('GEMINI_API_KEY', ''),
            'url' => env('GEMINI_URL', 'https://generativelanguage.googleapis.com/v1beta/models'),
        ],
        // ... other providers
    ],
];

Choosing a Provider

For Production Applications

Anthropic Claude - Industry-leading reasoning, long context windows, and reliable structured output with prompt caching for cost optimization.
Prism::text()
    ->using('anthropic', 'claude-sonnet-4-5-20250929')
    ->withPrompt('Your prompt')
    ->asText();

For Specific Use Cases

Use providers with reasoning/thinking capabilities:
  • Anthropic Claude 3.7 - Extended thinking with budget control
  • OpenAI GPT-5 - Reasoning effort levels (low/medium/high)
  • Mistral Magistral - Efficient reasoning for complex problems
  • xAI Grok - Extended thinking mode
Prism::text()
    ->using('anthropic', 'claude-3-7-sonnet-latest')
    ->withProviderOptions(['thinking' => ['enabled' => true]])
    ->withPrompt('Solve this complex problem...')
    ->asText();
Speech-to-Text:
  • OpenAI Whisper (via OpenAI/Groq) - Industry standard
  • Mistral Voxtral - Multilingual with 30min support
  • ElevenLabs Scribe - Diarization and event tagging
Text-to-Speech:
  • OpenAI TTS - Natural voices with HD quality
  • Groq PlayAI - Fast TTS with Arabic support
  • OpenAI DALL-E 3 - Highest quality with prompt revision
  • OpenAI GPT-Image-1 - Advanced editing with masks
  • Gemini Imagen 4 - HD generation with aspect ratio control
  • VoyageAI - Specialized for search/retrieval tasks
  • OpenAI - General purpose embeddings
  • Gemini - Task-specific embeddings
  • Anthropic - Prompt caching (5m/1h TTL)
  • OpenAI - Automatic caching for structured output
  • Gemini - Content caching with custom TTL
OpenRouter - Access multiple providers through a single API with automatic fallback routing.
Prism::text()
    ->using('openrouter', 'openai/gpt-4o')
    ->withProviderOptions([
        'models' => ['anthropic/claude-sonnet-4.5', 'openai/gpt-4o-mini']
    ])
    ->asText();

Provider Reliability

Structured Output Reliability

Native Support (Most Reliable):
  • Anthropic (Claude Sonnet 4.5+, Opus 4.1+)
  • OpenAI (with strict mode)
  • Gemini (native structured output)
  • xAI (strict mode)
Tool Calling Mode (Reliable):
  • Anthropic (non-native models)
  • OpenRouter
Prompt-based (Less Reliable):
  • Ollama
  • DeepSeek

Message Order Requirements

Anthropic is strict about message order: must alternate User → Assistant → User. Other providers are more flexible.

Quick Start by Provider

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// Anthropic Claude
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-sonnet-4-5-20250929')
    ->withPrompt('Hello!')
    ->asText();

// OpenAI GPT
$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withPrompt('Hello!')
    ->asText();

// Google Gemini
$response = Prism::text()
    ->using(Provider::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Hello!')
    ->asText();

// Ollama (Local)
$response = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withClientOptions(['timeout' => 60])
    ->withPrompt('Hello!')
    ->asText();

// Mistral
$response = Prism::text()
    ->using(Provider::Mistral, 'mistral-large-latest')
    ->withPrompt('Hello!')
    ->asText();

// Groq (Fast)
$response = Prism::text()
    ->using(Provider::Groq, 'llama-3.3-70b-versatile')
    ->withPrompt('Hello!')
    ->asText();

// DeepSeek
$response = Prism::text()
    ->using(Provider::DeepSeek, 'deepseek-chat')
    ->withPrompt('Hello!')
    ->asText();

// xAI Grok
$response = Prism::text()
    ->using(Provider::XAI, 'grok-4')
    ->withPrompt('Hello!')
    ->asText();

// OpenRouter (Multi-provider)
$response = Prism::text()
    ->using(Provider::OpenRouter, 'openai/gpt-4o')
    ->withPrompt('Hello!')
    ->asText();

Next Steps

Provider Details

Explore detailed documentation for each provider

Core Concepts

Learn about text generation, streaming, and tools

Structured Output

Generate type-safe structured data

Multimodal Input

Work with images, audio, video, and documents

Build docs developers (and LLMs) love