This guide will walk you through creating your first AI-powered feature with Prism. We’ll build a simple example that generates text, handles multi-modal input, and uses tools.

Prerequisites

Make sure you’ve completed the installation steps and have at least one provider API key configured in your .env file.
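For reference, a provider key in your .env might look like this (the variable name shown follows Prism's default config/prism.php; adjust it if you've customized your configuration):

```shell
# .env — set the key for whichever provider you plan to use
ANTHROPIC_API_KEY=your-key-here
```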

Basic text generation

Let’s start with the simplest possible example, generating text from a prompt:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withPrompt('Tell me a short story about a brave knight.')
    ->asText();

echo $response->text;

Breaking it down

1. Import the necessary classes

use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

2. Start a text generation request

Prism::text()

This creates a pending text generation request.

3. Choose your provider and model

->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')

Select which AI provider and model to use. You can easily swap providers!

4. Add your prompt

->withPrompt('Tell me a short story about a brave knight.')

The prompt is what you want the AI to respond to.

5. Execute the request

->asText();

This sends the request and returns a response object.
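Because the provider and model are just arguments to using(), switching providers is a one-line change. A sketch, assuming you also have an OpenAI key configured (verify the 'gpt-4o' model name against your provider's current model list):

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// Same request as above; only the using() call changes.
$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withPrompt('Tell me a short story about a brave knight.')
    ->asText();

echo $response->text;
```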

Adding system prompts

System prompts help set the behavior and context for the AI:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withSystemPrompt('You are a helpful coding assistant who explains concepts clearly.')
    ->withPrompt('What is dependency injection?')
    ->asText();

echo $response->text;
You can use Laravel views for complex prompts: ->withSystemPrompt(view('prompts.system'))
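Such a view is just a Blade template that renders to the prompt text. A minimal sketch, assuming the view name above maps to resources/views/prompts/system.blade.php:

```blade
{{-- resources/views/prompts/system.blade.php --}}
You are a helpful coding assistant who explains concepts clearly.
Always include a short code example when one would help.
```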

Working with images

Prism makes it easy to analyze images with multi-modal models:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Media\Image;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withPrompt(
        'What objects do you see in this image?',
        [Image::fromLocalPath('/path/to/image.jpg')]
    )
    ->asText();

echo $response->text;

Multiple media types

You can include multiple images and other media types:
use Prism\Prism\ValueObjects\Media\Image;
use Prism\Prism\ValueObjects\Media\Document;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withPrompt(
        'Compare this image with the information in this document',
        [
            Image::fromLocalPath('/path/to/chart.png'),
            Document::fromLocalPath('/path/to/report.pdf')
        ]
    )
    ->asText();

Building conversations

Create interactive conversations by chaining messages:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withMessages([
        new UserMessage('What is JSON?'),
        new AssistantMessage('JSON is a lightweight data format...'),
        new UserMessage('Can you show me an example?')
    ])
    ->asText();

echo $response->text;
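To carry a conversation forward, append the model's reply and the next user turn to the same history and send it again. A sketch of that loop:

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

// Grow the history turn by turn.
$messages = [new UserMessage('What is JSON?')];

$reply = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withMessages($messages)
    ->asText();

// Record the assistant's reply, then ask the follow-up.
$messages[] = new AssistantMessage($reply->text);
$messages[] = new UserMessage('Can you show me an example?');

$followUp = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withMessages($messages)
    ->asText();

echo $followUp->text;
```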

Accessing response details

The response object provides rich information beyond just the text:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withPrompt('Explain quantum computing.')
    ->asText();

// Access the generated text
echo $response->text;

// Check why the generation stopped
echo $response->finishReason->name; // the enum case name, e.g. 'Stop', 'Length', 'ToolCalls'

// Get token usage statistics
echo "Prompt tokens: {$response->usage->promptTokens}";
echo "Completion tokens: {$response->usage->completionTokens}";

// Access the raw API response
$rawResponse = $response->raw;
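The usage numbers are handy for rough cost tracking. A self-contained sketch (the per-million-token rates below are illustrative placeholders, not real provider pricing):

```php
<?php
// Estimate request cost from token counts.
// The rates are placeholders — substitute your provider's actual pricing.
function estimateCostUsd(int $promptTokens, int $completionTokens): float
{
    $promptRatePerMillion = 3.00;      // assumed $ per 1M prompt tokens
    $completionRatePerMillion = 15.00; // assumed $ per 1M completion tokens

    return $promptTokens * $promptRatePerMillion / 1_000_000
         + $completionTokens * $completionRatePerMillion / 1_000_000;
}

echo estimateCostUsd(1200, 400); // prints 0.0096
```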

Tuning generation parameters

Control how the AI generates text with various parameters:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withMaxTokens(500)
    ->usingTemperature(0.7)
    ->withPrompt('Write a creative story')
    ->asText();

Available parameters

  • withMaxTokens(int): Maximum number of tokens to generate
  • usingTemperature(float): Randomness (0 = deterministic, higher = more random)
  • usingTopP(float): Nucleus sampling for controlling diversity
  • withClientOptions(array): Pass Guzzle request options for timeout, retries, etc.
It’s recommended to set either temperature or topP, but not both.
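For example, to guard against hanging requests you can pass Guzzle's standard timeout options through withClientOptions() (the values here are arbitrary):

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
    ->withClientOptions(['timeout' => 30, 'connect_timeout' => 5]) // Guzzle request options
    ->withMaxTokens(500)
    ->withPrompt('Write a creative story')
    ->asText();
```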

Error handling

Always wrap your Prism calls in try-catch blocks for production code:
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Exceptions\PrismException;
use Illuminate\Support\Facades\Log;
use Throwable;

try {
    $response = Prism::text()
        ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
        ->withPrompt('Generate text...')
        ->asText();
    
    echo $response->text;
} catch (PrismException $e) {
    Log::error('Prism error:', ['message' => $e->getMessage()]);
    // Handle Prism-specific errors
} catch (Throwable $e) {
    Log::error('Unexpected error:', ['message' => $e->getMessage()]);
    // Handle other errors
}

Complete example

Here’s a complete example showing a Laravel controller using Prism:
namespace App\Http\Controllers;

use Illuminate\Http\Request;
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Exceptions\PrismException;

class AiAssistantController extends Controller
{
    public function generate(Request $request)
    {
        $validated = $request->validate([
            'prompt' => 'required|string|max:1000',
        ]);
        
        try {
            $response = Prism::text()
                ->using(Provider::Anthropic, 'claude-3-5-sonnet-20241022')
                ->withSystemPrompt('You are a helpful assistant.')
                ->withPrompt($validated['prompt'])
                ->withMaxTokens(500)
                ->usingTemperature(0.7)
                ->asText();
            
            return response()->json([
                'text' => $response->text,
                'usage' => [
                    'prompt_tokens' => $response->usage->promptTokens,
                    'completion_tokens' => $response->usage->completionTokens,
                ],
            ]);
        } catch (PrismException $e) {
            return response()->json([
                'error' => 'AI generation failed',
                'message' => $e->getMessage(),
            ], 500);
        }
    }
}

Next steps

Now that you’ve got the basics down, explore more advanced features:

Configuration

Learn how to configure providers and customize Prism settings

Text generation

Dive deeper into text generation capabilities

Structured output

Generate type-safe structured data instead of plain text

Tools & function calling

Extend AI with custom tools and function calling

Streaming

Stream responses in real-time for better UX

Testing

Test your AI-powered features with confidence
