Prism Vertex lets you generate text completions with any supported provider through Prism's text() method and the Vertex provider constants.

Basic usage

Generate a text response using the Prism::text() method with a Vertex provider constant:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

echo $response->text;
The using() method accepts two parameters:
  • Provider constant - One of the Vertex::* constants (e.g., Vertex::Gemini, Vertex::Anthropic)
  • Model string - The specific model identifier

Supported providers

Prism Vertex supports multiple AI providers through a single configuration:
$response = Prism::text()
    ->using(Vertex::Anthropic, 'claude-3-5-sonnet@20241022')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

Available provider constants

All provider constants are available in the Prism\Vertex\Enums\Vertex class:
Constant            Publisher   Example Models
Vertex::Gemini      google      gemini-2.5-flash, gemini-pro
Vertex::Anthropic   anthropic   claude-3-5-sonnet@20241022, claude-3-5-haiku@20241022
Vertex::Mistral     mistralai   mistral-small-2503, codestral-2501
Vertex::Meta        meta        llama-4-scout-17b-16e-instruct-maas
Vertex::DeepSeek    deepseek    deepseek-v3-0324-maas
Vertex::AI21        ai21        jamba-1.5-mini@001, jamba-1.5-large@001
Vertex::Kimi        kimi        kimi-k2-0711-maas
Vertex::MiniMax     minimax     minimax-m1-40k-0709-maas
Vertex::OpenAI      openai      gpt-oss-4o-mini-maas
Vertex::Qwen        qwen        qwen2.5-72b-instruct-maas
Vertex::ZAI         zaiorg      glm-4-plus-maas
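Any constant from the table can be passed to using() in the same way. As a sketch, here is a call to a partner model (the model ID is taken from the table above; note that partner models require Standard mode, as described under "Express mode vs Standard mode"):

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

// Partner models are only available in Standard mode
// (project_id + location configured).
$response = Prism::text()
    ->using(Vertex::DeepSeek, 'deepseek-v3-0324-maas')
    ->withPrompt('Summarize the CAP theorem in one sentence')
    ->asText();

echo $response->text;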

Response object

The asText() method returns a Prism\Prism\Text\Response object with the following properties:
$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Hello, world!')
    ->asText();

// Access the generated text
echo $response->text;

// Access usage statistics (when available)
echo $response->usage->inputTokens;
echo $response->usage->outputTokens;

// Access the raw response
var_dump($response->response);
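Because usage statistics are only available for some responses, it can be worth guarding before reading token counts. A defensive sketch (whether usage is nullable is an assumption based on the "when available" caveat above):

// Guard against missing usage data before reading token counts.
if ($response->usage !== null) {
    printf(
        "Tokens used: %d in / %d out\n",
        $response->usage->inputTokens,
        $response->usage->outputTokens,
    );
}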

API schema selection

Prism Vertex automatically selects the correct API schema based on the provider constant and model string. Three schemas are supported:
The schema is selected automatically - you don’t need to specify it unless you want to override the default behavior.

Gemini schema

Used for Google Gemini models. Uses the native generateContent endpoint.
$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Hello')
    ->asText();

Anthropic schema

Used for Anthropic Claude models. Uses the :rawPredict endpoint with Anthropic Messages API format.
$response = Prism::text()
    ->using(Vertex::Anthropic, 'claude-3-5-sonnet@20241022')
    ->withPrompt('Hello')
    ->asText();

OpenAI schema

Used for partner models (Mistral, Meta, DeepSeek, etc.). Uses :rawPredict or :chatCompletions with OpenAI-compatible format.
$response = Prism::text()
    ->using(Vertex::Mistral, 'mistral-small-2503')
    ->withPrompt('Hello')
    ->asText();

Overriding the API schema

You can override the automatically selected schema using withProviderOptions():
use Prism\Vertex\Enums\VertexSchema;

$response = Prism::text()
    ->using(Vertex::Gemini, 'some-model')
    ->withProviderOptions(['apiSchema' => VertexSchema::Anthropic])
    ->withPrompt('Hello')
    ->asText();
Overriding the schema is an advanced feature. The automatic selection works for all standard use cases.

Express mode vs Standard mode

Prism Vertex supports two authentication modes:

Express mode (API key only)

When project_id and location are omitted, the package automatically uses Vertex AI Express Mode endpoints:
config/prism.php
'vertex' => [
    'api_key' => env('VERTEX_API_KEY'),
    // project_id and location omitted — triggers Express mode
],
Express mode only supports Google Gemini models. Partner models require Standard mode.

Standard mode (project + location)

Standard mode supports all providers and authentication methods:
config/prism.php
'vertex' => [
    'project_id'  => env('VERTEX_PROJECT_ID'),
    'location'    => env('VERTEX_LOCATION', 'us-central1'),
    'credentials' => env('VERTEX_CREDENTIALS'), // path to service-account.json
    // OR use api_key instead of credentials:
    // 'api_key' => env('VERTEX_API_KEY'),
],
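The corresponding environment variables (names taken from the env() calls above, values are placeholders) might look like:

```shell
# .env — illustrative values only
VERTEX_PROJECT_ID=my-gcp-project
VERTEX_LOCATION=us-central1
VERTEX_CREDENTIALS=/path/to/service-account.json
```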

Error handling

The package handles Vertex AI errors and converts them to Prism exceptions:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\Exceptions\PrismException;
use Prism\Prism\Exceptions\PrismRateLimitedException;
use Prism\Prism\Exceptions\PrismProviderOverloadedException;

try {
    $response = Prism::text()
        ->using(Vertex::Gemini, 'gemini-2.5-flash')
        ->withPrompt('Hello')
        ->asText();
} catch (PrismRateLimitedException $e) {
    // Handle rate limiting (HTTP 429)
    echo 'Rate limited: ' . $e->getMessage();
} catch (PrismProviderOverloadedException $e) {
    // Handle provider overload (HTTP 503, 529)
    echo 'Provider overloaded: ' . $e->getMessage();
} catch (PrismException $e) {
    // Handle other errors
    echo 'Error: ' . $e->getMessage();
}
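Rate-limit errors are often transient, so one common pattern is a simple retry with exponential backoff. A minimal sketch (the attempt count and sleep durations are illustrative choices, not package recommendations):

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\Exceptions\PrismRateLimitedException;

$maxAttempts = 3;

foreach (range(1, $maxAttempts) as $attempt) {
    try {
        $response = Prism::text()
            ->using(Vertex::Gemini, 'gemini-2.5-flash')
            ->withPrompt('Hello')
            ->asText();
        break; // success — stop retrying
    } catch (PrismRateLimitedException $e) {
        if ($attempt === $maxAttempts) {
            throw $e; // give up after the final attempt
        }
        sleep(2 ** $attempt); // back off: 2s, 4s, ...
    }
}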

Next steps

Structured output

Generate JSON responses with schema validation

Embeddings

Create text embeddings for semantic search

Multi-provider

Use multiple providers with different configurations
