Google Gemini models use the native Vertex AI generateContent and predict endpoints, providing full support for text generation, structured output, and embeddings.

Provider constant

Use the Vertex::Gemini constant to access Google Gemini models:
use Prism\Vertex\Enums\Vertex;

Vertex::Gemini  // 'vertex-gemini'

Configuration

Gemini models use the shared vertex configuration block:
'vertex' => [
    'project_id'  => env('VERTEX_PROJECT_ID'),
    'location'    => env('VERTEX_LOCATION', 'us-central1'),
    'credentials' => env('VERTEX_CREDENTIALS'), // path to service-account.json
    // OR use api_key instead:
    // 'api_key' => env('VERTEX_API_KEY'),
],

Express mode support

Gemini is the only provider that supports Express mode, which allows you to use an API key without providing a project ID or location:
'vertex' => [
    'api_key' => env('VERTEX_API_KEY'),
    // project_id and location omitted — triggers Express mode
],
Express mode only works with Google Gemini models. Partner models require Standard mode with a project ID and location.

API schema

Gemini models use the Gemini schema, which provides:
  • Native generateContent endpoint for text generation
  • Native predict endpoint for embeddings
  • Native structured output via response_mime_type: application/json and response_schema
  • Publisher: google
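
The schema handles the Vertex AI wire format for you. For reference, a generateContent request body sent to the publishers/google endpoint roughly looks like the following (an illustrative sketch using field names from Google's public Vertex AI API, not output produced by Prism):

```json
{
  "contents": [
    {
      "role": "user",
      "parts": [{ "text": "Explain quantum computing in simple terms" }]
    }
  ],
  "generationConfig": {
    "temperature": 0.7
  }
}
```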

Example models

  • gemini-2.5-flash
  • gemini-2.5-pro
  • gemini-2.0-flash-exp
  • gemini-1.5-pro
  • gemini-1.5-flash

Usage examples

Text generation

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

echo $response->text;

Structured output

Gemini has native structured output support, which constrains the model to produce valid JSON matching your schema:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\ArraySchema;

$schema = new ObjectSchema(
    name: 'languages',
    description: 'Top programming languages',
    properties: [
        new ArraySchema(
            name: 'languages',
            description: 'List of programming languages',
            items: new ObjectSchema(
                name: 'language',
                description: 'Programming language details',
                properties: [
                    new StringSchema('name', 'The language name'),
                    new StringSchema('popularity', 'Popularity description'),
                ]
            )
        )
    ]
);

$response = Prism::structured()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withSchema($schema)
    ->withPrompt('List the top 3 programming languages')
    ->asStructured();

$data = $response->structured;
Under the hood, Gemini structured output uses:
  • response_mime_type: application/json
  • response_schema carrying your JSON schema
  • A generation constraint so the model can only emit valid JSON matching that schema
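
In raw request terms, the schema above would translate to a generationConfig along these lines (an illustrative sketch; the exact payload is assembled by the provider):

```json
{
  "generationConfig": {
    "response_mime_type": "application/json",
    "response_schema": {
      "type": "object",
      "properties": {
        "languages": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "name": { "type": "string" },
              "popularity": { "type": "string" }
            }
          }
        }
      }
    }
  }
}
```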

Embeddings

Gemini is the only provider in Prism Vertex that supports embeddings:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

$embeddings = $response->embeddings;

Multi-turn conversation

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

$response = Prism::text()
    ->using(Vertex::Gemini, 'gemini-2.5-flash')
    ->withMessages([
        new UserMessage('What is the capital of France?'),
        new AssistantMessage('The capital of France is Paris.'),
        new UserMessage('What is its population?'),
    ])
    ->asText();

echo $response->text;

Capabilities

  • Text generation: full support for text generation with multi-turn conversations
  • Structured output: native structured output with schema constraints
  • Embeddings: generate embeddings for text

Next steps

  • Structured output: learn more about structured output
  • Configuration: see all configuration options
