Mistral AI models are available through Vertex AI via the :rawPredict endpoint, which uses an OpenAI-compatible request and response format.

Provider constant

Use the Vertex::Mistral constant to access Mistral AI models:
use Prism\Vertex\Enums\Vertex;

Vertex::Mistral  // 'vertex-mistral'

Configuration

Mistral models use the shared vertex configuration block:
'vertex' => [
    'project_id'  => env('VERTEX_PROJECT_ID'),
    'location'    => env('VERTEX_LOCATION', 'us-central1'),
    'credentials' => env('VERTEX_CREDENTIALS'), // path to service-account.json
],
Express mode is not supported for Mistral models. You must provide a project ID and location.
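For reference, a minimal .env setup for the variables above might look like this (all values are placeholders for your own project):

```
VERTEX_PROJECT_ID=my-gcp-project
VERTEX_LOCATION=us-central1
VERTEX_CREDENTIALS=/path/to/service-account.json
```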

API schema

Mistral models use the OpenAI schema, which provides:
  • :rawPredict endpoint with OpenAI-compatible format
  • Publisher: mistralai
  • Supports text generation and structured output
  • No native embeddings support
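Under the hood, the provider posts an OpenAI-style chat payload to :rawPredict. As a rough sketch of what that request body looks like (field names follow the OpenAI chat format; the exact payload Prism builds may include additional parameters, and the model name travels in the URL rather than the body):

```json
{
  "messages": [
    {"role": "user", "content": "Explain quantum computing in simple terms"}
  ]
}
```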

Example models

  • mistral-small-2503
  • mistral-medium-3
  • mistral-large-2407
  • codestral-2501
  • mixtral-8x7b-instruct-v01

Usage examples

Text generation

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::text()
    ->using(Vertex::Mistral, 'mistral-small-2503')
    ->withPrompt('Explain quantum computing in simple terms')
    ->asText();

echo $response->text;

Structured output

Mistral models support structured output via response_format: { type: "json_object" } combined with a schema instruction:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\ArraySchema;

$schema = new ObjectSchema(
    name: 'languages',
    description: 'Top programming languages',
    properties: [
        new ArraySchema(
            'languages',
            'List of programming languages',
            items: new ObjectSchema(
                name: 'language',
                description: 'Programming language details',
                properties: [
                    new StringSchema('name', 'The language name'),
                    new StringSchema('popularity', 'Popularity description'),
                ]
            )
        )
    ]
);

$response = Prism::structured()
    ->using(Vertex::Mistral, 'mistral-small-2503')
    ->withSchema($schema)
    ->withPrompt('List the top 3 programming languages')
    ->asStructured();

$data = $response->structured;
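Assuming the model honors the schema above, $response->structured decodes to a nested PHP array matching the schema's shape, so you can iterate it directly (a hypothetical usage sketch):

```php
foreach ($data['languages'] as $language) {
    echo "{$language['name']}: {$language['popularity']}\n";
}
```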

Custom JSON instruction

You can customize the JSON schema instruction message:
$response = Prism::structured()
    ->using(Vertex::Mistral, 'mistral-small-2503')
    ->withSchema($schema)
    ->withProviderOptions([
        'jsonModeMessage' => 'Return JSON matching this schema: {schema}',
    ])
    ->withPrompt('List the top 3 programming languages')
    ->asStructured();

Code generation with Codestral

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::text()
    ->using(Vertex::Mistral, 'codestral-2501')
    ->withPrompt('Write a PHP function to calculate Fibonacci numbers')
    ->asText();

echo $response->text;

Multi-turn conversation

use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\AssistantMessage;

$response = Prism::text()
    ->using(Vertex::Mistral, 'mistral-medium-3')
    ->withMessages([
        new UserMessage('What is the capital of France?'),
        new AssistantMessage('The capital of France is Paris.'),
        new UserMessage('What is its population?'),
    ])
    ->asText();

echo $response->text;

Capabilities

Text generation

Full support for text generation with multi-turn conversations

Structured output

Structured output via JSON mode with schema instruction

Endpoint details

Mistral models use the :rawPredict endpoint action, where the model name is included in the URL path rather than the request body.
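Concretely, the request URL follows the standard Vertex publisher-model pattern, with the mistralai publisher and the model name in the path (a sketch; consult the Google Cloud documentation for the authoritative form):

```
https://{location}-aiplatform.googleapis.com/v1/projects/{project_id}/locations/{location}/publishers/mistralai/models/{model}:rawPredict
```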

Next steps

Structured output

Learn more about structured output

Other providers

Explore other MaaS providers
