Prism Vertex supports text embeddings through Google’s embedding models. Embeddings are vector representations of text that enable semantic search, similarity comparison, and clustering.

Basic usage

Generate embeddings for a text input:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

$embeddings = $response->embeddings;
The embeddings property contains an array of floating-point numbers representing the text in vector space.

Supported models

Only Google Gemini models support embeddings:
| Model | Dimensions | Use Case |
| --- | --- | --- |
| text-embedding-005 | 768 | General-purpose text embeddings |
| text-embedding-004 | 768 | Previous generation embeddings |
| textembedding-gecko@003 | 768 | Gecko model family |
| textembedding-gecko-multilingual@001 | 768 | Multilingual support |
Embeddings are only supported for the Gemini schema. Partner models (Anthropic, Mistral, Meta, etc.) do not support embeddings through Vertex AI.

Complete example

Here’s a complete example showing how to generate and use embeddings:
1. Generate embeddings

Create embeddings for your text:
use Prism\Prism\Prism;
use Prism\Vertex\Enums\Vertex;

$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

$embeddings = $response->embeddings;
2. Store embeddings

Store the embeddings in your database for later comparison:
use Illuminate\Support\Facades\DB;

DB::table('documents')->insert([
    'content' => 'The sky is blue',
    'embedding' => json_encode($embeddings),
    'created_at' => now(),
]);
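When you read a stored row back, remember that the vector was serialized as JSON and must be decoded before comparison. A minimal round-trip sketch (the three-element vector is a stand-in for a real 768-dimension embedding):

```php
// The JSON string written to the database decodes back to the
// same float array the API returned.
$embedding = [0.12, -0.34, 0.56]; // stand-in for a 768-dim vector

$stored = json_encode($embedding);      // value written to the column
$restored = json_decode($stored, true); // value read back for comparison

// $restored == $embedding, ready to pass to a similarity function
```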
3. Compare similarity

Compare embeddings using cosine similarity:
function cosineSimilarity(array $a, array $b): float
{
    $dotProduct = 0;
    $normA = 0;
    $normB = 0;
    
    for ($i = 0; $i < count($a); $i++) {
        $dotProduct += $a[$i] * $b[$i];
        $normA += $a[$i] * $a[$i];
        $normB += $b[$i] * $b[$i];
    }
    
    return $dotProduct / (sqrt($normA) * sqrt($normB));
}

// Generate embeddings for a query
$queryResponse = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('What color is the sky?')
    ->asEmbeddings();

$queryEmbedding = $queryResponse->embeddings;

// Compare with stored embeddings
$similarity = cosineSimilarity($embeddings, $queryEmbedding);

echo "Similarity: {$similarity}"; // High similarity score (close to 1.0)

Schema support

Embeddings are only supported for the Gemini schema:
| Schema | Embeddings Support |
| --- | --- |
| Gemini | Yes |
| Anthropic | No |
| OpenAI | No |
Attempting to generate embeddings with Anthropic or OpenAI schemas will throw a PrismException.

Error handling

Handle embedding errors appropriately:
use Prism\Prism\Exceptions\PrismException;
use Prism\Vertex\Enums\Vertex;

try {
    $response = Prism::embeddings()
        ->using(Vertex::Gemini, 'text-embedding-005')
        ->fromInput('The sky is blue')
        ->asEmbeddings();
} catch (PrismException $e) {
    // Handle errors (e.g., unsupported schema, rate limits)
    echo 'Error: ' . $e->getMessage();
}

Unsupported schema error

Using embeddings with an unsupported schema throws an exception:
// This will throw PrismException
$response = Prism::embeddings()
    ->using(Vertex::Anthropic, 'claude-3-5-sonnet@20241022')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

// Error: "Prism Vertex does not support embeddings for the anthropic apiSchema."

Response object

The asEmbeddings() method returns a Prism\Prism\Embeddings\Response object:
$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('The sky is blue')
    ->asEmbeddings();

// Access the embeddings array
$embeddings = $response->embeddings;

// Check dimensions
echo count($embeddings); // 768 for text-embedding-005

// Access usage statistics (when available)
echo $response->usage?->inputTokens ?? 'N/A';

// Access the raw response
var_dump($response->response);

Use cases

Embeddings enable several powerful use cases.

Semantic search

Find documents similar to a query:
// Index documents
$documents = [
    'The sky is blue',
    'Grass is green',
    'The ocean is blue',
];

$documentEmbeddings = [];
foreach ($documents as $doc) {
    $response = Prism::embeddings()
        ->using(Vertex::Gemini, 'text-embedding-005')
        ->fromInput($doc)
        ->asEmbeddings();
    
    $documentEmbeddings[] = [
        'content' => $doc,
        'embedding' => $response->embeddings,
    ];
}

// Search
$queryResponse = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('What is the color of the ocean?')
    ->asEmbeddings();

// Find most similar document
$similarities = [];
foreach ($documentEmbeddings as $doc) {
    $similarities[] = [
        'content' => $doc['content'],
        'similarity' => cosineSimilarity($queryResponse->embeddings, $doc['embedding']),
    ];
}

usort($similarities, fn($a, $b) => $b['similarity'] <=> $a['similarity']);

echo "Most similar: " . $similarities[0]['content'];
// Output: "Most similar: The ocean is blue"

Document clustering

Group similar documents together:
$documents = [
    'Machine learning is a subset of AI',
    'Deep learning uses neural networks',
    'The weather is nice today',
    'It might rain tomorrow',
];

$embeddings = [];
foreach ($documents as $doc) {
    $response = Prism::embeddings()
        ->using(Vertex::Gemini, 'text-embedding-005')
        ->fromInput($doc)
        ->asEmbeddings();
    
    $embeddings[] = $response->embeddings;
}

// Use clustering algorithms (k-means, hierarchical, etc.)
// to group similar documents based on embeddings
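A full k-means implementation is beyond this guide, but a minimal sketch of greedy threshold-based grouping shows how pairwise cosine similarity drives clustering. The `clusterByThreshold` helper and the 0.8 threshold are illustrative, not part of Prism; the similarity function mirrors the one defined earlier:

```php
// Cosine similarity between two equal-length vectors.
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0; $normA = 0; $normB = 0;
    foreach ($a as $i => $v) {
        $dot += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// Each vector joins the first cluster whose representative
// (first member) is within the similarity threshold; otherwise
// it starts a new cluster. Clusters hold indexes into $embeddings.
function clusterByThreshold(array $embeddings, float $threshold = 0.8): array
{
    $clusters = [];
    foreach ($embeddings as $index => $vector) {
        foreach ($clusters as $i => $cluster) {
            if (cosineSimilarity($embeddings[$cluster[0]], $vector) >= $threshold) {
                $clusters[$i][] = $index;
                continue 2; // joined an existing cluster
            }
        }
        $clusters[] = [$index]; // start a new cluster
    }
    return $clusters;
}
```

Real workloads usually reach for k-means or a vector database; this single greedy pass only illustrates the mechanics.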

Recommendation systems

Recommend similar content based on user preferences:
// Get embeddings for user's liked content
$likedContent = 'Articles about machine learning';

$userEmbedding = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput($likedContent)
    ->asEmbeddings();

// Compare with available content
$availableContent = [
    'Deep learning tutorial',
    'Cooking recipes',
    'Neural network architectures',
];

foreach ($availableContent as $content) {
    $contentEmbedding = Prism::embeddings()
        ->using(Vertex::Gemini, 'text-embedding-005')
        ->fromInput($content)
        ->asEmbeddings();
    
    $similarity = cosineSimilarity(
        $userEmbedding->embeddings,
        $contentEmbedding->embeddings
    );
    
    echo "{$content}: {$similarity}\n";
}

Best practices

1. Use consistent models

Always use the same embedding model for indexing and querying:
// Index with text-embedding-005
$indexResponse = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('Document text')
    ->asEmbeddings();

// Query with the same model
$queryResponse = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput('Query text')
    ->asEmbeddings();
2. Normalize text before embedding

Clean and normalize text for better results:
function normalizeText(string $text): string
{
    // Remove extra whitespace
    $text = preg_replace('/\s+/', ' ', $text);
    
    // Trim
    $text = trim($text);
    
    // Convert to lowercase (optional, depending on use case)
    $text = strtolower($text);
    
    return $text;
}

$response = Prism::embeddings()
    ->using(Vertex::Gemini, 'text-embedding-005')
    ->fromInput(normalizeText('  The Sky   is  BLUE  '))
    ->asEmbeddings();
3. Cache embeddings

Cache embeddings to avoid redundant API calls:
use Illuminate\Support\Facades\Cache;

function getEmbeddings(string $text): array
{
    $cacheKey = 'embedding:' . md5($text);
    
    return Cache::remember($cacheKey, now()->addDays(30), function () use ($text) {
        $response = Prism::embeddings()
            ->using(Vertex::Gemini, 'text-embedding-005')
            ->fromInput($text)
            ->asEmbeddings();
        
        return $response->embeddings;
    });
}
4. Batch processing

Process multiple texts with throttling to avoid rate limits:
$documents = [
    'Document 1',
    'Document 2',
    'Document 3',
];

$embeddings = [];
foreach ($documents as $doc) {
    $response = Prism::embeddings()
        ->using(Vertex::Gemini, 'text-embedding-005')
        ->fromInput($doc)
        ->asEmbeddings();
    
    $embeddings[] = $response->embeddings;
    
    // Add delay to avoid rate limits
    usleep(100000); // 100ms
}
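The loop above pauses after every request. For larger jobs you can instead group documents into chunks and pause once per chunk; this generic sketch separates the throttling from the API call (the `embedInChunks` helper, its chunk size, and its delay are illustrative, not part of Prism):

```php
// Apply $embed to each document, pausing between chunks of
// $chunkSize to stay under rate limits. Keys are preserved so
// results line up with the input array.
function embedInChunks(array $documents, callable $embed, int $chunkSize = 5): array
{
    $embeddings = [];
    foreach (array_chunk($documents, $chunkSize, true) as $chunk) {
        foreach ($chunk as $index => $doc) {
            $embeddings[$index] = $embed($doc);
        }
        usleep(100000); // 100ms between chunks (tune to your quota)
    }
    return $embeddings;
}

// Usage with Prism — the closure wraps the call shown above:
// $vectors = embedInChunks($documents, fn ($doc) => Prism::embeddings()
//     ->using(Vertex::Gemini, 'text-embedding-005')
//     ->fromInput($doc)
//     ->asEmbeddings()
//     ->embeddings);
```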

Authentication requirements

Embeddings are supported in both Express and Standard modes:

Express mode

config/prism.php
'vertex' => [
    'api_key' => env('VERTEX_API_KEY'),
],

Standard mode

config/prism.php
'vertex' => [
    'project_id'  => env('VERTEX_PROJECT_ID'),
    'location'    => env('VERTEX_LOCATION', 'us-central1'),
    'credentials' => env('VERTEX_CREDENTIALS'),
],

Next steps

Text generation

Generate text responses using Vertex AI

Structured output

Generate JSON responses with schema validation

Multi-provider

Use multiple providers with different configurations
