The OpenRouter service provides functions for interacting with OpenRouter’s API for chat completions, streaming responses, and AI image generation.
## Core Functions

### fetchAIResponse

Fetches an AI response from the OpenRouter API, with support for text and image outputs.

```typescript
fetchAIResponse(
  messages: Message[],
  apiKey: string,
  model: string,
  systemPrompt?: string
): Promise<string | MessageContent[]>
```
**Parameters**

- `messages` (`Message[]`, required): Array of conversation messages with role and content
- `apiKey` (`string`, required): OpenRouter API key for authentication
- `model` (`string`, required): Model ID (e.g., `'openai/gpt-4o'`, `'anthropic/claude-4.5-sonnet'`)
- `systemPrompt` (`string`, optional): System prompt to guide model behavior

**Returns**

- `response` (`string | MessageContent[]`): A string for text-only responses, or a `MessageContent` array when the response includes images
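Because the return type is a union, callers usually need to narrow it before rendering. A minimal sketch of a narrowing helper (the `responseText` name is hypothetical, not part of the service; the `MessageContent` type is repeated from the Type Definitions section so the snippet is self-contained):

```typescript
// Repeated from the Type Definitions section for a self-contained snippet.
type MessageContent =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } };

// Hypothetical helper: collapse a fetchAIResponse result to displayable text,
// keeping image URLs as bracketed placeholders.
function responseText(response: string | MessageContent[]): string {
  if (typeof response === 'string') return response;
  return response
    .map((part) =>
      part.type === 'text' ? part.text : `[image: ${part.image_url.url}]`
    )
    .join('\n');
}
```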
Example:

```typescript
const messages = [
  {
    id: '1',
    role: 'user',
    content: 'Explain quantum computing',
    timestamp: new Date()
  }
];

const response = await fetchAIResponse(
  messages,
  'sk-or-v1-xxx',
  'openai/gpt-4o',
  'You are a helpful assistant.'
);
```
### streamAIResponse

Streams AI responses in real time, with support for abort signals.

```typescript
streamAIResponse(
  messages: Message[],
  apiKey: string,
  model: string,
  onChunk: (delta: string) => void,
  systemPrompt?: string,
  abortController?: AbortController
): Promise<string | MessageContent[]>
```
**Parameters**

- `messages` (`Message[]`, required): Array of conversation messages
- `apiKey` (`string`, required): OpenRouter API key for authentication
- `model` (`string`, required): Model ID to use for generation
- `onChunk` (`(delta: string) => void`, required): Callback invoked for each text chunk received
- `systemPrompt` (`string`, optional): System prompt to guide model behavior
- `abortController` (`AbortController`, optional): Controller to cancel the streaming request

**Returns**

- `fullResponse` (`string | MessageContent[]`): Complete accumulated response after streaming finishes
Example:

```typescript
const abortController = new AbortController();

const fullText = await streamAIResponse(
  messages,
  apiKey,
  'anthropic/claude-4.5-sonnet',
  (chunk) => {
    console.log('Received:', chunk);
  },
  'You are a helpful assistant.',
  abortController
);

// To cancel: abortController.abort();
```
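Under the hood, OpenRouter streams OpenAI-style server-sent events, where each `data:` line carries a JSON chunk whose text lives at `choices[0].delta.content`. A sketch of how text deltas could be pulled out of a raw SSE buffer (illustrative only; the real service also handles partial lines and abort signals):

```typescript
// Extract text deltas from a buffer of SSE lines such as:
//   data: {"choices":[{"delta":{"content":"Hel"}}]}
//   data: [DONE]
function extractDeltas(sseText: string): string[] {
  const deltas: string[] = [];
  for (const line of sseText.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;
    const payload = trimmed.slice(5).trim();
    if (payload === '[DONE]') break; // end-of-stream sentinel
    try {
      const content = JSON.parse(payload)?.choices?.[0]?.delta?.content;
      if (typeof content === 'string') deltas.push(content);
    } catch {
      // Ignore keep-alive comments and partial JSON fragments.
    }
  }
  return deltas;
}
```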
## Image Generation

### generateImage

Generates images using AI models that support image generation.

```typescript
generateImage(
  prompt: string,
  apiKey: string,
  model?: string,
  options?: {
    size?: '1024x1024' | '512x512' | '256x256';
    style?: 'natural' | 'vivid' | 'digital_art';
    quality?: 'standard' | 'hd';
  }
): Promise<string | MessageContent[]>
```
**Parameters**

- `prompt` (`string`, required): Description of the image to generate
- `apiKey` (`string`, required): OpenRouter API key for authentication
- `model` (`string`, default `"google/gemini-2.5-flash-image-preview:free"`): Image generation model ID
- `options.size` (optional): Image dimensions (`1024x1024`, `512x512`, or `256x256`)
- `options.style` (optional): Visual style (`natural`, `vivid`, or `digital_art`)
- `options.quality` (optional): Output quality (`standard` or `hd`)
Example:

```typescript
const result = await generateImage(
  'A futuristic city at sunset',
  'sk-or-v1-xxx',
  'google/gemini-2.5-flash-image-preview',
  {
    size: '1024x1024',
    style: 'vivid',
    quality: 'hd'
  }
);
```
### generateImageReliable

Robust image generation with automatic retries and fallback models.

```typescript
generateImageReliable(
  prompt: string,
  apiKey: string,
  primaryModel?: string,
  options?: {
    maxRetries?: number;
    fallbackModels?: string[];
    size?: '1024x1024' | '512x512' | '256x256';
    style?: 'natural' | 'vivid' | 'digital_art';
    quality?: 'standard' | 'hd';
  }
): Promise<string | MessageContent[]>
```
**Parameters**

- `primaryModel` (`string`, optional): Preferred model to try first
- `options.maxRetries` (`number`, optional): Maximum retry attempts per model
- `options.fallbackModels` (`string[]`, optional): Fallback model IDs to try if the primary model fails
Example:

```typescript
const result = await generateImageReliable(
  'Abstract digital artwork',
  apiKey,
  'google/gemini-2.5-flash-image-preview',
  {
    maxRetries: 3,
    fallbackModels: [
      'google/gemini-2.5-flash-image-preview:free',
      'openai/gpt-4o'
    ],
    quality: 'hd'
  }
);
```
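The retry-and-fallback behavior can be pictured as a loop over `[primaryModel, ...fallbackModels]` with up to `maxRetries` attempts each. A simplified sketch with an injected attempt function (`withFallback` and its shape are hypothetical, not the service's actual internals):

```typescript
// Hypothetical sketch: try each model up to maxRetries times, in order,
// returning the first successful result and rethrowing the last failure.
async function withFallback<T>(
  models: string[],
  maxRetries: number,
  attempt: (model: string) => Promise<T>
): Promise<T> {
  let lastError: unknown;
  for (const model of models) {
    for (let i = 0; i < maxRetries; i++) {
      try {
        return await attempt(model);
      } catch (err) {
        lastError = err; // remember the failure and keep trying
      }
    }
  }
  throw lastError;
}
```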
## Model Discovery

### getImageModels

Retrieves available models that support image generation.

```typescript
getImageModels(): Promise<Array<{
  id: string;
  name: string;
  desc: string;
  emoji: string;
}>>
```
Example:

```typescript
const imageModels = await getImageModels();
// Returns:
// [
//   {
//     id: 'google/gemini-2.5-flash-image-preview:free',
//     name: 'Gemini 2.5 Flash Image',
//     desc: "Génération d'images IA avancée",
//     emoji: '🎨'
//   },
//   ...
// ]
```
### getTopWeeklyModels

Fetches trending models for general chat use.

```typescript
getTopWeeklyModels(): Promise<Array<{
  id: string;
  name: string;
  desc: string;
  emoji: string;
  isFree?: boolean;
}>>
```
Example:

```typescript
const trendingModels = await getTopWeeklyModels();
// Returns the latest trending models from the OpenRouter API
```
## Utility Functions

### validateApiKey

Validates an OpenRouter API key.

```typescript
validateApiKey(apiKey: string): Promise<boolean>
```

Returns `true` if the API key is valid, `false` otherwise.
Example:

```typescript
const isValid = await validateApiKey('sk-or-v1-xxx');
if (!isValid) {
  console.error('Invalid API key');
}
```
### isImageGenerationModel

Checks whether a model supports image generation.

```typescript
isImageGenerationModel(modelId: string): boolean
```

Example:

```typescript
const canGenerate = isImageGenerationModel('google/gemini-2.5-flash-image-preview');
// Returns: true
```
### optimizeImagePrompt

Enhances image prompts with quality descriptors.

```typescript
optimizeImagePrompt(prompt: string): string
```

Example:

```typescript
const enhanced = optimizeImagePrompt('a cat');
// Returns: 'a cat, highly detailed, professional quality, vibrant colors...'
```
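A prompt optimizer of this kind typically appends quality descriptors that are not already present in the prompt. A toy sketch of the idea (the `enhancePrompt` name and the descriptor list are assumptions, not the service's actual wording):

```typescript
// Toy sketch of a prompt enhancer; the descriptors actually used by
// optimizeImagePrompt may differ.
const QUALITY_DESCRIPTORS = ['highly detailed', 'professional quality', 'vibrant colors'];

function enhancePrompt(prompt: string): string {
  // Append only the descriptors the prompt does not already contain.
  const missing = QUALITY_DESCRIPTORS.filter(
    (d) => !prompt.toLowerCase().includes(d)
  );
  return missing.length > 0 ? `${prompt}, ${missing.join(', ')}` : prompt;
}
```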
### createAdvancedImagePrompt

Creates detailed image prompts with customizable parameters.

```typescript
createAdvancedImagePrompt(
  basePrompt: string,
  options?: {
    style?: 'natural' | 'vivid' | 'digital_art' | 'photorealistic' | 'anime' | 'oil_painting' | 'watercolor';
    mood?: 'bright' | 'dark' | 'serene' | 'dramatic' | 'playful' | 'mysterious';
    lighting?: 'natural' | 'studio' | 'dramatic' | 'soft' | 'neon' | 'golden_hour';
    composition?: 'centered' | 'rule_of_thirds' | 'wide_angle' | 'close_up' | 'birds_eye';
    quality?: 'standard' | 'hd' | 'ultra_hd';
  }
): string
```
Example:

```typescript
const prompt = createAdvancedImagePrompt(
  'mountain landscape',
  {
    style: 'photorealistic',
    mood: 'serene',
    lighting: 'golden_hour',
    composition: 'rule_of_thirds',
    quality: 'ultra_hd'
  }
);
```
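Each selected option can be thought of as a fragment joined onto the base prompt. A minimal sketch of that assembly (the `buildAdvancedPrompt` name and fragment wording are hypothetical, not the service's actual output):

```typescript
interface AdvancedPromptOptions {
  style?: string;
  mood?: string;
  lighting?: string;
  composition?: string;
  quality?: string;
}

// Hypothetical sketch: turn each selected option into a human-readable
// fragment and append the fragments to the base prompt.
function buildAdvancedPrompt(base: string, options: AdvancedPromptOptions = {}): string {
  const fragments = [
    options.style && `${options.style.replace(/_/g, ' ')} style`,
    options.mood && `${options.mood} mood`,
    options.lighting && `${options.lighting.replace(/_/g, ' ')} lighting`,
    options.composition && `${options.composition.replace(/_/g, ' ')} composition`,
    options.quality && `${options.quality.replace(/_/g, ' ')} quality`,
  ].filter(Boolean);
  return [base, ...fragments].join(', ');
}
```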
## Type Definitions

### Message

```typescript
interface Message {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string | MessageContent[];
  timestamp: Date;
}
```
### MessageContent

```typescript
type MessageContent =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } };
```
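For example, a multimodal user message pairing a text question with an image reference can be built directly from these types (the `imageQuestion` helper is hypothetical; the types are repeated so the snippet is self-contained):

```typescript
type MessageContent =
  | { type: 'text'; text: string }
  | { type: 'image_url'; image_url: { url: string } };

interface Message {
  id: string;
  role: 'user' | 'assistant' | 'system';
  content: string | MessageContent[];
  timestamp: Date;
}

// Hypothetical helper: build a user Message that pairs a text question
// with an image URL, suitable for passing to fetchAIResponse.
function imageQuestion(id: string, question: string, imageUrl: string): Message {
  return {
    id,
    role: 'user',
    content: [
      { type: 'text', text: question },
      { type: 'image_url', image_url: { url: imageUrl } },
    ],
    timestamp: new Date(),
  };
}
```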
### OpenRouterModel

```typescript
interface OpenRouterModel {
  id: string;
  name: string;
  created: number;
  description?: string;
  context_length: number;
  architecture: {
    modality: string;
    tokenizer: string;
    instruct_type?: string;
  };
  pricing: {
    prompt: string;
    completion: string;
    image?: string;
    request?: string;
  };
  top_provider: {
    context_length: number;
    max_completion_tokens?: number;
    is_moderated: boolean;
  };
}
```
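The `pricing` fields are per-token USD amounts encoded as strings, so a rough request cost can be estimated by multiplying them against token counts. A sketch (the `estimateCost` helper is hypothetical, not part of the service):

```typescript
// Hypothetical helper: estimate USD cost for a request from OpenRouter's
// per-token pricing strings (e.g. prompt: "0.0000025").
function estimateCost(
  pricing: { prompt: string; completion: string },
  promptTokens: number,
  completionTokens: number
): number {
  return (
    parseFloat(pricing.prompt) * promptTokens +
    parseFloat(pricing.completion) * completionTokens
  );
}
```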