Configuration
Ollama Options
Ollama allows you to customize how the model is run via options. These options can be passed via the ->withProviderOptions() method:
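A minimal sketch of passing provider options. The namespaces and the specific option keys (temperature, top_p, num_ctx) are illustrative assumptions — check which options your model and Prism version accept:

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

// Hypothetical example: these keys are common Ollama runtime options,
// not an exhaustive or guaranteed list.
$response = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withPrompt('Explain gravity in one sentence.')
    ->withProviderOptions([
        'temperature' => 0.7,
        'top_p' => 0.9,
        'num_ctx' => 4096, // context window size
    ])
    ->asText();

echo $response->text;
```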
Using withProviderOptions will override settings like topP and temperature.
Streaming
Ollama supports streaming responses from your local models:
Remember to increase the timeout for local models to prevent premature disconnection.
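A sketch of consuming a streamed response; the asStream() call and chunk shape are assumptions based on Prism's text API and may differ by version:

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

// Stream tokens from a local model as they are generated.
$stream = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withPrompt('Write a haiku about autumn.')
    ->asStream();

foreach ($stream as $chunk) {
    echo $chunk->text; // emit each fragment as it arrives
}
```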
Considerations
Timeouts
Depending on your configuration, responses may time out, since local models often generate more slowly than hosted APIs. You can extend the client's timeout using ->withClientOptions(['timeout' => $seconds]):
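For example, a sketch giving a slow local model up to two minutes before the HTTP client disconnects (model name and prompt are placeholders):

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

$response = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withClientOptions(['timeout' => 120]) // seconds
    ->withPrompt('Summarize the plot of Hamlet.')
    ->asText();
```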
Structured Output
Ollama doesn’t have native JSON mode or structured output like some providers. Prism implements a robust workaround:
- We automatically append instructions to your prompt that guide the model to output valid JSON matching your schema
- If the response isn’t valid JSON, Prism will raise a PrismException
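A sketch of requesting structured output; the schema class names and the structured() entry point are assumptions drawn from Prism's structured-output API and may vary by version:

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;

// Define the shape you want; Prism appends JSON-formatting instructions
// to the prompt and validates the model's reply against this schema.
$schema = new ObjectSchema(
    name: 'movie',
    description: 'A movie recommendation',
    properties: [
        new StringSchema('title', 'The movie title'),
        new StringSchema('reason', 'Why it is recommended'),
    ],
    requiredFields: ['title', 'reason'],
);

$response = Prism::structured()
    ->using(Provider::Ollama, 'llama3.2')
    ->withSchema($schema)
    ->withPrompt('Recommend a sci-fi movie.')
    ->asStructured();

// $response->structured holds the decoded array; invalid JSON
// from the model raises a PrismException instead.
$movie = $response->structured;
```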
Limitations
Image URL
Ollama does not support images using Image::fromUrl().
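As a workaround sketch, you can load the image from disk instead. The message and image class paths, the fromLocalPath() helper, and the llava model name are all assumptions — verify them against your Prism version:

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\ValueObjects\Messages\UserMessage;
use Prism\Prism\ValueObjects\Messages\Support\Image;

// Hypothetical: pass a local file rather than a URL to a
// vision-capable model pulled into Ollama.
$response = Prism::text()
    ->using(Provider::Ollama, 'llava')
    ->withMessages([
        new UserMessage('Describe this image.', [
            Image::fromLocalPath('/path/to/photo.jpg'),
        ]),
    ])
    ->asText();
```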
Tool Choice
Ollama does not currently support tool choice / required tools.
Running Local Models
Ollama is perfect for running models locally with complete privacy:
Popular Models
Some popular models available through Ollama:
- llama3.2 - Meta’s latest Llama model
- gemma3 - Google’s Gemma models
- mistral - Mistral AI models
- phi3 - Microsoft’s Phi-3 models
- codellama - Code-specialized Llama
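As a sketch, running one of these models locally looks the same as any other Prism call (assuming llama3.2 has already been pulled and the Ollama service is running):

```php
<?php

use Prism\Prism\Prism;
use Prism\Prism\Enums\Provider;

// Everything stays on your machine: the request goes to the
// local Ollama service, not a hosted API.
$response = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withPrompt('What is the capital of France?')
    ->asText();

echo $response->text;
```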
Installation
To use Ollama:
- Install Ollama from ollama.ai
- Pull a model:
ollama pull llama3.2
- Start the Ollama service
- Configure Prism with your local URL
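The last step can be sketched as a config entry. The file path, key names, env variable, and default URL below are assumptions — check the config file published by your Prism version:

```php
<?php

// config/prism.php (hypothetical sketch)
return [
    'providers' => [
        'ollama' => [
            // Ollama listens on port 11434 by default.
            'url' => env('OLLAMA_URL', 'http://localhost:11434/v1'),
        ],
    ],
];
```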