Ollama lets you run open-weight models locally, without external API calls or API keys. Flock connects to your local Ollama instance over HTTP, so queries stay entirely on your machine. This makes Ollama a good choice for development, offline use, and privacy-sensitive workloads. Both text completion and vision models are supported.
## Prerequisites

Before configuring Flock, you need Ollama installed and running with at least one model downloaded:

1. **Install Ollama**: download it from ollama.com/download.
2. **Pull a model**: for example, `ollama pull llama3.2`.
3. **Confirm Ollama is running**: the default address is `127.0.0.1:11434`.
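To verify the last step, you can query Ollama's local HTTP API (a quick check, assuming the default address):

```shell
# Lists the models available locally; a JSON response confirms the server is up.
curl http://127.0.0.1:11434/api/tags
```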
## Configure your secret

Ollama does not require an API key. You only need to tell Flock where Ollama is listening. The `API_URL` field is required and must point to your running Ollama instance; if you're running Ollama on a different host or port, update the URL accordingly.
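A minimal sketch of the secret, assuming a `CREATE SECRET` statement with a `TYPE OLLAMA` provider and the default address:

```sql
-- Tell Flock where the local Ollama server is listening.
-- The exact syntax here is an assumption; only API_URL is confirmed above.
CREATE SECRET (
    TYPE OLLAMA,
    API_URL '127.0.0.1:11434'
);
```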
## Create a model

Register a named model in Flock using the exact model name you pulled with Ollama:

| Argument | Description |
|---|---|
| `'QuackingModel'` | Unique name you reference in queries |
| `'llama3.2'` | Ollama model name (must already be pulled) |
| `'ollama'` | Provider name |
| `{...}` | Config: batch size, tuple format, and model parameters |
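Putting the arguments above together, a hedged sketch; the specific config keys (`batch_size`, `tuple_format`, `model_parameters`) are assumptions based on the table, so adjust them to the actual Flock schema:

```sql
CREATE MODEL(
    'QuackingModel',   -- unique name referenced in queries
    'llama3.2',        -- Ollama model name (pulled earlier)
    'ollama',          -- provider
    {'batch_size': 32,
     'tuple_format': 'json',
     'model_parameters': {'temperature': 0.7}}  -- assumed keys, for illustration
);
```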
## Run a query

With your secret and model in place, call `llm_complete`:
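For illustration, a sketch of a completion call; the argument structure (a model struct followed by a prompt struct) is an assumption, so check the scalar functions reference for the exact signature:

```sql
-- Ask the registered model for a one-off completion.
SELECT llm_complete(
    {'model_name': 'QuackingModel'},
    {'prompt': 'Write a one-line haiku about ducks.'}
) AS haiku;
```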
## Supported model types

- Text models
- Vision models

Any Ollama text/chat model works with `llm_complete`, `llm_filter`, and the aggregate functions. Popular choices include:

| Model | Pull command |
|---|---|
| Llama 3.2 (3B) | `ollama pull llama3.2` |
| Llama 3.1 (8B) | `ollama pull llama3.1` |
| Mistral 7B | `ollama pull mistral` |
| Gemma 2 (9B) | `ollama pull gemma2` |
| Phi-3 Mini | `ollama pull phi3` |

See the full catalog at ollama.com/library.
## Next steps

- **Image support**: analyze images with vision models in SQL queries.
- **Scalar functions**: full reference for `llm_complete`, `llm_filter`, `llm_embedding`, and more.