Flock supports OpenAI as a first-class provider. Once you store your API key with `CREATE SECRET`, every Flock function (`llm_complete`, `llm_filter`, `llm_embedding`, and the rest) can use any OpenAI model. Flock also supports any OpenAI-compatible API, such as Groq or a self-hosted vLLM server, by supplying a custom `BASE_URL`.
## Prerequisites
You need an OpenAI API key. Create one at platform.openai.com/api-keys. Make sure Flock is installed and loaded before continuing; see the Quickstart if you haven't done that yet.

## Configure your secret
Store your API key using DuckDB's `CREATE SECRET` command with the `openai` type:
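A minimal sketch of the statement, assuming the standard DuckDB secret syntax and a placeholder key (substitute your own):

```sql
-- Registers an OpenAI secret that Flock functions will pick up automatically.
CREATE SECRET (
    TYPE OPENAI,
    API_KEY 'sk-your-key-here'
);
```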
The `API_KEY` field is required. Flock will raise an error if it is missing or empty.
## Create a model
Register a named model that references the provider and model ID you want to use:

| Argument | Description |
|---|---|
| `'QuackingModel'` | Unique name you reference in queries |
| `'gpt-4o'` | OpenAI model ID |
| `'openai'` | Provider name |
| `{...}` | Config: batch size, tuple format, and model parameters |
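Putting the four arguments from the table together, a sketch of the registration statement. The config keys shown (`batch_size`, `tuple_format`, `model_parameters`) follow the table's description, but their exact names and accepted values are assumptions; check the Flock model reference for the authoritative list:

```sql
-- Registers a model named 'QuackingModel' backed by OpenAI's gpt-4o.
CREATE MODEL(
    'QuackingModel',
    'gpt-4o',
    'openai',
    {"batch_size": 32, "tuple_format": "json", "model_parameters": {"temperature": 0}}
);
```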
## Run a query
With your secret and model in place, call `llm_complete`:
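A hypothetical sketch of the call shape, assuming `llm_complete` takes a model struct followed by a prompt struct (see the scalar functions reference for the exact parameters):

```sql
-- Sends a single prompt through the registered model.
SELECT llm_complete(
    {'model_name': 'QuackingModel'},
    {'prompt': 'Write a one-line tagline for a duck-themed database.'}
) AS tagline;
```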
## Available OpenAI models
The table below lists commonly used models. For the full and up-to-date list, see the OpenAI models documentation.

| Model ID | Type | Notes |
|---|---|---|
| `gpt-4o` | Chat completion | Best overall performance |
| `gpt-4o-mini` | Chat completion | Fast and cost-effective |
| `gpt-4-turbo` | Chat completion | Previous flagship model |
| `text-embedding-3-small` | Embedding | Recommended for most embedding tasks |
| `text-embedding-3-large` | Embedding | Higher-dimensional embeddings |
| `text-embedding-ada-002` | Embedding | Legacy embedding model |
Use embedding models with `llm_embedding` and chat models with `llm_complete`, `llm_filter`, and aggregate functions.

## OpenAI-compatible providers
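For embeddings, the flow is the same: register a model that points at an embedding model ID, then call `llm_embedding`. The model name `DuckEmbedder` and the argument shapes below are illustrative assumptions modeled on the `llm_complete` pattern, not confirmed signatures:

```sql
-- Register an embedding model (illustrative name and config).
CREATE MODEL('DuckEmbedder', 'text-embedding-3-small', 'openai', {"batch_size": 32});

-- Embed a piece of text with the registered model.
SELECT llm_embedding(
    {'model_name': 'DuckEmbedder'},
    {'text': 'Ducks are excellent swimmers.'}
) AS embedding;
```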
Any provider with an OpenAI-compatible API works with the `openai` secret type; just add a `BASE_URL`. The example below configures Groq:
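A sketch of the Groq secret, assuming Groq's published OpenAI-compatible endpoint (`https://api.groq.com/openai/v1`) and a placeholder key:

```sql
-- Same secret type as OpenAI, with BASE_URL redirecting requests to Groq.
CREATE SECRET (
    TYPE OPENAI,
    API_KEY 'gsk-your-groq-key',
    BASE_URL 'https://api.groq.com/openai/v1'
);
```

A self-hosted vLLM server works the same way: point `BASE_URL` at your server's OpenAI-compatible endpoint.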
## Next steps
- **Structured output**: enforce JSON schemas on `llm_complete` responses
- **Scalar functions**: full reference for `llm_complete`, `llm_filter`, `llm_embedding`, and more