Flock supports Azure OpenAI as a dedicated provider. You connect Flock to your Azure deployment by storing your API key, resource name, and API version as a DuckDB secret, then registering a model that maps to a specific deployment. All standard Flock functions work identically once the provider is configured.
## Prerequisites

You need the following from your Azure OpenAI resource:

- **API key** — found under Keys and Endpoint in the Azure portal
- **Resource name** — the subdomain of your endpoint (for example, `my-resource` if your endpoint is `https://my-resource.openai.azure.com`)
- **API version** — the Azure OpenAI REST API version string (for example, `2024-02-01`)
- **Deployment name** — the name you gave to the model deployment in Azure AI Studio
## Configure your secret
Store your Azure credentials using `CREATE SECRET` with the `azure_llm` type:
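A minimal sketch of the secret definition. The `azure_llm` type and the `API_VERSION` key come from this page; the exact names of the other parameters (`API_KEY`, `RESOURCE_NAME`) are assumptions — check the Flock secrets reference for the canonical key names:

```sql
-- Store Azure OpenAI credentials as a DuckDB secret (parameter names assumed)
CREATE SECRET (
    TYPE azure_llm,
    API_KEY 'your-azure-api-key',       -- from Keys and Endpoint in the Azure portal
    RESOURCE_NAME 'my-resource',        -- subdomain only, not the full URL
    API_VERSION '2024-02-01'            -- Azure OpenAI REST API version
);
```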
Flock constructs the endpoint `https://{RESOURCE_NAME}.openai.azure.com` internally from your resource name.
## Create a model
Register a named model pointing to your Azure deployment. The second argument is your deployment name as configured in Azure AI Studio:

| Argument | Description |
|---|---|
| `'QuackingModel'` | Unique name you reference in queries |
| `'gpt-4o'` | Azure deployment name |
| `'azure'` | Provider name |
| `{...}` | Config: batch size, tuple format, and model parameters |
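The arguments above can be sketched as a `CREATE MODEL` statement. The argument order follows the table; the config keys shown are illustrative assumptions based on the description ("batch size, tuple format, and model parameters") — consult the Flock model reference for the exact keys:

```sql
-- Register a model that maps to an Azure deployment (config keys assumed)
CREATE MODEL(
    'QuackingModel',                      -- name you reference in queries
    'gpt-4o',                             -- your Azure deployment name
    'azure',                              -- provider
    {'batch_size': 32,                    -- illustrative config values
     'model_parameters': {'temperature': 0.2}}
);
```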
## Run a query

With your secret and model in place, call `llm_complete`:
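A minimal sketch, assuming `llm_complete` takes a model-name struct and a prompt struct (the prompt text is illustrative; see the scalar-functions reference for the exact call shape):

```sql
-- Run a completion through the registered Azure-backed model
SELECT llm_complete(
    {'model_name': 'QuackingModel'},
    {'prompt': 'Summarize why ducks are excellent mascots in one sentence.'}
) AS answer;
```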
## Azure-specific configuration details
### Endpoint reconstruction

Flock builds the Azure endpoint from `RESOURCE_NAME` automatically: `https://{RESOURCE_NAME}.openai.azure.com`. You do not need to supply the full URL — only the resource name subdomain.

### API version
Azure OpenAI requires an explicit API version with every request. Pass it as `API_VERSION` when creating your secret. Common values include `2024-02-01` and `2024-05-01-preview`. Check the Azure OpenAI API reference for the current stable version.

### Deployment name vs. model ID
In Azure, you deploy a base model (such as `gpt-4o`) under a custom deployment name that you choose. The second argument to `CREATE MODEL` should be your deployment name, not necessarily the underlying model ID — the two may differ depending on how you named your deployment.

## Next steps
- **Structured output**: Enforce JSON schemas on `llm_complete` responses
- **Scalar functions**: Full reference for `llm_complete`, `llm_filter`, `llm_embedding`, and more