
Flock supports Anthropic’s Claude family of models through a dedicated provider. Claude models are available for text completion, filtering, and aggregation. Note that Anthropic does not provide an embeddings API — if you need embeddings for RAG pipelines or similarity search, use the OpenAI or Ollama provider alongside Claude.

Prerequisites

You need an Anthropic API key. Create one at console.anthropic.com. Make sure Flock is installed and loaded before continuing — see the Quickstart if you haven’t done that yet.

Configure your secret

Store your API key using CREATE SECRET with the anthropic type. You can optionally specify an API version — Flock defaults to 2023-06-01:
```sql
CREATE SECRET (
    TYPE anthropic,
    API_KEY 'your-api-key'
);
```
The API_KEY field is required. API_VERSION is optional — the default 2023-06-01 works with all current Claude models.
DuckDB’s secret manager automatically redacts api_key values in output. You can inspect your secrets with FROM duckdb_secrets() without exposing the key.
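For example, you can pin the API version explicitly and then confirm the secret was stored. This is a sketch combining the fields described above; the redacted listing comes from DuckDB's secret manager:

```sql
-- Store the key with an explicit API version (optional; 2023-06-01 is the default)
CREATE SECRET (
    TYPE anthropic,
    API_KEY 'your-api-key',
    API_VERSION '2023-06-01'
);

-- List registered secrets; the api_key value appears redacted
FROM duckdb_secrets();
```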

Create a model

Register a named model using a Claude model ID and the anthropic provider. Anthropic requires a max_tokens parameter — Flock defaults to 4096 if you omit it.
```sql
CREATE MODEL(
    'ClaudeModel',
    'claude-sonnet-4-5',
    'anthropic',
    {"tuple_format": "json", "batch_size": 32, "model_parameters": {"temperature": 0.7, "max_tokens": 1024}}
);
```
The four arguments are:

| Argument | Description |
| --- | --- |
| `'ClaudeModel'` | Unique name you reference in queries |
| `'claude-sonnet-4-5'` | Anthropic model ID |
| `'anthropic'` | Provider name |
| `{...}` | Config: batch size, tuple format, and model parameters |

Run a query

With your secret and model in place, call llm_complete:
```sql
SELECT llm_complete(
    {'model_name': 'ClaudeModel'},
    {'prompt': 'Explain what a database is in simple terms.'}
);
```
To use column data as context:
```sql
SELECT llm_complete(
    {'model_name': 'ClaudeModel'},
    {
        'prompt': 'Summarize this support ticket in one sentence: {ticket}',
        'context_columns': [{'data': ticket_text, 'name': 'ticket'}]
    }
) AS summary
FROM support_tickets;
```
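Claude models also work with Flock's filtering functions. The sketch below assumes `llm_filter` takes the same model and prompt structs as `llm_complete` and returns a boolean per row; see the scalar functions reference for the exact signature:

```sql
-- Keep only the tickets the model judges urgent (illustrative)
SELECT ticket_text
FROM support_tickets
WHERE llm_filter(
    {'model_name': 'ClaudeModel'},
    {
        'prompt': 'Is this support ticket urgent? {ticket}',
        'context_columns': [{'data': ticket_text, 'name': 'ticket'}]
    }
);
```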

Available Claude models

Flock works with all current Claude models. The adapter automatically selects the correct API method based on the model version.
| Model ID | Description | Structured output method |
| --- | --- | --- |
| `claude-opus-4-5` | Most capable | `output_format` |
| `claude-sonnet-4-5` | Best balance of speed and capability | `output_format` |
| `claude-haiku-4-5` | Fast and cost-effective | `output_format` |
| `claude-3-5-sonnet-20241022` | Previous Sonnet generation | `tool_use` |
| `claude-3-haiku-20240307` | Previous Haiku generation | `tool_use` |
For the full and current list, see the Anthropic models documentation.
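Previous-generation models register the same way; the adapter then applies `tool_use` for structured output automatically. A sketch reusing the `CREATE MODEL` pattern from above:

```sql
-- Older Claude 3.x model: Flock falls back to tool_use for structured output
CREATE MODEL(
    'ClaudeSonnet35',
    'claude-3-5-sonnet-20241022',
    'anthropic',
    {"model_parameters": {"max_tokens": 1024}}
);
```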

Structured output

Flock uses a hybrid approach for structured JSON output depending on the Claude model generation:
  • Claude 4.x models use the native output_format API, which guarantees strict schema compliance
  • Claude 3.x models fall back to tool_use, which provides structured output on models that predate the output_format API
You do not need to configure this — Flock detects the model version and applies the right method automatically. For Claude 4.x models, you can pass a custom JSON schema:
```sql
SELECT llm_complete(
    {
        'model_name': 'ClaudeModel',
        'model_parameters': '{
            "output_format": {
                "type": "json_schema",
                "schema": {
                    "type": "object",
                    "properties": {
                        "sentiment": {"type": "string"},
                        "confidence": {"type": "number"}
                    },
                    "required": ["sentiment", "confidence"],
                    "additionalProperties": false
                }
            }
        }'
    },
    {'prompt': 'Analyze the sentiment of this text: I love this product!'}
) AS analysis;
```
Custom schemas must include "additionalProperties": false on all objects — this is required by the Anthropic output_format API.
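Because the response is guaranteed to match the schema, you can pull typed fields out with DuckDB's JSON operators. A sketch, using a literal stand-in for the model's JSON response:

```sql
-- Extract typed fields from a schema-conforming response
-- (the literal below stands in for the llm_complete result)
SELECT
    analysis->>'$.sentiment' AS sentiment,
    CAST(analysis->>'$.confidence' AS DOUBLE) AS confidence
FROM (SELECT '{"sentiment": "positive", "confidence": 0.98}'::JSON AS analysis);
```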

Model parameters

You can customize Claude’s behavior with additional parameters in the model config:
```sql
CREATE MODEL(
    'AnalystClaude',
    'claude-sonnet-4-5',
    'anthropic',
    {
        "model_parameters": {
            "temperature": 0.5,
            "max_tokens": 2048,
            "system": "You are an expert data analyst. Always provide structured, actionable insights."
        }
    }
);
```
Supported parameters include:

| Parameter | Description |
| --- | --- |
| `temperature` | Controls randomness, 0.0 to 1.0 |
| `max_tokens` | Maximum response length (required by Anthropic; Flock defaults to 4096) |
| `system` | System prompt for context and instructions |
| `top_p` | Nucleus sampling threshold |
| `top_k` | Limits token selection to the top K options |
Flock forwards all parameters to the Anthropic Messages API unchanged.
Anthropic does not provide an embeddings API. Calling llm_embedding with an Anthropic model returns an error. Use the OpenAI or Ollama provider for embeddings instead.
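For a RAG pipeline that pairs Claude with another provider's embeddings, you can register a second model just for embedding calls. This sketch assumes an OpenAI secret is already configured, that `text-embedding-3-small` is a supported model ID, and that `llm_embedding` takes the same struct-style arguments as the other scalar functions:

```sql
-- Claude handles completion; OpenAI supplies the embeddings
CREATE MODEL(
    'EmbedModel',
    'text-embedding-3-small',
    'openai',
    {"batch_size": 32}
);

SELECT llm_embedding(
    {'model_name': 'EmbedModel'},
    {'context_columns': [{'data': ticket_text, 'name': 'ticket'}]}
) AS embedding
FROM support_tickets;
```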

Next steps

- Structured output: full guide to enforcing JSON schemas on LLM responses
- Scalar functions: reference for llm_complete, llm_filter, and aggregate functions
