Define JSON schemas inline in SQL to constrain LLM responses to a predictable structure, then extract fields with dot notation using DuckDB’s JSON extension.
Structured output lets you specify exactly which fields an LLM must return and what types they must have. Instead of parsing free-form text after the fact, you embed a JSON schema directly in your SQL query and get back a validated object every time. This works with all six Flock LLM functions: llm_complete, llm_filter, llm_reduce, llm_rerank, llm_first, and llm_last.
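For example, a classification task might pin every response to two fields. The schema below is standard JSON Schema; the field names are illustrative, not part of Flock's API:

```json
{
  "type": "object",
  "properties": {
    "category": { "type": "string" },
    "confidence": { "type": "number" }
  },
  "required": ["category", "confidence"]
}
```

Whatever prompt the model receives, a response that validates against this schema is guaranteed to contain a string `category` and a numeric `confidence`.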
To extract fields from structured responses using dot notation (e.g., response.category), load the DuckDB JSON extension first:
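Use DuckDB's standard extension-management commands:

```sql
-- Install once per DuckDB installation, then load in each session.
-- Recent DuckDB builds bundle the json extension and can autoload it,
-- but loading explicitly is harmless.
INSTALL json;
LOAD json;
```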
Each supported provider has its own schema syntax. The wrapper structure differs from provider to provider, but the effect is the same: the model is constrained to return a JSON object that matches your schema.
OpenAI
Ollama
Anthropic / Claude
OpenAI enforces schema compliance through the response_format parameter with type: "json_schema" and strict: true. The schema is nested inside a json_schema object that also carries a name field used for identification.
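As a sketch, the OpenAI wrapper looks like this (schema fields are illustrative; note that strict mode requires `additionalProperties: false` and every property listed in `required`):

```json
{
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "classification",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "category": { "type": "string" },
          "confidence": { "type": "number" }
        },
        "required": ["category", "confidence"],
        "additionalProperties": false
      }
    }
  }
}
```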
Anthropic support is split across model generations. Claude 4.x uses an output_format parameter. Claude 3.x relies on the tool_use mechanism to enforce structure. See the Anthropic getting started guide for the full setup and version-specific syntax.
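A hedged sketch of the Claude 4.x style, with the inner schema kept minimal; consult the linked guide for the authoritative parameter shape, which may differ by API version:

```json
{
  "output_format": {
    "type": "json_schema",
    "schema": {
      "type": "object",
      "properties": {
        "category": { "type": "string" }
      },
      "required": ["category"]
    }
  }
}
```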
Make sure you are targeting the right Claude generation. Using output_format with a Claude 3.x model, or tool_use with Claude 4.x, will produce unexpected results.
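For Claude 3.x, the tool_use route defines a single tool whose `input_schema` is your JSON schema, then forces the model to call it via `tool_choice`. The tool name and fields below are illustrative:

```json
{
  "tools": [
    {
      "name": "record_result",
      "description": "Record the structured answer",
      "input_schema": {
        "type": "object",
        "properties": {
          "category": { "type": "string" }
        },
        "required": ["category"]
      }
    }
  ],
  "tool_choice": { "type": "tool", "name": "record_result" }
}
```

The structured object arrives as the arguments of the forced tool call rather than as the message body.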
After loading the JSON extension, you can access fields in the structured response using dot notation. Cast to the appropriate DuckDB type to use the values downstream:
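A minimal sketch, assuming a table `results` with a JSON column `llm_output` holding responses like `{"category": "books", "confidence": 0.92}` (the table and field names are illustrative). Note that casting a JSON string field to VARCHAR keeps the surrounding quotes; DuckDB's `->>` operator extracts it as plain text instead:

```sql
SELECT
    llm_output ->> 'category'        AS category,    -- extracted as plain text
    (llm_output.confidence)::DOUBLE  AS confidence   -- dot notation, then cast
FROM results
WHERE (llm_output.confidence)::DOUBLE > 0.8;
```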