Resource management: models, prompts, and API keys
Manage LLM models, reusable versioned prompts, and provider API keys as SQL-native resources using Flock’s built-in CREATE, GET, UPDATE, and DELETE commands.
Flock treats models, prompts, and API credentials as first-class SQL resources. You create and manage them with dedicated SQL commands — no config files, no environment variables to juggle across sessions. Each resource type has its own storage table that is initialized per database when Flock loads, and every resource can be scoped as either local (the current database) or global (shared across all databases in the session).
A model record maps a short model_name alias to the underlying provider model, along with its serialization format, batching behavior, and any provider-specific parameters. As the example below shows, each record stores the alias, the provider model name, the provider, and an optional argument map with keys such as tuple_format, batch_size, and model_parameters.
By default, a model is local to the current database. Use CREATE GLOBAL MODEL to make it available across all databases:
```sql
-- Global model — available in all databases
CREATE GLOBAL MODEL(
    'shared-gpt4o',
    'gpt-4o',
    'openai',
    {
        "tuple_format": "JSON",
        "batch_size": 8,
        "model_parameters": {"temperature": 0.2, "top_p": 0.95}
    }
);

-- Local model — explicit keyword, same as the default
CREATE LOCAL MODEL('local-llama', 'llama3.1', 'ollama');

-- Promote or demote an existing model
UPDATE MODEL 'shared-gpt4o' TO LOCAL;
UPDATE MODEL 'local-llama' TO GLOBAL;
```
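Once created, a model can be inspected or removed with the matching GET and DELETE commands. The exact forms below are a sketch inferred from the CREATE and UPDATE syntax above; check your Flock version's reference for the precise grammar:

```sql
-- Inspect a single model, or list all registered models
GET MODEL 'shared-gpt4o';
GET MODELS;

-- Remove a model that is no longer needed
DELETE MODEL 'local-llama';
```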
Each prompt record stores a name, the prompt text, a version number, and a last-updated timestamp. Flock always resolves to the latest version unless you specify one explicitly.
| Field | Description |
| --- | --- |
| prompt_name | Unique identifier for the prompt |
| prompt | Instruction content passed to the model |
| version | Auto-incremented integer; increases on each update |
Like models, prompts are local by default. Use CREATE GLOBAL PROMPT to share across databases:
```sql
-- Global prompt — available in all databases
CREATE GLOBAL PROMPT('shared-summarizer', 'Summarize the following text in three bullet points.');

-- Local prompt — explicit keyword, same as the default
CREATE LOCAL PROMPT('local-classifier', 'Classify the following review as positive, negative, or neutral.');

-- Promote or demote an existing prompt
UPDATE PROMPT 'local-classifier' TO GLOBAL;
UPDATE PROMPT 'shared-summarizer' TO LOCAL;
```
Each UPDATE PROMPT call creates a new version while keeping the previous versions accessible by their version number:
```sql
-- Update prompt text (increments version)
UPDATE PROMPT('product-description', 'Write a compelling two-sentence product description for the following item.');

-- Delete a prompt
DELETE PROMPT 'product-description';
```
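Prompts support the same inspection commands as models. The forms below are a hedged sketch mirroring the model syntax; verify them against your Flock version's reference:

```sql
-- Inspect a prompt (resolves to the latest version) or list all prompts
GET PROMPT 'product-description';
GET PROMPTS;
```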
Reference a prompt by name in any Flock LLM function. Flock resolves to the latest version unless you specify one:
```sql
-- Use the latest version of a prompt
SELECT llm_complete(
    {'model_name': 'gpt-4o'},
    {'prompt_name': 'product-description'},
    {'input_text': product_description}
) AS generated_description
FROM products;

-- Pin to a specific version
SELECT llm_complete(
    {'model_name': 'semantic_search_model'},
    {'prompt_name': 'customer-review-summary', 'version': 3},
    {'customer_review': review_text}
) AS review_summary
FROM reviews;
```
Pin to a specific version in production queries to avoid prompt changes silently affecting downstream results. Use the latest-version shorthand during experimentation.
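One way to enforce pinning is to bake the pinned version into a view, so downstream queries cannot silently pick up a newer prompt. A sketch, assuming the reviews table and prompt from the example above:

```sql
-- Downstream consumers read from the view, never from llm_complete directly
CREATE VIEW review_summaries AS
SELECT
    llm_complete(
        {'model_name': 'semantic_search_model'},
        {'prompt_name': 'customer-review-summary', 'version': 3},
        {'customer_review': review_text}
    ) AS review_summary
FROM reviews;
```

Rolling out a new prompt version then becomes an explicit view change rather than an invisible side effect of UPDATE PROMPT.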
Flock uses DuckDB’s Secrets Manager for all provider credentials. Secrets are typed by provider and can be temporary (in-memory, lost when the session ends) or persistent (written to disk and reloaded automatically).

Supported secret types:
| Secret type | Provider |
| --- | --- |
| OPENAI | OpenAI |
| OLLAMA | Ollama (self-hosted) |
| AZURE_LLM | Azure OpenAI |
| ANTHROPIC | Anthropic / Claude |
When no secret name is specified, DuckDB assigns a default name in the format __default_<provider> (e.g., __default_openai). Flock resolves this default automatically when you reference a model backed by that provider.
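Creating a secret without a name yields the default name described above. Using DuckDB's standard CREATE SECRET syntax (temporary by default; a sketch, with a placeholder API key):

```sql
-- Temporary secret: becomes __default_openai, gone when the session ends
CREATE SECRET (
    TYPE OPENAI,
    API_KEY 'your-api-key'
);

-- Persistent variant: written to disk and reloaded on startup
CREATE PERSISTENT SECRET (
    TYPE OPENAI,
    API_KEY 'your-api-key'
);
```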
If you have multiple secrets for the same provider (e.g., different API keys for different projects), create the secret with a custom name and reference it via secret_name when calling an LLM function:
```sql
-- Create a named secret
CREATE SECRET my_project_key (
    TYPE OPENAI,
    API_KEY 'your-project-specific-api-key'
);

-- Reference it in a query
SELECT llm_complete(
    {'model_name': 'gpt-4o', 'secret_name': 'my_project_key'},
    {'prompt': 'Summarize this text.'},
    {'text': body}
) AS summary
FROM articles;
```
```sql
-- List all secrets
FROM duckdb_secrets();

-- Delete a temporary secret
DROP TEMPORARY SECRET __default_openai;

-- Delete a persistent secret
DROP PERSISTENT SECRET __default_openai;
```
Deleting a secret that a model depends on will cause queries using that model to fail at runtime. Make sure any model referencing a secret has an alternative credential available before dropping it.
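Before dropping anything, you can check which secrets exist and whether they are persistent with DuckDB's duckdb_secrets() table function:

```sql
-- Confirm name, type, and persistence before dropping a secret
SELECT name, type, persistent
FROM duckdb_secrets()
WHERE type = 'openai';
```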