Destinations are the data warehouses, databases, and storage systems where dlt loads your data. Each destination provides a factory function that returns a configured destination instance.
Available Destinations
dlt supports the following destinations out of the box (all snippets assume import dlt):
SQL Databases
PostgreSQL
from dlt.destinations import postgres
pipeline = dlt.pipeline(
destination=postgres()
)
BigQuery
from dlt.destinations import bigquery
pipeline = dlt.pipeline(
destination=bigquery()
)
Snowflake
from dlt.destinations import snowflake
pipeline = dlt.pipeline(
destination=snowflake()
)
Redshift
from dlt.destinations import redshift
pipeline = dlt.pipeline(
destination=redshift()
)
DuckDB
from dlt.destinations import duckdb
pipeline = dlt.pipeline(
destination=duckdb(database="local.duckdb")
)
MotherDuck
from dlt.destinations import motherduck
pipeline = dlt.pipeline(
destination=motherduck()
)
MS SQL Server
from dlt.destinations import mssql
pipeline = dlt.pipeline(
destination=mssql()
)
Azure Synapse
from dlt.destinations import synapse
pipeline = dlt.pipeline(
destination=synapse()
)
Databricks
from dlt.destinations import databricks
pipeline = dlt.pipeline(
destination=databricks()
)
ClickHouse
from dlt.destinations import clickhouse
pipeline = dlt.pipeline(
destination=clickhouse()
)
Dremio
from dlt.destinations import dremio
pipeline = dlt.pipeline(
destination=dremio()
)
SQLAlchemy (Generic)
from dlt.destinations import sqlalchemy
pipeline = dlt.pipeline(
destination=sqlalchemy()
)
Data Lakes & File Systems
Filesystem
from dlt.destinations import filesystem
pipeline = dlt.pipeline(
destination=filesystem(bucket_url="s3://my-bucket")
)
Supports the following backends, selected by the bucket_url scheme (see the sketch after this list):
- Local filesystem
- AWS S3
- Google Cloud Storage (GCS)
- Azure Blob Storage
- SFTP
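A minimal sketch of the scheme for each backend; bucket names and paths are placeholders:
from dlt.destinations import filesystem

# Each call returns a destination instance to pass to dlt.pipeline(destination=...)
filesystem(bucket_url="file:///var/data/lake")    # local filesystem
filesystem(bucket_url="s3://my-bucket/raw")       # AWS S3
filesystem(bucket_url="gs://my-bucket/raw")       # Google Cloud Storage (GCS)
filesystem(bucket_url="az://my-container/raw")    # Azure Blob Storage
filesystem(bucket_url="sftp://example.com/data")  # SFTP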
Microsoft Fabric
from dlt.destinations import fabric
pipeline = dlt.pipeline(
destination=fabric()
)
Amazon Athena
from dlt.destinations import athena
pipeline = dlt.pipeline(
destination=athena()
)
DuckLake
from dlt.destinations import ducklake
pipeline = dlt.pipeline(
destination=ducklake()
)
Vector Databases
Qdrant
from dlt.destinations import qdrant
pipeline = dlt.pipeline(
destination=qdrant()
)
Weaviate
from dlt.destinations import weaviate
pipeline = dlt.pipeline(
destination=weaviate()
)
LanceDB
from dlt.destinations import lancedb
pipeline = dlt.pipeline(
destination=lancedb()
)
Special Purpose
Dummy (Testing)
from dlt.destinations import dummy
pipeline = dlt.pipeline(
destination=dummy() # For testing, discards all data
)
Custom Destination
from dlt.destinations import destination
# Create a custom destination from a callable
@destination(batch_size=1000)
def my_destination(items, table):
    # Called with each batch of items (up to batch_size) for a given table
    for item in items:
        print(item)
pipeline = dlt.pipeline(
destination=my_destination
)
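A hypothetical run of this pipeline; dlt calls the decorated function once per batch of up to 1000 rows per table (the sample rows and table_name are illustrative):
load_info = pipeline.run([{"id": 1}, {"id": 2}], table_name="items")
print(load_info)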
Using Destinations
Simple Usage
Pass the destination name as a string:
import dlt
pipeline = dlt.pipeline(
pipeline_name="my_pipeline",
destination="duckdb",
dataset_name="my_dataset"
)
With Configuration
Pass a destination factory with configuration:
from dlt.destinations import postgres
pipeline = dlt.pipeline(
pipeline_name="my_pipeline",
destination=postgres(
credentials="postgresql://user:pass@localhost/db"
),
dataset_name="my_dataset"
)
From Configuration Files
Credentials are typically stored in .dlt/secrets.toml:
[destination.postgres.credentials]
database = "mydb"
username = "user"
password = "password"
host = "localhost"
port = 5432
Then use:
pipeline = dlt.pipeline(
destination="postgres", # Credentials loaded from secrets.toml
dataset_name="my_dataset"
)
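Alternatively, the whole connection string can be stored as a single value in .dlt/secrets.toml (placeholder values):
[destination.postgres]
credentials = "postgresql://user:password@localhost:5432/mydb"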
Environment Variables
Credentials can also be provided via environment variables:
export DESTINATION__POSTGRES__CREDENTIALS__DATABASE="mydb"
export DESTINATION__POSTGRES__CREDENTIALS__PASSWORD="password"
Destination Capabilities
Different destinations support different features:
| Destination | Merge | Replace | Append | Schema Evolution | File Formats |
|---|---|---|---|---|---|
| PostgreSQL | ✓ | ✓ | ✓ | ✓ | - |
| BigQuery | ✓ | ✓ | ✓ | ✓ | - |
| Snowflake | ✓ | ✓ | ✓ | ✓ | - |
| DuckDB | ✓ | ✓ | ✓ | ✓ | - |
| Filesystem | - | ✓ | ✓ | - | parquet, jsonl, csv |
| Athena | - | ✓ | ✓ | ✓ | parquet, csv |
| Qdrant | - | - | ✓ | - | - |
| Weaviate | - | - | ✓ | - | - |
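For destinations that accept several file formats, the format can be chosen per run via loader_file_format; a sketch assuming the filesystem destination and illustrative data:
import dlt
from dlt.destinations import filesystem

pipeline = dlt.pipeline(
    pipeline_name="formats_demo",
    destination=filesystem(bucket_url="s3://my-bucket"),
    dataset_name="my_dataset",
)
pipeline.run([{"id": 1}], table_name="events", loader_file_format="parquet")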
Common Configuration
All destination factories accept these common configuration options (destination below stands for any factory, e.g. postgres):
destination(
credentials="...", # Connection credentials
destination_name="custom_name", # Override destination name
environment="production" # Environment tag, e.g. to separate production and staging configuration
)
Write Dispositions
Destinations support different write dispositions:
- append: Add new data to existing tables
- replace: Replace entire table with new data
- merge: Update existing records and insert new ones (requires primary_key)
@dlt.resource(write_disposition="merge", primary_key="id")
def my_resource():
    ...
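The disposition can also be set per run instead of on the resource; a sketch assuming an already configured pipeline, with illustrative data and table name:
pipeline.run([{"id": 1, "value": "a"}], table_name="items", write_disposition="replace")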
See Also