By default, the Convex backend stores file data on the local filesystem within the Docker container. For production deployments, you can configure S3-compatible storage for better scalability and reliability.

What gets stored in S3

When configured, S3 storage is used for:
  • Snapshot exports: Database backups and exports
  • Snapshot imports: Data imports and migrations
  • Function modules: Compiled JavaScript/TypeScript code
  • User files: Files uploaded through the file storage API
  • Search indexes: Full-text search index data

Supported storage providers

  • AWS S3: Native S3 support
  • Cloudflare R2: S3-compatible storage
  • MinIO: Self-hosted S3-compatible storage
  • DigitalOcean Spaces: S3-compatible storage
  • Backblaze B2: S3-compatible storage
  • Other S3-compatible providers

S3 setup (AWS)

Step 1: Create S3 buckets

Create the following buckets in your AWS region:
aws s3 mb s3://convex-snapshot-exports
aws s3 mb s3://convex-snapshot-imports
aws s3 mb s3://convex-modules
aws s3 mb s3://convex-user-files
aws s3 mb s3://convex-search-indexes
Use unique bucket names. S3 bucket names must be globally unique across all AWS accounts.
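Because names must be globally unique, it can help to validate and suffix candidate names before running aws s3 mb. The following stdlib-only sketch is illustrative (the helper names and the suffix scheme are not part of Convex); it enforces the core S3 naming rules: 3-63 characters, lowercase letters, digits, dots, and hyphens, starting and ending with a letter or digit, and not shaped like an IP address.

```python
import re

# Illustrative helper (not part of Convex): check the core S3 bucket
# naming rules.
def is_valid_bucket_name(name: str) -> bool:
    if not 3 <= len(name) <= 63:
        return False
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # Names formatted like IP addresses are not allowed.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    return True

# Derive globally unique names by appending an account-specific suffix.
def unique_names(suffix: str) -> list[str]:
    bases = [
        "convex-snapshot-exports", "convex-snapshot-imports",
        "convex-modules", "convex-user-files", "convex-search-indexes",
    ]
    return [f"{b}-{suffix}" for b in bases]
```

For example, unique_names("123456") yields convex-snapshot-exports-123456 and so on, which avoids collisions with buckets in other AWS accounts.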
Step 2: Create IAM user and credentials

Create an IAM user with programmatic access and attach a policy with permissions for these buckets:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::convex-*",
        "arn:aws:s3:::convex-*/*"
      ]
    }
  ]
}
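If your buckets use a different name prefix, the same policy can be generated rather than hand-edited. This is an illustrative stdlib-only sketch; the function name and default prefix are assumptions:

```python
import json

# Illustrative sketch: build the IAM policy shown above for an
# arbitrary bucket-name prefix (here "convex-", as in the example).
def s3_policy(prefix: str = "convex-") -> dict:
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:DeleteObject",
                    "s3:ListBucket",
                ],
                # ListBucket applies to the bucket ARN; the object
                # actions apply to objects inside it -- hence both forms.
                "Resource": [
                    f"arn:aws:s3:::{prefix}*",
                    f"arn:aws:s3:::{prefix}*/*",
                ],
            }
        ],
    }

print(json.dumps(s3_policy(), indent=2))
```

The resulting JSON can be attached with aws iam put-user-policy --user-name convex-backend --policy-name convex-s3 --policy-document file://policy.json (the user and policy names here are placeholders).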
Step 3: Configure environment variables

Add to your .env file:
.env
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
Never commit AWS credentials to source control. Use environment variables or secrets management.
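Since the backend defaults to local filesystem storage, a typo in one of these variable names is easy to miss. A small pre-flight check (illustrative, not part of Convex) can report anything missing before you restart:

```python
# Illustrative sketch: report which of the required S3 variables
# are missing from a given environment mapping.
REQUIRED_VARS = [
    "AWS_REGION",
    "AWS_ACCESS_KEY_ID",
    "AWS_SECRET_ACCESS_KEY",
    "S3_STORAGE_EXPORTS_BUCKET",
    "S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET",
    "S3_STORAGE_MODULES_BUCKET",
    "S3_STORAGE_FILES_BUCKET",
    "S3_STORAGE_SEARCH_BUCKET",
]

def missing_s3_vars(env: dict) -> list[str]:
    return [name for name in REQUIRED_VARS if not env.get(name)]
```

Run it against os.environ (or a parsed .env file) and restart only when the returned list is empty.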
Step 4: Restart the backend

docker compose down
docker compose up

Cloudflare R2 setup

Cloudflare R2 offers S3-compatible storage with zero egress fees.
Step 1: Create R2 buckets

In the Cloudflare dashboard, create the following R2 buckets:
  • convex-snapshot-exports
  • convex-snapshot-imports
  • convex-modules
  • convex-user-files
  • convex-search-indexes
Step 2: Create API token

In the Cloudflare dashboard, create an R2 API token with read and write permissions scoped to these buckets.
Step 3: Configure environment variables

Add to your .env file:
.env
AWS_REGION='auto'
AWS_ACCESS_KEY_ID='your-r2-access-key-id'
AWS_SECRET_ACCESS_KEY='your-r2-secret-access-key'
S3_ENDPOINT_URL='https://account-id.r2.cloudflarestorage.com'
AWS_S3_FORCE_PATH_STYLE=true
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
Replace account-id with your Cloudflare account ID from the R2 dashboard.
Step 4: Restart the backend

docker compose down
docker compose up

MinIO setup (self-hosted)

MinIO is an open-source S3-compatible storage server you can self-host.
Step 1: Run MinIO

docker run -p 9000:9000 -p 9001:9001 \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  quay.io/minio/minio server /data --console-address ":9001"
Step 2: Create buckets

Access the MinIO console at http://localhost:9001 and create the required buckets, or use the CLI:
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/convex-snapshot-exports
mc mb local/convex-snapshot-imports
mc mb local/convex-modules
mc mb local/convex-user-files
mc mb local/convex-search-indexes
Step 3: Configure environment variables

.env
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='minioadmin'
AWS_SECRET_ACCESS_KEY='minioadmin'
S3_ENDPOINT_URL='http://host.docker.internal:9000'
AWS_S3_FORCE_PATH_STYLE=true
AWS_S3_DISABLE_SSE=true
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
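Before restarting the backend, it can be worth confirming that the MinIO port is reachable from where the backend runs. A stdlib-only sketch (the host and port are assumptions matching the setup above):

```python
import socket

# Illustrative sketch: check whether a TCP endpoint accepts connections.
# For the MinIO setup above, try ("localhost", 9000) from the host, or
# ("host.docker.internal", 9000) from inside the backend container.
def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If this returns False from inside the container but True from the host, the problem is the Docker networking setup rather than the S3 configuration.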

Environment variable reference

Required variables

AWS_REGION (string, required)
AWS region where your S3 buckets are located. Use auto for Cloudflare R2.
AWS_REGION='us-east-1'

AWS_ACCESS_KEY_ID (string, required)
Access key ID for S3 authentication.

AWS_SECRET_ACCESS_KEY (string, required)
Secret access key for S3 authentication.

Bucket configuration

S3_STORAGE_EXPORTS_BUCKET (string, required)
S3 bucket name for snapshot exports.
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'

S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET (string, required)
S3 bucket name for snapshot imports.
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'

S3_STORAGE_MODULES_BUCKET (string, required)
S3 bucket name for function modules.
S3_STORAGE_MODULES_BUCKET='convex-modules'

S3_STORAGE_FILES_BUCKET (string, required)
S3 bucket name for user files.
S3_STORAGE_FILES_BUCKET='convex-user-files'

S3_STORAGE_SEARCH_BUCKET (string, required)
S3 bucket name for search indexes.
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'

Optional configuration

S3_ENDPOINT_URL (string, optional)
Custom S3 endpoint URL. Required for S3-compatible services such as Cloudflare R2 and MinIO.
# Cloudflare R2
S3_ENDPOINT_URL='https://account-id.r2.cloudflarestorage.com'

# MinIO
S3_ENDPOINT_URL='http://minio.my-domain.com:9000'

AWS_SESSION_TOKEN (string, optional)
Session token for temporary AWS credentials (e.g., when using IAM roles).

AWS_S3_FORCE_PATH_STYLE (boolean, optional)
Force path-style S3 URLs instead of virtual-hosted style. Required for Cloudflare R2 and most S3-compatible services.
AWS_S3_FORCE_PATH_STYLE=true

AWS_S3_DISABLE_SSE (boolean, optional)
Disable server-side encryption for S3 objects. Useful for MinIO and other self-hosted solutions.
AWS_S3_DISABLE_SSE=true

AWS_S3_DISABLE_CHECKSUMS (boolean, optional)
Disable checksums for S3 operations.
AWS_S3_DISABLE_CHECKSUMS=true

Migrating storage providers

If you’re switching between local storage and S3 storage (or between different S3 providers), you need to export and import your data.
Step 1: Export from current backend

npx convex export --path ./backup.zip
This creates a complete backup of your deployment.
Step 2: Set up new storage provider

Configure your new S3 buckets and environment variables as described above.
Step 3: Restart backend with new storage

docker compose down
docker compose up
Step 4: Import data to new backend

npx convex import --replace-all ./backup.zip
The import process will replace all data in your deployment. Ensure you have a backup before proceeding.

Bucket organization

You can use a single bucket with different prefixes or separate buckets for each type of data. The examples above use separate buckets for better organization and access control.

Single bucket approach

.env
S3_STORAGE_EXPORTS_BUCKET='convex-storage'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-storage'
S3_STORAGE_MODULES_BUCKET='convex-storage'
S3_STORAGE_FILES_BUCKET='convex-storage'
S3_STORAGE_SEARCH_BUCKET='convex-storage'
Convex will automatically organize data using prefixes within the bucket.

Separate buckets approach

.env
S3_STORAGE_EXPORTS_BUCKET='my-app-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='my-app-imports'
S3_STORAGE_MODULES_BUCKET='my-app-modules'
S3_STORAGE_FILES_BUCKET='my-app-files'
S3_STORAGE_SEARCH_BUCKET='my-app-search'
Separate buckets allow for:
  • Granular access control
  • Independent lifecycle policies
  • Easier cost tracking
  • Better organization

Security best practices

Follow these security practices to protect your data:
  1. Use IAM roles when possible: Instead of access keys, use IAM roles for EC2, ECS, or other AWS services
  2. Restrict bucket access: Use bucket policies to restrict access to your backend’s IP or VPC
  3. Enable encryption: Use server-side encryption (SSE-S3 or SSE-KMS) for sensitive data
  4. Rotate credentials: Regularly rotate your access keys
  5. Use separate buckets per environment: Don’t share buckets between development, staging, and production
  6. Enable versioning: Protect against accidental deletions
  7. Set lifecycle policies: Automatically delete old exports and reduce costs
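The lifecycle-policy point above can be made concrete. This stdlib-only sketch (the 30-day window and whole-bucket filter are assumptions to tune per bucket) builds the JSON that AWS lifecycle configuration expects:

```python
import json

# Illustrative sketch: expire objects in a bucket after `days` days,
# e.g. to clean up old snapshot exports automatically.
def expiry_lifecycle(days: int = 30) -> dict:
    return {
        "Rules": [
            {
                "ID": f"expire-after-{days}-days",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},   # apply to the whole bucket
                "Expiration": {"Days": days},
            }
        ]
    }

print(json.dumps(expiry_lifecycle(), indent=2))
```

The output can be applied with aws s3api put-bucket-lifecycle-configuration --bucket convex-snapshot-exports --lifecycle-configuration file://lifecycle.json.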

Verification

After configuring S3 storage:
Step 1: Check backend logs

docker compose logs backend | grep -i s3
Look for messages indicating S3 storage is configured.
Step 2: Test file upload

Use the Convex file storage API to upload a test file and verify it appears in your S3 bucket.
Step 3: Test export

npx convex export --path ./test-export.zip
Verify the export appears in your exports bucket.

Troubleshooting

Access denied errors

  1. Verify credentials: Check that your AWS access key and secret are correct
  2. Check IAM permissions: Ensure your IAM user/role has the required S3 permissions
  3. Verify bucket names: Ensure bucket names are correct and exist
  4. Check bucket policies: Verify bucket policies don’t block access

Connection timeout errors

  1. Check endpoint URL: Ensure S3_ENDPOINT_URL is correct for your provider
  2. Verify network access: Ensure your backend can reach the S3 endpoint
  3. Check firewall rules: Verify outbound connections to S3 are allowed

Path style errors

If you see errors about virtual-hosted-style vs. path-style addressing, set:
AWS_S3_FORCE_PATH_STYLE=true
This is required for most S3-compatible services.
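To see why the flag matters: with virtual-hosted style the bucket name becomes part of the hostname, while path style keeps the bucket in the URL path. Services that expose a single hostname typically only serve the path-style form. A stdlib-only illustration:

```python
# Illustrative comparison of the two S3 addressing styles.
def virtual_hosted_url(endpoint_host: str, bucket: str, key: str) -> str:
    # Bucket as a DNS label: https://<bucket>.<host>/<key>
    return f"https://{bucket}.{endpoint_host}/{key}"

def path_style_url(endpoint_host: str, bucket: str, key: str) -> str:
    # Bucket as a path segment: https://<host>/<bucket>/<key>
    return f"https://{endpoint_host}/{bucket}/{key}"
```

For example, for bucket convex-files and key a.txt against s3.amazonaws.com, the two forms are https://convex-files.s3.amazonaws.com/a.txt and https://s3.amazonaws.com/convex-files/a.txt.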

Example configurations

Production with AWS S3

.env
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
S3_STORAGE_EXPORTS_BUCKET='my-app-prod-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='my-app-prod-imports'
S3_STORAGE_MODULES_BUCKET='my-app-prod-modules'
S3_STORAGE_FILES_BUCKET='my-app-prod-files'
S3_STORAGE_SEARCH_BUCKET='my-app-prod-search'

Production with Cloudflare R2

.env
AWS_REGION='auto'
AWS_ACCESS_KEY_ID='your-r2-access-key-id'
AWS_SECRET_ACCESS_KEY='your-r2-secret-access-key'
S3_ENDPOINT_URL='https://1a2b3c4d5e6f.r2.cloudflarestorage.com'
AWS_S3_FORCE_PATH_STYLE=true
S3_STORAGE_EXPORTS_BUCKET='convex-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-files'
S3_STORAGE_SEARCH_BUCKET='convex-search'

Local development with MinIO

.env
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='minioadmin'
AWS_SECRET_ACCESS_KEY='minioadmin'
S3_ENDPOINT_URL='http://host.docker.internal:9000'
AWS_S3_FORCE_PATH_STYLE=true
AWS_S3_DISABLE_SSE=true
S3_STORAGE_EXPORTS_BUCKET='convex-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-files'
S3_STORAGE_SEARCH_BUCKET='convex-search'

Next steps

  • Configuration: Explore all runtime configuration options
  • Database setup: Configure PostgreSQL or MySQL
