By default, the Convex backend stores file data on the local filesystem within the Docker container. For production deployments, you can configure S3-compatible storage for better scalability and reliability.
What gets stored in S3
When configured, S3 storage is used for:
- Snapshot exports: Database backups and exports
- Snapshot imports: Data imports and migrations
- Function modules: Compiled JavaScript/TypeScript code
- User files: Files uploaded through the file storage API
- Search indexes: Full-text search index data
Supported storage providers
- AWS S3: Native S3 support
- Cloudflare R2: S3-compatible storage
- MinIO: Self-hosted S3-compatible storage
- DigitalOcean Spaces: S3-compatible storage
- Backblaze B2: S3-compatible storage
- Other S3-compatible providers
S3 setup (AWS)
Create S3 buckets
Create the following buckets in your AWS region:

```shell
aws s3 mb s3://convex-snapshot-exports
aws s3 mb s3://convex-snapshot-imports
aws s3 mb s3://convex-modules
aws s3 mb s3://convex-user-files
aws s3 mb s3://convex-search-indexes
```
Use unique bucket names. S3 bucket names must be globally unique across all AWS accounts.
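One way to keep the names unique is to append an account- or project-specific suffix when creating the buckets; a minimal sketch (the suffix value here is just an example):

```shell
# Build globally unique bucket names by appending a project-specific
# suffix (replace "acme-prod" with your own identifier).
SUFFIX="acme-prod"
BUCKETS=""
for name in snapshot-exports snapshot-imports modules user-files search-indexes; do
  BUCKETS="$BUCKETS convex-${name}-${SUFFIX}"
  # aws s3 mb "s3://convex-${name}-${SUFFIX}"   # uncomment to create for real
done
echo "$BUCKETS"
```

Remember to use the suffixed names in the environment variables below as well.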
Create IAM user and credentials
Create an IAM user with programmatic access and attach a policy with permissions for these buckets:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:GetObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::convex-*",
        "arn:aws:s3:::convex-*/*"
      ]
    }
  ]
}
```
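The same can be scripted with the AWS CLI, assuming the policy above is saved as `convex-s3-policy.json`; the user and policy names here are placeholders:

```shell
# Create a dedicated IAM user for the Convex backend (name is a placeholder).
aws iam create-user --user-name convex-backend

# Attach the bucket policy saved as convex-s3-policy.json.
aws iam put-user-policy \
  --user-name convex-backend \
  --policy-name convex-s3-access \
  --policy-document file://convex-s3-policy.json

# Generate the access key pair used in the .env file below.
aws iam create-access-key --user-name convex-backend
```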
Configure environment variables
Add to your .env file:

```shell
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
```
Never commit AWS credentials to source control. Use environment variables or secrets management.
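One low-risk pattern is to git-ignore the env file and let Docker Compose read it from the project directory; a sketch:

```shell
# Keep the env file out of version control (appends the entry once).
grep -qx '.env' .gitignore 2>/dev/null || echo '.env' >> .gitignore

# Docker Compose picks up .env from the project directory automatically;
# to load the same variables into an interactive shell for ad-hoc commands:
if [ -f .env ]; then
  set -a    # export every variable sourced below
  . ./.env
  set +a
fi
```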
Restart the backend
```shell
docker compose down
docker compose up
```
Cloudflare R2 setup
Cloudflare R2 offers S3-compatible storage with zero egress fees.
Create R2 buckets
In the Cloudflare dashboard, create the following R2 buckets:
- convex-snapshot-exports
- convex-snapshot-imports
- convex-modules
- convex-user-files
- convex-search-indexes
Create API token
Create an R2 API token with read and write permissions for your buckets.
Configure environment variables
Add to your .env file:

```shell
AWS_REGION='auto'
AWS_ACCESS_KEY_ID='your-r2-access-key-id'
AWS_SECRET_ACCESS_KEY='your-r2-secret-access-key'
S3_ENDPOINT_URL='https://account-id.r2.cloudflarestorage.com'
AWS_S3_FORCE_PATH_STYLE=true
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
```

Replace `account-id` with your Cloudflare account ID from the R2 dashboard.
Restart the backend
```shell
docker compose down
docker compose up
```
MinIO setup (self-hosted)
MinIO is an open-source S3-compatible storage server you can self-host.
Run MinIO
```shell
docker run -p 9000:9000 -p 9001:9001 \
  -e "MINIO_ROOT_USER=minioadmin" \
  -e "MINIO_ROOT_PASSWORD=minioadmin" \
  quay.io/minio/minio server /data --console-address ":9001"
```
Create buckets
Access the MinIO console at http://localhost:9001 and create the required buckets, or use the CLI:

```shell
mc alias set local http://localhost:9000 minioadmin minioadmin
mc mb local/convex-snapshot-exports
mc mb local/convex-snapshot-imports
mc mb local/convex-modules
mc mb local/convex-user-files
mc mb local/convex-search-indexes
```
Configure environment variables
Add to your .env file:

```shell
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='minioadmin'
AWS_SECRET_ACCESS_KEY='minioadmin'
S3_ENDPOINT_URL='http://host.docker.internal:9000'
AWS_S3_FORCE_PATH_STYLE=true
AWS_S3_DISABLE_SSE=true
S3_STORAGE_EXPORTS_BUCKET='convex-snapshot-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-snapshot-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-user-files'
S3_STORAGE_SEARCH_BUCKET='convex-search-indexes'
```
Environment variable reference

Required variables

AWS_REGION: AWS region where your S3 buckets are located. Use `auto` for Cloudflare R2.

AWS_ACCESS_KEY_ID: Access key ID for S3 authentication.

AWS_SECRET_ACCESS_KEY: Secret access key for S3 authentication.

Bucket configuration

S3_STORAGE_EXPORTS_BUCKET: S3 bucket name for snapshot exports, e.g. `convex-snapshot-exports`.

S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET: S3 bucket name for snapshot imports, e.g. `convex-snapshot-imports`.

S3_STORAGE_MODULES_BUCKET: S3 bucket name for function modules, e.g. `convex-modules`.

S3_STORAGE_FILES_BUCKET: S3 bucket name for user files, e.g. `convex-user-files`.

S3_STORAGE_SEARCH_BUCKET: S3 bucket name for search indexes, e.g. `convex-search-indexes`.

Optional configuration

S3_ENDPOINT_URL: Custom S3 endpoint URL. Required for S3-compatible services like R2, MinIO, etc.

```shell
# Cloudflare R2
S3_ENDPOINT_URL='https://account-id.r2.cloudflarestorage.com'
# MinIO
S3_ENDPOINT_URL='http://minio.my-domain.com:9000'
```

AWS_SESSION_TOKEN: Session token for temporary AWS credentials (e.g., when using IAM roles).

AWS_S3_FORCE_PATH_STYLE: Force path-style S3 URLs instead of virtual-hosted style (`AWS_S3_FORCE_PATH_STYLE=true`). Required for Cloudflare R2 and most S3-compatible services.

AWS_S3_DISABLE_SSE: Disable server-side encryption for S3 objects (`AWS_S3_DISABLE_SSE=true`). Useful for MinIO and other self-hosted solutions.

AWS_S3_DISABLE_CHECKSUMS: Disable checksums for S3 operations (`AWS_S3_DISABLE_CHECKSUMS=true`).
Migrating storage providers
If you’re switching between local storage and S3 storage (or between different S3 providers), you need to export and import your data.
Export from current backend
```shell
npx convex export --path ./backup.zip
```
This creates a complete backup of your deployment.
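If you want to sanity-check the archive before migrating, you can list its contents first (assuming `unzip` is available; the exact internal layout of the archive may vary):

```shell
# List the files inside the export without extracting it.
unzip -l ./backup.zip
```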
Set up new storage provider
Configure your new S3 buckets and environment variables as described above.
Restart backend with new storage
```shell
docker compose down
docker compose up
```
Import data to new backend
```shell
npx convex import --replace-all ./backup.zip
```
The import process will replace all data in your deployment. Ensure you have a backup before proceeding.
Bucket organization
You can use a single bucket with different prefixes or separate buckets for each type of data. The examples above use separate buckets for better organization and access control.
Single bucket approach
```shell
S3_STORAGE_EXPORTS_BUCKET='convex-storage'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-storage'
S3_STORAGE_MODULES_BUCKET='convex-storage'
S3_STORAGE_FILES_BUCKET='convex-storage'
S3_STORAGE_SEARCH_BUCKET='convex-storage'
```
Convex will automatically organize data using prefixes within the bucket.
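Once data has been written, you can inspect that layout by listing the bucket's top-level prefixes (the prefix names themselves are an internal detail and may vary):

```shell
# Show the top-level prefixes Convex created inside the shared bucket.
aws s3 ls s3://convex-storage/
```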
Multiple buckets approach (recommended)
```shell
S3_STORAGE_EXPORTS_BUCKET='my-app-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='my-app-imports'
S3_STORAGE_MODULES_BUCKET='my-app-modules'
S3_STORAGE_FILES_BUCKET='my-app-files'
S3_STORAGE_SEARCH_BUCKET='my-app-search'
```
Separate buckets allow for:
- Granular access control
- Independent lifecycle policies
- Easier cost tracking
- Better organization
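As an example of an independent lifecycle policy, the following sketch expires objects in a hypothetical exports bucket after 30 days; both the bucket name and the retention window are placeholders:

```shell
# Automatically delete exports older than 30 days from the exports bucket.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-app-exports \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-exports",
      "Status": "Enabled",
      "Filter": {},
      "Expiration": { "Days": 30 }
    }]
  }'
```

Because the rule applies to the whole bucket, this only makes sense with the multiple-buckets layout; in a shared bucket it would also delete modules and user files.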
Security best practices
Follow these security practices to protect your data:
- Use IAM roles when possible: Instead of access keys, use IAM roles for EC2, ECS, or other AWS services
- Restrict bucket access: Use bucket policies to restrict access to your backend’s IP or VPC
- Enable encryption: Use server-side encryption (SSE-S3 or SSE-KMS) for sensitive data
- Rotate credentials: Regularly rotate your access keys
- Use separate buckets per environment: Don’t share buckets between development, staging, and production
- Enable versioning: Protect against accidental deletions
- Set lifecycle policies: Automatically delete old exports and reduce costs
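Versioning, for example, can be enabled per bucket with a single command (the bucket name is a placeholder):

```shell
# Keep prior object versions so accidental deletes are recoverable.
aws s3api put-bucket-versioning \
  --bucket my-app-files \
  --versioning-configuration Status=Enabled
```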
Verification
After configuring S3 storage:
Check backend logs
```shell
docker compose logs backend | grep -i s3
```
Look for messages indicating S3 storage is configured.
Test file upload
Use the Convex file storage API to upload a test file and verify it appears in your S3 bucket.
Test export
```shell
npx convex export --path ./test-export.zip
```
Verify the export appears in your exports bucket.
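For example, with the AWS CLI (adjust the bucket name to your configuration):

```shell
# List everything in the exports bucket; the new export should appear here.
aws s3 ls s3://convex-snapshot-exports/ --recursive
```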
Troubleshooting
Access denied errors
- Verify credentials: Check that your AWS access key and secret are correct
- Check IAM permissions: Ensure your IAM user/role has the required S3 permissions
- Verify bucket names: Ensure bucket names are correct and exist
- Check bucket policies: Verify bucket policies don’t block access
Connection timeout errors
- Check endpoint URL: Ensure S3_ENDPOINT_URL is correct for your provider
- Verify network access: Ensure your backend can reach the S3 endpoint
- Check firewall rules: Verify outbound connections to S3 are allowed
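A quick, provider-agnostic reachability check is to request the endpoint directly; any HTTP status code back (even an S3 error response) rules out a network-level timeout:

```shell
# Run from the backend host/container; assumes S3_ENDPOINT_URL is set.
# Prints the HTTP status code, or an error if the endpoint is unreachable.
curl -sS --max-time 5 -o /dev/null -w '%{http_code}\n' "$S3_ENDPOINT_URL"
```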
Path style errors
If you see errors about virtual-hosted-style versus path-style addressing, force path-style URLs:

```shell
AWS_S3_FORCE_PATH_STYLE=true
```

With virtual-hosted style the bucket name is part of the hostname (`https://bucket.s3.amazonaws.com/key`); with path style it is part of the path (`https://s3.amazonaws.com/bucket/key`). Most S3-compatible services only support the latter, so this setting is required for them.
Example configurations
Production with AWS S3
```shell
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='AKIAIOSFODNN7EXAMPLE'
AWS_SECRET_ACCESS_KEY='wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'
S3_STORAGE_EXPORTS_BUCKET='my-app-prod-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='my-app-prod-imports'
S3_STORAGE_MODULES_BUCKET='my-app-prod-modules'
S3_STORAGE_FILES_BUCKET='my-app-prod-files'
S3_STORAGE_SEARCH_BUCKET='my-app-prod-search'
```
Production with Cloudflare R2
```shell
AWS_REGION='auto'
AWS_ACCESS_KEY_ID='your-r2-access-key-id'
AWS_SECRET_ACCESS_KEY='your-r2-secret-access-key'
S3_ENDPOINT_URL='https://1a2b3c4d5e6f.r2.cloudflarestorage.com'
AWS_S3_FORCE_PATH_STYLE=true
S3_STORAGE_EXPORTS_BUCKET='convex-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-files'
S3_STORAGE_SEARCH_BUCKET='convex-search'
```
Local development with MinIO
```shell
AWS_REGION='us-east-1'
AWS_ACCESS_KEY_ID='minioadmin'
AWS_SECRET_ACCESS_KEY='minioadmin'
S3_ENDPOINT_URL='http://host.docker.internal:9000'
AWS_S3_FORCE_PATH_STYLE=true
AWS_S3_DISABLE_SSE=true
S3_STORAGE_EXPORTS_BUCKET='convex-exports'
S3_STORAGE_SNAPSHOT_IMPORTS_BUCKET='convex-imports'
S3_STORAGE_MODULES_BUCKET='convex-modules'
S3_STORAGE_FILES_BUCKET='convex-files'
S3_STORAGE_SEARCH_BUCKET='convex-search'
```
Next steps
- Configuration: Explore all runtime configuration options
- Database setup: Configure PostgreSQL or MySQL