Endpoint
Creates a complete backup of your Blnk database and uploads it to Amazon S3 for secure, durable, off-site storage. This provides better disaster recovery capabilities than local disk backups.
Prerequisites
Before using this endpoint, ensure you have:
- An Amazon S3 bucket configured
- AWS credentials with S3 write permissions
- Blnk server configured with S3 settings (bucket name, region, credentials)
Response
Success message indicating the backup was uploaded to S3 successfully
Example Request
curl -X GET "https://api.blnk.io/backup-s3" \
  -H "Authorization: Bearer YOUR_API_KEY"
Example Response
Backup Details
What’s Included
The S3 backup includes:
- Complete database dump
- All tables and data
- Database schema
- Indexes and constraints
- All Blnk data (transactions, balances, ledgers, identities, etc.)
S3 Storage
Backups are stored in your configured S3 bucket with:
- Naming convention: blnk_backup_YYYYMMDD_HHMMSS.dump
- Storage class: Standard (configurable)
- Encryption: Server-side encryption (if configured)
- Versioning: Enabled if bucket versioning is on
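The naming convention above can be generated and parsed with a small helper. This is a minimal sketch; `backupKey` and `parseBackupKey` are illustrative names, not part of the Blnk API, and UTC timestamps are assumed:

```javascript
// Illustrative helpers for the blnk_backup_YYYYMMDD_HHMMSS.dump convention.
// Not part of the Blnk API; UTC is assumed for the timestamp fields.
const backupKey = (date = new Date()) => {
  const pad = (n) => String(n).padStart(2, '0');
  const ymd = `${date.getUTCFullYear()}${pad(date.getUTCMonth() + 1)}${pad(date.getUTCDate())}`;
  const hms = `${pad(date.getUTCHours())}${pad(date.getUTCMinutes())}${pad(date.getUTCSeconds())}`;
  return `blnk_backup_${ymd}_${hms}.dump`;
};

const parseBackupKey = (key) => {
  const m = key.match(/^blnk_backup_(\d{4})(\d{2})(\d{2})_(\d{2})(\d{2})(\d{2})\.dump$/);
  if (!m) return null; // not a backup object
  const [, y, mo, d, h, mi, s] = m.map(Number);
  return new Date(Date.UTC(y, mo - 1, d, h, mi, s));
};
```

Parsing keys this way is useful when building retention or monitoring tooling around the bucket contents.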
Backup Process
1. Creates a PostgreSQL dump
2. Compresses the dump file
3. Uploads to S3 bucket
4. Verifies upload success
5. Optionally deletes local copy
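The steps above can be sketched as a sequential async pipeline where a failing step aborts the run. The step bodies below are placeholders (in a real deployment they would shell out to pg_dump, compress the file, and call the S3 API); `runBackupPipeline` is an illustrative helper, not part of Blnk:

```javascript
// Hedged sketch of the backup pipeline. Each step is a placeholder;
// a step that throws aborts the pipeline, mirroring the endpoint's behavior.
const runBackupPipeline = async (steps) => {
  const completed = [];
  for (const [name, step] of steps) {
    await step(); // a failing step stops the pipeline here
    completed.push(name);
  }
  return completed;
};

// Placeholder steps mirroring the documented process
const pipeline = [
  ['dump',     async () => {/* pg_dump ...            */}],
  ['compress', async () => {/* gzip the dump file     */}],
  ['upload',   async () => {/* s3.upload(...)         */}],
  ['verify',   async () => {/* s3.headObject(...)     */}],
  ['cleanup',  async () => {/* fs.unlink local copy   */}],
];
```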
Error Responses
Error object with details about what went wrong
Common Errors
- 400 Bad Request: S3 configuration error or missing credentials
- 500 Internal Server Error: Failed to create or upload backup
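Callers can branch on these two cases: a 400 points at configuration and is not worth retrying, while a 500 is transient and retryable. A minimal sketch (the `classifyBackupError` helper is illustrative, not part of the Blnk API):

```javascript
// Illustrative helper: decide how to react to a backup endpoint failure.
const classifyBackupError = (status) => {
  if (status === 400) {
    // S3 configuration error or missing credentials: retrying won't help
    return { retryable: false, action: 'check S3 configuration and credentials' };
  }
  if (status >= 500) {
    // Server failed to create or upload the backup: retry, then alert
    return { retryable: true, action: 'retry with backoff, alert if it persists' };
  }
  return { retryable: false, action: 'inspect the error response body' };
};
```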
Use Cases
Automated Daily Backups
Schedule automatic backups to S3:
const createS3Backup = async () => {
  console.log('Starting S3 backup...');
  const timestamp = new Date().toISOString();
  try {
    const response = await fetch('https://api.blnk.io/backup-s3', {
      headers: {
        'Authorization': 'Bearer YOUR_API_KEY'
      }
    });
    if (response.ok) {
      console.log('✓ S3 backup completed successfully');
      // Log to monitoring
      await logBackup({
        timestamp,
        destination: 's3',
        status: 'success'
      });
      // Send success notification
      await notify({
        type: 'backup_success',
        message: `S3 backup completed at ${timestamp}`
      });
      return true;
    } else {
      throw new Error(`Backup failed with status: ${response.status}`);
    }
  } catch (error) {
    console.error('✗ S3 backup failed:', error);
    // Alert operations team
    await sendAlert({
      severity: 'high',
      title: 'S3 Backup Failed',
      message: error.message,
      timestamp
    });
    return false;
  }
};

// Schedule with node-cron
const cron = require('node-cron');

// Daily at 3 AM
cron.schedule('0 3 * * *', createS3Backup);
console.log('S3 backup schedule configured');
Multi-Region Backup Strategy
Backup to multiple S3 regions:
const multiRegionBackup = async () => {
  console.log('Starting multi-region backup strategy...');

  // Primary backup to main S3 bucket
  const primaryBackup = await fetch('https://api.blnk.io/backup-s3', {
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
  });

  if (primaryBackup.ok) {
    console.log('✓ Primary backup (us-east-1) complete');

    // Replicate to secondary region
    await replicateToSecondaryRegion();
    console.log('✓ Replicated to eu-west-1');
  }
};

const replicateToSecondaryRegion = async () => {
  // Use S3 cross-region replication or manual copy
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  // Implementation depends on your setup
  // This is just an example structure
};
Backup Before Major Changes
Create S3 backup before critical operations:
const safeOperation = async (operation, operationName) => {
  console.log(`Creating backup before ${operationName}...`);

  // Create S3 backup
  const backupResponse = await fetch('https://api.blnk.io/backup-s3', {
    headers: { 'Authorization': 'Bearer YOUR_API_KEY' }
  });

  if (!backupResponse.ok) {
    console.error('✗ Backup failed - operation cancelled');
    return {
      success: false,
      message: 'Operation cancelled due to backup failure'
    };
  }

  console.log('✓ Backup complete - proceeding with operation');

  try {
    // Perform the operation
    const result = await operation();
    console.log(`✓ ${operationName} completed successfully`);
    return { success: true, result };
  } catch (error) {
    console.error(`✗ ${operationName} failed:`, error);
    console.log('Backup available in S3 for recovery');
    return { success: false, error };
  }
};

// Usage
await safeOperation(
  () => runDatabaseMigration(),
  'Database Migration'
);
Compliance and Retention
Implement backup retention policies:
const manageBackupRetention = async () => {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  const bucket = 'your-blnk-backup-bucket';

  const retentionDays = {
    daily: 7,
    weekly: 30,
    monthly: 365
  };

  // List all backups
  const backups = await s3.listObjectsV2({
    Bucket: bucket,
    Prefix: 'blnk_backup_'
  }).promise();

  const now = new Date();

  for (const backup of backups.Contents || []) {
    const backupDate = new Date(backup.LastModified);
    const ageInDays = (now - backupDate) / (1000 * 60 * 60 * 24);

    // Determine if backup should be kept
    const isMonthlyBackup = backupDate.getDate() === 1;
    const isWeeklyBackup = backupDate.getDay() === 0; // Sunday

    let shouldKeep = false;
    if (ageInDays <= retentionDays.daily) {
      shouldKeep = true; // Keep all recent backups
    } else if (isWeeklyBackup && ageInDays <= retentionDays.weekly) {
      shouldKeep = true; // Keep weekly backups
    } else if (isMonthlyBackup && ageInDays <= retentionDays.monthly) {
      shouldKeep = true; // Keep monthly backups
    }

    if (!shouldKeep) {
      console.log(`Deleting old backup: ${backup.Key}`);
      await s3.deleteObject({
        Bucket: bucket,
        Key: backup.Key
      }).promise();
    }
  }
};

// Run retention policy weekly (Sundays at 4 AM)
const cron = require('node-cron');
cron.schedule('0 4 * * 0', manageBackupRetention);
Best Practices
1. Use S3 Lifecycle Policies
Configure S3 lifecycle rules:
{
  "Rules": [
    {
      "Id": "Move to Glacier after 30 days",
      "Status": "Enabled",
      "Transitions": [
        {
          "Days": 30,
          "StorageClass": "GLACIER"
        }
      ]
    },
    {
      "Id": "Delete after 365 days",
      "Status": "Enabled",
      "Expiration": {
        "Days": 365
      }
    }
  ]
}
2. Enable Versioning
Protect against accidental overwrites:
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

await s3.putBucketVersioning({
  Bucket: 'your-backup-bucket',
  VersioningConfiguration: {
    Status: 'Enabled'
  }
}).promise();
3. Enable Encryption
Use server-side encryption:
await s3.putBucketEncryption({
  Bucket: 'your-backup-bucket',
  ServerSideEncryptionConfiguration: {
    Rules: [
      {
        ApplyServerSideEncryptionByDefault: {
          SSEAlgorithm: 'AES256'
        }
      }
    ]
  }
}).promise();
4. Monitor Backup Success
Implement monitoring and alerting:
const monitorS3Backups = async () => {
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();
  const bucket = 'your-backup-bucket';

  // List backups and find the most recent by LastModified.
  // (Keys sort lexicographically, so the first listed key is the
  // oldest backup, not the latest.)
  const backups = await s3.listObjectsV2({
    Bucket: bucket,
    Prefix: 'blnk_backup_'
  }).promise();

  if (backups.Contents.length === 0) {
    await sendAlert({
      severity: 'critical',
      message: 'No backups found in S3'
    });
    return;
  }

  const latestBackup = backups.Contents.reduce((latest, b) =>
    new Date(b.LastModified) > new Date(latest.LastModified) ? b : latest
  );
  const backupAge = Date.now() - new Date(latestBackup.LastModified);
  const hoursOld = backupAge / (1000 * 60 * 60);

  if (hoursOld > 24) {
    await sendAlert({
      severity: 'warning',
      message: `Last S3 backup is ${hoursOld.toFixed(1)} hours old`
    });
  }
};

// Check every 6 hours
setInterval(monitorS3Backups, 6 * 60 * 60 * 1000);
5. Test Restore Process
Regularly test backup restoration:
const testBackupRestore = async () => {
  console.log('Testing backup restore process...');
  const AWS = require('aws-sdk');
  const s3 = new AWS.S3();

  // Download latest backup
  const backup = await s3.getObject({
    Bucket: 'your-backup-bucket',
    Key: 'latest-backup.dump'
  }).promise();

  // Restore to test database
  // Implementation depends on your setup
  console.log('✓ Restore test completed');
};

// Test monthly (1st of each month at 5 AM)
const cron = require('node-cron');
cron.schedule('0 5 1 * *', testBackupRestore);
S3 Configuration
Ensure your Blnk instance is configured with:
# Environment variables
AWS_ACCESS_KEY_ID=your_access_key
AWS_SECRET_ACCESS_KEY=your_secret_key
AWS_REGION=us-east-1
S3_BACKUP_BUCKET=your-backup-bucket
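A startup check can fail fast when any of these variables is missing, instead of surfacing a 400 on the first backup attempt. A minimal sketch (the `missingBackupConfig` helper is illustrative, not part of Blnk):

```javascript
// Illustrative startup check for the backup-related environment variables.
const requiredVars = [
  'AWS_ACCESS_KEY_ID',
  'AWS_SECRET_ACCESS_KEY',
  'AWS_REGION',
  'S3_BACKUP_BUCKET',
];

// Returns the names of any variables that are unset or empty.
const missingBackupConfig = (env = process.env) =>
  requiredVars.filter((name) => !env[name]);
```

Call it once at boot and refuse to start (or log a loud warning) if the returned list is non-empty.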
Recovery from S3
To restore from an S3 backup:
# Download from S3
aws s3 cp s3://your-backup-bucket/blnk_backup_20240304_030000.dump ./backup.dump
# Restore to database
pg_restore -d blnk_database ./backup.dump
Cost Optimization
- Use lifecycle policies: Move old backups to Glacier
- Compress backups: Enable compression for smaller files
- Retention policies: Delete old backups automatically
- Intelligent-Tiering: Use S3 Intelligent-Tiering for automatic cost optimization
Benefits of S3 Backups
- Durability: 99.999999999% (11 9’s) durability
- Availability: Accessible from anywhere
- Versioning: Keep multiple versions of backups
- Encryption: Built-in encryption support
- Compliance: Meets regulatory requirements
- Cross-region: Replicate to multiple regions
- Cost-effective: Pay only for what you use
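To put the durability figure in context: eleven 9's implies an expected loss of roughly one object per 100 billion stored per year. A back-of-envelope sketch, assuming independent per-object loss at the published design target (a simplification, not a guarantee):

```javascript
// Back-of-envelope: expected objects lost per year at 11-nines durability.
// Assumes an independent annual loss probability per object.
const DURABILITY = 0.99999999999; // 11 nines
const annualLossProbability = 1 - DURABILITY; // ≈ 1e-11 per object per year

const expectedAnnualLoss = (objectCount) => objectCount * annualLossProbability;
```

Even with a billion backup objects, the expected annual loss under this model is on the order of a hundredth of an object.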