# Backup & Restore

Protect your Bloqd data with regular backups, and learn how to restore it after a disaster.
## What to Back Up

### Critical Data

| Component | Location | Priority |
|---|---|---|
| SQLite Database | `./data/bloqd.db` | Critical |
| Environment Config | `.env` | Critical |
| GeoIP Database | `./data/GeoLite2-Country.mmdb` | Medium |
| Uploaded Files | `./uploads/` | Medium |
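Before automating anything, it can help to confirm that the paths in the table above actually exist on your install. A minimal sketch (the `check_backup_sources` function name is ours; adjust the paths to your layout):

```bash
# Report any expected backup source that is missing.
check_backup_sources() {
  local missing=0
  for path in "$@"; do
    if [ ! -e "$path" ]; then
      echo "MISSING: $path"
      missing=1
    fi
  done
  return "$missing"
}

# Paths from the table above; adjust for your install location.
check_backup_sources ./data/bloqd.db .env ./uploads \
  || echo "Some backup sources are missing."
```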
### Agent Data (on managed servers)

| Component | Location | Priority |
|---|---|---|
| Agent Config | `/etc/bloqd/agent.yaml` | Critical |
| Local Cache | `/var/lib/bloqd/` | Low |
## Backup Methods

### Manual Database Backup

```bash
# Stop the container (recommended for a consistent copy)
docker compose stop bloqd

# Copy the database
cp ./data/bloqd.db ./backups/bloqd-$(date +%Y%m%d-%H%M%S).db

# Start the container again
docker compose start bloqd
```
### Online Backup (SQLite)

For minimal downtime, use SQLite's online backup API (requires the `sqlite3` CLI):

```bash
# Back up without stopping the container
sqlite3 ./data/bloqd.db ".backup './backups/bloqd-$(date +%Y%m%d).db'"
```
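On SQLite 3.27 or newer, `VACUUM INTO` is another online option: it writes a compacted copy of the database in a single statement. A sketch, assuming the same paths as above:

```bash
# Write a compacted snapshot while the app keeps running
# (requires the sqlite3 CLI and SQLite 3.27+).
mkdir -p ./backups
if [ -f ./data/bloqd.db ]; then
  sqlite3 ./data/bloqd.db "VACUUM INTO './backups/bloqd-$(date +%Y%m%d).db'"
fi
```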
### Automated Backup Script

Create `/opt/bloqd/backup.sh` and make it executable (`chmod +x /opt/bloqd/backup.sh`):

```bash
#!/bin/bash
set -e

BACKUP_DIR="/opt/bloqd/backups"
DATA_DIR="/opt/bloqd/data"
RETENTION_DAYS=30

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Back up the database with SQLite's online backup API
TIMESTAMP=$(date +%Y%m%d-%H%M%S)
sqlite3 "$DATA_DIR/bloqd.db" ".backup '$BACKUP_DIR/bloqd-$TIMESTAMP.db'"

# Back up the environment file
cp /opt/bloqd/.env "$BACKUP_DIR/env-$TIMESTAMP"

# Compress both into a single archive
tar -czf "$BACKUP_DIR/bloqd-backup-$TIMESTAMP.tar.gz" \
  -C "$BACKUP_DIR" \
  "bloqd-$TIMESTAMP.db" \
  "env-$TIMESTAMP"

# Remove the individual files now that they are archived
rm "$BACKUP_DIR/bloqd-$TIMESTAMP.db" "$BACKUP_DIR/env-$TIMESTAMP"

# Remove backups older than the retention window
find "$BACKUP_DIR" -name "bloqd-backup-*.tar.gz" -mtime +$RETENTION_DAYS -delete

echo "Backup completed: bloqd-backup-$TIMESTAMP.tar.gz"
```
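A quick sanity check after each run is to list the archive and confirm both members are present. A sketch (the `verify_backup_archive` function is ours, not part of Bloqd):

```bash
# Verify that a backup archive contains the expected members.
verify_backup_archive() {
  local archive="$1"; shift
  local listing
  listing=$(tar -tzf "$archive") || return 1
  for member in "$@"; do
    if ! printf '%s\n' "$listing" | grep -Fqx "$member"; then
      echo "missing from archive: $member"
      return 1
    fi
  done
  echo "archive ok: $archive"
}
```

Called as, e.g., `verify_backup_archive "$BACKUP_DIR/bloqd-backup-$TIMESTAMP.tar.gz" "bloqd-$TIMESTAMP.db" "env-$TIMESTAMP"` at the end of the script above, it would make a truncated upload fail loudly instead of silently.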
### Cron Schedule

```bash
# Edit the crontab
crontab -e

# Add a daily backup at 2 AM
0 2 * * * /opt/bloqd/backup.sh >> /var/log/bloqd-backup.log 2>&1
```
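On systemd hosts, a timer unit is an alternative to cron; `Persistent=true` also catches up on runs missed while the machine was off. A sketch (the unit names are illustrative):

```ini
# /etc/systemd/system/bloqd-backup.service
[Unit]
Description=Bloqd backup

[Service]
Type=oneshot
ExecStart=/opt/bloqd/backup.sh

# /etc/systemd/system/bloqd-backup.timer
[Unit]
Description=Daily Bloqd backup at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

Enable with `systemctl enable --now bloqd-backup.timer`.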
### Docker Volume Backup

If using Docker volumes:

```bash
# List volumes
docker volume ls | grep bloqd

# Back up a volume through a throwaway container
docker run --rm \
  -v bloqd_data:/data:ro \
  -v "$(pwd)/backups:/backup" \
  alpine tar czf /backup/bloqd-data-$(date +%Y%m%d).tar.gz -C /data .
```
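The restore direction swaps the mounts (writable this time) and extracts. Since the container only runs `tar`, the round trip can be sketched locally, with the Docker equivalent shown in comments (the `YYYYMMDD` archive name is a placeholder):

```bash
# Restore equivalent of the volume backup above:
#   docker run --rm \
#     -v bloqd_data:/data \
#     -v "$(pwd)/backups:/backup" \
#     alpine tar xzf /backup/bloqd-data-YYYYMMDD.tar.gz -C /data
# The same tar round trip, demonstrated locally:
src=$(mktemp -d) && dst=$(mktemp -d)
echo "hello" > "$src/bloqd.db"
tar -czf /tmp/volume-demo.tar.gz -C "$src" .   # what the backup container does
tar -xzf /tmp/volume-demo.tar.gz -C "$dst"     # what the restore container does
cat "$dst/bloqd.db"                            # prints "hello"
```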
## Remote Backup

### To S3/MinIO

```bash
#!/bin/bash
# Requires aws-cli or mc (the MinIO client)
BACKUP_FILE="bloqd-backup-$(date +%Y%m%d).tar.gz"

# Create the backup
sqlite3 ./data/bloqd.db ".backup '/tmp/bloqd.db'"
tar -czf "/tmp/$BACKUP_FILE" -C /tmp bloqd.db

# Upload to S3
aws s3 cp "/tmp/$BACKUP_FILE" "s3://your-bucket/bloqd-backups/"

# Or to MinIO
mc cp "/tmp/$BACKUP_FILE" myminio/bloqd-backups/

# Clean up the local copies
rm /tmp/bloqd.db "/tmp/$BACKUP_FILE"
```
### To Remote Server

```bash
# Using rsync
rsync -avz ./backups/ backup-server:/backups/bloqd/

# Using scp
scp ./backups/bloqd-backup-*.tar.gz backup-server:/backups/bloqd/
```
## Restore Procedures

### Full Restore

```bash
# Stop Bloqd
docker compose down

# Extract the archive and restore the database
tar -xzf bloqd-backup-20240115.tar.gz
cp bloqd-*.db ./data/bloqd.db

# Restore the environment file
cp env-* .env

# Start Bloqd
docker compose up -d
```
### Database-Only Restore

```bash
# Stop the container
docker compose stop bloqd

# Keep the current database, just in case
mv ./data/bloqd.db ./data/bloqd.db.old

# Restore from backup
cp /path/to/backup/bloqd-backup.db ./data/bloqd.db

# Fix permissions
chmod 644 ./data/bloqd.db

# Start the container
docker compose start bloqd
```
### Restore to New Server

1. Install Bloqd on the new server:

   ```bash
   mkdir -p /opt/bloqd
   cd /opt/bloqd
   curl -o docker-compose.yml https://raw.githubusercontent.com/clusterzx/bloqd/main/docker-compose.yml
   ```

2. Copy the backup to the new server:

   ```bash
   scp bloqd-backup-*.tar.gz newserver:/opt/bloqd/
   ```

3. Extract and restore:

   ```bash
   tar -xzf bloqd-backup-*.tar.gz
   mkdir -p data
   cp bloqd-*.db data/bloqd.db
   cp env-* .env
   ```

4. Update the configuration:

   ```bash
   # Update .env with the new server URL if needed
   nano .env
   ```

5. Start Bloqd:

   ```bash
   docker compose up -d
   ```

6. Re-register agents (if the server URL changed):

   ```bash
   # On each managed server
   curl -sSL https://newserver.example.com/api/v1/install | sudo bash
   ```
## Verify Backups

### Check Database Integrity

```bash
# Verify the SQLite database
sqlite3 ./backups/bloqd-backup.db "PRAGMA integrity_check;"
# Should return: ok

# Check tables
sqlite3 ./backups/bloqd-backup.db ".tables"
```
### Test Restore

```bash
# Create a scratch environment
mkdir -p /tmp/bloqd-test
cd /tmp/bloqd-test

# Extract the backup
tar -xzf /path/to/bloqd-backup.tar.gz

# Verify the database opens and has data
sqlite3 bloqd-*.db "SELECT COUNT(*) FROM servers;"

# Clean up (leave the directory before removing it)
cd ..
rm -rf /tmp/bloqd-test
```
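Finding the backup to test can itself be scripted: the timestamped filenames produced by the backup script sort chronologically when sorted lexicographically, so the newest archive is just the last name in sorted order. A sketch (`latest_backup` is our name, not part of Bloqd):

```bash
# Pick the newest backup archive in a directory by name.
latest_backup() {
  ls "$1"/bloqd-backup-*.tar.gz 2>/dev/null | sort | tail -n 1
}

latest="$(latest_backup /opt/bloqd/backups)"
if [ -n "$latest" ]; then
  echo "newest backup: $latest"
fi
```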
## Backup Encryption

### Encrypt Backups

```bash
# Encrypt with GPG
gpg --symmetric --cipher-algo AES256 bloqd-backup.tar.gz

# Decrypt
gpg --decrypt bloqd-backup.tar.gz.gpg > bloqd-backup.tar.gz
```
### Encrypt with OpenSSL

```bash
# Encrypt
openssl enc -aes-256-cbc -salt -pbkdf2 \
  -in bloqd-backup.tar.gz \
  -out bloqd-backup.tar.gz.enc

# Decrypt
openssl enc -aes-256-cbc -d -pbkdf2 \
  -in bloqd-backup.tar.gz.enc \
  -out bloqd-backup.tar.gz
```
## Backup Best Practices

### Recommendations

- **Test restores regularly** - a backup is useless if it can't be restored
- **Use the 3-2-1 rule** - 3 copies, 2 different media, 1 offsite
- **Encrypt sensitive backups** - especially when storing them offsite
- **Monitor backup jobs** - set up alerts for failed backups
- **Document procedures** - keep restore instructions accessible
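For the monitoring point, one common pattern is to wrap the backup job and report its exit status to an external health check. A sketch; the `report_backup` wrapper and the monitoring URL are ours, not part of Bloqd:

```bash
# Run the backup command given as arguments and report the outcome.
# The curl line targets a hypothetical monitoring endpoint; replace
# it with your own alerting mechanism.
report_backup() {
  if "$@"; then
    echo "backup ok"
    # curl -fsS "https://monitoring.example.com/ping/bloqd-backup" >/dev/null
  else
    echo "backup FAILED (exit $?)"
    return 1
  fi
}
```

From cron this might be invoked as `report_backup /opt/bloqd/backup.sh >> /var/log/bloqd-backup.log 2>&1`, so a failure leaves both a log entry and (via the ping that stops arriving) an alert.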
### Disaster Recovery Checklist

- [ ] Backup files accessible
- [ ] Backup integrity verified
- [ ] `.env` file available (or documented)
- [ ] Docker/Docker Compose installed
- [ ] Network/DNS configured
- [ ] SSL certificates available
- [ ] Agent re-registration plan ready