It started as a typical evening check-in on one of my production web environments. Everything looked fine until I noticed an inconsistency in my data tables: several custom tables I had created had vanished. Confident my hosting provider’s backup system would cover everything, I filed a restore request, only to discover a gut-wrenching truth: the custom tables had never been included in the backups. That’s when I realized the critical importance of owning my backup workflow, and how rclone became my trusted tool in building a resilient, external data backup solution.
TL;DR
My hosting provider’s automated backup system overlooked custom database tables, putting crucial data at risk. After discovering this, I decided to set up my own external backup routine using rclone to automate encrypted backups across multiple cloud services. This article walks through how I identified the problem and the process I used to build a reliable, self-managed backup workflow. With rclone, I now control my data integrity independently from my web host.
The Wake-Up Call: Missing Custom Tables
Early signs of an issue came from application errors: failed form generations, missing metadata, and incomplete reports. Initially, I suspected minor bugs introduced in the latest code push. But it didn’t take long to trace the problem to specific MySQL tables that no longer existed. These were not default WordPress or CMS-managed tables—they were custom additions I’d made for application-specific functionality.
After initiating a support ticket with the hosting provider, I waited 14 anxious hours only to receive a polite but devastating response: “Our automated nightly backups cover standard schema only. Any custom or non-standard tables are excluded unless manually specified through an advanced configuration policy.” This clause had been buried in the fine print, and I was now staring at indeterminate data loss.
Understanding the Host Backup System’s Limitations
Like many developers and small teams, I had trusted my host’s “automated daily backups” claim. But trust is risky without verification. In this case, the host used a baseline policy tailored to standard CMS setups—WordPress, Joomla, and similar environments. If your database adds custom tables that the default schema doesn’t anticipate, those tables can, quite simply, be ignored.
Here are some key risks I discovered in many default hosting backup systems:
- Schema-specific rules: Custom tables fall outside standard routines unless explicitly flagged.
- Scope-limited storage: Some backup systems cap their scope to known paths or databases only.
- Retention gaps: Most keep only 1 to 3 days of rollbacks—useless for long-term data analysis or rollback needs after unnoticed corruption.
- Opaque policies: Terms often exclude responsibility for third-party or user-contributed database patterns.
Why I Chose rclone: The Starting Point
I needed a tool that would let me:
- Perform scheduled data dumps of my entire database—including custom tables.
- Sync those backups to multiple, redundant external storage endpoints.
- Encrypt backups during transmission and at rest.
- Work in CLI environments, with logging, verbosity, and test modes.
rclone checked all the boxes. Though its documentation can feel daunting at first, it proved to be an incredibly flexible, trustworthy companion for managing remote storage operations securely and scriptably.
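If you want to follow the same route, getting rclone onto a VPS is quick. The commands below are a sketch: the install one-liner is the script documented on rclone.org, and most distribution package managers also ship rclone (often an older build).

# Install rclone via the official install script
curl https://rclone.org/install.sh | sudo bash

# Walk through the interactive wizard to define remotes (Dropbox, GCS, SFTP, ...)
rclone config

# Confirm the binary is installed and check the version
rclone version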
Building the New Backup Workflow
Here’s how I structured my new backup process using rclone and standard cron tooling on my VPS.
1. Full SQL Dump with Custom Tables
I changed my database dump routine to explicitly name every table I needed, including the custom ones. My mysqldump command now looked like this:
mysqldump -u myuser -pMYPASS mydatabase --tables table1 table2 table3 custom_table_alpha custom_table_beta > /backups/db_backup_$(date +%F).sql
2. Encrypting the Backup
I used GPG to encrypt the SQL dump before uploading it:
gpg -c /backups/db_backup_$(date +%F).sql
This results in a .gpg file that’s unreadable without a symmetric passphrase.
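One caveat: gpg -c prompts for a passphrase, which is fine at the keyboard but not once the job later runs unattended from cron. A non-interactive sketch, assuming the passphrase is kept in a file only root can read (newer GnuPG releases may also need --pinentry-mode loopback):

# Symmetric encryption without a prompt; the passphrase is read from a root-only file
gpg --batch --yes --symmetric --passphrase-file /root/.backup_passphrase /backups/db_backup_$(date +%F).sql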
3. rclone Configuration for Remote Storage
I set up secure rclone remotes: one to a Dropbox business account, another to a Google Cloud Storage bucket, and a third to a remote SFTP server.
Here’s a sample of my rclone config:
[dropbox]
type = dropbox
token = {"access_token":"...","token_type":"Bearer"...}
[gcs]
type = google cloud storage
project_number = 123456...
object_acl = private
bucket_acl = private
[myserver]
type = sftp
host = example.com
user = backupuser
pass = myencryptedpassword
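Before trusting these remotes with real data, it’s worth a quick sanity check using standard rclone subcommands (the remote names here match the config above):

# List top-level directories on each remote to confirm the credentials work
rclone lsd dropbox:
rclone lsd gcs:project-bucket
rclone lsd myserver:/mnt/backups

# Show quota and usage on backends that support it
rclone about dropbox: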
4. Uploading Encrypted Backups
With the configuration in place, I ran:
rclone copy /backups/db_backup_$(date +%F).sql.gpg dropbox:db_backups/
rclone copy /backups/db_backup_$(date +%F).sql.gpg gcs:project-bucket/db/
rclone copy /backups/db_backup_$(date +%F).sql.gpg myserver:/mnt/backups/
I used the --progress and --log-file flags to verify operations and set job alerts via email in case of failures.
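For reference, a single upload with those flags attached looks roughly like this (the log path and log level are placeholders to adapt to your own setup):

rclone copy /backups/db_backup_$(date +%F).sql.gpg dropbox:db_backups/ \
    --progress \
    --log-level INFO \
    --log-file /var/log/rclone_db_backup.log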
5. Automating with Cron
I scheduled the backup process to run nightly:
0 2 * * * /usr/local/bin/db_backup_script.sh
That shell script orchestrates the entire process, from dumping the data to encrypting it and syncing it out with rclone.
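For anyone assembling something similar, here is a minimal sketch of what such a script might look like. It simply strings together the commands from the steps above; the table names, passphrase file, and log path are placeholders to adapt to your own environment.

#!/usr/bin/env bash
# db_backup_script.sh - dump, encrypt, and replicate the database backup
set -euo pipefail

STAMP=$(date +%F)
DUMP=/backups/db_backup_${STAMP}.sql

# 1. Dump the database, explicitly listing the custom tables
mysqldump -u myuser -pMYPASS mydatabase \
    --tables table1 table2 table3 custom_table_alpha custom_table_beta > "$DUMP"

# 2. Encrypt the dump without an interactive prompt
#    (newer GnuPG releases may also need --pinentry-mode loopback)
gpg --batch --yes --symmetric --passphrase-file /root/.backup_passphrase "$DUMP"

# 3. Replicate the encrypted copy to all three remotes
for remote in dropbox:db_backups/ gcs:project-bucket/db/ myserver:/mnt/backups/; do
    rclone copy "${DUMP}.gpg" "$remote" --log-level INFO --log-file /var/log/rclone_db_backup.log
done

# 4. Remove the plaintext dump so only the encrypted copy stays on disk
rm -f "$DUMP"

The set -euo pipefail line makes the script stop at the first failed step, so a broken dump is never encrypted and shipped out as if it were good.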
Testing the Recovery Workflow
A backup is only as good as your ability to restore from it. I set up a dummy environment and walked through the full recovery path:
- Downloading the .gpg file from each remote.
- Decrypting using GPG.
- Loading the SQL file into a new local MySQL instance.
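Concretely, that boiled down to a handful of commands; the date and the throwaway database name (restore_test) below are examples, not the real ones:

# Pull the encrypted dump back down from one of the remotes
rclone copy dropbox:db_backups/db_backup_2024-01-15.sql.gpg /tmp/restore/

# Decrypt it (gpg prompts for the symmetric passphrase)
gpg --output /tmp/restore/db_backup_2024-01-15.sql --decrypt /tmp/restore/db_backup_2024-01-15.sql.gpg

# Load it into a fresh local MySQL database
mysql -u root -p -e "CREATE DATABASE restore_test"
mysql -u root -p restore_test < /tmp/restore/db_backup_2024-01-15.sql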
The recovery process worked smoothly, and it gave me confidence not only in the infrastructure but also in my own response plan.
Redundant Storage Is No Longer Optional
After this incident, I advocate relentlessly for external backup systems—especially when working with business-critical or custom applications. Backups aren’t just about avoiding disaster—they are your last reliable source of truth when everything else fails.
Even the most trustworthy hosting solutions cannot guarantee custom scenarios will be covered. It’s your data, your responsibility. If you’re unsure about your backup scope, you probably don’t have any realistic protection in place yet.
Final Lessons and Best Practices
- Audit your backups monthly. Attempt a full restore in a sandbox to verify the workflow and data scope.
- Include versioning and retention policies. Keep multiple versions of backups so you can revert partial corruption over time (see the pruning sketch after this list).
- Use at least two separate storage providers. Cloud redundancies rarely protect against platform-wide failures or account bans.
- Encrypt everything. Never store unprotected backups, especially when using public or shared cloud environments.
- Keep credentials and keys safe. Backups are worthless if you lose access to decrypt and restore them.
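On the retention point above, rclone’s filter flags make pruning straightforward. A sketch that removes encrypted dumps older than 30 days from one remote (the 30-day window is just an example; run with --dry-run first and repeat per remote):

# Preview what would be deleted, then actually prune
rclone delete dropbox:db_backups/ --min-age 30d --dry-run
rclone delete dropbox:db_backups/ --min-age 30d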
In Closing
What began as a frustrating encounter with a missed backup became a valuable opportunity to grow my operational maturity. With rclone and consistent backup habits, I’ve not only regained control of my data—I’ve gained confidence in facing unforeseen disasters in the future. And if there’s one takeaway I can share: never assume your host has your back completely. Build systems that you understand, can test, and can trust.