Backup, Restore, and Disaster Recovery in Hadoop
Many people don't consider backups, since Hadoop has 3x replication by default. Also, Hadoop is often a repository for data that already resides in existing data warehouses or transactional systems, so the data can simply be reloaded. That is not the only case anymore! Social media data, ML models, logs, third-party feeds, open APIs, IoT data, and other sources may not be reloadable, easily available, or even in the enterprise at all. Increasingly, this is critical, single-source data that must be backed up and stored forever.
There are many open-source tools that can handle most of your backup, recovery, replication, and disaster recovery needs, along with a number of enterprise hardware and software options.
Some Options
Replication and mirroring with Apache Falcon.
Dual ingest or replication via HDF.
WANdisco.
In-memory WAN replication via memory grids (Gemfire, GridGain, Redis, etc.).
Apache Storm, Spark, and Flink custom jobs to keep clusters in sync.
Disaster Recovery
HDFS Snapshots and Distributed Copies
HDFS snapshots and distributed copies should be part of your backup policies. Make sure you leave 10-25% of your space free so you can keep several snapshots of key directories.
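A minimal sketch of taking and restoring from a snapshot (the directory and snapshot names here are just examples):

hdfs dfsadmin -allowSnapshot /data/critical
hdfs dfs -createSnapshot /data/critical nightly-2016-08-01
# Recovery is an ordinary copy out of the read-only .snapshot directory:
hdfs dfs -cp /data/critical/.snapshot/nightly-2016-08-01/part-00000 /data/critical/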
Archival
Creating a Hadoop archive is pretty straightforward with the hadoop archive tool.
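For example, packing two log directories into a single HAR file and listing its contents (the paths and names below are placeholders):

hadoop archive -archiveName logs-2016.har -p /data/logs jan feb /data/archives
hdfs dfs -ls har:///data/archives/logs-2016.har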
Distributed Copy (DistCp)
This process is well documented by Hortonworks. DistCp (version 2 in current Hadoop releases) is a simple command-line tool:
hadoop distcp hdfs://nn1:8020/source hdfs://nn2:8020/destination
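In practice you will usually add a few of the standard DistCp flags, for example to copy only changed files, preserve file attributes, and cap the number of map tasks (the values below are just examples):

hadoop distcp -update -p -m 20 hdfs://nn1:8020/source hdfs://nn2:8020/destination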
Mirroring Data Sets
You can mirror datasets with Apache Falcon. Mirroring is a very useful option for enterprises and is well documented; it is also something you may want to have validated by a third party. See the following resource:
Data movement and integration (this overview from Hortonworks is very useful for practical data movement between and within clusters).
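If you drive Falcon from the command line rather than the web UI, the general pattern is to submit cluster entities for both sites, then submit and schedule a feed whose XML definition replicates between them; the entity names and file paths below are placeholders, not a complete recipe:

falcon entity -type cluster -submit -file primaryCluster.xml
falcon entity -type cluster -submit -file backupCluster.xml
falcon entity -type feed -submit -file replicatedFeed.xml
falcon entity -type feed -schedule -name replicatedFeed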
Storage Policies
You must determine a storage policy: how many copies of each dataset you keep, where they live, how the data ages, and what your hot-warm-cold tiers look like. Management, administrators, and users all need to be part of that discussion.
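Note that HDFS itself (Hadoop 2.6 and later) can enforce part of such a policy through tiered storage policies set on directories; for example (the path is a placeholder):

hdfs storagepolicies -listPolicies
hdfs storagepolicies -setStoragePolicy -path /data/archive -policy COLD
hdfs storagepolicies -getStoragePolicy -path /data/archive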
I like the idea of backups, disaster recovery copies, and active-active replication, where all important data lands in multiple places or in a write-ahead log as it arrives. I also like having enough space in in-memory data storage (hot HDFS, Alluxio, Ignite, SnappyData, Redis, Geode, GemFire XD, etc.). As that data ages, it can be written in parallel to multiple permanent HDFS stores and potentially to cold storage such as Amazon Glacier or something else that is off-site but still available.
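For the off-site copy, one option (assuming the S3A connector is configured; the bucket name here is hypothetical) is to push aged data to Amazon S3 with DistCp and let an S3 lifecycle rule transition it to Glacier:

hadoop distcp -update /data/cold s3a://example-backup-bucket/cold/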
Test your backup and restore procedures right after you install your cluster. Backups are a waste of time and space if they don't work and you can't get your data back!