Many people don't consider backups, since Hadoop has 3x replication by default. Hadoop is also often a repository for data that already resides in existing data warehouses or transactional systems, so the data can simply be reloaded. That is no longer the only case! Social media data, ML models, logs, third-party feeds, open APIs, IoT data, and other sources may not be reloadable, easily available, or in the enterprise at all. This can be critical single-source data that must be backed up and stored for the long term.
There are a lot of tools in the open-source space that can handle most of your backup, recovery, replication, and disaster recovery needs, along with enterprise hardware and software options:
Replication and mirroring with Apache Falcon.
Dual ingest or replication via HDF.
In-memory WAN replication via memory grids (Gemfire, GridGain, Redis, etc.).
Apache Storm, Spark, and Flink custom jobs to keep clusters in sync.
HDFS Snapshots and Distributed Copies
Creating a Hadoop archive is pretty straightforward. See here.
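A minimal sketch of the snapshot and archive commands involved. The paths (`/data/important`, `/backups`) and snapshot name are hypothetical examples, not from the article:

```shell
if command -v hdfs >/dev/null 2>&1; then
  # Snapshots must be allowed on a directory before one can be taken.
  hdfs dfsadmin -allowSnapshot /data/important
  # Create a named, point-in-time, read-only snapshot of the directory.
  hdfs dfs -createSnapshot /data/important nightly-$(date +%Y%m%d)
  # Pack the directory into a single .har archive (runs a MapReduce job).
  hadoop archive -archiveName important.har -p /data important /backups
else
  echo "hdfs CLI not found; commands shown for reference only" >&2
fi
```

Snapshots are cheap (they only record deltas) and protect against accidental deletes; archives reduce NameNode pressure from many small files.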
Distributed Copy (DistCP)
This process is well documented by Hortonworks here. DistCp (version 2) is a simple command-line tool.
hadoop distcp hdfs://nn1:8020/source hdfs://nn2:8020/destination
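For recurring backup runs, a few DistCp flags are worth knowing. The cluster addresses follow the nn1/nn2 example above; the flag choices are suggestions, not a prescribed configuration:

```shell
if command -v hadoop >/dev/null 2>&1; then
  # -update: copy only files missing or changed at the destination
  # -p:      preserve permissions, ownership, replication, etc.
  # -m 20:   cap the number of parallel map tasks
  hadoop distcp -update -p -m 20 \
    hdfs://nn1:8020/source hdfs://nn2:8020/destination
else
  echo "hadoop CLI not found; commands shown for reference only" >&2
fi
```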
Mirroring Data Sets
You can mirror datasets with Apache Falcon. Mirroring is a very useful option for enterprises and is well documented. You may want to have your mirroring setup validated by a third party. See the following resources:
Data movement and integration (this overview from Hortonworks is very useful for practical data movement between and within the cluster).
You must determine a storage policy: how many copies of the data you keep, what to do with it, how it ages, and your hot-warm-cold tiers. Management, administrators, and users need to discuss these issues together.
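HDFS heterogeneous-storage policies are one way to encode hot-warm-cold decisions directly in the filesystem. A minimal sketch, with hypothetical paths:

```shell
if command -v hdfs >/dev/null 2>&1; then
  # Show the policies this cluster supports (HOT, WARM, COLD, ALL_SSD, ...).
  hdfs storagepolicies -listPolicies
  # Age a dataset to the COLD tier (archival storage media).
  hdfs storagepolicies -setStoragePolicy -path /data/2016/q1 -policy COLD
  hdfs storagepolicies -getStoragePolicy -path /data/2016/q1
else
  echo "hdfs CLI not found; commands shown for reference only" >&2
fi
```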
I like the idea of making backups, disaster-recovery copies, and active-active replication, where all data of importance lands in multiple places or in a write-ahead log. I also like having enough space in in-memory data stores (hot HDFS, Alluxio, Ignite, SnappyData, Redis, Geode, GemFire XD, etc.). As that data ages, it can be written in parallel to several permanent HDFS stores and potentially to cold storage like Amazon Glacier or something else that is off-site but still available.
Test your backup and restore procedures right after you install your cluster. Backups are a waste of time and space if they don't work and you can't get your data back!
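A restore drill can be as simple as copying one dataset back from the backup cluster and confirming it matches the original. A sketch, with hypothetical hosts and paths:

```shell
if command -v hadoop >/dev/null 2>&1; then
  # Restore one dataset from the backup cluster into a scratch location.
  hadoop distcp hdfs://backup-nn:8020/backups/sample /tmp/restore-test
  # Compare directory/file/byte counts between original and restored copy.
  hdfs dfs -count /data/sample /tmp/restore-test
  # Spot-check a file's block-level checksum on the restored copy.
  hdfs dfs -checksum /tmp/restore-test/part-00000
else
  echo "hadoop CLI not found; commands shown for reference only" >&2
fi
```

Schedule this drill regularly, not just once after installation, since clusters, quotas, and permissions drift over time.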