
Creating Better CP Commands


As an experienced developer or operator, you always remember to back up critical config files before you make any changes...right?


People occasionally change critical config files on servers by hand (for example, /etc/hosts or /etc/hostname).

As an experienced operator, you remember to back up before making any changes, right? What would you do? cp /etc/hosts /etc/hosts.bak.

But is that good enough?

Manual Changes Are Troublesome

Try your best to avoid them. Use configuration management tools if possible — Chef, Puppet, Ansible, whatever. 

Back down to earth, though: in some cases you might still want to change files directly. Yes, I totally understand. A quick fix in a non-critical environment, or testing something in a local dev environment.

People usually back up like this:

# Simple backup
cp /etc/hosts /etc/hosts.bak

# Or a shorter command
cp /etc/hosts{,.bak}

What are the potential problems?

For one, the backup will be overwritten if we re-run the cp command.

Most of the time, we change a file more than once. Yes, each time we run the cp command we get a fresh backup, but we also lose the previous one.

So we only keep the backup of the latest intermediate version. There is no way to get back to the original state!
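A quick demo in a scratch directory (throwaway paths, not real config files) makes the loss concrete:

```shell
# Simulate editing a config file twice, backing up each time
mkdir -p /tmp/cp-demo && cd /tmp/cp-demo
echo "version 1" > hosts
cp hosts hosts.bak        # first backup holds "version 1"
echo "version 2" > hosts
cp hosts hosts.bak        # second backup silently overwrites the first
cat hosts.bak             # prints "version 2"; "version 1" is gone for good
```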

Avoid Overwriting Previous Backups

To avoid overwriting, we can add a timestamp to the cp command:

cp /etc/hosts \
   /etc/hosts_$(date +'%Y-%m-%d_%H%M%S')

What now? The issue is that old backup files pile up on the servers.

Whenever we make a change, we generate another "garbage" file. These files occupy disk space, and they might confuse our colleagues or even ourselves. We need to remember to clean them up.

Reclaim Disks From Old Backups

Move backup files to a centralized directory so that you can easily remove old backups or enforce data retention.

cp /etc/hosts \
   /data/backup/hosts_$(date +'%Y-%m-%d_%H%M%S')
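With everything under one root, retention can be a one-liner, e.g. `find /data/backup -type f -mtime +30 -delete` from cron. The sketch below rehearses the same sweep in a scratch directory (the paths and the 30-day cutoff are illustrative choices, not from the article):

```shell
# Scratch demo of a retention sweep
bak_root_dir=/tmp/bak-demo
mkdir -p "$bak_root_dir"
touch "$bak_root_dir/hosts-20200101.000000"                   # pretend backup
touch -t 202001010000 "$bak_root_dir/hosts-20200101.000000"   # backdate its mtime
find "$bak_root_dir" -type f -mtime +30 -delete               # sweep files older than 30 days
ls "$bak_root_dir"                                            # the backdated file is gone
```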

Looks perfect? Not yet.

The issue here is that it's hard to tell the original location of a backup. We've lost the directory hierarchy.
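To see the ambiguity, imagine two unrelated files that happen to share a basename (scratch paths, for illustration only). In a flat backup directory their names collapse to the same timestamped file:

```shell
mkdir -p /tmp/flat-demo/etc /tmp/flat-demo/app /tmp/flat-demo/backup
echo "system config" > /tmp/flat-demo/etc/hosts
echo "app config"    > /tmp/flat-demo/app/hosts
stamp=$(date +'%Y-%m-%d_%H%M%S')
# Flat backups strip the directory part; both originals map to the same name
cp /tmp/flat-demo/etc/hosts "/tmp/flat-demo/backup/hosts_$stamp"
cp /tmp/flat-demo/app/hosts "/tmp/flat-demo/backup/hosts_$stamp"   # overwrites the first!
ls /tmp/flat-demo/backup                                           # only one file survives
```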

Keep a Directory Hierarchy of the Original Files

Okay, our copy command looks better.

cp /etc/hosts \
   /data/backup/etc/hosts_$(date +'%Y-%m-%d_%H%M%S')

One more improvement.

When we back up different files, we have to type too much.

Use Bash Alias for Fast Input

We can define a bash helper (a shell function loaded from /etc/profile.d, used like an alias) and call it like this:

# sample: back up a file
mybackup /etc/hosts

# sample: back up a directory
mybackup /etc/apache2

Here is how.

1. Create /etc/profile.d/mybackup.sh

Create this file with the following content:

#!/bin/bash
mybackup() {
    src_file=${1?"Usage: mybackup <file-or-directory>"}
    # Get the backup root from an environment variable, with a default
    if [ -n "$BAK_ROOT_DIR" ]; then
        bak_root_dir="$BAK_ROOT_DIR"
    else
        bak_root_dir="/data/backup"
    fi
    parent_dir=$(dirname "$src_file")
    short_fname=$(basename "$src_file")
    date="$(date +'%Y%m%d.%H%M%S')"
    bak_dir="${bak_root_dir}${parent_dir}"
    if [ -f "$src_file" ]; then
        mkdir -p "$bak_dir"
        echo "cp $src_file ${bak_dir}/${short_fname}-${date}"
        cp "$src_file" "${bak_dir}/${short_fname}-${date}"
    elif [ -d "$src_file" ]; then
        mkdir -p "${bak_dir}-${short_fname}-${date}"
        echo "cp -r $src_file ${bak_dir}-${short_fname}-${date}"
        cp -r "$src_file" "${bak_dir}-${short_fname}-${date}"
    else
        echo "Error: $src_file doesn't exist"
        # return, not exit: this function is sourced from /etc/profile.d,
        # so exit would kill the interactive shell
        return 1
    fi
}

2. Install Bash Alias

# install the helper
chmod 755 /etc/profile.d/mybackup.sh
source /etc/profile

# create the destination directory
mkdir -p /data/backup

3. Try It and Have Fun!

source /etc/profile

# sample: back up a file
mybackup /etc/hosts

# sample: back up a directory
mybackup /etc/apache2

# check the backup files
# (install the tree package, if missing)
tree /data/backup

Beyond CP Commands

To avoid messing up config files, here are some alternatives to plain cp:

  • Replace manual changes with configuration management tools.
  • Upload backups to a remote server. If your files are not mission critical, you can try transfer.sh: with one curl command, you get an HTTP download link.
  • Use source control. For example, create a local Git repo, create hard links of critical files and directories, then commit. etckeeper is a nice wrapper around this mechanism.
  • Use inotify to subscribe to filesystem events: changes, creations, deletions, etc.
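As a sketch of the source-control option (a throwaway repo with made-up paths; etckeeper automates essentially this workflow for /etc):

```shell
# Track a config file in a local Git repo and roll back a manual edit
mkdir -p /tmp/git-demo && cd /tmp/git-demo
git init -q .
echo "127.0.0.1 localhost" > hosts
git add hosts
git -c user.name=ops -c user.email=ops@example.com commit -qm "baseline before manual edit"
echo "127.0.0.1 localhost myhost" > hosts   # the manual change
git diff --stat                             # shows exactly what drifted from the baseline
git checkout -- hosts                       # roll the change back entirely
```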



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
