Using Amazon EFS for Container Workloads
Check out how you can use Amazon EFS with containers' built-in volumes feature to propagate changes across multiple containers.
When using containers for different application workloads, a common requirement is to store data persistently. Although it looks simple from the outside, where we could save the data directly on the underlying container host, this is not practical when using container orchestration tools like Kubernetes, Amazon ECS, or Docker Swarm, where the containers could be placed on different nodes. Therefore the containers need to run without knowing about the underlying host machine.
The built-in solution that comes with containers is a mechanism called volumes, which provides the following advantages:
Volumes allow sharing things across containers.
Volume drivers enable storing data not only within the container cluster but also on remote hosts or cloud providers.
It is also possible to encrypt these volumes.
Containers can pre-populate new volumes with data.
Since a volume is external, its contents exist outside of the lifecycle of the containers, which is most important when modifying or replacing containers.
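As a minimal sketch of that last point, the following Docker commands (the volume name `appdata` is arbitrary) create a named volume whose contents outlive the container that wrote them:

```shell
# Create a named volume managed by Docker
docker volume create appdata

# Write a file into the volume from a short-lived container
docker run --rm -v appdata:/data alpine sh -c 'echo hello > /data/greeting.txt'

# That container is gone, but a new container sees the same data
docker run --rm -v appdata:/data alpine cat /data/greeting.txt
```

The second `docker run` prints the file written by the first, even though the writing container no longer exists.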
Container Volume Use Cases
Let's look at a few common use cases where container volumes can simplify the architecture.
Database Storage
If you are building an application with containers and require a database container, volumes are useful to mount as the storage path for database files. This creates the possibility of upgrading the database container to different versions without impacting the underlying storage of the data. It also allows mounting a different container to the same volume, which could take care of backing up the filesystem at the block level.
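As an illustrative sketch, the commands below run PostgreSQL with its data directory on a named volume (`pgdata` is an arbitrary name), then swap the container for a newer patch release while keeping the data. Note that PostgreSQL major-version upgrades would additionally require a data migration step, so this sketch sticks to patch releases:

```shell
# Keep the database files on a named volume
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:15.4

# Upgrade the container to a newer patch release; the data volume is untouched
docker stop db && docker rm db
docker run -d --name db -e POSTGRES_PASSWORD=secret \
  -v pgdata:/var/lib/postgresql/data postgres:15.5
```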
Shared File Storage
This is one of the most direct use cases for volumes, where individual containers can directly upload and save files. When building scalable systems, uploading could be handled by a fleet of containers instead of one, which still requires the files to end up in a central place. In these situations, volumes become quite useful for implementing the shared storage.
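A sketch of the fleet scenario, where every container mounts the same volume at its upload path (`uploads` and `myapp:latest` are placeholder names for illustration):

```shell
# One shared volume for the whole fleet
docker volume create uploads

# Each uploader container mounts the same volume at the same path,
# so files uploaded through any of them land in one central place
docker run -d --name uploader-1 -v uploads:/app/uploads myapp:latest
docker run -d --name uploader-2 -v uploads:/app/uploads myapp:latest
```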
Application Artifacts
Although this is a more advanced use case, it is possible to keep an application deployment in a volume, or to keep common artifacts such as binaries in a volume that can be mounted by different containers for faster initialization and recovery.
File Processing Workflows
When file-related operations are handled by multiple containers for scalability, a volume can be used to modify files and make them available to other containers (in the same place, or moved to another directory) to coordinate content between containers. This keeps things simple (and improves performance when dealing with large files), as long as each container in the workflow does a well-defined job on each file and the files never need to move between containers.
Volumes and Amazon EFS
We've been talking about volumes so far, so let's look at what Amazon EFS has to do with container volumes.
When we are looking at volumes, persistence, scalability, and shareability of the underlying storage infrastructure are important properties. Amazon EFS is just the right technology to use as that underlying storage infrastructure. Some of the useful properties of Amazon EFS are listed below.
Regional availability and durability.
A shared file system by nature.
An interoperable, easy-to-mount network file system (NFS).
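For example, once a mount target is reachable, an EFS file system mounts like any other NFS share. The file-system ID and region below are placeholders; the mount options follow AWS's recommendations for EFS:

```shell
# Create a mount point and mount the EFS file system over NFSv4.1
# (fs-12345678 and us-east-1 are placeholders for your own values)
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
  -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
  fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

After this, any container that bind-mounts a path under /mnt/efs is effectively sharing the EFS file system.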
So how complex is it to use Amazon EFS for container volumes?
This is pretty straightforward: AWS provides developer guidelines on using Amazon EFS for volumes, which makes things much simpler.
This involves the following steps as described in the tutorial: Using Amazon EFS File Systems with Amazon ECS.
Step 1: Gather Cluster Information.
Step 2: Create a Security Group for an Amazon EFS File System.
Step 3: Create an Amazon EFS File System.
Step 4: Configure Container Instances.
Step 5: Create a Task Definition to Use the Amazon EFS File System.
Step 6: Add Content to the Amazon EFS File System.
Step 7: Run a Task and View the Results.
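To give a feel for Step 5, here is a hedged sketch of a task definition fragment. In the approach from the tutorial above, EFS is mounted on the container instance and exposed to the task through a host `sourcePath`; newer ECS platform versions can instead reference EFS directly via an `efsVolumeConfiguration` block. All names and paths below are illustrative:

```json
{
  "family": "web-server",
  "volumes": [
    {
      "name": "efs-html",
      "host": { "sourcePath": "/efs/html" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx",
      "memory": 128,
      "mountPoints": [
        {
          "sourceVolume": "efs-html",
          "containerPath": "/usr/share/nginx/html",
          "readOnly": true
        }
      ]
    }
  ]
}
```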
Take note of Step 5, which uses a built-in feature of the Amazon ECS managed container service to connect Amazon EFS directly as a file system. With a different container orchestrator such as Docker Swarm, you would need to mount Amazon EFS using the command line during the container host boot-up process.
Opinions expressed by DZone contributors are their own.