My First Experience With Atomic Host
Learn how to get started with Atomic Host, a Linux distribution where updates are managed atomically.
A Bit of Introduction
Part of my job as a senior technical architect is to evaluate technologies that might be useful for implementing the technical architectures I'm responsible for. As part of an exercise to evaluate the design and implementation of an immutable infrastructure, I started looking closely at Project Atomic as its basic building block.
If you struggle with managing system updates and governing a system's lifecycle, there are plenty of tools that can help, one of my favorites being HashiCorp's Terraform.
However, all these tools face an important challenge: reconciling your own updates to the system with the fixes and patches coming from the official channels, as well as with local configuration changes.
On top of this, updates are not transactional, so a failed update can leave your system in an inconsistent state halfway through.
All these concerns are addressed by an Atomic Host, which is essentially a traditional Linux distribution (like Red Hat Enterprise Linux or CentOS) where updates are managed atomically (fully applied or not at all).
Atomic Host in a Nutshell
At the base of this approach there are a few concepts, briefly highlighted below:
- The entire host status is treated like a commit in a version control system, and the administrator can easily move back and forth through these commits to safely roll the system back or forward to a specific point in time.
- Any rollback/roll-forward activity or update to the system is performed transactionally, guaranteeing that the applied change is either fully implemented or not at all.
- The system is designed to be immutable. For example, `/usr` is read-only and can only be changed by applying a layered package (which triggers a new commit for the entire host, typically applied after a reboot).
- The system is also designed with isolation in mind through the use of containers. This means that your specific workload should be modeled as a Docker image and run as a container rather than being natively installed and run on the filesystem.
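The commit-style workflow above maps onto a handful of real `rpm-ostree` subcommands. The sketch below only runs them when the tool is present, since deployments exist only on an actual Atomic Host:

```shell
# Sketch of the commit-style workflow, assuming an Atomic Host with the
# real rpm-ostree tool installed; elsewhere it just reports the fact.
if command -v rpm-ostree >/dev/null 2>&1; then
  rpm-ostree status     # list deployments (commits); the booted one is marked
  rpm-ostree upgrade    # stage the next commit atomically; a reboot applies it
  rpm-ostree rollback   # point the bootloader back at the previous deployment
else
  echo "rpm-ostree not found: not an Atomic Host"
fi
```

On a real host, both `upgrade` and `rollback` leave the running system untouched until the next boot, which is exactly the transactional behavior described above.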
Regarding the concept of a read-only file system: only the official Atomic toolchain can modify the status of the filesystem, guaranteeing the atomicity of the updates and the ability to instantly roll back any of them.
And here comes the issue: since the file system is mostly read-only, the typical installer (`dnf`) cannot work, as it's not allowed to modify the file system.
Hey, wait! There is the `atomic host install` command line to locally install any package, using `rpm-ostree` as the underlying technology for layered packages!
Unfortunately, not every RPM package is automatically compatible with the requirements of an Atomic Host.
In the scenario I'm considering, I have to install an RPM package that expects to place binaries and configuration files (a tricky aspect we will discuss later) in the same directory under the typical `/opt` path.
This path is not allowed by Atomic Host, since it's not controlled by OSTree and atomic updates can't be guaranteed there. In fact, Atomic Host only provides read-write mount points for `/etc` and `/var`, which is fine for configuration files but does not allow the installation of binaries and libraries in either of them.
So it seems there's no way to install my RPM package on Atomic Host. Even relocating the package to an allowed path, like `/usr/local/opt`, doesn't help, since that path becomes read-only at the next reboot (to implement the atomic changes) and will not allow changes to files at runtime, which my product requires in order to work.
To make it even worse, the RPM package I'm trying to install is a binary-only one for which I have absolutely no access to the source code, so I cannot rebuild it with a different deployment layout.
The Clean Solution
Of course, the clean solution to this issue would be to rebuild the binary package according to the Atomic Host strategy: splitting the package so that binaries go in `/usr` and configuration files in `/etc` (with `/var/run` for runtime data).
However, if you don't have control over the package layout and you desperately need a temporary workaround (let me stress the word temporary here...), just keep reading.
The Tricky Idea
The workaround I came up with to temporarily solve this issue is based on the following strategy:
- Rebuild the RPM package, for example with a tool like `rpmrebuild` (http://rpmrebuild.sourceforge.net), to move the package content to a location acceptable to `rpm-ostree`, the tool in Atomic Host that manages layered packages.
- Leverage the `systemd` feature commonly known as `tmpfiles.d`, which is based on a simple declarative language instructing `systemd` (at specific stages of the boot, or at scheduled times) to manipulate the filesystem. In this particular case, we will instruct `systemd` to copy the binary files into the `/opt/xxx` subdirectory where the product we want to install expects them, while keeping the original files from the RPM package. Bottom line: this is a copy, not a link or a move (more on this later).
Regarding the second point, we just need to inject a file during the `rpmrebuild` phase at the path `/usr/lib/tmpfiles.d/xxxxxx.conf`, with a syntax similar to:
C /opt/<intended destination> - - - - /usr/lib/<product directory>
It's absolutely critical to include the four `-` placeholders in the middle of the line or it won't work (for more details on the syntax, see the `tmpfiles.d(5)` man page).
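Because a malformed line simply won't produce the copy at boot, it can be worth sanity-checking the descriptor before repackaging. This minimal sketch (using a hypothetical `acme-product` path in place of the real ones) verifies the seven-field shape of a `C` entry:

```shell
# Sanity-check a tmpfiles.d "C" (copy) line before baking it into the RPM.
# "acme-product" is a hypothetical placeholder for the real product paths.
conf=$(mktemp)
printf 'C /opt/acme-product - - - - /usr/lib/acme-product\n' > "$conf"

# A copy entry has 7 whitespace-separated fields: the C type, the
# destination path, the four '-' placeholders (mode, uid, gid, age),
# and the source path as the final argument.
fields=$(awk '{ print NF }' "$conf")
echo "field count: $fields"
```

On a host with systemd available, `systemd-tmpfiles --create <file>` can also be pointed at the descriptor to exercise it immediately instead of waiting for a boot.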
You can then run `rpmrebuild` with a command line similar to:
rpmrebuild -e -d . -p <original .rpm file to edit>
The `rpmrebuild` execution is made up of three phases:
- unpacking of the files into a temporary directory (usually something like `~/.tmp/rpmrebuild.xxxxx`),
- editing of the SPECS content,
- repackaging of the files into an RPM archive.
During the second stage, while editing the SPECS content, you should have a second terminal available in order to move into the temporary directory (i.e. `~/.tmp/rpmrebuild.xxxxx/work/root`) and move/inject files as you wish, consistently with the specifications in the SPECS file.
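The injection step above can be simulated outside of `rpmrebuild`. This sketch assumes a hypothetical `acme-product` payload and uses a scratch directory standing in for the real `~/.tmp/rpmrebuild.xxxxx/work/root` tree:

```shell
# Simulated "second terminal" step. On a real run, WORK would be the
# rpmrebuild temporary tree (something like ~/.tmp/rpmrebuild.xxxxx/work/root);
# "acme-product" is a hypothetical product name.
WORK=$(mktemp -d)/work/root
mkdir -p "$WORK/usr/lib/tmpfiles.d" "$WORK/usr/lib/acme-product"

# Inject the tmpfiles.d descriptor next to the relocated payload; remember
# to also list it under %files while editing the SPECS content, or
# rpmrebuild will not pick it up during repackaging.
printf 'C /opt/acme-product - - - - /usr/lib/acme-product\n' \
  > "$WORK/usr/lib/tmpfiles.d/acme-product.conf"
```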
Once you finish the content-editing phase by saving the file, `rpmrebuild` will analyze it and pick up the corresponding files, which must already be in the expected position within the temporary directory.
At the end of this exercise, you will end up with a modified RPM archive where the original files have been moved to an appropriate, Project Atomic-friendly position, plus a `tmpfiles.d` descriptor that will copy the files into the expected path.
At this point, you can simply use a command line similar to:
atomic host install <new package file>
to install your product while respecting both the Project Atomic specifications and the product requirements.
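After the install and a reboot, a quick check confirms that both the layered package and the `tmpfiles.d` copy are in place. The commands below are guarded so the sketch (with a hypothetical `acme-product` path) is a no-op outside an Atomic Host:

```shell
# Post-install sanity check, assuming the host was rebooted into the new
# deployment; "acme-product" is a hypothetical package/path name.
if command -v rpm-ostree >/dev/null 2>&1; then
  rpm-ostree status           # the new deployment lists the layered package
  systemd-tmpfiles --create   # apply the C (copy) lines immediately
  ls /opt/acme-product        # the payload should be where the product expects it
else
  echo "not an Atomic Host: nothing to verify here"
fi
```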
The Project Atomic philosophy is a bit different from that of a typical Linux distro and requires some adaptation along the way, including in the packaging layout.
However, the great advantages coming from the transactional upgrade process and container-based workload management are worth the hassle of modifying the few packages that are either not relocatable or incompatible with the Atomic Host requirements.
Considering that the majority of software components should run as containers, the expectation is that only a few specific packages will need to be natively installed on an Atomic Host.
Opinions expressed by DZone contributors are their own.