Containers allow applications to run more quickly and consistently across many different development environments, and a single container encapsulates everything needed to run an application. Container technologies have exploded in popularity in recent years, leading to diverse use cases as well as new and unexpected challenges. This Zone offers insights into how teams can solve these challenges through its coverage of container performance, Kubernetes, testing, container orchestration, microservices usage to build and deploy containers, and more.
Efficient data synchronization is crucial in high-performance computing and multi-threaded applications. This article explores an optimization technique for scenarios where frequent writes to a container occur in a multi-threaded environment. We’ll examine the challenges of traditional synchronization methods and present an advanced approach that significantly improves performance for write-heavy environments. The method in question is beneficial because it is easy to implement and versatile, unlike pre-optimized containers that may be platform-specific, require special data types, or bring additional library dependencies.

Traditional Approaches and Their Limitations

Imagine a scenario where we have a cache of user transactions:

```cpp
struct TransactionData {
    long transactionId;
    long userId;
    unsigned long date;
    double amount;
    int type;
    std::string description;
};

std::map<long, std::vector<TransactionData>> transactionCache; // key - userId
```

In a multi-threaded environment, we need to synchronize access to this cache. The traditional approach might involve using a mutex:

```cpp
class SimpleSynchronizedCache {
public:
    void write(const TransactionData& transaction) {
        std::lock_guard<std::mutex> lock(_cacheMutex);
        _transactionCache[transaction.userId].push_back(transaction);
    }

    std::vector<TransactionData> read(long userId) {
        std::lock_guard<std::mutex> lock(_cacheMutex);
        try {
            return _transactionCache.at(userId);
        }
        catch (const std::out_of_range& ex) {
            return std::vector<TransactionData>();
        }
    }

    std::vector<TransactionData> pop(long userId) {
        std::lock_guard<std::mutex> lock(_cacheMutex);
        auto userNode = _transactionCache.extract(userId);
        return userNode.empty()
            ? std::vector<TransactionData>()
            : std::move(userNode.mapped());
    }

private:
    std::map<long, std::vector<TransactionData>> _transactionCache; // key - userId
    std::mutex _cacheMutex;
};
```

As system load increases, especially with frequent reads, we might consider using a shared_mutex:

```cpp
class CacheWithSharedMutex {
public:
    void write(const TransactionData& transaction) {
        std::lock_guard<std::shared_mutex> lock(_cacheMutex); // exclusive lock
        _transactionCache[transaction.userId].push_back(transaction);
    }

    std::vector<TransactionData> read(long userId) {
        std::shared_lock<std::shared_mutex> lock(_cacheMutex); // shared lock
        try {
            return _transactionCache.at(userId);
        }
        catch (const std::out_of_range& ex) {
            return std::vector<TransactionData>();
        }
    }

    std::vector<TransactionData> pop(long userId) {
        std::lock_guard<std::shared_mutex> lock(_cacheMutex); // exclusive lock
        auto userNode = _transactionCache.extract(userId);
        return userNode.empty()
            ? std::vector<TransactionData>()
            : std::move(userNode.mapped());
    }

private:
    std::map<long, std::vector<TransactionData>> _transactionCache; // key - userId
    std::shared_mutex _cacheMutex;
};
```

However, when the load is primarily generated by writes rather than reads, the advantage of a shared_mutex over a regular mutex becomes minimal. The lock will often be acquired exclusively, negating the benefits of shared access. Moreover, let’s imagine that we don’t use read() at all; instead, we frequently write incoming transactions and periodically flush the accumulated transaction vectors using pop(). As pop() involves reading with extraction, both write() and pop() operations would modify the cache, necessitating exclusive access rather than shared access.
Thus, the shared_lock becomes entirely useless in terms of optimization over a regular mutex, and may even perform worse: its more intricate implementation is now used for the same exclusive locks that a faster regular mutex provides. Clearly, we need something else.

Optimizing Synchronization With the Sharding Approach

We are given the following conditions:

- A multi-threaded environment with a shared container
- Frequent modification of the container from different threads
- Objects in the container can be divided for parallel processing by some member variable

Regarding the third point: in our cache, transactions from different users can be processed independently. While creating a mutex for each user might seem ideal, it would lead to excessive overhead in maintaining so many locks. Instead, we can divide our cache into a fixed number of chunks based on the user ID, in a process known as sharding. This approach reduces the overhead and yet allows parallel processing, thereby optimizing performance in a multi-threaded environment.

```cpp
class ShardedCache {
public:
    ShardedCache(size_t shardSize) :
        _shardSize(shardSize),
        _transactionCaches(shardSize) {
        std::generate(
            _transactionCaches.begin(),
            _transactionCaches.end(),
            []() { return std::make_unique<SimpleSynchronizedCache>(); });
    }

    void write(const TransactionData& transaction) {
        _transactionCaches[transaction.userId % _shardSize]->write(transaction);
    }

    std::vector<TransactionData> read(long userId) {
        return _transactionCaches[userId % _shardSize]->read(userId);
    }

    std::vector<TransactionData> pop(long userId) {
        return _transactionCaches[userId % _shardSize]->pop(userId);
    }

private:
    const size_t _shardSize;
    std::vector<std::unique_ptr<SimpleSynchronizedCache>> _transactionCaches;
};
```

This approach allows for finer-grained locking without the overhead of maintaining an excessive number of mutexes. The division can be adjusted based on system architecture specifics, such as the size of the thread pool that works with the cache, or the hardware concurrency. Let’s run tests that check how sharding accelerates cache performance by trying different partition sizes.

Performance Comparison

In these tests, we aim to do more than just measure the maximum number of operations the processor can handle. We want to observe how the cache behaves under conditions that closely resemble real-world scenarios, where transactions occur randomly. Our optimization goal is to minimize the processing time for these transactions, which enhances system responsiveness in practical applications. The implementation and tests are available in the GitHub repository.

```cpp
#include <thread>
#include <functional>
#include <condition_variable>
#include <random>
#include <chrono>
#include <iostream>
#include <fstream>
#include <array>

#include "SynchronizedContainers.h"

const auto hardware_concurrency = (size_t)std::thread::hardware_concurrency();

class TaskPool {
public:
    template <typename Callable>
    TaskPool(size_t poolSize, Callable task) {
        for (auto i = 0; i < poolSize; ++i) {
            _workers.emplace_back(task);
        }
    }

    ~TaskPool() {
        for (auto& worker : _workers) {
            if (worker.joinable())
                worker.join();
        }
    }

private:
    std::vector<std::thread> _workers;
};

template <typename CacheImpl>
class Test {
public:
    template <typename ... CacheArgs>
    Test(const int testrunsNum, const size_t writeWorkersNum, const size_t popWorkersNum,
         const std::string& resultsFile, CacheArgs&& ... cacheArgs) :
        _cache(std::forward<CacheArgs>(cacheArgs)...),
        _writeWorkersNum(writeWorkersNum), _popWorkersNum(popWorkersNum),
        _resultsFile(resultsFile), _testrunsNum(testrunsNum), _testStarted(false) {
        std::random_device rd;
        _randomGenerator = std::mt19937(rd());
    }

    void run() {
        for (auto i = 0; i < _testrunsNum; ++i) {
            runSingleTest();
            logResults();
        }
    }

private:
    void runSingleTest() {
        {
            std::lock_guard<std::mutex> lock(_testStartSync);
            _testStarted = false;
        }

        // these pools won't just fire as many operations as they can,
        // but will emulate requests arriving to the cache in real time in a multi-threaded environment
        auto writeTestPool = TaskPool(_writeWorkersNum, std::bind(&Test::writeTransactions, this));
        auto popTestPool = TaskPool(_popWorkersNum, std::bind(&Test::popTransactions, this));

        _writeTime = 0;
        _writeOpNum = 0;
        _popTime = 0;
        _popOpNum = 0;

        {
            std::lock_guard<std::mutex> lock(_testStartSync);
            _testStarted = true;
            _testStartCv.notify_all();
        }
    }

    void logResults() {
        std::cout << "===============================================" << std::endl;
        std::cout << "Writing operations number per sec:\t" << _writeOpNum / 60. << std::endl;
        std::cout << "Writing operations avg time (mcsec):\t" << (double)_writeTime / _writeOpNum << std::endl;
        std::cout << "Pop operations number per sec: \t" << _popOpNum / 60. << std::endl;
        std::cout << "Pop operations avg time (mcsec): \t" << (double)_popTime / _popOpNum << std::endl;

        std::ofstream resultsFilestream;
        resultsFilestream.open(_resultsFile, std::ios_base::app);
        resultsFilestream << _writeOpNum / 60. << "," << (double)_writeTime / _writeOpNum << ","
            << _popOpNum / 60. << "," << (double)_popTime / _popOpNum << std::endl;
        std::cout << "Results saved to file " << _resultsFile << std::endl;
    }

    void writeTransactions() {
        {
            std::unique_lock<std::mutex> lock(_testStartSync);
            _testStartCv.wait(lock, [this] { return _testStarted; });
        }

        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

        // hypothetical system has around 100k currently active users
        std::uniform_int_distribution<> userDistribution(1, 100000);

        // delay of up to 5 ms so that threads do not all start simultaneously
        std::uniform_int_distribution<> waitTimeDistribution(0, 5000);
        std::this_thread::sleep_for(std::chrono::microseconds(waitTimeDistribution(_randomGenerator)));

        for (
            auto iterationStart = std::chrono::steady_clock::now();
            iterationStart - start < std::chrono::minutes(1);
            iterationStart = std::chrono::steady_clock::now()) {
            auto generatedUser = userDistribution(_randomGenerator);
            TransactionData dummyTransaction = {
                5477311, generatedUser, 1824507435, 8055.05, 0,
                "regular transaction by " + std::to_string(generatedUser) };

            std::chrono::steady_clock::time_point operationStart = std::chrono::steady_clock::now();
            _cache.write(dummyTransaction);
            std::chrono::steady_clock::time_point operationEnd = std::chrono::steady_clock::now();

            ++_writeOpNum;
            _writeTime += std::chrono::duration_cast<std::chrono::microseconds>(operationEnd - operationStart).count();

            // make the span between iterations at least 5 ms
            std::this_thread::sleep_for(iterationStart + std::chrono::milliseconds(5) - std::chrono::steady_clock::now());
        }
    }

    void popTransactions() {
        {
            std::unique_lock<std::mutex> lock(_testStartSync);
            _testStartCv.wait(lock, [this] { return _testStarted; });
        }

        std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();

        // hypothetical system has around 100k currently active users
        std::uniform_int_distribution<> userDistribution(1, 100000);

        // delay of up to 100 ms so that threads do not all start simultaneously
        std::uniform_int_distribution<> waitTimeDistribution(0, 100000);
        std::this_thread::sleep_for(std::chrono::microseconds(waitTimeDistribution(_randomGenerator)));

        for (
            auto iterationStart = std::chrono::steady_clock::now();
            iterationStart - start < std::chrono::minutes(1);
            iterationStart = std::chrono::steady_clock::now()) {
            auto requestedUser = userDistribution(_randomGenerator);

            std::chrono::steady_clock::time_point operationStart = std::chrono::steady_clock::now();
            auto userTransactions = _cache.pop(requestedUser);
            std::chrono::steady_clock::time_point operationEnd = std::chrono::steady_clock::now();

            ++_popOpNum;
            _popTime += std::chrono::duration_cast<std::chrono::microseconds>(operationEnd - operationStart).count();

            // make the span between iterations at least 100 ms
            std::this_thread::sleep_for(iterationStart + std::chrono::milliseconds(100) - std::chrono::steady_clock::now());
        }
    }

    CacheImpl _cache;

    std::atomic<long> _writeTime;
    std::atomic<long> _writeOpNum;
    std::atomic<long> _popTime;
    std::atomic<long> _popOpNum;

    size_t _writeWorkersNum;
    size_t _popWorkersNum;
    std::string _resultsFile;
    int _testrunsNum;
    bool _testStarted;
    std::mutex _testStartSync;
    std::condition_variable _testStartCv;
    std::mt19937 _randomGenerator;
};

void testCaches(const size_t testedShardSize, const size_t workersNum) {
    if (testedShardSize == 1) {
        auto simpleImplTest = Test<SimpleSynchronizedCache>(
            10, workersNum, workersNum,
            "simple_cache_tests(" + std::to_string(workersNum) + "_workers).csv");
        simpleImplTest.run();
    }
    else {
        auto shardedImplTest = Test<ShardedCache>(
            10, workersNum, workersNum,
            "sharded_cache_" + std::to_string(testedShardSize) + "_tests(" + std::to_string(workersNum) + "_workers).csv",
            testedShardSize);
        shardedImplTest.run();
    }
}

int main() {
    std::cout << "Hardware concurrency: " << hardware_concurrency << std::endl;

    std::array<size_t, 7> testPlan = { 1, 4, 8, 32, 128, 4096, 100000 };
    for (auto i = 0; i < testPlan.size(); ++i) {
        testCaches(testPlan[i], 4 * hardware_concurrency);
    }

    // additional tests with diminished load to show the limits of the optimization's advantage
    std::array<size_t, 4> additionalTestPlan = { 1, 8, 128, 100000 };
    for (auto i = 0; i < additionalTestPlan.size(); ++i) {
        testCaches(additionalTestPlan[i], hardware_concurrency);
    }
}
```

We observe that with 2,000 writes and 300 pops per second (with a concurrency of 8), which are not very high numbers for a high-load system, optimization using sharding significantly accelerates cache performance, by orders of magnitude. However, evaluating the significance of this difference is left to the reader, as, in both scenarios, operations took less than a millisecond. It’s important to note that the tests used a relatively lightweight data structure for transactions, and synchronization was applied only to the container itself. In real-world scenarios, data is often more complex and larger, and synchronized processing may require additional computations and access to other data, which can significantly increase the time of the operation itself. Therefore, we aim to spend as little time on synchronization as possible. The tests do not show a significant difference in processing time as the shard count increases further. The greater the number of shards, the bigger the maintenance overhead, so how low should we go? I suspect that the minimal effective value is tied to the system's concurrency, so for modern server machines with much greater concurrency than my home PC, a shard size that is too small won’t yield the most optimal results.
I would love to see results from other machines with different concurrency that may confirm or disprove this hypothesis, but for now I assume it is optimal to use a shard size that is several times larger than the concurrency. You can also note that the largest size tested, 100,000, effectively matches the approach mentioned earlier of assigning a mutex to each user (in the tests, user IDs were generated within the range of 100,000). As can be seen, this did not provide any advantage in processing speed, and the approach is obviously more demanding in terms of memory.

Limitations and Considerations

So, we determined an optimal shard size, but this is not the only thing that should be considered for the best results. It’s important to remember that such a difference compared to a simple implementation exists only because we are attempting to perform a sufficiently large number of transactions at the same time, causing a “queue” to build up. If the system’s concurrency and the speed of each operation (within the mutex lock) allow operations to be processed without bottlenecks, the effectiveness of the sharding optimization decreases. To demonstrate this, let’s look at the test results with reduced load, at 500 writes and 75 pops per second (with a concurrency of 8): the difference is still present, but it is no longer as significant. This is yet another reminder that premature optimizations can complicate code without significantly impacting results. It’s crucial to understand the application requirements and expected load.

Also, it’s important to note that the effectiveness of sharding heavily depends on the distribution of values of the chosen key (in this case, user ID). If the distribution becomes heavily skewed, we may revert to performance more similar to that of a single mutex: imagine all of the transactions coming from a single user.

Conclusion

In scenarios with frequent writes to a container in a multi-threaded environment, traditional synchronization methods can become a bottleneck. By leveraging the fact that the data can be processed in parallel and distributed predictably by a specific key, and by implementing a sharded synchronization approach, we can significantly improve performance without sacrificing thread safety. This technique can prove effective for systems dealing with user-specific data, such as transaction processing systems, user session caches, or any scenario where data can be logically partitioned based on a key attribute. As with any optimization, it’s crucial to profile your specific use case and adjust the implementation accordingly. The approach presented here provides a starting point for tackling synchronization challenges in write-heavy, multi-threaded applications. Remember, the goal of optimization is not just to make things faster, but to make them more efficient and scalable. By thinking critically about your data access patterns and leveraging the inherent structure of your data, you can often find innovative solutions to performance bottlenecks.
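As a small addendum to the key-distribution caveat raised in the Limitations section (this helper is an illustration added here, not part of the original implementation): if the raw user IDs cluster badly, hashing the key before taking the modulo spreads entries more evenly across shards.

```cpp
#include <cstddef>
#include <functional>

// Illustrative helper: hash the key before sharding so that clustered ranges of
// user IDs (e.g., IDs handed out sequentially to one tenant) still spread across shards.
inline size_t shardIndex(long userId, size_t shardSize) {
    return std::hash<long>{}(userId) % shardSize;
}

// Hypothetical usage inside ShardedCache::write():
//   _transactionCaches[shardIndex(transaction.userId, _shardSize)]->write(transaction);
```

Note that hashing only helps with clustered ranges of keys; if virtually all traffic comes from a single user, any sharding scheme degenerates to a single lock.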
Before containerization made it so easy to prepare images for virtualization, it was quite an art to prepare custom ISO images to boot from CD. Later, these images were used to boot virtual machines. In other words, ISO images were precursors of container images.

It so happened that I had a couple of unfortunate run-ins with the Windows Docker client. Even when not running any containers, the Windows memory manager would hand it as much memory as possible, slowing down whatever I was busy with. I hence banned the Windows Docker client from my machine. Please do not get me wrong: I do not hate Docker, just its Windows client. This step forced me to move back in time. I started running virtual machines directly on Hyper-V, the Windows hypervisor. Forming Kubernetes clusters on Windows then became a happy hobby for me, as can be seen from my past posts published here at DZone.

Shoemaker, Why Do You Go Barefoot?

After following the same mouse clicks to create virtual machines in Hyper-V Manager for many an hour, I realized that I am like a shoemaker who goes barefoot: I build DevOps pipelines for an hourly rate, yet waste time on mouse clicks? Challenge accepted. I duckduckgo'd and read that it is possible to create virtual machines using PowerShell. It did not take a week to have a script that creates a new virtual machine, as can be seen here. A sister script can start a virtual machine that is turned off.

An Old Art Rediscovered

This was great, but I realized I was still doing mouse clicks when installing Ubuntu. Automating this looked like a tougher nut to crack. One has to unpack an ISO image, manipulate it one way or another, and then package it again, taking care to leave intact whatever instructs a computer how to boot. Fortunately, I found an excellent guide on how to do just this. The process consists of three steps:

1. Unpack the Ubuntu ISO boot image.
2. Manipulate the content:
   - Move the master boot record (MBR) out.
   - Specify what users normally do on the GUI and customize what is installed and run during installation. This is done using a subset of Ubuntu's Cloud-init language. See here for the instructions I created.
   - Instruct the bootloader (Grub, in this case) where to find the custom boot instructions and not to wait for user input. Here is the Grub config I settled on.
3. Package it all using an application called Xorriso.

For the wizards of this ancient craft, Xorriso serves as their magic wand. It has pages of documentation in something that resembles a spell book. I will have to dirty my hands to understand it fully, but my current (and most likely faulty) understanding is that it creates boot partitions, loads the MBR that was copied out, and does something with the Cloud-init-like instructions to create an amended ISO image.
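To give a flavor of what those Cloud-init-style instructions look like, here is a minimal, illustrative autoinstall user-data snippet; the hostname, username, password hash, and commands are placeholders, not the article's actual configuration:

```yaml
#cloud-config
autoinstall:
  version: 1
  identity:
    hostname: atomika-node          # placeholder
    username: ubuntu                # placeholder
    password: "$6$replace.with.a.real.sha512.hash"
  ssh:
    install-server: true
  packages:
    - openssh-server
  late-commands:
    # runs inside the installed system right before reboot
    - echo 'ubuntu ALL=(ALL) NOPASSWD:ALL' > /target/etc/sudoers.d/ubuntu
```

The installer (Subiquity) picks this file up instead of asking the usual interactive questions, which is what makes a fully hands-off boot possible.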
Ansible for the Finishing Touches

Although it was with great satisfaction that I managed to boot Ubuntu 22.04 from PowerShell without any further input from me, what about the next time, when Ubuntu brings out a new version? True DevOps mandates documenting the process not in ASCII, but in a script ready to run when needed. Ansible showed its versatility in that I managed to do just this in an afternoon. The secret is to instruct Ansible that it is a local action. In other words, do not use SSH to target a machine that receives instructions; the Ansible controller is also the student:

```yaml
- hosts: localhost
  connection: local
```

The full play is given next and provides another view of what was explained above:

```yaml
# stamp_images.yml
- hosts: localhost
  connection: local
  become: true

  vars_prompt:
    - name: "base_iso_location"
      prompt: "Enter the path to the base image"
      private: no
      default: /tmp/ubuntu-22.04.4-live-server-amd64.iso

  tasks:
    - name: Install 7Zip
      ansible.builtin.apt:
        name: p7zip-full
        state: present

    - name: Install Xorriso
      ansible.builtin.apt:
        name: xorriso
        state: present

    - name: Unpack ISO
      ansible.builtin.command:
        cmd: "7z -y x {{ base_iso_location }} -o/tmp/source-files"

    - name: Copy boot partitions
      ansible.builtin.copy:
        src: /tmp/source-files/[BOOT]/
        dest: /tmp/BOOT

    - name: Delete working boot partitions
      ansible.builtin.file:
        path: /tmp/source-files/[BOOT]
        state: absent

    - name: Copy files for Ubuntu Bare
      ansible.builtin.copy:
        src: bare/source-files/bare_ubuntu
        dest: /tmp/source-files/

    - name: Copy boot config for Ubuntu bare
      ansible.builtin.copy:
        src: bare/source-files/boot/grub/grub.cfg
        dest: /tmp/source-files/boot/grub/grub.cfg

    - name: Stamp bare image
      ansible.builtin.command:
        cmd: xorriso -as mkisofs -r -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' -o ../ubuntu-22.04-wormhole-autoinstall-bare_V5_1.iso --grub2-mbr ../BOOT/1-Boot-NoEmul.img -partition_offset 16 --mbr-force-bootable -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img -appended_part_as_gpt -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 -c '/boot.catalog' -b '/boot/grub/i386-pc/eltorito.img' -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info -eltorito-alt-boot -e '--interval:appended_partition_2:::' -no-emul-boot .
        chdir: /tmp/source-files

    - name: Copy files for Ubuntu Atomika
      ansible.builtin.copy:
        src: atomika/source-files/atomika_ubuntu
        dest: /tmp/source-files/

    - name: Copy boot config for Ubuntu Atomika
      ansible.builtin.copy:
        src: atomika/source-files/boot/grub/grub.cfg
        dest: /tmp/source-files/boot/grub/grub.cfg

    - name: Stamp Atomika image
      ansible.builtin.command:
        cmd: xorriso -as mkisofs -r -V 'Ubuntu 22.04 LTS AUTO (EFIBIOS)' -o ../ubuntu-22.04-wormhole-autoinstall-atomika_V5_1.iso --grub2-mbr ../BOOT/1-Boot-NoEmul.img -partition_offset 16 --mbr-force-bootable -append_partition 2 28732ac11ff8d211ba4b00a0c93ec93b ../BOOT/2-Boot-NoEmul.img -appended_part_as_gpt -iso_mbr_part_type a2a0d0ebe5b9334487c068b6b72699c7 -c '/boot.catalog' -b '/boot/grub/i386-pc/eltorito.img' -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info -eltorito-alt-boot -e '--interval:appended_partition_2:::' -no-emul-boot .
        chdir: /tmp/source-files
```

Note the magic of the Xorriso command used here to prepare two images: one with and one without support for Kubernetes. The only caveat is to have a machine with Ansible installed to run this play from. The images produced by the above play can be downloaded from here and pre-install a very recent version of Ansible.

Conclusion

This post went retro, but it is important to revisit where things started to gain an understanding of why things are the way they are. Windows and containers, furthermore, do not mix that well, and any investigation into ways to make the days of developers better should be welcomed. I referred to part of the code, but the full project can be viewed on GitHub.
In the changing world of software development, testing is crucial for making sure that applications are reliable, functional, and perform well. Unit testing is a method used to check how parts or units of code behave. However, testing can get tricky when dealing with applications that rely on systems like databases, message brokers, or third-party APIs. This is where test containers come in handy alongside Docker: they are practical tools for developers. In this guide, we will explore the realm of test containers with Docker and learn how to effectively use them for unit testing.

Understanding Test Containers

Before we delve into the specifics of test containers, it's important to understand their concept. Test containers are temporary containers created during testing to provide environments for running tests. These containers hold dependencies such as databases, message queues, or web servers, allowing developers to create predictable testing environments. By using test containers, developers can ensure that their tests run reliably across setups, improving the quality and efficiency of their testing processes.

Using Docker for Test Containers

Docker plays a central role in creating and managing test containers since it is a leading platform for containerization. With Docker's capabilities, developers can easily set up, deploy, and manage test containers within their testing process. Let's take a look at some of the advantages of using Docker for test containers:

- Consistency across environments: Docker maintains consistency across environments, be it development, testing, or production, by packaging dependencies and configurations into containers. This consistency helps avoid the "it works on my machine" problem and promotes a uniform testing environment throughout the development cycle.
- Reproducible environments: Test containers offer isolated spaces for running tests, preventing interference between them and making test runs easy to reproduce. Each test operates within its own container, so external factors or changes in the system setup do not impact the test results.
- Scalability and resource optimization: Docker allows developers to scale test environments dynamically by running containers in parallel. This scalability boosts the efficiency of test execution when many tests need to run. Additionally, test containers are lightweight and temporary, consuming fewer resources and introducing less overhead than virtual machines.

Starting With Test Containers and Docker

Now that we've grasped the concepts behind test containers and Docker, let's delve into how we can utilize them for unit testing code. Here's a simple breakdown to help you begin:

Step 1

First, establish a configuration for Docker Compose. Create a docker-compose.yml file that outlines the services and dependencies for your testing needs. This file acts as a guide for setting up the testing environment with Docker Compose.

```yaml
version: '3.8'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2019-latest
    environment:
      SA_PASSWORD: "<password>"
      ACCEPT_EULA: "Y"
    ports:
      - "1433:1433"
```

Step 2

Create tests for your application code using testing frameworks like JUnit, NUnit, or pytest. These tests should focus on individual units of the code and simulate any external dependencies that are required.
```csharp
using Microsoft.Data.SqlClient; // or System.Data.SqlClient, depending on the package in use
using Xunit;

public class DatabaseTests
{
    [Fact]
    public void TestDatabaseConnection()
    {
        var connectionString = "Server=localhost;Database=test_db;User Id=sa;Password=yourStrong(!)Password;";
        using var connection = new SqlConnection(connectionString);
        connection.Open();
        Assert.True(connection.State == System.Data.ConnectionState.Open);
    }
}
```

Step 3

Set up the test environment by starting and stopping the Docker containers with Docker Compose (or a Docker library) before and after running the tests.

```csharp
using System;
using System.Diagnostics;

public class TestSetup : IDisposable
{
    public TestSetup()
    {
        // Start Docker containers
        var dockerComposeUp = new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments = "up -d",
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true,
            }
        };
        dockerComposeUp.Start();
        dockerComposeUp.WaitForExit();
    }

    public void Dispose()
    {
        // Stop Docker containers
        var dockerComposeDown = new Process
        {
            StartInfo = new ProcessStartInfo
            {
                FileName = "docker-compose",
                Arguments = "down",
                RedirectStandardOutput = true,
                RedirectStandardError = true,
                UseShellExecute = false,
                CreateNoWindow = true,
            }
        };
        dockerComposeDown.Start();
        dockerComposeDown.WaitForExit();
    }
}
```

Step 4

Execute your unit tests with your testing framework, or run them directly from the VS Test Explorer. The tests will run against the designated containers, interacting with the dependencies specified in the Docker Compose setup.

Guidelines for Using Test Containers With Docker

Test containers combined with Docker are powerful tools for developers looking to test their applications efficiently. By enclosing dependencies in containers, developers can establish consistent and reliable testing environments. To ensure performance and productivity when using test containers, it's crucial to follow recommended practices. The rest of this article delves into some recommendations for utilizing test containers alongside Docker.

1. Maintain Lightweight Containers

A core tenet of Docker is the emphasis on keeping containers lightweight. When setting up test containers, aim to reduce their size by using minimal base images and optimizing dependencies. This approach not only speeds up building and deploying containers but also enhances performance during testing. By prioritizing lightweight containers, developers can simplify their testing processes and boost the efficiency of their testing setup.

2. Employ Docker Compose for Coordination

Docker Compose offers a straightforward method for defining and managing multi-container test environments. Rather than handling individual container operations, leverage Docker Compose to streamline environment setup with a single configuration file. Specify the services and dependencies for testing in a docker-compose.yml file, then use Docker Compose commands to control the lifecycle of the test containers. This method ensures that testing is done consistently and reproducibly across scenarios, making it easier to manage the test infrastructure.

3. Tidy up Resources Post Testing

It's important to manage resources when using Docker test containers. Once tests are completed, make sure to clean up resources to prevent resource leaks and unnecessary consumption. Delete test containers using Docker commands or Docker Compose to free up system resources and keep the system clean. By cleaning up resources after testing, developers can avoid conflicts and ensure the integrity of subsequent test runs.
4. Simulate External Dependencies Whenever Feasible

Although test containers offer a way to test applications against real dependencies, it's crucial to reduce reliance on external services whenever possible. Instead, utilize mocking frameworks to mimic interactions with dependencies in unit tests. By simulating dependencies, developers can isolate the code being tested and focus on verifying its behavior without depending on outside services. This approach not only simplifies the testing process but also enhances test performance and reliability.

5. Monitor Resource Usage and Performance

Monitoring resource usage and performance metrics is vital when conducting tests with test containers. Keep track of metrics like CPU usage, memory usage, and container health to spot bottlenecks and optimize resource allocation. Leverage Docker monitoring solutions and dashboards to keep tabs on container metrics and performance data. By overseeing resource utilization and performance, developers can tune their testing environment, speed up test runs, and get more consistent test outcomes.

Challenges in Using Test Containers

- Initial setup complexity: Setting up test containers can be complex for beginners due to the need to create and configure Docker Compose files.
- Performance overhead: Running multiple containers can introduce performance overhead, affecting test execution speed, especially in resource-constrained environments.
- Dependency management: Ensuring all dependencies are correctly specified and maintained within containers can be challenging and requires careful configuration.
- Troubleshooting and debugging: Debugging issues within containers can be more complex than in traditional testing environments due to the additional layer of containerization.
- Integration with CI/CD pipelines: Integrating test containers into CI/CD pipelines can be challenging and may require additional configuration and tooling.

Conclusion

Using Docker for unit testing code with dependencies offers a practical solution. Encapsulating dependencies in containers allows developers to create consistent testing environments, improving reliability and effectiveness. By following the steps and practices outlined here for using test containers, developers can streamline their testing process, enhance software quality, and deliver reliable applications. Happy coding!
Every day, developers are pushed to evaluate and use different tools and cloud provider services and to follow complex inner development loops. In this article, we look at how the open-source Dapr project can help Spring Boot developers build more resilient and environment-agnostic applications while keeping their inner development loop intact.

Meeting Developers Where They Are

A couple of weeks ago at Spring I/O, we had the chance to meet the Spring community face-to-face in the beautiful city of Barcelona, Spain. At this conference, the Spring framework maintainers, core contributors, and end users meet yearly to discuss the framework's latest additions, news, upgrades, and future initiatives. While I’ve seen many presentations covering topics such as Kubernetes, containers, and deploying Spring Boot applications to different cloud providers, these topics are always covered in a way that makes sense for Spring developers.

Most tools presented in the cloud-native space involve adopting new tooling and changing the tasks performed by developers, sometimes including complex configurations and remote environments. Tools like the Dapr project, which can be installed on a Kubernetes cluster, push developers to add Kubernetes as part of their inner-development-loop tasks. While some developers might be comfortable with extending their tasks to include Kubernetes for local development, some teams prefer to keep things simple and use tools like Testcontainers to create ephemeral environments where they can test their code changes for local development purposes.

With Dapr, developers can rely on consistent APIs across programming languages. Dapr provides a set of building blocks (state management, publish/subscribe, service invocation, actors, and workflows, among others) that developers can use to code their application features. Instead of spending too much time describing what Dapr is, in this article we cover how the Dapr project and its integration with the Spring Boot framework can simplify the development experience for Dapr-enabled applications that can be run, tested, and debugged locally without the need to run inside a Kubernetes cluster.

Today, Kubernetes, and Cloud-Native Runtimes

Today, if you want to work with the Dapr project, no matter the programming language you are using, the easiest way is to install Dapr into a Kubernetes cluster. Kubernetes and container runtimes are the most common runtimes for our Java applications today. Asking Java developers to work and run their applications on a Kubernetes cluster for their day-to-day tasks might be way out of their comfort zone. Training a large team of developers on using Kubernetes can take a while, and they will need to learn how to install tools like Dapr on their clusters.

If you are a Spring Boot developer, you probably want to code, run, debug, and test your Spring Boot applications locally. For this reason, we created a local development experience for Dapr, teaming up with the Testcontainers folks, now part of Docker. As a Spring Boot developer, you can use the Dapr APIs without a Kubernetes cluster or needing to learn how Dapr works in the context of Kubernetes. This test shows how Testcontainers provisions the Dapr runtime by using the @ClassRule annotation, which is in charge of bootstrapping the Dapr runtime so your application code can use the Dapr APIs to save/retrieve state, exchange asynchronous messages, retrieve configurations, create workflows, and use the Dapr actor model.
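Since the original snippet is not reproduced above, here is a hedged sketch of what such a JUnit 4 test class could look like. It uses a plain Testcontainers GenericContainer; the image name, daprd flags, and port are assumptions made for illustration, not the exact setup from the article (which relies on the Dapr/Testcontainers integration):

```java
import org.junit.Assert;
import org.junit.ClassRule;
import org.junit.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.utility.DockerImageName;

public class DaprLocalApiTest {

    // Bootstraps a Dapr sidecar for the whole test class. The image, flags, and
    // port below are illustrative; the article's actual test wires this up
    // through the Dapr integration for Testcontainers.
    @ClassRule
    public static GenericContainer<?> daprRuntime =
            new GenericContainer<>(DockerImageName.parse("daprio/daprd:latest"))
                    .withCommand("./daprd", "--app-id", "local-app", "--dapr-http-port", "3500")
                    .withExposedPorts(3500);

    @Test
    public void daprHttpApiIsExposed() {
        // Application code would point the Dapr Java SDK at this endpoint and then
        // call saveState()/getState()/publishEvent() exactly as it would in production.
        String daprHttpEndpoint = "http://" + daprRuntime.getHost() + ":" + daprRuntime.getMappedPort(3500);
        Assert.assertNotNull(daprHttpEndpoint);
    }
}
```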
How does this compare to a typical Spring Boot application? Let’s say you have a distributed application that uses Redis and PostgreSQL to persist and read state, and RabbitMQ and Kafka to exchange asynchronous messages. You can find the code for this application here (under the java/ directory, you can find all the Java implementations).

Your Spring Boot applications will need to include not only the Redis client but also the PostgreSQL JDBC driver and the RabbitMQ client as dependencies. On top of that, it is pretty standard to use Spring Boot abstractions, such as Spring Data KeyValue for Redis, Spring Data JDBC for PostgreSQL, and Spring Boot messaging for RabbitMQ. These abstractions and libraries elevate the basic Redis, relational database, and RabbitMQ client experiences to the Spring Boot programming model. Spring Boot will do more than just call the underlying clients: it will manage the underlying client lifecycle and help developers implement common use cases while promoting best practices under the covers.

If we look back at the test that showed how Spring Boot developers can use the Dapr APIs, the interactions look quite different: the Spring Boot application only depends on the Dapr APIs. In both cases, instead of connecting to the Dapr APIs directly using HTTP or gRPC requests, we have chosen to use the Dapr Java SDK. No RabbitMQ or Redis clients and no JDBC drivers were included in the application classpath.

This approach of using Dapr has several advantages:

- The application has fewer dependencies, so it doesn’t need to include the Redis or RabbitMQ client. The application is not only smaller but also less dependent on concrete infrastructure components that are specific to the environment where the application is being deployed. Remember that these clients’ versions must match the component instance running in a given environment. With more and more Spring Boot applications deployed to cloud providers, it is pretty standard not to have control over which versions of components like databases and message brokers will be available across environments. Developers will likely run a local version of these components using containers, causing version mismatches with the environments where the applications run in front of our customers.
- The application doesn’t create connections to Redis, RabbitMQ, or PostgreSQL. Because the configuration of connection pools and other details closely relates to the infrastructure, and these components are pushed away from the application code, the application is simplified. All these concerns are now moved out of the application and consolidated behind the Dapr APIs.
- A new application developer doesn’t need to learn how RabbitMQ, PostgreSQL, or Redis works. The Dapr APIs are self-explanatory: if you want to save the application’s state, use the saveState() method; if you want to publish an event, use the publishEvent() method. Developers using an IDE can easily check which APIs are available for them to use.
- The teams configuring the cloud-native runtime can use their favorite tools to configure the available infrastructure. If they move from a self-managed Redis instance to Google Cloud Memorystore, they can swap their Redis instance without changing the application code. If they want to swap a self-managed Kafka instance for Google Pub/Sub or Amazon SQS/SNS, they only need to shift Dapr configurations.

But, you ask, what about those APIs, saveState/getState and publishEvent?
What about subscriptions? How do you consume an event? Can we elevate these API calls to work better with Spring Boot so developers don’t need to learn new APIs?

Tomorrow, a Unified Cross-Runtime Experience

In contrast with most technical articles, the answer here is not “it depends.” Of course, the answer is YES. We can follow the Spring Data and Messaging approach to provide a richer Dapr experience that integrates seamlessly with Spring Boot. This, combined with a local development experience (using Testcontainers), can help teams design and code applications that can run quickly and without changes across environments (local, Kubernetes, cloud provider).

If you are already working with Redis, PostgreSQL, and/or RabbitMQ, you are most likely using the Spring Boot abstractions: Spring Data for data access and Spring for RabbitMQ/Kafka/Pulsar for asynchronous messaging. For Spring Data KeyValue, check the post "A Guide to Spring Data Key Value" for more details; finding an Employee by ID is a single repository call. For asynchronous messaging, we can take a look at Spring Kafka, Spring Pulsar, and Spring AMQP (RabbitMQ) (see also Messaging with RabbitMQ), which all provide a way to produce and consume messages: producing a message is a simple call on the corresponding template, and consuming is a matter of annotating a handler method as a listener.

Elevating Dapr to the Spring Boot Developer Experience

Now let’s take a look at how this would look with the new Dapr Spring Boot starters. With the DaprKeyValueTemplate, we can store our Vote object and find all the stored votes by creating a query against the KeyValue store (a condensed sketch follows at the end of this section). Now, why does this matter? The DaprKeyValueTemplate implements the KeyValueOperations interface provided by Spring Data KeyValue, which is implemented by tools like Redis, MongoDB, Memcached, PostgreSQL, and MySQL, among others. The big difference is that this implementation connects to the Dapr APIs and does not require any specific client. The same code can store data in Redis, PostgreSQL, MongoDB, and cloud provider-managed services such as AWS DynamoDB and Google Cloud Firestore. Over 30 data stores are supported in Dapr, and no changes to the application or its dependencies are needed.

Similarly, the DaprMessagingTemplate lets us publish a message/event, and to consume messages/events we can use an annotation-based approach similar to the Kafka example. An important thing to notice is that, out of the box, Dapr uses CloudEvents to exchange events (other formats are also supported), regardless of the underlying implementation. Using the @Topic annotation, our application subscribes to all events happening in a specific Dapr PubSub component on a specified topic. Once again, this code will work for all supported Dapr PubSub component implementations such as Kafka, RabbitMQ, Apache Pulsar, and cloud provider-managed services such as Azure Event Hubs, Google Cloud Pub/Sub, and AWS SNS/SQS (see the Dapr Pub/sub brokers documentation).

Combining the DaprKeyValueTemplate and DaprMessagingTemplate gives developers access to data manipulation and asynchronous messaging under a unified API that doesn’t add application dependencies and is portable across environments, as you can run the same code against different cloud provider services. While this looks much more like Spring Boot, more work is required.
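Here is the condensed sketch promised above. DaprKeyValueTemplate implements Spring Data KeyValue's KeyValueOperations, so insert() and findAll() are the standard Spring Data methods; the messaging method name, the pub/sub component name ("pubsub"), the topic, the Vote type, and the omitted package names for the Dapr starter types are illustrative assumptions based on the article's description, not the exact code shown at Spring I/O:

```java
import io.dapr.Topic;
import io.dapr.client.domain.CloudEvent;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// Illustrative domain type; the real sample stores a richer Vote object.
record Vote(String voter, String option) {}

@Service
class VoteService {

    private final DaprKeyValueTemplate keyValueTemplate;         // KeyValueOperations over the Dapr state API
    private final DaprMessagingTemplate<Vote> messagingTemplate; // publishes through the Dapr pub/sub API

    VoteService(DaprKeyValueTemplate keyValueTemplate, DaprMessagingTemplate<Vote> messagingTemplate) {
        this.keyValueTemplate = keyValueTemplate;
        this.messagingTemplate = messagingTemplate;
    }

    Vote store(Vote vote) {
        // Same code whether the configured state store is Redis, PostgreSQL, MongoDB, ...
        return keyValueTemplate.insert(vote);
    }

    Iterable<Vote> allVotes() {
        // Query served by whichever Dapr state store component is wired up
        return keyValueTemplate.findAll(Vote.class);
    }

    void announce(Vote vote) {
        // Assumed method name; publishes an event to the "votes" topic as a CloudEvent
        messagingTemplate.send("votes", vote);
    }
}

// Consuming the event with the Dapr @Topic annotation mentioned in the text:
@RestController
class VoteListener {

    @Topic(name = "votes", pubsubName = "pubsub")
    @PostMapping("/subscribe/votes")
    public void onVote(@RequestBody CloudEvent<Vote> event) {
        // Handle the incoming vote, e.g., update a projection or write an audit record
    }
}
```

The point of the sketch is the shape of the API rather than the exact names: storage and messaging are regular Spring-style templates, and the broker or data store behind them is chosen purely by Dapr component configuration.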
On top of Spring Data KeyValue, the Spring Repository interface can be implemented to provide a CrudRepository experience. There are also some rough edges for testing, and documentation is needed to ensure developers can get started with these APIs quickly.

Advantages and Trade-Offs

As with any new framework, project, or tool you add to the mix of technologies you are using, understanding the trade-offs is crucial in measuring how a new tool will work specifically for you. One way that helped me understand the value of Dapr is to use the 80% vs. 20% rule, which goes as follows:

- 80% of the time, applications do simple operations against infrastructure components such as message brokers, key/value stores, configuration servers, etc. The application needs to store and retrieve state and emit and consume asynchronous messages just to implement application logic. For these scenarios, you can get the most value out of Dapr.
- 20% of the time, you need to build more advanced features that require deeper expertise on the specific message broker you are using, or you need to write a very performant query to compose a complex data structure. For these scenarios, it is okay not to use the Dapr APIs, as you probably require access to specific underlying infrastructure features from your application code.

It is common, when we look at a new tool, to generalize it to fit as many use cases as we can. With Dapr, we should focus on helping developers when the Dapr APIs fit their use cases. When the Dapr APIs don’t fit or specific APIs are required, using provider-specific SDKs/clients is okay. By having a clear understanding of when the Dapr APIs might be enough to build a feature, a team can design and plan in advance what skills are needed to implement it. For example, do you need a RabbitMQ/Kafka expert or an SQL and domain expert to build some advanced queries?

Another mistake we should avoid is not considering the impact of tools on our delivery practices: the right tools reduce friction between environments and enable developers to create applications that run locally using the same APIs and dependencies required when running on a cloud provider. With these points in mind, let’s look at the advantages and trade-offs.

Advantages

- Concise APIs to tackle cross-cutting concerns and access common behavior required by distributed applications. This enables developers to delegate to Dapr concerns such as resiliency (retry and circuit breaker mechanisms), observability (using OpenTelemetry: logs, traces, and metrics), and security (certificates and mTLS).
- With the new Spring Boot integration, developers can use the existing programming model to access this functionality.
- With the Dapr and Testcontainers integration, developers don’t need to worry about running or configuring Dapr, or learning other tools that are external to their existing inner development loops. The Dapr APIs are available for developers to build, test, and debug their features locally.
- The Dapr APIs can help developers save time when interacting with infrastructure. For example, instead of pushing every developer to learn how Kafka/Pulsar/RabbitMQ works, they just need to learn how to publish and consume events using the Dapr APIs.
- Dapr enables portability across cloud-native environments, allowing your application to run against local or cloud-managed infrastructure without any code changes.
- Dapr provides a clear separation of concerns, enabling operations/platform teams to wire up infrastructure across a wide range of supported components.

Trade-Offs

Introducing abstraction layers, such as the Dapr APIs, always comes with trade-offs:

- Dapr might not be the best fit for all scenarios. For those cases, nothing stops developers from separating more complex functionality that requires specific clients/drivers into separate modules or services.
- Dapr will be required in the target environment where the application runs. Your applications will depend on Dapr being present and on the infrastructure needed by the application being wired up correctly for your application to work. If your operations/platform team is already using Kubernetes, Dapr should be easy to adopt, as it is a quite mature CNCF project with over 3,000 contributors.
- Troubleshooting with an extra abstraction between our application and the infrastructure components can become more challenging. The quality of the Spring Boot integration can be measured by how well errors are propagated to developers when things go wrong.

I know that advantages and trade-offs depend on your specific context and background; feel free to reach out if you see something missing in this list.

Summary and Next Steps

Covering the Dapr Statestore (KeyValue) and PubSub (Messaging) building blocks is just the first step, as adding more advanced Dapr features to the Spring Boot programming model can help developers access more of the functionality required to create robust distributed applications. On our TODO list, Dapr Workflows for durable executions is coming next, as providing a seamless experience to develop complex, long-running orchestrations across services is a common requirement.

One of the reasons why I was so eager to work on the Spring Boot and Dapr integration is that I know the Java community has worked hard to polish its developer experiences, focusing on productivity and consistent interfaces. I strongly believe that all this accumulated knowledge in the Java community can be used to take the Dapr APIs to the next level. By validating which use cases are covered by the existing APIs and finding gaps, we can build better integrations and automatically improve developers’ experiences across languages.

You can find all the source code for the example we presented at Spring I/O linked in the "Today, Kubernetes, and Cloud-Native Runtimes" section of this article. We expect to merge the Spring Boot and Dapr integration code into the Dapr Java SDK to make this experience the default Dapr experience when working with Spring Boot. Documentation will come next. If you want to contribute or collaborate with these projects and help us make Dapr even more integrated with Spring Boot, please contact us.
Kubernetes is the de facto platform for managing containerized applications. It provides a rich ecosystem for deployment, scaling, and operations, with first-class support (tons of ready-made configs and documentation) on the Google Cloud platform. Given the growing importance of data privacy and protection in today's digital world, Kubernetes has a key part to play in helping secure that data. Its built-in security features and best practices enable organizations to secure data properly and comply with privacy regulations. In this article, we will discuss how Kubernetes improves data privacy by keeping installations updated, monitoring activities, and applying network policies.

Understanding Kubernetes

Kubernetes is an open-source platform built to automate deploying, scaling, and operating application containers. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF). Kubernetes, often shortened to K8s, is a system for managing containerized applications across multiple hosts and a framework for running distributed systems resiliently. Kubernetes can help you effectively manage clusters of hosts running Linux containers.

Top Features and Functions

- Automated deployment and scaling: Kubernetes automates the process of deploying and scaling applications. It adjusts the number of running containers based on your current workload, ensuring optimal use of resources.
- Automatic self-healing: If one of your containers fails, Kubernetes will automatically restart it, replacing and rescheduling containers if nodes die. This self-healing mechanism keeps your applications running without manual intervention.
- Service discovery and load balancing: For a given set of containers, Kubernetes provides IP addresses and a single DNS name, and it balances the load by distributing network traffic across them, ensuring consistent behavior.
- Storage orchestration: Kubernetes mounts the storage system of your choice on its own, regardless of whether it is local storage, a public cloud provider, or a network storage solution you own.
- Secret management: You can handle sensitive information like passwords, OAuth tokens, and SSH keys without rebuilding your images or exposing secrets in the DevOps pipeline.

Data Privacy and Protection Basics

Data privacy is primarily about how the personal data of individuals is gathered, processed, and stored with their consent, thereby ensuring that this data is not lost, stolen, or misused. It comes down to having control over who can see your data and how it is being used. Data is one of the most valuable assets in a digital world, so protecting it is a crucial task. Data protection secures private and confidential information from unauthorized access and breaches, thereby preserving the trust between you as a business and your clients. Cybersecurity regulations require strict protocols so that data cannot be misused or leaked and personal information stays private; data protection is key to meeting them.

Common Threats to Data Security

- Phishing: The fraudulent practice of luring users into handing over their credentials by impersonating a trustworthy entity in an electronic communication.
- Malware: Malicious software designed to cause harm, damage, or gain unauthorized access to your devices.
- Ransomware: A piece of malware that encrypts a victim's files and then demands payment to restore access.
- Insider threat: Risks from people with organizational access who may, intentionally or unintentionally, compromise data assets.
- Weak passwords: If an attacker guesses a weak password, the next layer of security never gets the chance to engage.

Kubernetes and Data Security

The open-source container orchestration platform Kubernetes employs a strong architecture that manages data efficiently. It organizes containers into clusters of nodes with their own storage and networking implementations. Kubernetes uses persistent volumes (PVs) to store data separately from containers, so the data persists even when a container is deleted or moved. This separation of storage and containers provides increased data mobility, integrity, and availability.

Security Features of Kubernetes

Kubernetes comes with several built-in security features designed to protect data and ensure system integrity:

- Role-Based Access Control (RBAC): RBAC enables administrators to define roles and permissions so that only authorized users can access or change sensitive data.
- Secrets: Kubernetes Secrets let you manage sensitive information that should not live in code, such as passwords, tokens, and keys.
- Network policies: These policies specify which pods can reach which services and can block all other traffic by default, limiting the foothold available to a potential threat.
- Pod security policies: These enforce security standards on pods, such as disallowing privileged containers.

Encryption and Kubernetes

Encryption protects your data and acts as a line of defense against unauthorized intrusion: it turns data that a person could read into ciphertext that only people with the right key can read. This procedure is crucial to protect sensitive data, ensure data privacy, and comply with the law.

How To Add Encryption to Kubernetes

Kubernetes, one of the most recognized frameworks for container orchestration, supports the table-stakes procedures for enabling encryption. Key steps to encrypt data in Kubernetes:

- Encrypt data at rest: Kubernetes supports encryption of data stored in etcd, its key-value store. By enabling encryption at rest, you protect sensitive information stored within Kubernetes clusters.
- Encrypt data in transit: Secure communication between Kubernetes components using Transport Layer Security (TLS), keeping data that flows within the cluster safe from eavesdropping and tampering.
- Use Secrets: Kubernetes Secrets help you manage sensitive information, such as passwords and API keys, securely, so applications do not have to embed that information in images or code.

Access Controls and Kubernetes

Access control is a very important security issue, and Kubernetes is no exception; many organizations may need to use their own Identity Provider (IdP) for authentication. On top of that, you can use Role-Based Access Control (RBAC) to manage permissions and keep your Kubernetes environment secure and compliant.
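As a concrete illustration of the RBAC building blocks walked through in the steps below, here is a minimal Role and RoleBinding; the namespace, role name, and group are placeholders, not taken from the article:

```yaml
# Illustrative only: a namespaced role that lets the "dev-team" group manage
# deployments (but not delete them), and the binding that grants it.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps
  name: developer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: developer-binding
subjects:
  - kind: Group
    name: dev-team
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io
```

Applied with kubectl apply -f, this grants the dev-team group exactly the listed verbs on deployments in the team-apps namespace and nothing else.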
Access Controls and Kubernetes Access control is a critical security concern, and Kubernetes is no exception; many organizations also need to plug in their own Identity Provider (IdP) for authentication. You can use Role-Based Access Control (RBAC) to manage permissions and keep your Kubernetes environment secure and compliant. With RBAC, only authorized users can reach sensitive data, giving administrators an additional layer of security that grants access only where policy and system requirements permit it. RBAC in Kubernetes lets you specify what a user can do on a given resource. Here's how to set it up: Create roles: Define roles that describe the level of access to different Kubernetes resources. For example, a "developer" role might be allowed to create and modify deployments but not delete them. RoleBinding: Tie the roles defined above to users or groups. A RoleBinding grants the permissions defined in a role to a user or group within the specified namespace. ClusterRole and ClusterRoleBinding: For cluster-wide privileges, use ClusterRoles and ClusterRoleBindings. These work like roles and role bindings but apply at the cluster level (a minimal manifest sketch appears at the end of this section). Permissions and User Accessibility Control Implementing permissions and managing user access properly means regularly reevaluating and updating roles and bindings so they correspond to the current needs and structure of the organization. Routine audits: Review roles and bindings on a regular basis to guarantee that the permissions granted follow the current security policies. Least privilege: Users should have only as much access as they need to fulfill their roles, which reduces the chance of accidental or malicious activity harming the system. Monitoring and logging: Ensure that you can monitor and log access to your resources, including who accessed a resource and when. This helps you detect and respond to unauthorized access attempts.
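To make the RBAC steps above concrete, here is a minimal sketch of a namespaced Role and RoleBinding implementing the "developer" example; the namespace, names, and user are illustrative, not taken from a real cluster.

YAML
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: team-apps                # illustrative namespace
  name: developer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]   # no "delete"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: team-apps
  name: developer-binding
subjects:
  - kind: User
    name: jane@example.com            # illustrative user from your IdP
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer
  apiGroup: rbac.authorization.k8s.io

Applying the manifest with kubectl apply -f developer-rbac.yaml grants this user the listed verbs on deployments in the team-apps namespace and nothing else.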
Compliance and Kubernetes Ensuring compliance with data protection regulations is crucial for any organization handling sensitive data. Kubernetes, a powerful container orchestration platform, offers several tools and practices to help organizations maintain compliance. Compliance with data protection regulations like GDPR, HIPAA, and CCPA involves securing data, controlling access, and maintaining transparency. Kubernetes supports these requirements through its robust security features and customizable policies. Kubernetes Tools and Practices for Compliance Role-Based Access Control (RBAC): RBAC in Kubernetes allows administrators to define roles and permissions, ensuring that only authorized personnel have access to sensitive data. This minimizes the risk of data breaches and unauthorized access. Encryption: Kubernetes supports encryption both in transit and at rest. Encrypting data helps protect it from unauthorized access and ensures that even if data is intercepted, it remains unreadable to attackers. Audit logging: Kubernetes provides detailed audit logs, which record access and activity within the cluster. These logs are essential for monitoring compliance and identifying potential security incidents. Network policies: With Kubernetes, you can define network policies that control the flow of traffic between pods. This helps isolate sensitive workloads and ensures that only authorized communication occurs within the cluster. Automated compliance checks: Tools like kube-bench (which checks a cluster against the CIS Kubernetes Benchmark) and Open Policy Agent (OPA) can automate compliance checks, ensuring that your cluster adheres to security best practices and regulatory requirements. Best Practices for Using Kubernetes to Enhance Data Privacy Keeping Kubernetes and its components updated is crucial for data privacy. Regular updates ensure you have the latest security patches, reducing vulnerabilities. Always monitor Kubernetes releases and promptly apply patches to avoid potential security risks. Effective monitoring and logging are essential for maintaining data privacy. Tools commonly deployed alongside Kubernetes, such as Prometheus and Fluentd, can track activity and help detect suspicious behavior. Regularly review logs to identify and address security incidents promptly. Network policies in Kubernetes control traffic between pods, enhancing data privacy. Define and enforce strict policies that limit communication to only the pods that need it; this minimizes the risk of unauthorized access and data breaches. A minimal sketch of such a policy follows.
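As a sketch of this practice, the two policies below first deny all ingress traffic in a namespace and then explicitly allow the API pods to reach the database pods; the namespace, labels, and port are illustrative, and a CNI plugin that enforces NetworkPolicy (such as Calico or Cilium) is assumed.

YAML
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-apps                # illustrative namespace
spec:
  podSelector: {}                     # selects every pod in the namespace
  policyTypes:
    - Ingress                         # no ingress rules listed, so all inbound traffic is denied
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-db
  namespace: team-apps
spec:
  podSelector:
    matchLabels:
      app: database                   # applies to the database pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api                # only API pods may connect
      ports:
        - protocol: TCP
          port: 5432                  # illustrative PostgreSQL port

Pods that are not matched by any allow rule simply cannot be reached, which is exactly the least-privilege posture described above.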
Practical Steps Forward Kubernetes plays a vital role in enhancing data privacy and protection. Its robust security features, including encryption, role-based access controls, and network policies, help safeguard sensitive information. By leveraging Kubernetes' tools and best practices, organizations can ensure compliance with data protection regulations and protect their data from unauthorized access and breaches. As data privacy continues to be a critical concern, Kubernetes provides a reliable and efficient solution for managing and securing data.
With Spring Boot 3.2 and Spring Framework 6.1, we get support for Coordinated Restore at Checkpoint (CRaC), a mechanism that enables Java applications to start up faster. With Spring Boot, we can use CRaC in a simplified way, known as Automatic Checkpoint/Restore at startup. Even though it is not as powerful as the standard way of using CRaC, this blog post will show an example where a Spring Boot application's startup time is decreased by 90%. The sample applications are from Chapter 6 in my book on building microservices with Spring Boot. Overview The blog post is divided into the following sections: Introducing CRaC, benefits, and challenges Creating CRaC-based Docker images with a Dockerfile Trying out CRaC with automatic checkpoint/restore Summary Next blog post Let's start learning about CRaC and its benefits and challenges. 1. Introducing CRaC, Benefits, and Challenges Coordinated Restore at Checkpoint (CRaC) is a feature in OpenJDK, initially developed by Azul, to enhance the startup performance of Java applications by allowing them to restore to a previously saved state quickly. CRaC enables Java applications to save their state at a specific point in time (checkpoint) and then restore from that state at a later time. This is particularly useful for scenarios where fast startup times are crucial, such as serverless environments, microservices, and, in general, applications that must be able to scale up their instances quickly and also support scale-to-zero when not being used. This introduction will first explain a bit about how CRaC works, then discuss some of the challenges and considerations associated with it, and finally, describe how Spring Boot 3.2 integrates with it. The introduction is divided into the following subsections: 1.1. How CRaC Works 1.2. Challenges and Considerations 1.3. Spring Boot 3.2 integration with CRaC 1.1. How CRaC Works Checkpoint Creation At a chosen point during the application's execution, a checkpoint is created. This involves capturing the entire state of the Java application, including the heap, stack, and all active threads. The state is then serialized and saved to the file system. During the checkpoint process, the application is typically paused to ensure a consistent state is captured. This pause is coordinated to minimize disruption and ensure the application can resume correctly. Before taking the checkpoint, some requests are usually sent to the application to ensure that it is warmed up, i.e., all relevant classes are loaded, and the JVM HotSpot engine has had a chance to optimize the bytecode according to how it is being used at runtime. Commands to perform a checkpoint: Shell java -XX:CRaCCheckpointTo=<some-folder> -jar my_app.jar # Make calls to the app to warm up the JVM... jcmd my_app.jar JDK.checkpoint State Restoration When the application is started from the checkpoint, the previously saved state is deserialized from the file system and loaded back into memory. The application then continues execution from the exact point where the checkpoint was taken, bypassing the usual startup sequence. Command to restore from a checkpoint: Shell java -XX:CRaCRestoreFrom=<some-folder> Restoring from a checkpoint allows applications to skip the initial startup process, including class loading, warmup, and other initialization routines, significantly reducing startup times. For more information, see Azul's documentation: What is CRaC? 1.2.
Challenges and Considerations As with any new technology, CRaC comes with a new set of challenges and considerations: State Management Open files and connections to external resources, such as databases, must be closed before the checkpoint is taken. After the restore, they must be reopened. CRaC exposes a Java lifecycle interface that applications can use to handle this, org.crac.Resource, with the callback methods beforeCheckpoint and afterRestore. Sensitive Information Credentials and secrets stored in the JVM's memory will be serialized into the files created by the checkpoint. Therefore, these files need to be protected. An alternative is to run the checkpoint command against a temporary environment that uses other credentials and replace the credentials on restore. Linux Dependency The checkpoint technique is based on a Linux feature called CRIU, "Checkpoint/Restore In Userspace". This feature only works on Linux, so the easiest way to test CRaC on a Mac or a Windows PC is to package the application into a Linux Docker image. Linux Privileges Required CRIU requires special Linux privileges, which means the Docker commands used to build Docker images and create Docker containers also need those privileges to run. Storage Overhead Storing and managing checkpoint data requires additional storage resources, and the checkpoint size can impact the restoration time. The original jar file is also required to be able to restart a Java application from a checkpoint. I will describe how to handle these challenges in the section on creating Docker images. 1.3. Spring Boot 3.2 Integration With CRaC Spring Boot 3.2 (and the underlying Spring Framework) helps with the process of closing and reopening connections to external resources. Before the creation of the checkpoint, Spring stops all running beans, giving them a chance to close resources if needed. After a restore, the same beans are restarted, allowing beans to reopen connections to the resources. The only thing that needs to be added to a Spring Boot 3.2-based application is a dependency on the crac library. Using Gradle, it looks like the following in the build.gradle file: Groovy dependencies { implementation 'org.crac:crac' } Note: The normal Spring Boot BOM mechanism takes care of versioning the crac dependency. The automatic closing and reopening of connections handled by Spring Boot usually works. Unfortunately, when this blog post was written, some Spring modules lacked this support. To track the state of CRaC support in the Spring ecosystem, a dedicated test project, Spring Lifecycle Smoke Tests, has been created. The current state can be found on the project's status page. If required, an application can register callback methods to be called before a checkpoint and after a restore by implementing the above-mentioned Resource interface. The microservices used in this blog post have been extended to register callback methods to demonstrate how they can be used. The code looks like this: Java import org.crac.*; public class MyApplication implements Resource { public MyApplication() { Core.getGlobalContext().register(this); } @Override public void beforeCheckpoint(Context<? extends Resource> context) { LOG.info("CRaC's beforeCheckpoint callback method called..."); } @Override public void afterRestore(Context<?
extends Resource> context) { LOG.info("CRaC's afterRestore callback method called..."); } } Spring Boot 3.2 provides a simplified alternative to take a checkpoint compared to the default on-demand alternative described above. It is called automatic checkpoint/restore at startup. It is triggered by adding the JVM system property -Dspring.context.checkpoint=onRefresh to the java -jar command. When set, a checkpoint is created automatically when the application is started. The checkpoint is created after Spring beans have been created but not started, i.e., after most of the initialization work but before that application starts. For details, see Spring Boot docs and Spring Framework docs. With an automatic checkpoint, we don’t get a fully warmed-up application, and the runtime configuration must be specified at build time. This means that the resulting Docker images will be runtime-specific and contain sensitive information from the configuration, like credentials and secrets. Therefore, the Docker images must be stored in a private and protected container registry. Note: If this doesn’t meet your requirements, you can opt for the on-demand checkpoint, which I will describe in the next blog post. With CRaC and Spring Boot 3.2’s support for CRaC covered, let’s see how we can create Docker images for Spring Boot applications that use CRaC. 2. Creating CRaC-Based Docker Images With a Dockerfile While learning how to use CRaC, I studied several blog posts on using CRaC with Spring Boot 3.2 applications. They all use rather complex bash scripts (depending on your bash experience) using Docker commands like docker run, docker exec, and docker commit. Even though they work, it seems like an unnecessarily complex solution compared to producing a Docker image using a Dockerfile. So, I decided to develop a Dockerfile that runs the checkpoint command as a RUN command in the Dockerfile. It turned out to have its own challenges, as described below. I will begin by describing my initial attempt and then explain the problems I stumbled into and how I solved them, one by one until I reach a fully working solution. The walkthrough is divided into the following subsections: 2.1. First attempt 2.2. Problem #1, privileged builds with docker build 2.3. Problem #2, CRaC returns exit status 137, instead of 0 2.4. Problem #3, Runtime configuration 2.5. Problem #4, Spring Data JPA 2.6. The resulting Dockerfile Let’s start with a first attempt and see where it leads us. 2.1. First Attempt My initial assumption was to create a Dockerfile based on a multi-stage build, where the first stage creates the checkpoint using a JDK-based base image, and the second step uses a JRE-based base image for runtime. However, while writing this blog post, I failed to find a base image for a Java 21 JRE supporting CRaC. So I changed my mind to use a regular Dockerfile instead, using a base image from Azul: azul/zulu-openjdk:21.0.3-21.34-jdk-crac Note: BellSoft also provides base images for CraC; see Liberica JDK with CRaC Support as an alternative to Azul. The first version of the Dockerfile looks like this: Dockerfile FROM azul/zulu-openjdk:21.0.3-21.34-jdk-crac ADD build/libs/*.jar app.jar RUN java -Dspring.context.checkpoint=onRefresh -XX:CRaCCheckpointTo=checkpoint -jar app.jar EXPOSE 8080 ENTRYPOINT ["java", "-XX:CRaCRestoreFrom=checkpoint"] This Dockerfile is unfortunately not possible to use since CRaC requires a build to run privileged commands. 2.2. Problem #1, Privileged Builds With Docker Build As mentioned in section 1.2. 
Challenges and Considerations, CRIU, which CRaC is based on, requires special Linux privileges to perform a checkpoint. The standard docker build command doesn't allow privileged builds, so it can't be used to build Docker images using the above Dockerfile. Note: The --privileged flag that can be used in docker run commands is not supported by docker build. Fortunately, Docker provides an improved builder backend called BuildKit. Using BuildKit, we can create a custom builder that is insecure, meaning it allows a Dockerfile to run privileged commands. To communicate with BuildKit, we can use Docker's CLI tool buildx. The following command can be used to create an insecure builder named insecure-builder: Shell docker buildx create --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure' Note: The builder runs in isolation within a Docker container created by the docker buildx create command. You can run a docker ps command to reveal the container. When the builder is no longer required, it can be removed with the command: docker buildx rm insecure-builder. The insecure builder can be used to build a Docker image with a command like: Shell docker buildx --builder insecure-builder build --allow security.insecure --load . Note: The --load flag loads the built image into the regular local Docker image cache. Since the builder runs in an isolated container, its result will not end up in the regular local Docker image cache by default. RUN commands in a Dockerfile that require privileges must be suffixed with --security=insecure. The --security flag is only in preview and must therefore be enabled in the Dockerfile by adding the following line as the first line in the Dockerfile: Dockerfile # syntax=docker/dockerfile:1.3-labs For more details on BuildKit and docker buildx, see Docker Build architecture. We can now perform the build; however, the way CRaC is implemented stops the build, as we will learn in the next section. 2.3. Problem #2, CRaC Returns Exit Status 137 Instead of 0 On a successful checkpoint, the java -Dspring.context.checkpoint=onRefresh -XX:CRaCCheckpointTo... command is terminated forcefully (like using kill -9) and returns the exit status 137 instead of 0, causing the Docker build command to fail. To prevent the build from stopping, the java command is extended with a test that verifies that 137 is returned and, if so, returns 0 instead. The following is added to the java command: || if [ $? -eq 137 ]; then return 0; else return 1; fi. Note: || means that the command following it will be executed if the first command fails. With CRaC working in a Dockerfile, let's move on and learn about the challenges with runtime configuration and how to handle them. 2.4. Problem #3, Runtime Configuration Using Spring Boot's automatic checkpoint/restore at startup, there is no way to specify runtime configuration on restore; at least, I haven't found a way to do it. This means that the runtime configuration has to be specified at build time. Sensitive information from the runtime configuration, such as credentials used for connecting to a database, will be written to the checkpoint files. Since the Docker images will contain these checkpoint files, they also need to be handled in a secure way.
The Spring Framework documentation contains a warning about this, copied from the section Automatic checkpoint/restore at startup: As mentioned above, and especially in use cases where the CRaC files are shipped as part of a deployable artifact (a container image, for example), operate with the assumption that any sensitive data “seen” by the JVM ends up in the CRaC files, and assess carefully the related security implications. So, let’s assume that we can protect the Docker images, for example, in a private registry with proper authorization in place and that we can specify the runtime configuration at build time. In Chapter 6 of the book, the source code specifies the runtime configuration in the configuration files, application.yml, in a Spring profile named docker. The RUN command, which performs the checkpoint, has been extended to include an environment variable that declares what Spring profile to use: SPRING_PROFILES_ACTIVE=docker. Note: If you have the runtime configuration in a separate file, you can add the file to the Docker image and point it out using an environment variable like SPRING_CONFIG_LOCATION=file:runtime-configuration.yml. With the challenges of proper runtime configuration covered, we have only one problem left to handle: Spring Data JPA’s lack of support for CRaC without some extra work. 2.5. Problem #4, Spring Data JPA Spring Data JPA does not work out-of-the-box with CRaC, as documented in the Smoke Tests project; see the section about Prevent early database interaction. This means that auto-creation of database tables when starting up the application, is not possible when using CRaC. Instead, the creation has to be performed outside of the application startup process. Note: This restriction does not apply to embedded SQL databases. For example, the Spring PetClinic application works with CRaC without any modifications since it uses an embedded SQL database by default. To address these deficiencies, the following changes have been made in the source code of Chapter 6: Manual creation of a SQL DDL script, create-tables.sql Since we can no longer rely on the application to create the required database tables, a SQL DDL script has been created. To enable the application to create the script file, a Spring profile create-ddl-script has been added in the review microservice’s configuration file, microservices/review-service/src/main/resources/application.yml. It looks like: YAML spring.config.activate.on-profile: create-ddl-script spring.jpa.properties.jakarta.persistence.schema-generation: create-source: metadata scripts: action: create create-target: crac/sql-scripts/create-tables.sql The SQL DDL file has been created by starting the MySQL database and, next, the application with the new Spring profile. Once connected to the database, the application and database are shut down. Sample commands: Shell docker compose up -d mysql SPRING_PROFILES_ACTIVE=create-ddl-script java -jar microservices/review-service/build/libs/review-service-1.0.0-SNAPSHOT.jar # CTRL/C once "Connected to MySQL: jdbc:mysql://localhost/review-db" is written to the log output docker compose down The resulting SQL DDL script, crac/sql-scripts/create-tables.sql, has been added to Chapter 6’s source code. The Docker Compose file configures MySQL to execute the SQL DDL script at startup. A CraC-specific version of the Docker Compose file has been created, crac/docker-compose-crac.yml. To create the tables when the database is starting up, the SQL DDL script is used as an init script. 
The SQL DDL script is mapped into the init-folder /docker-entrypoint-initdb.d with the following volume-mapping in the Docker Compose file: Dockerfile volumes: - "./sql-scripts/create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql" Added a runtime-specific Spring profile in the review microservice’s configuration file. The guidelines in the Smoke Tests project’s JPA section have been followed by adding an extra Spring profile named crac. It looks like the following in the review microservice’s configuration file: YAML spring.config.activate.on-profile: crac spring.jpa.database-platform: org.hibernate.dialect.MySQLDialect spring.jpa.properties.hibernate.temp.use_jdbc_metadata_defaults: false spring.jpa.hibernate.ddl-auto: none spring.sql.init.mode: never spring.datasource.hikari.allow-pool-suspension: true Finally, the Spring profile crac is added to the RUN command in the Dockerfile to activate the configuration when the checkpoint is performed. 2.6. The Resulting Dockerfile Finally, we are done with handling the problems resulting from using a Dockerfile to build a Spring Boot application that can restore quickly using CRaC in a Docker image. The resulting Dockerfile, crac/Dockerfile-crac-automatic, looks like: Dockerfile # syntax=docker/dockerfile:1.3-labs FROM azul/zulu-openjdk:21.0.3-21.34-jdk-crac ADD build/libs/*.jar app.jar RUN --security=insecure \ SPRING_PROFILES_ACTIVE=docker,crac \ java -Dspring.context.checkpoint=onRefresh \ -XX:CRaCCheckpointTo=checkpoint -jar app.jar \ || if [ $? -eq 137 ]; then return 0; else return 1; fi EXPOSE 8080 ENTRYPOINT ["java", "-XX:CRaCRestoreFrom=checkpoint"] Note: One and the same Dockerfile is used by all microservices to create CRaC versions of their Docker images. We are now ready to try it out! 3. Trying Out CRaC With Automatic Checkpoint/Restore To try out CRaC, we will use the microservice system landscape used in Chapter 6 of my book. If you are not familiar with the system landscape, it looks like the following: Chapter 6 uses Docker Compose to manage (build, start, and stop) the system landscape. Note: If you don’t have all the tools used in this blog post installed in your environment, you can look into Chapters 21 and 22 for installation instructions. To try out CRaC, we need to get the source code from GitHub, compile it, and create the Docker images for each microservice using a custom insecure Docker builder. Next, we can use Docker Compose to start up the system landscape and run the end-to-end validation script that comes with the book to ensure that everything works as expected. We will wrap up the try-out section by comparing the startup times of the microservices when they start with and without using CRaC. We will go through each step in the following subsections: 3.1. Getting the source code 3.2. Building the CRaC-based Docker images 3.3. Running end-to-end tests 3.4. Comparing startup times without CRaC 3.1. Getting the Source Code Run the following commands to get the source code from GitHub, jump into the Chapter06 folder, check out the branch SB3.2-crac-automatic, and ensure that a Java 21 JDK is used (Eclipse Temurin is used here): Shell git clone https://github.com/PacktPublishing/Microservices-with-Spring-Boot-and-Spring-Cloud-Third-Edition.git cd Microservices-with-Spring-Boot-and-Spring-Cloud-Third-Edition/Chapter06 git checkout SB3.2-crac-automatic sdk use java 21.0.3-tem 3.2. 
Building the CRaC-Based Docker Images Start with compiling the microservices source code: Shell ./gradlew build If not already created, create the insecure builder with the command: Shell docker buildx create --name insecure-builder --buildkitd-flags '--allow-insecure-entitlement security.insecure' Now we can build a Docker image for each of the microservices, where the build performs a CRaC checkpoint, with the commands: Shell docker buildx --builder insecure-builder build --allow security.insecure -f crac/Dockerfile-crac-automatic -t product-composite-crac --load microservices/product-composite-service docker buildx --builder insecure-builder build --allow security.insecure -f crac/Dockerfile-crac-automatic -t product-crac --load microservices/product-service docker buildx --builder insecure-builder build --allow security.insecure -f crac/Dockerfile-crac-automatic -t recommendation-crac --load microservices/recommendation-service docker buildx --builder insecure-builder build --allow security.insecure -f crac/Dockerfile-crac-automatic -t review-crac --load microservices/review-service 3.3. Running End-To-End Tests To start up the system landscape, we will use Docker Compose. Since CRaC requires special Linux privileges, a CRaC-specific docker-compose file comes with the source code, crac/docker-compose-crac.yml. Each microservice is given the required privilege, CHECKPOINT_RESTORE, by specifying: YAML cap_add: - CHECKPOINT_RESTORE Note: Several blog posts on CRaC suggest using privileged containers, i.e., starting them with docker run --privileged or adding privileged: true in the Docker Compose file. This is a really bad idea since an attacker who gets control over such a container can easily take control of the host that runs Docker. For more information, see Docker's documentation on Runtime privilege and Linux capabilities. The final addition to the CRaC-specific Docker Compose file is the volume mapping for MySQL to add the init file described above in section 2.5. Problem #4, Spring Data JPA: YAML volumes: - "./sql-scripts/create-tables.sql:/docker-entrypoint-initdb.d/create-tables.sql" Using this Docker Compose file, we can start up the system landscape and run the end-to-end verification script with the following commands: Shell export COMPOSE_FILE=crac/docker-compose-crac.yml docker compose up -d Let's start with verifying that the CRaC afterRestore callback methods were called: Shell docker compose logs | grep "CRaC's afterRestore callback method called..." Expect something like: Shell ...ReviewServiceApplication : CRaC's afterRestore callback method called... ...RecommendationServiceApplication : CRaC's afterRestore callback method called... ...ProductServiceApplication : CRaC's afterRestore callback method called... ...ProductCompositeServiceApplication : CRaC's afterRestore callback method called... Now, run the end-to-end verification script: Shell ./test-em-all.bash If the script ends with a log output similar to: Shell End, all tests OK: Fri Jun 28 17:40:43 CEST 2024 …it means all tests ran OK, and the microservices behave as expected. Bring the system landscape down with the commands: Shell docker compose down unset COMPOSE_FILE After verifying that the microservices behave correctly when started from a CRaC checkpoint, we can compare their startup times with microservices started without using CRaC. 3.4.
Comparing Startup Times Without CRaC Now over to the most interesting part: How much faster does a microservice start when restoring from a checkpoint compared to a regular cold start? The tests have been run on a MacBook Pro M1 with 64 GB memory. Let's start with measuring startup times without using CRaC. 3.4.1. Startup Times Without CRaC To start the microservices without CRaC, we will use the default Docker Compose file. So, we must ensure that the COMPOSE_FILE environment variable is unset before we build the Docker images for the microservices. After that, we can start the database services, MongoDB and MySQL: Shell unset COMPOSE_FILE docker compose build docker compose up -d mongodb mysql Verify that the databases are reporting healthy with the command: docker compose ps. Repeat the command until both report they are healthy. Expect a response like this: Shell NAME ... STATUS ... chapter06-mongodb-1 ... Up 13 seconds (healthy) ... chapter06-mysql-1 ... Up 13 seconds (healthy) ... Next, start the microservices and look in the logs for the startup time (searching for the word Started). Repeat the logs command until logs are shown for all four microservices: Shell docker compose up -d docker compose logs | grep Started Look for a response like: Shell ...Started ProductCompositeServiceApplication in 1.659 seconds ...Started ProductServiceApplication in 2.219 seconds ...Started RecommendationServiceApplication in 2.203 seconds ...Started ReviewServiceApplication in 3.476 seconds Finally, bring down the system landscape: Shell docker compose down 3.4.2. Startup Times With CRaC First, declare that we will use the CRaC-specific Docker Compose file and start the database services, MongoDB and MySQL: Shell export COMPOSE_FILE=crac/docker-compose-crac.yml docker compose up -d mongodb mysql Verify that the databases are reporting healthy with the command: docker compose ps. Repeat the command until both report they are healthy. Expect a response like this: Shell NAME ... STATUS ... crac-mongodb-1 ... Up 10 seconds (healthy) ... crac-mysql-1 ... Up 10 seconds (healthy) ... Next, start the microservices and look in the logs for the startup time (this time searching for the word Restored). Repeat the logs command until logs are shown for all four microservices: Shell docker compose up -d docker compose logs | grep Restored Look for a response like: Shell ...Restored ProductCompositeServiceApplication in 0.131 seconds ...Restored ProductServiceApplication in 0.225 seconds ...Restored RecommendationServiceApplication in 0.236 seconds ...Restored ReviewServiceApplication in 0.154 seconds Finally, bring down the system landscape: Shell docker compose down unset COMPOSE_FILE Now, we can compare the startup times! 3.4.3. Comparing Startup Times Between JVM and CRaC Here is a summary of the startup times, along with how many times faster the CRaC-enabled microservice starts and the percentage reduction in startup time:

MICROSERVICE        WITHOUT CRAC (S)   WITH CRAC (S)   CRAC TIMES FASTER   CRAC REDUCED STARTUP TIME
product-composite   1.659              0.131           12.7                92%
product             2.219              0.225           9.9                 90%
recommendation      2.203              0.236           9.3                 89%
review              3.476              0.154           22.6                96%

Generally, we can see a 10-fold performance improvement in startup times, or a 90% shorter startup time; that's a lot! Note: The improvement in the Review microservice is even better since it no longer handles the creation of database tables.
However, that part of the improvement is not attributable to CRaC itself, so let's leave the Review microservice's figures out when assessing the gains from CRaC. 4. Summary Coordinated Restore at Checkpoint (CRaC) is a powerful feature in OpenJDK that improves the startup performance of Java applications by allowing them to resume from a previously saved state, a.k.a. a checkpoint. With Spring Boot 3.2, we also get a simplified way of creating a checkpoint using CRaC, known as automatic checkpoint/restore at startup. The tests in this blog post indicate a 10-fold improvement in startup performance, i.e., a 90% reduction in startup time, when using automatic checkpoint/restore at startup. The blog post also explained how Docker images using CRaC can be built using a Dockerfile instead of the complex bash scripts suggested by most blog posts on the subject. This, however, comes with some challenges of its own, like using custom Docker builders for privileged builds, as explained in the blog post. Using Docker images created with automatic checkpoint/restore at startup comes at a price. The Docker images will contain runtime-specific and sensitive information, such as credentials to connect to a database at runtime. Therefore, they must be protected from unauthorized use. The Spring Boot support for CRaC does not yet fully cover all modules in Spring's ecosystem, forcing some workarounds to be applied, e.g., when using Spring Data JPA. Also, when using automatic checkpoint/restore at startup, the JVM HotSpot engine cannot be warmed up before the checkpoint. If optimal execution time for the first requests being processed is important, automatic checkpoint/restore at startup is probably not the way to go. 5. Next Blog Post In the next blog post, I will show you how to use regular on-demand checkpoints to solve some of the considerations with automatic checkpoint/restore at startup. Specifically, I will cover the problems with specifying the runtime configuration at build time, storing sensitive runtime configuration in the Docker images, and how the Java VM can be warmed up before performing the checkpoint.
While debugging in an IDE or using simple command line tools is relatively straightforward, the real challenge lies in production debugging. Modern production environments have enabled sophisticated self-healing deployments, yet they have also made troubleshooting more complex. Kubernetes (aka k8s) is probably the most well-known orchestration production environment. To effectively teach debugging in Kubernetes, it's essential to first introduce its fundamental principles. This part of the debugging series is designed for developers looking to effectively tackle application issues within Kubernetes environments, without delving deeply into the complex DevOps aspects typically associated with its operations. Kubernetes is a big subject: it took me two videos just to explain the basic concepts and background. Introduction to Kubernetes and Distributed Systems Kubernetes, while often discussed in the context of cloud computing and large-scale operations, is not just a tool for managing containers. Its principles apply broadly to all large-scale distributed systems. In this post I want to explore Kubernetes from the ground up, emphasizing its role in solving real-world problems faced by developers in production environments. The Evolution of Deployment Technologies Before Kubernetes, the deployment landscape was markedly different. Understanding this evolution helps us appreciate the challenges Kubernetes aims to solve. The image below represents the road to Kubernetes and the technologies we passed along the way. In the image, we can see that initially, applications were deployed directly onto physical servers. This process was manual, error-prone, and difficult to replicate across multiple environments. For instance, if a company needed to scale its application, it involved procuring new hardware, installing operating systems, and configuring the application from scratch. This could take weeks or even months, leading to significant downtime and operational inefficiencies. Imagine a retail company preparing for the holiday season surge. Each time they needed to handle increased traffic, they would manually set up additional servers. This was not only time-consuming but also prone to human error. Scaling down after the peak period was equally cumbersome, leading to wasted resources. Enter Virtualization Virtualization technology introduced a layer that emulated the hardware, allowing for easier replication and migration of environments but at the cost of performance. However, fast virtualization enabled the cloud revolution. It lets companies like Amazon lease their servers at scale without compromising their own workloads. Virtualization involves running multiple operating systems on a single physical hardware host. Each virtual machine (VM) includes a full copy of an operating system, the application, necessary binaries, and libraries—taking up tens of GBs. VMs are managed via a hypervisor, such as VMware's ESXi or Microsoft's Hyper-V, which sits between the hardware and the operating system and is responsible for distributing hardware resources among the VMs. This layer adds additional overhead and can lead to decreased performance due to the need to emulate hardware. Note that virtualization is often referred to as "virtual machines," but I chose to avoid that terminology due to the focus of this blog on Java and the JVM where a virtual machine is typically a reference to the Java Virtual Machine (JVM). 
Rise of Containers Containers emerged as a lightweight alternative to full virtualization. Tools like Docker standardized container formats, making it easier to create and manage containers without the overhead associated with traditional virtual machines. Containers encapsulate an application’s runtime environment, making them portable and efficient. Unlike virtualization, containerization encapsulates an application in a container with its own operating environment, but it shares the host system’s kernel with other containers. Containers are thus much more lightweight, as they do not require a full OS instance; instead, they include only the application and its dependencies, such as libraries and binaries. This setup reduces the size of each container and improves boot times and performance by removing the hypervisor layer. Containers operate using several key Linux kernel features: Namespaces: Containers use namespaces to provide isolation for global system resources between independent containers. This includes aspects of the system like process IDs, networking interfaces, and file system mounts. Each container has its own isolated namespace, which gives it a private view of the operating system with access only to its resources. Control groups (cgroups): Cgroups further enhance the functionality of containers by limiting and prioritizing the hardware resources a container can use. This includes parameters such as CPU time, system memory, network bandwidth, or combinations of these resources. By controlling resource allocation, cgroups ensure that containers do not interfere with each other’s performance and maintain the efficiency of the underlying server. Union file systems: Containers use union file systems, such as OverlayFS, to layer files and directories in a lightweight and efficient manner. This system allows containers to appear as though they are running on their own operating system and file system, while they are actually sharing the host system’s kernel and base OS image. Rise of Orchestration As containers began to replace virtualization due to their efficiency and speed, developers and organizations rapidly adopted them for a wide range of applications. However, this surge in container usage brought with it a new set of challenges, primarily related to managing large numbers of containers at scale. While containers are incredibly efficient and portable, they introduce complexities when used extensively, particularly in large-scale, dynamic environments: Management overhead: Manually managing hundreds or even thousands of containers quickly becomes unfeasible. This includes deployment, networking, scaling, and ensuring availability and security. Resource allocation: Containers must be efficiently scheduled and managed to optimally use physical resources, avoiding underutilization or overloading of host machines. Service discovery and load balancing: As the number of containers grows, keeping track of which container offers which service and how to balance the load between them becomes critical. Updates and rollbacks: Implementing rolling updates, managing version control, and handling rollbacks in a containerized environment require robust automation tools. To address these challenges, the concept of container orchestration was developed. Orchestration automates the scheduling, deployment, scaling, networking, and lifecycle management of containers, which are often organized into microservices. 
Efficient orchestration tools help ensure that the entire container ecosystem is healthy and that applications are running as expected. Enter Kubernetes Among the orchestration tools, Kubernetes emerged as a frontrunner due to its robust capabilities, flexibility, and strong community support. Kubernetes offers several features that address the core challenges of managing containers: Automated scheduling: Kubernetes intelligently schedules containers on the cluster’s nodes, taking into account the resource requirements and other constraints, optimizing for efficiency and fault tolerance. Self-healing capabilities: It automatically replaces or restarts containers that fail, ensuring high availability of services. Horizontal scaling: Kubernetes can automatically scale applications up and down based on demand, which is essential for handling varying loads efficiently. Service discovery and load balancing: Kubernetes can expose a container using the DNS name or using its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable. Automated rollouts and rollbacks: Kubernetes allows you to describe the desired state for your deployed containers using declarative configuration, and can change the actual state to the desired state at a controlled rate, such as to roll out a new version of an application. Why Kubernetes Stands Out Kubernetes not only solves practical, operational problems associated with running containers but also integrates with the broader technology ecosystem, supporting continuous integration and continuous deployment (CI/CD) practices. It is backed by the Cloud Native Computing Foundation (CNCF), ensuring it remains cutting-edge and community-focused. There used to be a site called "doyouneedkubernetes.com," and when you visited that site, it said, "No." Most of us don't need Kubernetes and it is often a symptom of Resume Driven Design (RDD). However, even when we don't need its scaling capabilities the advantages of its standardization are tremendous. Kubernetes became the de-facto standard and created a cottage industry of tools around it. Features such as observability and security can be plugged in easily. Cloud migration becomes arguably easier. Kubernetes is now the "lingua franca" of production environments. Kubernetes For Developers Understanding Kubernetes architecture is crucial for debugging and troubleshooting. The following image shows the high-level view of a Kubernetes deployment. There are far more details in most tutorials geared towards DevOps engineers, but for a developer, the point that matters is just "Your Code" - that tiny corner at the edge. In the image above we can see: Master node (represented by the blue Kubernetes logo on the left): The control plane of Kubernetes, responsible for managing the state of the cluster, scheduling applications, and handling replication Worker nodes: These nodes contain the pods that run the containerized applications. Each worker node is managed by the master. Pods: The smallest deployable units created and managed by Kubernetes, usually containing one or more containers that need to work together These components work together to ensure that an application runs smoothly and efficiently across the cluster. Kubernetes Basics In Practice Up until now, this post has been theory-heavy. Let's now review some commands we can use to work with a Kubernetes cluster. 
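Before running those commands, it helps to see what the smallest deployable unit, the pod, looks like on paper. The manifest below is a minimal, illustrative single-container pod; the names and image are placeholders rather than something taken from a real deployment, and the resource limits show where the cgroup mechanism described earlier comes into play.

YAML
apiVersion: v1
kind: Pod
metadata:
  name: my-first-pod                  # illustrative name
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: nginx:1.27               # placeholder image
      ports:
        - containerPort: 80
      resources:
        limits:                       # enforced via Linux cgroups on the worker node
          cpu: "500m"
          memory: "256Mi"

Applying it with kubectl apply -f my-first-pod.yaml asks the control plane to schedule the pod onto a worker node, after which it shows up in the pod listings produced by the commands that follow.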
First, we would want to list the pods we have within the cluster which we can do using the get pods command as such: $ kubectl get pods NAME READY STATUS RESTARTS AGE my-first-pod-id-xxxx 1/1 Running 0 13s my-second-pod-id-xxxx 1/1 Running 0 13s A command such as kubectl describe pod returns a high-level description of the pod such as its name, parent node, etc. Many problems in production pods can be solved by looking at the system log. This can be accomplished by invoking the logs command: $ kubectl logs -f <pod> [2022-11-29 04:12:17,262] INFO log data ... Most typical large-scale application logs are ingested by tools such as Elastic, Loki, etc. As such, the logs command isn't as useful in production except for debugging edge cases. Final Word This introduction to Kubernetes has set the stage for deeper exploration into specific debugging and troubleshooting techniques, which we will cover in the upcoming posts. The complexity of Kubernetes makes it much harder to debug, but there are facilities in place to work around some of that complexity. While this article (and its follow-ups) focus on Kubernetes, future posts will delve into observability and related tools, which are crucial for effective debugging in production environments.
Over the years, Docker containers have completely changed how developers create, share, and run applications. With their flexible design, Docker containers provide a consistent environment across many different platforms, simplifying the process of deploying applications reliably. By integrating Docker with .NET, developers can harness Docker's capabilities to streamline the development and deployment phases of .NET applications. This article delves into the advantages of using Docker containers with .NET applications and offers a guide on getting started. Figure courtesy of Docker Why Choose Docker for .NET? 1. Consistent Development Environment Docker containers encapsulate all dependencies and configurations for running an application, guaranteeing consistency across development, testing, and production environments. By leveraging Docker, developers can avoid the typical "it works on my machine" issue, as they can create environments that behave identically across development teams and machines. 2. Simplified Dependency Management Docker eliminates the need to manually install and manage dependencies on developer machines. By specifying dependencies in a Dockerfile, developers can bundle their .NET applications with the required libraries and dependencies, reducing setup time and minimizing compatibility issues. 3. Scalability and Resource Efficiency Thanks to its lightweight containerization technology, Docker is well suited for scaling .NET applications horizontally or vertically. Developers can easily run additional instances of their applications using Docker Swarm or Kubernetes, which helps optimize resource usage and enhance application performance. 4. Simplified Deployment Process Docker simplifies the deployment of .NET applications. Developers can package their applications into Docker images, which can then be deployed to any Docker-compatible environment, including local servers, cloud platforms like AWS or Azure, and even IoT devices. This not only streamlines the deployment process but also accelerates the release cycle of .NET applications. Starting With Docker and .NET Step 1: Installing Docker Installing Docker is easy: download and install Docker Desktop, which is available for Windows, Mac, and Linux. I have downloaded and installed it for Windows. Once installed, the Docker (whale) icon is shown in the system tray, as shown below. When you click the icon, the Docker Desktop dashboard opens, as shown below. It lists containers, images, volumes, builds, and extensions. The figure below shows the list of containers I have created on my local machine. Step 2: Creating a .NET Application Create a .NET application using the tool of your choice, such as Visual Studio, Visual Studio Code, or the .NET CLI. For example, you can use the following command directly from the command line. PowerShell dotnet new web -n MinimalApiDemo Step 3: Setting Up Your Application With a Dockerfile Create a Dockerfile in the root folder of your .NET project to specify the Docker image for your application. Below is an example of a Dockerfile for the ASP.NET Core application created in the previous step. Dockerfile # Use the official ASP.NET Core runtime as a base image FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS base WORKDIR /app EXPOSE 8080 # Use the official SDK image to build the application FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build WORKDIR /src COPY ["MinimalApiDemo.csproj", "./"] RUN dotnet restore "MinimalApiDemo.csproj" COPY . .
WORKDIR "/src/" RUN dotnet build "MinimalApiDemo.csproj" -c Release -o /app/build # Publish the application FROM build AS publish RUN dotnet publish "MinimalApiDemo.csproj" -c Release -o /app/publish # Final image with only the published application FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "MinimalApiDemo.dll"] Step 4: Creating and Launching Your Docker Image Create a Docker image by executing the command from a terminal window (use lowercase letters). PowerShell docker build -t minimalapidemo . After finishing the construction process you are ready to start up your Docker image by running it inside a container. Run the below docker command to spin up a new container. PowerShell docker run -d -p 8080:8080 --name myminimalapidemo minimalapidemo Your API service is currently running within a Docker container and can be reached at this localhost as shown below. Refer to my previous article to see how I created products controllers using Minimal API's with different HTTP endpoints. Here Are Some Recommended Strategies for Dockerizing .NET Applications 1. Reduce Image Size Enhance the efficiency of your Docker images by utilizing stage builds eliminating dependencies and minimizing layers in your Docker file. 2. Utilize .dockerignore File Generate a .dockerignore file to exclude files and directories from being transferred into the Docker image thereby decreasing image size and enhancing build speed. 3. Ensure Container Security Adhere to security practices during the creation and operation of Docker containers including updating base images conducting vulnerability scans and restricting container privileges. 4. Employ Docker Compose for Multi Container Applications For applications with services or dependencies, leverage Docker Compose to define and manage multi-container applications simplifying both development and deployment processes. 5. Monitor and Troubleshoot Containers Monitor the performance and health of your Docker containers using Docker’s own monitoring tools or third-party solutions. Make use of tools such as Docker logs and debugging utilities to promptly resolve issues and boost the efficiency of your containers. Conclusion Docker containers offer an efficient platform for the development, packaging, and deployment of .NET applications. By containerizing these applications, developers can create development environments, simplify dependency management, and streamline deployment processes. Whether the focus is on microservices, web apps, or APIs, Docker provides a proficient method to operate .NET applications across various environments. By adhering to best practices and maximizing Docker’s capabilities, developers can fully leverage the benefits of containerization, thereby accelerating the process of constructing and deploying .NET applications
Editor's Note: The following is an article written for and published in DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC. Simplicity is a key selling point of cloud technology. Rather than worrying about racking and stacking equipment, configuring networks, and installing operating systems, developers can just click through a friendly web interface and quickly deploy an application. Of course, that friendly web interface hides serious complexity, and deploying an application is just the first and easiest step toward a performant and reliable system. Once an application grows beyond a single deployment, issues begin to creep in. New versions require database schema changes or added components, and multiple team members can change configurations. The application must also be scaled to serve more users, provide redundancy to ensure reliability, and manage backups to protect data. While it might be possible to manage this complexity using that friendly web interface, we need automated cloud orchestration to deliver consistently at speed. There are many choices for cloud orchestration, so which one is best for a particular application? Let's use a case study to consider two key decisions in the trade space: The number of different technologies we must learn and manage Our ability to migrate to a different cloud environment with minimal changes to the automation However, before we look at the case study, let's start by understanding some must-have features of any cloud automation. Cloud Orchestration Must-Haves Our goal with cloud orchestration automation is to manage the complexity of deploying and operating a cloud-native application. We want to be confident that we understand how our application is configured, that we can quickly restore an application after outages, and that we can manage changes over time with confidence in bug fixes and new capabilities while avoiding unscheduled downtime. Repeatability and Idempotence Cloud-native applications use many cloud resources, each with different configuration options. Problems with infrastructure or applications can leave resources in an unknown state. Even worse, our automation might fail due to network or configuration issues. We need to run our automation confidently, even when cloud resources are in an unknown state. This key property is called idempotence, which simplifies our workflow as we can run the automation no matter the current system state and be confident that successful completion places the system in the desired state. Idempotence is typically accomplished by having the automation check the current state of each resource, including its configuration parameters, and applying only necessary changes. This kind of smart resource application demands dedicated orchestration technology rather than simple scripting. Change Tracking and Control Automation needs to change over time as we respond to changes in application design or scaling needs. As needs change, we must manage automation changes as dueling versions will defeat the purpose of idempotence. This means we need Infrastructure as Code (IaC), where cloud orchestration automation is managed identically to other developed software, including change tracking and version management, typically in a Git repository such as this example. Change tracking helps us identify the source of issues sooner by knowing what changes have been made. 
For this reason, we should modify our cloud environments only by automation, never manually, so we can know that the repository matches the system state — and so we can ensure changes are reviewed, understood, and tested prior to deployment. Multiple Environment Support To test automation prior to production deployment, we need our tooling to support multiple environments. Ideally, we can support rapid creation and destruction of dynamic test environments because this increases confidence that there are no lingering required manual configurations and enables us to test our automation by using it. Even better, dynamic environments allow us to easily test changes to the deployed application, creating unique environments for developers, complex changes, or staging purposes prior to production. Cloud automation accomplishes multi-environment support through variables or parameters passed from a configuration file, environment variables, or on the command line. Managed Rollout Together, idempotent orchestration, a Git repository, and rapid deployment of dynamic environments bring the concept of dynamic environments to production, enabling managed rollouts for new application versions. There are multiple managed rollout techniques, including blue-green deployments and canary deployments. What they have in common is that a rollout consists of separately deploying the new version, transitioning users over to the new version either at once or incrementally, then removing the old version. Managed rollouts can eliminate application downtime when moving to new versions, and they enable rapid detection of problems coupled with automated fallback to a known working version. However, a managed rollout is complicated to implement as not all cloud resources support it natively, and changes to application architecture and design are typically required. Case Study: Implementing Cloud Automation Let's explore the key features of cloud automation in the context of a simple application. We'll deploy the same application using both a cloud-agnostic approach and a single-cloud approach to illustrate how both solutions provide the necessary features of cloud automation, but with differences in implementation and various advantages and disadvantages. Our simple application is based on Node, backed by a PostgreSQL database, and provides an interface to create, retrieve, update, and delete a list of to-do items. The full deployment solutions can be seen in this repository. Before we look at differences between the two deployments, it's worth considering what they have in common: Use a Git repository for change control of the IaC configuration Are designed for idempotent execution, so both have a simple "run the automation" workflow Allow for configuration parameters (e.g., cloud region data, unique names) that can be used to adapt the same automation to multiple environments Cloud-Agnostic Solution Our first deployment, as illustrated in Figure 1, uses Terraform (or OpenTofu) to deploy a Kubernetes cluster into a cloud environment. Terraform then deploys a Helm chart, with both the application and PostgreSQL database. Figure 1. Cloud-agnostic deployment automation The primary advantage of this approach, as seen in the figure, is that the same deployment architecture is used to deploy to both Amazon Web Services (AWS) and Microsoft Azure. The container images and Helm chart are identical in both cases, and the Terraform workflow and syntax are also identical. 
Additionally, we can test container images, Kubernetes deployments, and Helm charts separately from the Terraform configuration that creates the Kubernetes environment, making it easy to reuse much of this automation to test changes to our application. Finally, with Terraform and Kubernetes, we're working at a high level of abstraction, so our automation code is short but can still take advantage of the reliability and scalability capabilities built into Kubernetes. For example, an entire Azure Kubernetes Service (AKS) cluster is created in about 50 lines of Terraform configuration via the azurerm_kubernetes_cluster resource:

HCL

resource "azurerm_kubernetes_cluster" "k8s" {
  location = azurerm_resource_group.rg.location
  name     = random_pet.azurerm_kubernetes_cluster_name.id
  ...
  default_node_pool {
    name       = "agentpool"
    vm_size    = "Standard_D2_v2"
    node_count = var.node_count
  }
  ...
  network_profile {
    network_plugin    = "kubenet"
    load_balancer_sku = "standard"
  }
}

Even better, the Helm chart deployment is just five lines and is identical for AWS and Azure:

HCL

resource "helm_release" "todo" {
  name       = "todo"
  repository = "https://book-of-kubernetes.github.io/helm/"
  chart      = "todo"
}

However, a cloud-agnostic approach brings additional complexity. First, we must create and maintain configuration using multiple tools, requiring us to understand Terraform syntax, Kubernetes manifest YAML files, and Helm templates. Also, while the overall Terraform workflow is the same, the cloud provider configuration is different due to differences in Kubernetes cluster configuration and authentication. This means that adding a third cloud provider would require significant effort. Finally, if we wanted to use additional features such as cloud-native databases, we'd first need to understand the key configuration details of that cloud provider's database, then understand how to apply that configuration using Terraform. This means that we pay an additional price in complexity for each native cloud capability we use.

Single Cloud Solution

Our second deployment, illustrated in Figure 2, uses AWS CloudFormation to deploy an Elastic Compute Cloud (EC2) virtual machine and a Relational Database Service (RDS) cluster:

Figure 2. Single cloud deployment automation

The biggest advantage of this approach is that we create a complete application deployment solution entirely in CloudFormation's YAML syntax. By using CloudFormation, we are working directly with AWS cloud resources, so there's a clear correspondence between resources in the AWS web console and our automation. As a result, we can take advantage of the specific cloud resources that are best suited for our application, such as RDS for our PostgreSQL database. This use of the best resources for our application can help us manage our application's scalability and reliability needs while also managing our cloud spend. The tradeoff in exchange for this simplicity and clarity is a more verbose configuration. We're working at the level of specific cloud resources, so we have to specify each resource, including items such as routing tables and subnets that Terraform configures automatically.
The resulting CloudFormation YAML is 275 lines and includes low-level details such as egress routing from our VPC to the internet:

YAML

TodoInternetRoute:
  Type: AWS::EC2::Route
  Properties:
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref TodoInternetGateway
    RouteTableId: !Ref TodoRouteTable

Also, of course, the resources and configuration are AWS-specific, so if we wanted to adapt this automation to a different cloud environment, we would need to rewrite it from the ground up. Finally, while we can easily adapt this automation to create multiple deployments on AWS, it is not as flexible for testing changes to the application, as we have to deploy a full RDS cluster for each new instance.

Conclusion

Our case study enabled us to examine key features and tradeoffs for cloud orchestration automation. There are many more than just these two options, but whatever solution is chosen should use an IaC repository for change control and a tool that provides idempotence and support for multiple environments. Within that cloud orchestration space, our deployment architecture and our tool selection will be driven by the importance of portability to new cloud environments compared to the cost in additional complexity.

This is an excerpt from DZone's 2024 Trend Report, Cloud Native: Championing Cloud Development Across the SDLC.
Are you ready to get started with cloud-native observability with telemetry pipelines? This article is part of a series exploring a workshop guiding you through the open source project Fluent Bit, what it is, a basic installation, and setting up the first telemetry pipeline project. Learn how to manage your cloud-native data from source to destination using the telemetry pipeline phases covering collection, aggregation, transformation, and forwarding from any source to any destination.

In the previous article in this series, we explored what backpressure is, how it manifests in telemetry pipelines, and took the first steps to mitigate it with Fluent Bit. In this article, we look at how to enable Fluent Bit features that help avoid the telemetry data loss we saw in the previous article. You can find more details in the accompanying workshop lab.

Before we get started, it's important to review the phases of a telemetry pipeline. In the diagram below, we see them laid out again. Each incoming event goes from input to parser to filter to buffer to routing before it is sent to its final output destination(s). For clarity in this article, we'll split up the configuration into files that are imported into a main Fluent Bit configuration file we'll name workshop-fb.conf.

Tackling Data Loss

Previously, we explored how input plugins can hit their ingestion limits when our telemetry pipelines scale beyond the memory limits of the default in-memory buffering of our events. We also saw that we can limit the size of our input plugin buffers to prevent our pipeline from failing on out-of-memory errors, but that the pausing of the ingestion can also lead to data loss if the clearing of the input buffers takes too long. To rectify this problem, we'll explore another buffering solution that Fluent Bit offers, ensuring data and memory safety at scale by configuring filesystem buffering.

To that end, let's explore how the Fluent Bit engine processes data that input plugins emit. When an input plugin emits events, the engine groups them into a Chunk. The chunk size is around 2MB. The default is for the engine to place this Chunk only in memory. We saw that limiting in-memory buffer size did not solve the problem, so we are looking at modifying this default behavior of only placing chunks into memory. This is done by changing the property storage.type from the default Memory to Filesystem. It's important to understand that memory and filesystem buffering mechanisms are not mutually exclusive. By enabling filesystem buffering for our input plugin, we automatically get both performance and data safety.

Filesystem Buffering Tips

When changing our buffering from memory to filesystem with the property storage.type filesystem, the settings for mem_buf_limit are ignored. Instead, we need to use the property storage.max_chunks_up for controlling the size of our memory buffer. Shockingly, when using the default settings, the property storage.pause_on_chunks_overlimit is set to off, causing the input plugins not to pause; instead, input plugins will switch to buffering only in the filesystem. We can control the amount of disk space used with storage.total_limit_size. If the property storage.pause_on_chunks_overlimit is set to on, then the buffering mechanism to the filesystem behaves just like our mem_buf_limit scenario demonstrated previously. A short illustrative sketch of how these properties fit together appears below.

Configuring Stressed Telemetry Pipeline

In this example, we are going to use the same stressed Fluent Bit pipeline to simulate a need for enabling filesystem buffering.
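Before we build that stressed pipeline, here is a compact, illustrative sketch of how the properties described in the tips above fit together in a configuration. The values shown are assumptions for illustration only and are not part of the workshop files; note that, per the Fluent Bit documentation, storage.total_limit_size is applied per output:

# Illustrative sketch only - not one of the workshop configuration files.
[SERVICE]
    # Location on disk where filesystem chunks are stored
    storage.path          /tmp/fluentbit-storage
    # Maximum number of chunks (roughly 2MB each) held up in memory
    storage.max_chunks_up 5

[INPUT]
    Name         dummy
    Tag          big.data
    # Buffer chunks on the filesystem instead of only in memory
    storage.type filesystem
    # Default behavior: do not pause the input; overflow goes to the filesystem
    storage.pause_on_chunks_overlimit off

[OUTPUT]
    Name   stdout
    Match  *
    # Cap the disk space used for chunks queued for this output (value is illustrative)
    storage.total_limit_size 50M

The workshop configuration we build next keeps things simpler, setting only storage.type on the input plus the storage properties in the SERVICE section.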
All examples are going to be shown using containers (Podman), and it's assumed you are familiar with container tooling such as Podman or Docker.

We begin the configuration of our telemetry pipeline in the INPUT phase with a simple dummy plugin generating a large number of entries to flood our pipeline, as follows, in our configuration file inputs.conf (note that the mem_buf_limit fix is commented out):

# This entry generates a large amount of success messages for the workshop.
[INPUT]
    Name   dummy
    Tag    big.data
    Copies 15000
    Dummy  {"message":"true 200 success", "big_data": "blah blah blah blah blah blah blah blah blah"}
    #Mem_Buf_Limit 2MB

Now ensure the output configuration file outputs.conf has the following configuration:

# This entry directs all tags (it matches any we encounter)
# to print to standard output, which is our console.
[OUTPUT]
    Name  stdout
    Match *

With our inputs and outputs configured, we can now bring them together in a single main configuration file. Using a file called workshop-fb.conf in our favorite editor, ensure the following configuration is created. For now, just import two files:

# Fluent Bit main configuration file.
#
# Imports section.
@INCLUDE inputs.conf
@INCLUDE outputs.conf

Let's now try testing our configuration by running it using a container image. The first thing that is needed is to ensure a file called Buildfile is created. This is going to be used to build a new container image and insert our configuration files. Note this file needs to be in the same directory as our configuration files; otherwise, adjust the file path names:

FROM cr.fluentbit.io/fluent/fluent-bit:3.0.4

COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
COPY ./inputs.conf /fluent-bit/etc/inputs.conf
COPY ./outputs.conf /fluent-bit/etc/outputs.conf

Now we'll build a new container image, naming it with a version tag, as follows using the Buildfile and assuming you are in the same directory:

$ podman build -t workshop-fb:v8 -f Buildfile

STEP 1/4: FROM cr.fluentbit.io/fluent/fluent-bit:3.0.4
STEP 2/4: COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
--> a379e7611210
STEP 3/4: COPY ./inputs.conf /fluent-bit/etc/inputs.conf
--> f39b10d3d6d0
STEP 4/4: COPY ./outputs.conf /fluent-bit/etc/outputs.conf
COMMIT workshop-fb:v8
--> e74b2f228729
Successfully tagged localhost/workshop-fb:v8
e74b2f22872958a79c0e056efce66a811c93f43da641a2efaa30cacceb94a195

If we run our pipeline in a container configured with constricted memory (in our case, around a 6.5MB limit), we'll see the pipeline run for a bit and then fail due to overloading (OOM):

$ podman run --memory 6.5MB --name fbv8 workshop-fb:v8

The console output shows that the pipeline ran for a bit; in our case, it reached event number 862 (shown below) before it hit the OOM limit of our container environment (6.5MB):

...
[860] big.data: [[1716551898.202389716, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah blah"}]
[861] big.data: [[1716551898.202389925, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah blah"}]
[862] big.data: [[1716551898.202390133, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah blah"}]
[863] big.data: [[1

<<<< CONTAINER KILLED WITH OOM HERE

We can confirm that the stressed telemetry pipeline actually failed with an OOM status, and that our backpressure scenario worked, by inspecting our container:

# Use the container name to inspect for the reason it failed
$ podman inspect fbv8 | grep OOM

"OOMKilled": true,

Having already tried in a previous lab to manage this with mem_buf_limit settings, we've seen that this is not the real fix either. To prevent data loss, we need to enable filesystem buffering so that when the memory buffer overflows, events are buffered in the filesystem until there is memory free to process them.

Using Filesystem Buffering

The configuration of our telemetry pipeline in the INPUT phase needs a slight adjustment: we add the property storage.type, set to filesystem, to enable it as shown. Note that mem_buf_limit has been removed:

# This entry generates a large amount of success messages for the workshop.
[INPUT]
    Name         dummy
    Tag          big.data
    Copies       15000
    Dummy        {"message":"true 200 success", "big_data": "blah blah blah blah blah blah blah blah blah"}
    storage.type filesystem

We can now bring it all together in the main configuration file. Using the file workshop-fb.conf in our favorite editor, update the file so that a SERVICE section is added with settings for managing the filesystem buffering:

# Fluent Bit main configuration file.

[SERVICE]
    flush                 1
    log_Level             info
    storage.path          /tmp/fluentbit-storage
    storage.sync          normal
    storage.checksum      off
    storage.max_chunks_up 5

# Imports section
@INCLUDE inputs.conf
@INCLUDE outputs.conf

A few words on the SERVICE section properties might be needed to explain their function:

- storage.path - Putting filesystem buffering in the tmp filesystem
- storage.sync - Using normal synchronization (with checksum processing turned off via storage.checksum)
- storage.max_chunks_up - Set to 5 chunks (~10MB), the amount of memory allowed for events

Now it's time for testing our configuration by running it using a container image. The first thing that is needed is to ensure a file called Buildfile is created. This is going to be used to build a new container image and insert our configuration files.
Note this file needs to be in the same directory as our configuration files; otherwise, adjust the file path names:

FROM cr.fluentbit.io/fluent/fluent-bit:3.0.4

COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
COPY ./inputs.conf /fluent-bit/etc/inputs.conf
COPY ./outputs.conf /fluent-bit/etc/outputs.conf

Now we'll build a new container image, naming it with a version tag, as follows using the Buildfile and assuming you are in the same directory:

$ podman build -t workshop-fb:v9 -f Buildfile

STEP 1/4: FROM cr.fluentbit.io/fluent/fluent-bit:3.0.4
STEP 2/4: COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
--> a379e7611210
STEP 3/4: COPY ./inputs.conf /fluent-bit/etc/inputs.conf
--> f39b10d3d6d0
STEP 4/4: COPY ./outputs.conf /fluent-bit/etc/outputs.conf
COMMIT workshop-fb:v9
--> e74b2f228729
Successfully tagged localhost/workshop-fb:v9
e74b2f22872958a79c0e056efce66a811c93f43da641a2efaa30cacceb94a195

If we run our pipeline in a container configured with constricted memory (a slightly larger value this time, due to the memory needed for mounting the filesystem; in our case, around a 9MB limit), we'll see the pipeline running without failure:

$ podman run -v ./:/tmp --memory 9MB --name fbv9 workshop-fb:v9

The console output shows that the pipeline runs until we stop it with CTRL-C, with events rolling by as shown below.

...
[14991] big.data: [[1716559655.213181639, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah"}]
[14992] big.data: [[1716559655.213182181, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah"}]
[14993] big.data: [[1716559655.213182681, {}], {"message"=>"true 200 success", "big_data"=>"blah blah blah blah blah blah blah"}]
...

We can now validate the filesystem buffering by looking at the filesystem storage. Check the filesystem from the directory where you started your container. While the pipeline is running with memory restrictions, it will be using the filesystem to store events until the memory is free to process them. If you view the contents of the buffer file before stopping your pipeline, you'll see a messy message format stored inside (cleaned up for you here):

$ ls -l ./fluentbit-storage/dummy.0/1-1716558042.211576161.flb

-rw------- 1 username groupname 1.4M May 24 15:40 1-1716558042.211576161.flb

$ cat fluentbit-storage/dummy.0/1-1716558042.211576161.flb

??wbig.data???fP??
?????message?true 200 success?big_data?'blah blah blah blah blah blah blah blah???fP??
?p???message?true 200 success?big_data?'blah blah blah blah blah blah blah blah???fP??
߲???message?true 200 success?big_data?'blah blah blah blah blah blah blah blah???fP??
?F???message?true 200 success?big_data?'blah blah blah blah blah blah blah blah???fP??
?d???message?true 200 success?big_data?'blah blah blah blah blah blah blah blah???fP??
...

Last Thoughts on Filesystem Buffering

This solution is the way to deal with backpressure and other issues that might flood your telemetry pipeline and cause it to crash. It's worth noting that using a filesystem to buffer the events also introduces the limits of the filesystem being used. It's important to understand that just as memory can run out, so too can the filesystem storage reach its limits. It's best to have a plan to address any possible filesystem challenges when using this solution, but this is outside the scope of this article.

This completes our use cases for this article. Be sure to explore this hands-on experience with the accompanying workshop lab.

What's Next?
This article walked us through how Fluent Bit filesystem buffering provides a data- and memory-safe solution to the problems of backpressure and data loss. Stay tuned for more hands-on material to help you with your cloud-native observability journey.
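If you would like to confirm the outcome and tidy up afterwards, the following optional commands are a small sketch that mirrors the OOM inspection used earlier; the container and image names are assumed to match the ones used above:

# Confirm the filesystem-buffered pipeline was stopped by CTRL-C rather than the OOM killer.
$ podman inspect fbv9 | grep OOM
# This should report "OOMKilled": false, unlike the earlier fbv8 run.

# Remove the workshop containers, images, and the local buffer directory.
$ podman rm -f fbv8 fbv9
$ podman rmi workshop-fb:v8 workshop-fb:v9
$ rm -rf ./fluentbit-storage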