Humio at Scale in Kubernetes
The folks at Humio found great benefit in running their software on Kubernetes. See what they discovered here.
As Kubernetes matures, features such as StatefulSets give the ecosystem new capabilities that allow users to deploy applications with very different requirements than stateless microservices. Since StatefulSets reached their stable release in 2018, many efforts have been made to deploy applications, such as Kafka and Postgres, that rely on pods having a one-to-one relationship with a persistent filesystem, usually in the form of a mounted block device. Combined with improvements to the provisioning capabilities within Kubernetes, it is easier than ever to deploy a stateful system such as Humio on the managed Kubernetes platforms offered by the leading cloud providers.
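To make that one-to-one relationship concrete, here is a minimal StatefulSet sketch. The names, image, mount path, and sizes are illustrative assumptions, not Humio's actual Helm chart; the key mechanism is volumeClaimTemplates, which creates a dedicated persistent volume claim per pod (humio-0, humio-1, ...) that follows the pod across rescheduling.

```yaml
# Minimal sketch of a StatefulSet with per-pod persistent storage.
# All names, images, and sizes here are illustrative, not Humio's chart.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: humio
spec:
  serviceName: humio              # headless service giving each pod stable DNS
  replicas: 3
  selector:
    matchLabels:
      app: humio
  template:
    metadata:
      labels:
        app: humio
    spec:
      containers:
        - name: humio
          image: humio/humio-core # assumed image name
          volumeMounts:
            - name: data
              mountPath: /data    # assumed data path
  volumeClaimTemplates:           # one PVC is created per pod and reattached on reschedule
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Ti
```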
As more practitioners look toward consolidating their microservices and stateful workloads, it was imperative that Humio understand the dynamics of running our software within Kubernetes at ingest rates at and above what our customers are doing today. I chose Google Kubernetes Engine (GKE) because of our recent positive experience with customer and proof-of-concept deployments.
In order to understand the performance at scale, I began by deploying a 40-worker Kubernetes cluster on preemptible n1-standard-32 instances. By making large requests for CPU and memory resources, this cluster runs each Humio and Kafka pod on what amounts to an isolated Kubernetes worker, which allowed us to judge the performance of individual Humio partitions without interference from other Humio or Kafka instances. Each Humio pod provisions a 1 TB pd-ssd volume, GCP's fast persistent disk option. In addition to the Kubernetes cluster, a separate cluster of 25 workers was deployed to run Humio's performance testing tool, further isolating the cluster's resources so the test harness would not impact the performance of the Humio cluster.
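A rough sketch of both pieces follows; the figures are my assumptions, not the exact test values. Sizing each container's requests near an n1-standard-32's capacity (32 vCPUs, 120 GB memory) means the scheduler can place only one such pod per node, and a StorageClass backs the 1 TB claims with pd-ssd.

```yaml
# Container-level fragment for the StatefulSet sketch above: requests sized
# so a single pod fills most of an n1-standard-32, leaving headroom for the
# kubelet and system daemons. Values are illustrative.
resources:
  requests:
    cpu: "28"
    memory: 100Gi
---
# StorageClass for the claims (add storageClassName: fast-ssd to the
# volumeClaimTemplate spec) so GCP provisions pd-ssd disks.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
```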
Both the Humio and test clusters were provisioned with Terraform, and the applications are managed using Humio's Helm chart. One interesting issue I confronted during the initial tests was hitting the upper (most likely default) limits of GCP's load balancers: using a global forwarding rule, I was only able to reach a maximum of 10 TB/day of ingest. This wasn't expected, so to continue testing at higher loads I configured our performance testing tool to connect to the individual Humio pods directly by giving the Humio StatefulSet a NodePort service.
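For reference, a NodePort service of the kind described might look like the sketch below. The port numbers are assumptions (Humio's HTTP interface typically listens on 8080); a NodePort service exposes the pods on every worker's address, letting the test harness bypass the cloud load balancer entirely.

```yaml
# Sketch of a NodePort service exposing Humio pods directly on each worker,
# bypassing the GCP load balancer. Port values are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: humio-nodeport
spec:
  type: NodePort
  selector:
    app: humio
  ports:
    - name: http
      port: 8080        # Humio's HTTP ingest port (assumed default)
      targetPort: 8080
      nodePort: 30080   # must fall in the cluster's NodePort range (default 30000-32767)
```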
The results exceeded my expectations: Humio is able to ingest in excess of 100 TB/day while maintaining an ingest latency of less than a second.
Ingest rates ranged from 2.5 to 5 million events per second, depending on event size. Search speeds range from 200 to 500 GB/s on a worst-case string search that scans billions of records. As I continue to test, I'm hoping to improve search performance in particular, as I've found the cgroup page-caching behavior to work quite differently than in our standard deployments.
This is just the beginning of Humio's exploration of running our clusters on Kubernetes, and there are many additional points of focus as we peel back the layers of performance. Similar results have been found running a 25-worker-node cluster on n1-standard-64 instances where Humio and Kafka share the same worker nodes. Testing will continue as we run the same tests on other cloud providers and different hardware configurations. After some additional improvements and testing on different providers, we will release the projects as reference architectures our users can use as the basis for their own deployments.