
Cloud-Native Benchmarking With Kubestone


This tool is meant to assist your development teams with getting performance metrics from your Kubernetes clusters.


Intro

Organizations are increasingly looking to containers and distributed applications to provide the agility and scalability needed to satisfy their clients. While doing so, modern enterprises also need the ability to benchmark their applications and track key performance metrics for their infrastructure.

In this post, I am introducing you to a cloud-native benchmarking tool called Kubestone. It is meant to assist your development teams with getting performance metrics from your Kubernetes clusters.

How Does Kubestone Work?

At its core, Kubestone is implemented as a Kubernetes Operator in Go with the help of Kubebuilder. You can find more info on the Operator Framework via this blog post.
Kubestone leverages open-source benchmarks to measure core Kubernetes and application performance. As benchmarks are executed in Kubernetes, they must be containerized to work on the cluster. A certified set of benchmark containers is provided via xridge's DockerHub space. Here is a list of currently supported benchmarks:

Type                    Benchmark Name   Status
Core/CPU                sysbench         Supported
Core/Disk               fio              Supported
Core/Disk               ioping           Supported
Core/Memory             sysbench         Supported
Core/Network            iperf3           Supported
Core/Network            qperf            Supported
HTTP Load Tester        drill            Supported
Application/Etcd        etcd             Planned
Application/K8S         kubeperf         Planned
Application/PostgreSQL  pgbench          Supported
Application/Spark       sparkbench       Planned


Let's try installing Kubestone and running a benchmark ourselves and see how it works.

Installing Kubestone

Requirements

To follow along, you will need kubectl and kustomize installed, plus access to a Kubernetes cluster where you are allowed to create namespaces and Custom Resource Definitions.

Deploy Kubestone to the kubestone-system namespace with the following command:

Shell

$ kustomize build github.com/xridge/kubestone/config/default | kubectl create -f -



Once deployed, Kubestone will listen for Custom Resources created in the perf.kubestone.xridge.io API group.

Benchmarking

Benchmarks can be executed via Kubestone by creating Custom Resources in your cluster.

Namespace

It is recommended to create a dedicated namespace for benchmarking.

Shell

$ kubectl create namespace kubestone



After the namespace is created, you can use it to post a benchmark request to the cluster.

The resulting benchmark executions will reside in this namespace.

Custom Resource Rendering

We will be using kustomize to render the Custom Resource from the github repository.

Kustomize takes a base yaml, and patches with an overlay file to render the final yaml file, which describes the benchmark.
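To make the base-plus-overlay idea concrete, here is a hypothetical minimal overlay; the file layout and values below are illustrative, not copied from the Kubestone repository. The kustomization.yaml references a base directory and a patch file that overrides a single field:

```yaml
# kustomization.yaml (illustrative)
bases:
- ../../base
patchesStrategicMerge:
- storage-patch.yaml
```

```yaml
# storage-patch.yaml (illustrative): override only the requested storage size
apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  volume:
    persistentVolumeClaimSpec:
      resources:
        requests:
          storage: 10Gi
```

Kustomize merges the patch over the base, so the rendered output is the base Fio resource with the new storage value substituted in.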

Shell

$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc



The rendered Custom Resource (YAML) looks as follows:

YAML

apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-sample
spec:
  cmdLineArgs: --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED



When we create this resource in Kubernetes, the operator interprets it and creates the associated benchmark. The fields of the Custom Resource control how the benchmark will be executed:

  • metadata.name: Identifies the Custom Resource. Later, this can be used to query or delete the benchmark in the cluster.
  • cmdLineArgs: Arguments passed to the benchmark. In this case we are providing the arguments to Fio (a filesystem benchmark), instructing it to execute a random write test with a 4 MB block size and an overall transfer size of 256 MB.
  • image.name: Describes the Docker image of the benchmark. In the case of Fio, we are using xridge's fio Docker image, which is built from this repository.
  • volume.persistentVolumeClaimSpec: Given that Fio is a disk benchmark, we can define a PersistentVolumeClaim on which the benchmark will be executed. The above setup instructs Kubernetes to provision 1 Gi of space from the default StorageClass and use it for the benchmark.
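To see how these fields combine, here is a hypothetical variant that runs a sequential read test instead; it differs only in metadata.name and cmdLineArgs (the name and arguments below are illustrative, not one of the shipped samples):

```yaml
apiVersion: perf.kubestone.xridge.io/v1alpha1
kind: Fio
metadata:
  name: fio-seqread
spec:
  cmdLineArgs: --name=seqread --iodepth=1 --rw=read --bs=4m --size=256M
  image:
    name: xridge/fio:3.13
  volume:
    persistentVolumeClaimSpec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    volumeSource:
      persistentVolumeClaim:
        claimName: GENERATED
```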

Running the Benchmark

Now that we understand the benchmark definition, let's try to execute it.

Note: Make sure the Kubestone operator is installed and running before executing this step.

Shell

$ kustomize build github.com/xridge/kubestone/config/samples/fio/overlays/pvc | kubectl create --namespace kubestone -f -



Since we pipe the output of the kustomize build command into kubectl create, it will create the object in our Kubernetes cluster.

The resulting object can be queried using the object's type (fio) and its name (fio-sample):

Shell

$ kubectl describe --namespace kubestone fio fio-sample
Name:         fio-sample
Namespace:    kubestone
Labels:       <none>
Annotations:  <none>
API Version:  perf.kubestone.xridge.io/v1alpha1
Kind:         Fio
Metadata:
  Creation Timestamp:  2019-09-14T11:31:02Z
  Generation:          1
  Resource Version:    31488293
  Self Link:           /apis/perf.kubestone.xridge.io/v1alpha1/namespaces/kubestone/fios/fio-sample
  UID:                 21cdbe92-d6e3-11e9-ba70-4439c4920abc
Spec:
  Cmd Line Args:  --name=randwrite --iodepth=1 --rw=randwrite --bs=4m --size=256M
  Image:
    Name:  xridge/fio:3.13
  Volume:
    Persistent Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:  1Gi
    Volume Source:
      Persistent Volume Claim:
        Claim Name:  GENERATED
Status:
  Completed:  true
  Running:    false
Events:
  Type    Reason   Age   From       Message
  ----    ------   ----  ----       -------
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/configmaps/fio-sample
  Normal  Created  11s   kubestone  Created /api/v1/namespaces/kubestone/persistentvolumeclaims/fio-sample
  Normal  Created  11s   kubestone  Created /apis/batch/v1/namespaces/kubestone/jobs/fio-sample



As the Events section shows, Kubestone has created a ConfigMap, a PersistentVolumeClaim, and a Job for the provided Custom Resource. The Status field tells us that the benchmark has completed.

Inspecting the Benchmark

The created objects related to the benchmark can be listed with the kubectl command:

Shell

$ kubectl get pods,jobs,configmaps,pvc --namespace kubestone
NAME                   READY   STATUS      RESTARTS   AGE
pod/fio-sample-bqqmm   0/1     Completed   0          54s

NAME                   COMPLETIONS   DURATION   AGE
job.batch/fio-sample   1/1           15s        54s

NAME                   DATA   AGE
configmap/fio-sample   0      54s

NAME                               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/fio-sample   Bound    pvc-b3898236-c698-11e9-8071-4439c4920abc   1Gi        RWO            rook-ceph-block   54s



As shown above, the Fio controller has created a PersistentVolumeClaim and a ConfigMap, which are used by the Fio Job during benchmark execution. The Fio Job has an associated Pod which contains our test execution. The results of the run can be shown with the kubectl logs command:

Shell

$ kubectl logs --namespace kubestone fio-sample-bqqmm
randwrite: (g=0): rw=randwrite, bs=(R) 4096KiB-4096KiB, (W) 4096KiB-4096KiB, (T) 4096KiB-4096KiB, ioengine=psync, iodepth=1
fio-3.13
Starting 1 process
randwrite: Laying out IO file (1 file / 256MiB)

randwrite: (groupid=0, jobs=1): err= 0: pid=47: Sat Aug 24 17:58:10 2019
  write: IOPS=470, BW=1882MiB/s (1974MB/s)(256MiB/136msec); 0 zone resets
    clat (usec): min=1887, max=2595, avg=2042.76, stdev=136.56
     lat (usec): min=1953, max=2688, avg=2107.35, stdev=142.94
    clat percentiles (usec):
     |  1.00th=[ 1893],  5.00th=[ 1926], 10.00th=[ 1926], 20.00th=[ 1958],
     | 30.00th=[ 1991], 40.00th=[ 2008], 50.00th=[ 2024], 60.00th=[ 2040],
     | 70.00th=[ 2057], 80.00th=[ 2073], 90.00th=[ 2114], 95.00th=[ 2409],
     | 99.00th=[ 2606], 99.50th=[ 2606], 99.90th=[ 2606], 99.95th=[ 2606],
     | 99.99th=[ 2606]
   lat (msec)   : 2=34.38%, 4=65.62%
  cpu          : usr=2.22%, sys=97.78%, ctx=1, majf=0, minf=9
  IO depths    : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     issued rwts: total=0,64,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=1

Run status group 0 (all jobs):
  WRITE: bw=1882MiB/s (1974MB/s), 1882MiB/s-1882MiB/s (1974MB/s-1974MB/s), io=256MiB (268MB), run=136-136msec

Disk stats (read/write):
  rbd7: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
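As a quick sanity check of the summary line, the reported bandwidth is simply the transferred size divided by the runtime: 256 MiB written in 136 ms works out to roughly 1882 MiB/s, matching fio's BW figure. In shell arithmetic:

```shell
# Bandwidth = bytes transferred / elapsed time.
# fio wrote 256 MiB in 136 ms; result in MiB/s (integer arithmetic):
echo $((256 * 1000 / 136))   # prints 1882
```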



Listing Benchmarks

We have learned that Kubestone uses Custom Resources to define benchmarks. We can list the installed Custom Resource Definitions using the kubectl get crds command:

Shell

$ kubectl get crds | grep kubestone
drills.perf.kubestone.xridge.io         2019-09-08T05:51:26Z
fios.perf.kubestone.xridge.io           2019-09-08T05:51:26Z
iopings.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
iperf3s.perf.kubestone.xridge.io        2019-09-08T05:51:26Z
pgbenches.perf.kubestone.xridge.io      2019-09-08T05:51:26Z
sysbenches.perf.kubestone.xridge.io     2019-09-08T05:51:26Z



Using the CRD names above, we can list the executed benchmarks in the system.

Kubernetes provides a convenience feature for CRDs: you can use the shortened name of the CRD, which is the singular form at the front of the fully qualified CRD name. In our case, fios.perf.kubestone.xridge.io can be shortened to fio. Hence, we can list the executed fio benchmarks using the following command:

Shell

$ kubectl get --namespace kubestone fios.perf.kubestone.xridge.io
NAME         RUNNING   COMPLETED
fio-sample   false     true



Cleaning Up

After a successful benchmark run, the resulting objects remain stored in the Kubernetes cluster. Since a cluster can only hold a limited number of Pods, it is advisable to clean up old benchmark runs from time to time. This can be achieved by deleting the Custom Resource which initiated the benchmark:

Shell

$ kubectl delete --namespace kubestone fio fio-sample



Since the Custom Resource owns the created resources, the underlying Pods, Jobs, ConfigMaps, PVCs, etc. are also removed by this operation.
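This cascading cleanup works through standard Kubernetes owner references: each object the operator creates carries an ownerReferences entry pointing back at the Fio resource, so the garbage collector deletes the children when their owner disappears. The metadata on a child object looks roughly like this (an illustrative excerpt; the UID is the Fio object's UID from the describe output above):

```yaml
# Excerpt of a child object's metadata (e.g. the Job); shape illustrative
metadata:
  name: fio-sample
  ownerReferences:
  - apiVersion: perf.kubestone.xridge.io/v1alpha1
    kind: Fio
    name: fio-sample
    uid: 21cdbe92-d6e3-11e9-ba70-4439c4920abc
    controller: true
```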

Next Steps

Now that you are familiar with the key concepts of Kubestone, it is time to explore and benchmark. You can experiment with the Fio benchmark via its cmdLineArgs, Persistent Volume, and scheduling-related settings. You can find more information on Fio's benchmark page. Hopefully you gained some valuable knowledge from this post!

This article was originally posted on https://appfleet.com/.


Published at DZone with permission of Sudip Sengupta. See the original article here.

