Targeting Kubernetes Cluster With Gremlin Chaos Test

by Sudip Sengupta · Sep. 02, 2020 · Tutorial

Gremlin is a leading software company focused on chaos testing. Its tool is similar to Netflix's Chaos Monkey, but more customizable: you can test the system with random loads or scheduled shutdowns. In the article below, we will run a chaos test against a simple Kubernetes cluster running on EKS.

Why Is Chaos Testing Important?

Chaos Engineering is used to improve system resilience. Gremlin’s “Failure as a Service” helps you find weaknesses in your system before they cause problems for your users.

Overview

To successfully experience Chaos Engineering with Gremlin, we have two requirements: a running EKS cluster and two applications deployed to it. This tutorial walks through meeting those requirements and then creating a scenario to simulate an attack with Gremlin.

  • Step 1 - Prepare Cloud9 IDE
  • Step 2 - Create an EKS cluster using eksctl
  • Step 3 -  Deploy Kubernetes Dashboard
  • Step 4 - Install Gremlin using Helm
  • Step 5 - Deploy a Microservice Demo Application
  • Step 6 - Run a Shutdown Container Attack using Gremlin

Prerequisites

1. An AWS account
2. A Gremlin account, which you can register for here

Step 1 - Prepare Cloud9 IDE

First, let's create the Cloud9 environment. Log in to your AWS account and navigate to the Cloud9 service page. Click Get Started and enter any name; in this example, we have chosen chaos gremlin. Keep all the default settings, since the environment is only needed to reach EKS resources.

Creating environment in AWS


Wait a few moments for the new environment to build, then close all open terminals and open a new one.

Creating AWS Cloud9 environment

To start creating the cluster, first check whether the AWS CLI is installed with the command below:

Shell
Administrator:~/environment $ aws --version
aws-cli/1.17.5 Python/2.7.16 Linux/4.14.158-101.185.amzn1.x86_64 botocore/1.14.5
Administrator:~/environment $

Step 2 - Creating an EKS cluster using eksctl

We will use eksctl to create our EKS cluster.

Shell
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

sudo mv -v /tmp/eksctl /usr/local/bin


Making eksctl executable

Shell
Administrator:~/environment $ sudo chmod +x /usr/local/bin/eksctl
Administrator:~/environment $


Confirming whether the eksctl command works:

Shell
Administrator:~ $ eksctl version
[ℹ]  version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.13.0"}
Administrator:~ $


Below, we create a cluster named gremlin-eksctl with three EC2 nodes. Just a word of warning: EKS can cost a lot, so please do not forget to delete your resources after you are done with your failure test.

Shell
eksctl create cluster --name=gremlin-eksctl --nodes=3 --managed --alb-ingress-access --region=${AWS_REGION}


It might take around 15-30 minutes for the cluster to become ready, which you can monitor on the EKS service page.

Quick Tip: The EKS control plane costs $0.20/hour, and each m5.large EC2 instance that the cluster runs on costs $0.096/hour, so a full day ($0.20 × 24 ≈ $4.80 plus 3 × $0.096 × 24 ≈ $6.90) comes to an estimated total of around $11-12.

Managing clusters


Check whether the cluster is working and get its status. As expected, there is only one cluster created.

Shell
Administrator:~/environment $ eksctl get clusters
NAME            REGION
gremlin-eksctl  us-east-1
Administrator:~/environment $


Update the kubeconfig file by passing the cluster name and region to the AWS CLI:

Shell
Administrator:~/environment $ sudo aws eks --region us-east-1 update-kubeconfig --name gremlin-eksctl
Updated context arn:aws:eks:us-east-1:312867154612:cluster/gremlin-eksctl in /root/.kube/config
Administrator:~/environment $


On checking, we see three nodes in the cluster. All of them are worker nodes; the EKS control plane is managed by AWS and does not appear in the node list.

Shell
Administrator:~/environment $ kubectl get nodes
NAME                            STATUS   ROLES    AGE   VERSION
ip-192-168-14-59.ec2.internal   Ready    <none>   59m   v1.14.7-eks-1861c5
ip-192-168-33-12.ec2.internal   Ready    <none>   58m   v1.14.7-eks-1861c5
ip-192-168-49-55.ec2.internal   Ready    <none>   58m   v1.14.7-eks-1861c5

Administrator:~/environment $

Step 3 - Deploying Kubernetes Dashboard

Next, we will deploy the Kubernetes Dashboard to the cluster, along with Heapster and InfluxDB. These two tools supply the metrics that allow our sample application to show up in the Dashboard. We will start by deploying the Kubernetes Dashboard itself.

Heapster

Heapster is a performance monitoring and metrics collection system compatible with Kubernetes (versions 1.0.6 and above). It collects not only performance metrics about your workloads, pods, and containers, but also events and other signals generated by your cluster. The great thing about Heapster is that it is fully open source as part of the Kubernetes project, and it supports a multitude of backends for persisting the data, including, but not limited to, InfluxDB, Elasticsearch, and Graphite.

InfluxDB


InfluxDB is a time series database designed to handle high write and query loads.

Deploying the Kubernetes Dashboard to your EKS cluster:
Shell
Administrator:~/environment $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Administrator:~/environment $

Start a kubectl proxy so the Dashboard can be reached from Cloud9:

Shell
Administrator:~/environment $ kubectl proxy --port=8080 --address='0.0.0.0' --disable-filter=true &
[1] 336
Administrator:~/environment $ W0125 10:28:19.746961     336 proxy.go:140] Request filter disabled, your proxy is vulnerable to XSRF attacks, please be cautious
Starting to serve on [::]:8080
Administrator:~/environment $


Deploying Heapster:

Shell
Administrator:~/environment $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
Administrator:~/environment $


Deploying InfluxDB:

Shell
Administrator:~/environment $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml
deployment.extensions/monitoring-influxdb created
service/monitoring-influxdb created
Administrator:~/environment $


Creating Heapster cluster role binding for the Dashboard.

Shell
Administrator:~/environment $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/heapster created
Administrator:~/environment $
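
With Heapster, InfluxDB, and the role binding in place, an optional sanity check (not part of the original walkthrough) is to confirm the monitoring pods came up in the kube-system namespace, which is where these manifests deploy them:

Shell
Administrator:~/environment $ kubectl get pods --namespace kube-system   # heapster, monitoring-influxdb, and kubernetes-dashboard pods should be Running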


The next step is to create an eks-admin service account. It will let you connect to the Kubernetes Dashboard with admin permissions.

To authenticate and use the Kubernetes Dashboard:

Shell
Administrator:~/environment $ kubectl apply -f https://raw.githubusercontent.com/tammybutow/eks-aws/master/eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created
Administrator:~/environment $


To access the Kubernetes Dashboard:

  • In your Cloud9 environment, click Tools > Preview > Preview Running Application to open the Dashboard URL.
  • Append the following to the end of the URL:

Plain Text
/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

  • Retrieve an authentication token:

Shell
aws eks get-token --cluster-name gremlin-eksctl | jq -r '.status.token'

On the Dashboard login screen, select Token, then copy the output of the command above and paste it into the text field as shown below:

Step 4 - Installing Gremlin using Helm

Download your Gremlin certificates:


Start by signing in to your Gremlin account. If you don't have one, create an account here. Navigate to Team Settings and click on your Team. Click the Download button to download the certificates and save them to your local drive. Please note that the downloaded certificate.zip contains both a public-key certificate and a matching private key.

Setting private key


Unzip certificate.zip into a Gremlin folder on your desktop. Rename the certificate file to gremlin.cert and the key file to gremlin.key.
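
If you would rather do this from a terminal, the unzip and rename might look like the sketch below. The exact file names inside certificate.zip vary by team, so the wildcard patterns here are assumptions; adjust them to whatever your download actually contains:

Shell
unzip certificate.zip -d ~/gremlin
cd ~/gremlin
mv ./*_cert.pem gremlin.cert   # assumed original name; check your zip's contents
mv ./*_key.pem gremlin.key     # assumed original name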

Renaming certificate

Gremlin certificate and key

Creating Gremlin Namespace:

Shell
Administrator:~/environment $ kubectl create namespace gremlin
namespace/gremlin created
Administrator:~/environment $



Creating Kubernetes namespace


To create a Kubernetes Secret for your certificate and private key, first copy gremlin.cert and gremlin.key to Cloud9. A quick tip: create the files with the Vim editor and paste their contents in, rather than uploading them from your local computer.

Shell
Administrator:~/environment $ kubectl create secret generic gremlin-team-cert \
> --namespace=gremlin \
> --from-file=/home/ec2-user/environment/gremlin.cert \
> --from-file=/home/ec2-user/environment/gremlin.key
secret/gremlin-team-cert created
Administrator:~/environment $



Check on the Dashboard whether the Secret has been deployed.

Checking files on Dashboard
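
You can also verify the Secret from the terminal; kubectl describe lists its keys and sizes without printing the sensitive contents:

Shell
Administrator:~/environment $ kubectl describe secret gremlin-team-cert --namespace gremlin   # should list gremlin.cert and gremlin.key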


Installation With Helm

The simplest way of installing the Gremlin client on your Kubernetes cluster is to use Helm. Once Helm is installed and configured, the next steps are to add the Gremlin repo and install the client.

Downloading the Helm install script and making it executable:

Shell
Administrator:~/environment $ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  7164  100  7164    0     0  45630      0 --:--:-- --:--:-- --:--:-- 45341
Administrator:~/environment $ chmod +x get_helm.sh
Administrator:~/environment $ ./get_helm.sh
Downloading https://get.helm.sh/helm-v2.16.1-linux-amd64.tar.gz
Preparing to install helm and tiller into /usr/local/bin
helm installed into /usr/local/bin/helm
tiller installed into /usr/local/bin/tiller
Run 'helm init' to configure helm.
Administrator:~/environment $



Configuring Helm Access With RBAC

Helm relies on a service called Tiller, which requires special permissions on the Kubernetes cluster, so we need to create a Service Account for Tiller to use. The next step is to apply this RBAC configuration to the cluster.

Creating a new service account:

Shell
Administrator:~/environment $ cat <<EoF > ~/environment/rbac.yaml
> ---
> apiVersion: v1
> kind: ServiceAccount
> metadata:
>   name: tiller
>   namespace: kube-system
> ---
> apiVersion: rbac.authorization.k8s.io/v1beta1
> kind: ClusterRoleBinding
> metadata:
>   name: tiller
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: cluster-admin
> subjects:
>   - kind: ServiceAccount
>     name: tiller
>     namespace: kube-system
> EoF
Administrator:~/environment $



Applying configurations:

Shell
Administrator:~/environment $ kubectl apply -f ~/environment/rbac.yaml
serviceaccount/tiller created
clusterrolebinding.rbac.authorization.k8s.io/tiller created
Administrator:~/environment $



Installing Tiller for Helm:

Tiller

Tiller is a companion server component that runs on your Kubernetes cluster, listens for commands from helm, and handles the configuration and deployment of software releases on the cluster.

Shell
Administrator:~/environment $ helm init --service-account tiller
Creating /home/ec2-user/.helm
Creating /home/ec2-user/.helm/repository
Creating /home/ec2-user/.helm/repository/cache
Creating /home/ec2-user/.helm/repository/local
Creating /home/ec2-user/.helm/plugins
Creating /home/ec2-user/.helm/starters
Creating /home/ec2-user/.helm/cache/archive
Creating /home/ec2-user/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ec2-user/.helm.

Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.

Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Administrator:~/environment $



This installs Tiller into the cluster and gives it access to manage resources in your cluster. Please note the security policy alert shown above, which you can ignore or act on as per your own policy settings.

Activating bash-completion for Helm:

Shell
Administrator:~/environment $ helm completion bash >> ~/.bash_completion
Administrator:~/environment $ . /etc/profile.d/bash_completion.sh
Administrator:~/environment $ . ~/.bash_completion
Administrator:~/environment $



To run the Helm install, you will need your Gremlin Team ID. It can be found in the Gremlin app on the Team Settings page, where you downloaded your certificates earlier. Click on your Team in the list; the ID you’re looking for is listed under Configuration as Team ID.

Getting Gremlin Team Id


Export your Team ID as an environment variable:

Shell
Administrator:~/environment $ export GREMLIN_TEAM_ID=13ba2df0-5b0b-572c-a479-8136af79da66
Administrator:~/environment $



Next, export a cluster ID, which is simply a name you choose for your Kubernetes cluster.

Shell
Administrator:~/environment $ export GREMLIN_CLUSTER_ID=boraozkanchaos
Administrator:~/environment $



Now add the Gremlin Helm repo, and install Gremlin:

Shell
Administrator:~/environment $ helm repo add gremlin https://helm.gremlin.com
"gremlin" has been added to your repositories
Administrator:~/environment $
Administrator:~/environment $ helm install gremlin/gremlin \
> --namespace gremlin \
> --name gremlin \
> --set gremlin.teamID=$GREMLIN_TEAM_ID \
> --set gremlin.clusterID=$GREMLIN_CLUSTER_ID
NAME:   gremlin
LAST DEPLOYED: Sat Jan 25 12:37:45 2020
NAMESPACE: gremlin
STATUS: DEPLOYED

RESOURCES:
==> v1/ClusterRole
NAME             AGE
gremlin-watcher  0s

==> v1/ClusterRoleBinding
NAME  AGE
chao  0s

==> v1/DaemonSet
NAME     AGE
gremlin  0s

==> v1/Deployment
NAME  AGE
chao  0s

==> v1/Pod(related)
NAME                   AGE
chao-698b9fbfb4-5thjp  0s
gremlin-425wv          0s
gremlin-krdlt          0s
gremlin-l252z          0s

==> v1/ServiceAccount
NAME  AGE
chao  0s

Administrator:~/environment $



Kubernetes workload statuses
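
You can confirm the release from the terminal as well (Helm 2 syntax, matching the Tiller-based setup used above):

Shell
Administrator:~/environment $ helm ls                                # the gremlin release should show STATUS: DEPLOYED
Administrator:~/environment $ kubectl get pods --namespace gremlin   # expect one gremlin DaemonSet pod per node plus the chao pod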

Step 5 - Deploying a Microservice Demo Application

The demo environment we are going to deploy onto our EKS cluster is the Hipster Shop: Cloud-Native Microservices Demo Application.

Clone the application's source code repo:

Shell
Administrator:~/environment $ git clone https://github.com/GoogleCloudPlatform/microservices-demo.git
Cloning into 'microservices-demo'...
remote: Enumerating objects: 30, done.
remote: Counting objects: 100% (30/30), done.
remote: Compressing objects: 100% (21/21), done.
remote: Total 2987 (delta 16), reused 16 (delta 7), pack-reused 2957
Receiving objects: 100% (2987/2987), 5.08 MiB | 38.24 MiB/s, done.
Resolving deltas: 100% (2039/2039), done.
Administrator:~/environment $



Change directory to the one just created:

Shell
Administrator:~/environment $ cd microservices-demo/
Administrator:~/environment/microservices-demo (master) $



Deploying the application:

Shell
Administrator:~/environment/microservices-demo (master) $ kubectl apply -f ./release/kubernetes-manifests.yaml
deployment.apps/emailservice created
service/emailservice created
deployment.apps/checkoutservice created
service/checkoutservice created
deployment.apps/recommendationservice created
service/recommendationservice created
deployment.apps/frontend created
service/frontend created
service/frontend-external created
deployment.apps/paymentservice created
service/paymentservice created
deployment.apps/productcatalogservice created
service/productcatalogservice created
deployment.apps/cartservice created
service/cartservice created
deployment.apps/loadgenerator created
deployment.apps/currencyservice created
service/currencyservice created
deployment.apps/shippingservice created
service/shippingservice created
deployment.apps/redis-cart created
service/redis-cart created
deployment.apps/adservice created
service/adservice created
Administrator:~/environment/microservices-demo (master) $



Wait until pods are in a ready state.
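
If you would rather block than poll, kubectl can wait for readiness directly; a convenience sketch, assuming the default 300-second budget suits you (kubectl wait has been available since v1.11, so it works on this v1.14 cluster):

Shell
Administrator:~/environment/microservices-demo (master) $ kubectl wait --for=condition=Ready pods --all --timeout=300s   # blocks until every pod reports Ready

Either way, a plain kubectl get pods should eventually show every pod Running: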

Shell
Administrator:~/environment/microservices-demo (master) $ kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
adservice-84449b8756-qj4sp               1/1     Running   0          4m57s
cartservice-6cbc9b899c-ww5rf             1/1     Running   0          4m58s
checkoutservice-56b48b77c8-fszpx         1/1     Running   0          4m58s
currencyservice-b9fcb4c98-hx9kq          1/1     Running   0          4m58s
emailservice-797cdcc76d-d257v            1/1     Running   0          4m58s
frontend-785c44fd98-zvgj4                1/1     Running   0          4m58s
loadgenerator-665c4ddb74-xwm8m           1/1     Running   3          4m58s
paymentservice-84d7bf956-fdgxr           1/1     Running   0          4m58s
productcatalogservice-5664f59f54-b5mk5   1/1     Running   0          4m58s
recommendationservice-7f9855d7c6-b2zv6   1/1     Running   0          4m58s
redis-cart-6448dcbdcc-97d55              1/1     Running   0          4m58s
shippingservice-6b6f49747d-svb5f         1/1     Running   0          4m58s
Administrator:~/environment/microservices-demo (master) $



Getting the frontend IP address:

Shell
Administrator:~/environment/microservices-demo (master) $ kubectl get svc frontend-external -o wide
NAME                TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE     SELECTOR
frontend-external   LoadBalancer   10.100.226.157   a0c9bed9a3f7111eab7c912c03cd100e-265884000.us-east-1.elb.amazonaws.com   80:30461/TCP   6m31s   app=frontend
Administrator:~/environment/microservices-demo (master) $



Visit the URL in your browser:

Website example
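
If you also want a quick terminal check, you can curl the load balancer hostname from the previous output (substitute your own EXTERNAL-IP value; an HTTP 200 means the frontend is serving):

Shell
curl -s -o /dev/null -w "%{http_code}\n" http://<EXTERNAL-IP>/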

Step 6 - Running a Shutdown Container Attack using Gremlin

We are going to create our first Chaos Engineering experiment, in which we validate EKS reliability. Our hypothesis is: “After shutting down my cart service container, we will not suffer from downtime, and EKS will give us a new one.”

Going back to the Gremlin UI, select Attacks from the menu on the left and select New Attack. We’re going to target a Kubernetes resource, so click on Kubernetes on the upper right.

Targeting Kubernetes resource


Choose cartservice:

Choosing cartservice


Choose State and Shutdown:

Choosing state and shutdown


Attacking this pod with our Gremlin UI:

Attacking a certain pod on Gremlin


We will be shutting down the cartservice containers. As a test, we attacked twice, and each time the cartservice pod restarted itself, which shows that the system is working as expected: Kubernetes re-creates the pod even when you deliberately shut it down. One way to watch this happen from the terminal is shown below.
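
From a second terminal, you can follow the attack and recovery live; a small optional check, assuming the app=cartservice label that the demo's manifests apply to these pods:

Shell
Administrator:~/environment $ kubectl get pods -l app=cartservice --watch   # shows the pod terminating and its replacement starting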

When we attacked our containers, the cluster withstood the failure and recovered by itself, which demonstrates that our system is resilient. We have seen what happens when a failure occurs; in this example, the failure was shutting down the pods. As a result, we understand that our cluster already recovers automatically by replacing lost pods.

Auto-scaling feature


As a reminder, do not forget to delete your cluster and your Cloud9 IDE.

Shell
eksctl delete cluster --name=gremlin-eksctl


Conclusion

Congrats! You’ve installed an AWS EKS cluster, deployed the Kubernetes Dashboard, deployed a microservice demo application, installed the Gremlin agent as a daemon-set, and ran your first Chaos Engineering attack to validate Kubernetes reliability!


Published at DZone with permission of Sudip Sengupta. See the original article here.

Opinions expressed by DZone contributors are their own.
