Monitoring OpenShift PODS With Ansible and Zabbix Sender
Learn how to monitor OpenShift or Kubernetes PODS using Ansible and Zabbix Sender.
This quickstart describes how to monitor OpenShift or Kubernetes PODS using Ansible and Zabbix Sender.
All source code used in this article is available on GitHub.
Prerequisites
- OpenShift or generic Kubernetes cluster;
- OpenShift client (oc) or kubectl;
- Zabbix Server;
- Podman or Docker.
Creating OpenShift Projects
In this section, we'll create OpenShift projects to deploy example APIs and CronJobs that collect container metrics and send them to Zabbix Server.
Monitoring API Project
In this project, we'll deploy ansible-agent4ocp to monitor APIs' PODs.
$ oc new-project apis-monitoring
Now using project "apis-monitoring" on server "omitted".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
Build ansible-agent4ocp
Now we'll build the ansible-agent4ocp container image using Podman. The following Dockerfile is used to build the container.
#base image
FROM quay.io/centos/centos:stream8
USER root
#workdir folder
ENV HOME=/opt/scripts
WORKDIR ${HOME}
#CentOS Stream extras repositories
RUN yum install epel-release -y && \
#update the OS
yum update -y && \
#install Ansible
yum install ansible.noarch -y && \
yum clean all && \
#download and install OpenShift client
curl https://mirror.openshift.com/pub/openshift-v4/clients/oc/latest/linux/oc.tar.gz --output /tmp/oc.tar.gz && \
tar xvzf /tmp/oc.tar.gz && \
cp oc /usr/local/bin && \
rm oc kubectl && \
rm /tmp/oc.tar.gz && \
#Granting permissions to folders and files
mkdir -pv ${HOME} && \
mkdir -pv ${HOME}/.ansible/tmp && \
mkdir -pv ${HOME}/.kube/ && \
mkdir -pv ${HOME}/playbooks && \
chown -R 1001:root ${HOME} && \
chgrp -R 0 ${HOME} && \
chmod -R g+rw ${HOME}
#folder to save playbooks
VOLUME ${HOME}/playbooks
USER 1001
#ansible example file
ADD example.yml ${HOME}/example.yml
$ podman build . --tag ansible-agent4ocp
STEP 1/8: FROM quay.io/centos/centos:stream8
STEP 2/8: USER root
omitted
3dae1ac7584c97e57296f674b42ac1886a88c180aec715860c8118712a81d683
Testing ansible-agent4ocp
Now we'll run the container image locally to make sure it works.
$ podman run -it ansible-agent4ocp:latest ansible-playbook example.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [This is a ansible script hello-world] *************************************

TASK [Hello Ansible] ************************************************************
changed: [localhost]

PLAY RECAP **********************************************************************
localhost                  : ok=1    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0
Pushing Image to OCP
In this section, we'll tag the container image, log in to the OpenShift public registry, and push the image.
1. Log in on OpenShift public registry
$ podman login -u $(oc whoami) -p $(oc whoami -t) <ocp-public-registry>
Login Succeeded!
2. Tagging image
$ podman tag localhost/ansible-agent4ocp <ocp-public-registry>/apis-monitoring/ansible-agent4ocp
3. Pushing image
$ podman push <ocp-public-registry>/apis-monitoring/ansible-agent4ocp
Getting image source signatures
Copying blob e7a4bda8f16d done
omitted
Storing signatures
APIs Project
In this section, we'll create an OpenShift project to deploy our APIs. The objective is to simulate two containerized applications that will be monitored by the Ansible agent. For this, Zabbix Sender must be installed in the application's container.
The applications that will be installed on OpenShift already include Zabbix Sender, so no changes are needed. The following Dockerfile shows an example of installing Zabbix Sender at image build time.
FROM quay.io/centos/centos:stream8
USER root
WORKDIR /work/
RUN chown 1001 /work && \
chmod "g+rwX" /work && \
chown 1001:root /work && \
#SO Update
yum update -y && \
#Zabbix Sender Install
curl https://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-sender-3.0.9-1.el7.x86_64.rpm --output /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm && \
yum install /tmp/zabbix-sender-3.0.9-1.el7.x86_64.rpm -y && \
yum clean all
COPY --chown=1001:root target/*-runner /work/application
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
1. Creating OpenShift Project
$ oc new-project api
Now using project "api" on server "omitted".

You can add applications to this project with the 'new-app' command. For example, try:

    oc new-app centos/ruby-25-centos7~https://github.com/sclorg/ruby-ex.git

to build a new example application in Ruby.
2. Deploying customer API
$ oc new-app https://github.com/pedroarraes/ocp-zabbix-monitoring.git --context-dir=/customer-api --strategy=docker --name=customer-api
--> Found Docker image dc28896 (5 weeks old) from quay.io for "quay.io/centos/centos:stream8"

    CentOS Stream 8
    ---------------
    The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.

omitted

Run 'oc status' to view your app.
3. Exposing customer API
$ oc expose svc/customer-api
route.route.openshift.io/customer-api exposed
4. Testing customer API
$ curl $(oc get route customer-api | awk 'FNR==2{print $2}')/hello
Hello RESTEasy
5. Testing Zabbix Sender
$ oc get pods
NAME                    READY   STATUS      RESTARTS   AGE
customer-api-1-build    0/1     Completed   0          12m
customer-api-1-c2rrx    1/1     Running     0          10m
customer-api-1-deploy   0/1     Completed   0          10m
$ oc rsh customer-api-1-c2rrx zabbix_sender
zabbix_sender [23]: either '-c' or '-z' option must be specified
usage: omitted
command terminated with exit code 1
6. Deploying inventory API
$ oc new-app https://github.com/pedroarraes/ocp-zabbix-monitoring.git --context-dir=/inventory-api --strategy=docker --name=inventory-api
--> Found Docker image dc28896 (5 weeks old) from quay.io for "quay.io/centos/centos:stream8"

    CentOS Stream 8
    ---------------
    The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.

omitted

Run 'oc status' to view your app.
7. Exposing inventory API
$ oc expose svc/inventory-api
route.route.openshift.io/inventory-api exposed
8. Testing inventory API
$ curl $(oc get route inventory-api | awk 'FNR==2{print $2}')/hello
Hello RESTEasy
9. Testing Zabbix Sender
$ oc get pods
NAME                     READY   STATUS      RESTARTS   AGE
customer-api-1-build     0/1     Completed   0          35m
customer-api-1-c2rrx     1/1     Running     0          33m
customer-api-1-deploy    0/1     Completed   0          33m
inventory-api-1-deploy   0/1     Completed   0          2m19s
inventory-api-1-fswtb    1/1     Running     0          2m15s
inventory-api-3-build    0/1     Completed   0          4m46s
$ oc rsh inventory-api-1-fswtb zabbix_sender
zabbix_sender [23]: either '-c' or '-z' option must be specified
usage: omitted
command terminated with exit code 1
Configuring the Service Account
In this section, we will create and configure service account permissions to access OpenShift PODS, get metrics, and send them to Zabbix Server using Zabbix Sender.
1. Creating a service account
$ oc create sa sa-apis-monitoring -n apis-monitoring
serviceaccount/sa-apis-monitoring created
2. Creating custom role bindings to get and exec pods
$ oc create role podview --verb=get,list,watch --resource=pods -n api
role.rbac.authorization.k8s.io/podview created
$ oc create role podexec --verb=create --resource=pods/exec -n api
role.rbac.authorization.k8s.io/podexec created
$ oc create role projectview --verb=get,list --resource=project -n api
role.rbac.authorization.k8s.io/projectview created
3. Adding policy to the service account
$ oc adm policy add-role-to-user podview system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "podview" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
$ oc adm policy add-role-to-user podexec system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "podexec" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
$ oc adm policy add-role-to-user projectview system:serviceaccount:apis-monitoring:sa-apis-monitoring --role-namespace=api -n api
role "projectview" added: "system:serviceaccount:apis-monitoring:sa-apis-monitoring"
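The imperative `oc create role` commands above can also be expressed declaratively. As a sketch, the podview role would look like the following manifest (field values mirror the command; this file is not part of the article's repository):

```yaml
# Hypothetical declarative equivalent of:
#   oc create role podview --verb=get,list,watch --resource=pods -n api
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: podview
  namespace: api
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
```

The podexec and projectview roles follow the same pattern with their respective resources and verbs.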
Scheduling OpenShift CronJobs
In this section, we'll schedule OpenShift CronJobs to collect POD metrics and send them to Zabbix Server.
Getting the Service Account Token
$ oc describe secret $(oc describe sa sa-apis-monitoring -n apis-monitoring | awk '{if(NR==8) print $2}') -n apis-monitoring | grep token | awk '{if(NR==3) print $2}'
omitted
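The `NR==3` in the pipeline above selects the third line containing the word "token" from the secret's description, which is the line holding the token value itself. A minimal sketch against a mock of the `oc describe secret` output (all values here are illustrative, not real):

```shell
# Mock of 'oc describe secret <token-secret>' output (illustrative values):
mock_describe='Name:         sa-apis-monitoring-token-abcde
Namespace:    apis-monitoring
Labels:       <none>
Type:         kubernetes.io/service-account-token

Data
====
ca.crt:     1070 bytes
token:      eyJhbGciOiJSUzI1NiJ9.example'

# grep keeps the three lines containing "token"; the third one is the
# token line itself, and $2 is the token value:
printf '%s\n' "$mock_describe" | grep token | awk '{if(NR==3) print $2}'
# prints eyJhbGciOiJSUzI1NiJ9.example
```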
Creating an Ansible File as a ConfigMap to Get POD Free Memory
- name: Get POD free memory
  hosts: localhost
  tasks:
    - name: OCP Authentication
      #Use the token obtained in the previous section
      shell: oc login --token=<omitted> --server=<omitted>

- name: Get PODS
  hosts: localhost
  tasks:
    - name: Go to API project
      shell: oc project api
    - name: Get PODs
      shell: oc get pods -n api | grep Running | awk '{print $1}'
      register: pods_list
    - name: Get free memory
      shell: oc rsh {{ item }} zabbix_sender -vv -z <zabbix_server_host> -s <zabbix_registered_api> -k free_memory -o $(oc rsh {{ item }} free | awk '{if(NR==2) print $4}')
      with_items: "{{ pods_list.stdout_lines }}"
$ oc create configmap free-memory --from-file=ansible-scripts/free-memory.yml -n apis-monitoring
configmap/free-memory created
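The playbook's "Get free memory" task reads row 2 (the `Mem:` line) and column 4 (the `free` column) of the `free` command's output inside the pod. A quick sketch of that extraction against a mock of `free` output (values are illustrative, in KiB):

```shell
# Mock of 'free' output as seen inside a pod (illustrative values, in KiB):
mock_free='              total        used        free      shared  buff/cache   available
Mem:        8000000     3000000     4000000       10000     1000000     4500000
Swap:             0           0           0'

# The playbook selects row 2 (Mem:) and field 4 (the "free" column):
printf '%s\n' "$mock_free" | awk '{if(NR==2) print $4}'
# prints 4000000
```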
Creating an Ansible File as a ConfigMap to Get POD Used Memory
- name: Get POD used memory
  hosts: localhost
  tasks:
    - name: OCP Authentication
      #Use the token obtained in the previous section
      shell: oc login --token=<omitted> --server=<omitted>

- name: Get PODS
  hosts: localhost
  tasks:
    - name: Go to API project
      shell: oc project api
    - name: Get PODs
      shell: oc get pods -n api | grep Running | awk '{print $1}'
      register: pods_list
    - name: Get used memory
      shell: oc rsh {{ item }} zabbix_sender -vv -z <zabbix_server_host> -s <zabbix_registered_api> -k used_memory -o $(oc rsh {{ item }} free | awk '{if(NR==2) print $3}')
      with_items: "{{ pods_list.stdout_lines }}"
$ oc create configmap used-memory --from-file=ansible-scripts/used-memory.yml -n apis-monitoring
configmap/used-memory created
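Here the only difference from the free-memory playbook is the column: field 3 of the `Mem:` row is the `used` value. Against the same kind of mock `free` output (illustrative values, in KiB):

```shell
# Mock of 'free' output as seen inside a pod (illustrative values, in KiB):
mock_free='              total        used        free      shared  buff/cache   available
Mem:        8000000     3000000     4000000       10000     1000000     4500000
Swap:             0           0           0'

# Field 3 of the Mem: row is the "used" value forwarded to Zabbix:
printf '%s\n' "$mock_free" | awk '{if(NR==2) print $3}'
# prints 3000000
```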
Configuring the free-memory CronJob
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: free-memory
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Allow
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          volumes:
            - name: free-memory
              configMap:
                name: free-memory
                defaultMode: 420
          containers:
            - name: ansible-agent4ocp
              image: >-
                image-registry.openshift-image-registry.svc:5000/apis-monitoring/ansible-agent4ocp
              args:
                - /bin/sh
                - '-c'
                - ansible-playbook playbooks/free-memory.yml
              resources: {}
              volumeMounts:
                - name: free-memory
                  mountPath: /opt/scripts/playbooks
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
$ oc create -f cronjobs/free-memory.yml -n apis-monitoring
cronjob.batch/free-memory created
Configuring the used-memory CronJob
kind: CronJob
apiVersion: batch/v1beta1
metadata:
  name: used-memory
spec:
  schedule: '*/5 * * * *'
  concurrencyPolicy: Allow
  suspend: false
  jobTemplate:
    metadata:
      creationTimestamp: null
    spec:
      template:
        metadata:
          creationTimestamp: null
        spec:
          volumes:
            - name: used-memory
              configMap:
                name: used-memory
                defaultMode: 420
          containers:
            - name: ansible-agent4ocp
              image: >-
                image-registry.openshift-image-registry.svc:5000/apis-monitoring/ansible-agent4ocp
              args:
                - /bin/sh
                - '-c'
                - ansible-playbook playbooks/used-memory.yml
              resources: {}
              volumeMounts:
                - name: used-memory
                  mountPath: /opt/scripts/playbooks
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              imagePullPolicy: Always
          restartPolicy: OnFailure
          terminationGracePeriodSeconds: 30
          dnsPolicy: ClusterFirst
          securityContext: {}
          schedulerName: default-scheduler
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
$ oc create -f cronjobs/used-memory-jobs.yml -n apis-monitoring
cronjob.batch/used-memory created
Final Considerations
- All source code used is available on GitHub (https://github.com/pedroarraes/ocp-zabbix-monitoring).
- The example can be adapted to other Kubernetes clusters and to containerized applications of all kinds.
- Contribute to improving this article by reporting bugs.
Published at DZone with permission of Pedro Arraes. See the original article here.