How to Deploy Nebula Graph on Kubernetes

In this article, take a look at a tutorial on how to deploy Nebula Graph on Kubernetes.


What Is Kubernetes

Kubernetes (commonly stylized as k8s) is an open-source container-orchestration system, aiming to provide a simple yet efficient platform for automating deployment, scaling, and operations of application containers across clusters of hosts.

Architecturally, Kubernetes consists of a series of components that provide mechanisms for the deployment, maintenance, and scaling of applications.

The components are designed to be loosely coupled and scalable so that they can support a wide variety of workloads.

The extensibility of the system is provided in large part by the Kubernetes API, which is used by internal components as well as by extensions and containers running on Kubernetes.

Kubernetes consists mainly of the following core components:

  • etcd is used as Kubernetes’ backing store for all cluster data
  • apiserver provides a unique entry for resource operations and provides mechanisms for authentication, authorization, access control, API registration, and discovery
  • controller manager is responsible for maintaining the state of the cluster, such as fault detection, automatic expansion, rolling updates, etc.
  • scheduler is responsible for scheduling resources, and scheduling Pods to corresponding machines according to a predetermined scheduling policy
  • kubelet is responsible for maintaining the life cycle of the container, and is also responsible for the management of Volume and Network
  • Container runtime is responsible for image management and for running Pods and containers (via the CRI)
  • kube-proxy is responsible for providing service discovery and load balancing within the cluster for Kubernetes Services

In addition to the core components, there are some recommended Add-ons:

  • kube-dns is responsible for providing DNS services for the entire cluster
  • Ingress Controller provides external network access for services
  • Heapster provides resource monitoring
  • Dashboard provides GUI
  • Federation provides cluster management across availability zones
  • Fluentd-elasticsearch provides cluster log collection, storage, and querying
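
Most of these components can be observed directly on a running cluster. For example (standard kubectl commands; the exact pod names vary by distribution, and <node-name> is a placeholder):

Shell
# Control-plane components (apiserver, controller manager, scheduler, etcd) and
# add-ons such as kube-dns usually run as pods in the kube-system namespace
[root@nebula ~]# kubectl get pods -n kube-system
# kubelet, kube-proxy, and the container runtime show up in each node's status
[root@nebula ~]# kubectl describe node <node-name>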

Kubernetes and Databases

Database containerization has been a hot topic recently. What benefits can Kubernetes bring to databases?

  • Fault recovery: Kubernetes restarts database applications when they fail, or migrates databases to other healthy nodes in the cluster
  • Storage management: Kubernetes provides various storage management solutions so that databases can adopt different storage systems transparently
  • Load balancing: Kubernetes Services provide load balancing by distributing external network traffic evenly across database replicas
  • Horizontal scalability: Kubernetes can scale replicas based on the resource utilization of the current database cluster, thereby improving resource utilization

Currently, many databases, such as MySQL, MongoDB, and TiDB, run well on Kubernetes.
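
As a quick illustration of the load-balancing and horizontal-scalability points above, a database deployed as a StatefulSet behind a Service can be scaled with a single command (a generic sketch; mysql is just a placeholder resource name):

Shell
# Grow a hypothetical database StatefulSet from its current size to 5 replicas;
# the Service in front of it distributes traffic across whatever replicas exist
[root@nebula ~]# kubectl scale statefulset mysql --replicas=5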

Nebula Graph on Kubernetes

Nebula Graph is a distributed, open-source graph database composed of three services: graphd (the query engine), storaged (data storage), and metad (metadata). Kubernetes brings the following benefits to Nebula Graph:

  • Kubernetes balances the workload among the different replicas of graphd, metad, and storaged. The three services can discover each other through the DNS service provided by Kubernetes.
  • Kubernetes encapsulates the details of the underlying storage through StorageClass, PVC, and PV, regardless of whether the backing storage is a cloud disk or a local disk.
  • Kubernetes can deploy a Nebula Graph cluster within seconds and upgrade the cluster automatically and transparently.
  • Kubernetes supports self-healing: it restarts a crashed replica without requiring an operations engineer.
  • Kubernetes scales the cluster horizontally based on cluster utilization to improve Nebula Graph performance.
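
To illustrate the service-discovery point above: when metad is managed by a StatefulSet behind a headless Service, each replica gets a stable DNS record that the other services can resolve. A minimal sketch, assuming the Service and StatefulSet are both named nebula-metad in the default namespace (the real names come from the Helm chart used later):

Shell
# StatefulSet pods are resolvable as <pod-name>.<headless-service>.<namespace>.svc.cluster.local
[root@nebula ~]# kubectl run dns-test -it --rm --restart=Never --image=busybox -- \
    nslookup nebula-metad-0.nebula-metad.default.svc.cluster.local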

The following sections walk through deploying Nebula Graph on Kubernetes in detail.

Deploy

Software and Hardware Requirements 

The following are the software and hardware requirements for the deployment in this post:

  • The operation system is CentOS-7.6.1810 x86_64.
  • Virtual machine configuration:
    • 4 CPU
    • 8G memory
    • 50G system disk
    • 50G data disk A
    • 50G data disk B
  • Kubernetes cluster is version v1.16.
  • Use local PV as data storage.

Cluster Topology

The following is the cluster topology:

Server IP     Nebula Services                Role
192.168.0.1   -                              k8s-master
192.168.0.2   graphd, metad-0, storaged-0    k8s-slave
192.168.0.3   graphd, metad-1, storaged-1    k8s-slave
192.168.0.4   graphd, metad-2, storaged-2    k8s-slave

Components to Be Deployed

  • Install Helm
  • Prepare local disks and install local volume plugin
  • Install Nebula Graph cluster
  • Install ingress-controller

Install Helm

Helm is the package manager for Kubernetes, similar to yum on CentOS or apt-get on Ubuntu. Helm makes it easier to deploy applications on Kubernetes. This article does not give a detailed introduction to Helm; read the Helm Getting Started Guide to learn more.

Download and Install Helm

Install Helm with the following commands in your terminal:

Shell
[root@nebula ~]# wget https://get.helm.sh/helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# tar -zxvf helm-v3.0.1-linux-amd64.tar.gz
[root@nebula ~]# mv linux-amd64/helm /usr/bin/helm
[root@nebula ~]# chmod +x /usr/bin/helm

View the Helm Version

You can check the Helm version with the command helm version; the output looks like the following:

Shell
version.BuildInfo{
    Version:"v3.0.1",
    GitCommit:"7c22ef9ce89e0ebeb7125ba2ebf7d421f3e82ffa",
    GitTreeState:"clean",
    GoVersion:"go1.13.4"
}

Prepare Local Disks

Configure each node as follows:

Create Mount Directory

Shell
[root@nebula ~]# sudo mkdir -p /mnt/disks

Format Data Disks

Shell
[root@nebula ~]# sudo mkfs.ext4 /dev/diskA
[root@nebula ~]# sudo mkfs.ext4 /dev/diskB

Mount Data Disks

Shell
[root@nebula ~]# DISKA_UUID=$(blkid -s UUID -o value /dev/diskA)
[root@nebula ~]# DISKB_UUID=$(blkid -s UUID -o value /dev/diskB)
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mkdir /mnt/disks/$DISKB_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskA /mnt/disks/$DISKA_UUID
[root@nebula ~]# sudo mount -t ext4 /dev/diskB /mnt/disks/$DISKB_UUID

[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskA` /mnt/disks/$DISKA_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
[root@nebula ~]# echo UUID=`sudo blkid -s UUID -o value /dev/diskB` /mnt/disks/$DISKB_UUID ext4 defaults 0 2 | sudo tee -a /etc/fstab
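
Before moving on, it is worth confirming that both data disks are mounted and recorded in /etc/fstab (a simple check using standard tools):

Shell
# Each disk should appear mounted under /mnt/disks/<UUID> and have a matching fstab entry
[root@nebula ~]# df -h | grep /mnt/disks
[root@nebula ~]# grep /mnt/disks /etc/fstab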



Deploy Local Volume Plugin

Shell
# Download and unpack the local static provisioner
[root@nebula ~]# curl -L -O https://github.com/kubernetes-sigs/sig-storage-local-static-provisioner/archive/v2.3.3.zip
[root@nebula ~]# unzip v2.3.3.zip

Modify the v2.3.3/helm/provisioner/values.yaml file.

YAML
#
# Common options.
#
common:
  #
  # Defines whether to generate service account and role bindings.
  #
  rbac: true
  #
  # Defines the namespace where provisioner runs
  #
  namespace: default
  #
  # Defines whether to create provisioner namespace
  #
  createNamespace: false
  #
  # Beta PV.NodeAffinity field is used by default. If running against pre-1.10
  # k8s version, the `useAlphaAPI` flag must be enabled in the configMap.
  #
  useAlphaAPI: false
  #
  # Indicates if PVs should be dependents of the owner Node.
  #
  setPVOwnerRef: false
  #
  # Provisioner clean volumes in process by default. If set to true, provisioner
  # will use Jobs to clean.
  #
  useJobForCleaning: false
  #
  # Provisioner name contains Node.UID by default. If set to true, the provisioner
  # name will only use Node.Name.
  #
  useNodeNameOnly: false
  #
  # Resync period in reflectors will be random between minResyncPeriod and
  # 2*minResyncPeriod. Default: 5m0s.
  #
  #minResyncPeriod: 5m0s
  #
  # Defines the name of configmap used by Provisioner
  #
  configMapName: "local-provisioner-config"
  #
  # Enables or disables Pod Security Policy creation and binding
  #
  podSecurityPolicy: false
#
# Configure storage classes.
#
classes:
- name: fast-disks # Defines name of storage classes.
  # Path on the host where local volumes of this storage class are mounted
  # under.
  hostDir: /mnt/fast-disks
  # Optionally specify mount path of local volumes. By default, we use same
  # path as hostDir in container.
  # mountDir: /mnt/fast-disks
  # The volume mode of created PersistentVolume object. Default to Filesystem
  # if not specified.
  volumeMode: Filesystem
  # Filesystem type to mount.
  # It applies only when the source path is a block device,
  # and desire volume mode is Filesystem.
  # Must be a filesystem type supported by the host operating system.
  fsType: ext4
  blockCleanerCommand:
  #  Do a quick reset of the block device during its cleanup.
  #  - "/scripts/quick_reset.sh"
  #  or use dd to zero out block dev in two iterations by uncommenting these lines
  #  - "/scripts/dd_zero.sh"
  #  - "2"
  # or run shred utility for 2 iterations.
     - "/scripts/shred.sh"
     - "2"
  # or blkdiscard utility by uncommenting the line below.
  #  - "/scripts/blkdiscard.sh"
  # Uncomment to create storage class object with default configuration.
  # storageClass: true
  # Uncomment to create storage class object and configure it.
  # storageClass:
    # reclaimPolicy: Delete # Available reclaim policies: Delete/Retain, defaults: Delete.
    # isDefaultClass: true # set as default class

#
# Configure DaemonSet for provisioner.
#
daemonset:
  #
  # Defines the name of a Provisioner
  #
  name: "local-volume-provisioner"
  #
  # Defines Provisioner's image name including container registry.
  #
  image: quay.io/external_storage/local-volume-provisioner:v2.3.3
  #
  # Defines Image download policy, see kubernetes documentation for available values.
  #
  #imagePullPolicy: Always
  #
  # Defines a name of the service account which Provisioner will use to communicate with API server.
  #
  serviceAccount: local-storage-admin
  #
  # Defines a name of the Pod Priority Class to use with the Provisioner DaemonSet
  #
  # Note that if you want to make it critical, specify "system-cluster-critical"
  # or "system-node-critical" and deploy in kube-system namespace.
  # Ref: https://k8s.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/#marking-pod-as-critical
  #
  #priorityClassName: system-node-critical
  # If configured, nodeSelector will add a nodeSelector field to the DaemonSet PodSpec.
  #
  # NodeSelector constraint for local-volume-provisioner scheduling to nodes.
  # Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  nodeSelector: {}
  #
  # If configured KubeConfigEnv will (optionally) specify the location of kubeconfig file on the node.
  #  kubeConfigEnv: KUBECONFIG
  #
  # List of node labels to be copied to the PVs created by the provisioner in a format:
  #
  #  nodeLabels:
  #    - failure-domain.beta.kubernetes.io/zone
  #    - failure-domain.beta.kubernetes.io/region
  #
  # If configured, tolerations will add a toleration field to the DaemonSet PodSpec.
  #
  # Node tolerations for local-volume-provisioner scheduling to nodes with taints.
  # Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
  tolerations: []
  #
  # If configured, resources will set the requests/limits field to the Daemonset PodSpec.
  # Ref: https://kubernetes.io/docs/concepts/configuration/manage-compute-resources-container/
  resources: {}
#
# Configure Prometheus monitoring
#
prometheus:
  operator:
    ## Are you using Prometheus Operator?
    enabled: false

    serviceMonitor:
      ## Interval at which Prometheus scrapes the provisioner
      interval: 10s

      # Namespace Prometheus is installed in
      namespace: monitoring

      ## Defaults to what is used if you follow CoreOS [Prometheus Install Instructions](https://github.com/coreos/prometheus-operator/tree/master/helm#tldr)
      ## [Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/prometheus/templates/prometheus.yaml#L65)
      ## [Kube Prometheus Selector Label](https://github.com/coreos/prometheus-operator/blob/master/helm/kube-prometheus/values.yaml#L298)
      selector:
        prometheus: kube-prometheus

Change hostDir: /mnt/fast-disks to hostDir: /mnt/disks, and uncomment # storageClass: true so that it reads storageClass: true.
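
For reference, after these two edits the classes section should look roughly like this (comments omitted; the rest of the file is unchanged):

YAML
classes:
- name: fast-disks
  hostDir: /mnt/disks
  volumeMode: Filesystem
  fsType: ext4
  blockCleanerCommand:
     - "/scripts/shred.sh"
     - "2"
  storageClass: true

Then run: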

Shell
# Installing
[root@nebula ~]# helm install local-static-provisioner v2.3.3/helm/provisioner
# List the local-static-provisioner deployment
[root@nebula ~]# helm list
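
After the chart is installed, the provisioner should discover the disks mounted under /mnt/disks and publish them as PersistentVolumes of the fast-disks storage class. A quick verification with standard kubectl commands:

Shell
# One local PV is expected per mounted disk, bound to the fast-disks storage class
[root@nebula ~]# kubectl get storageclass
[root@nebula ~]# kubectl get pv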



Deploy Nebula Graph Cluster

Download nebula helm-chart Package

Shell
# Downloading nebula
[root@nebula ~]# wget https://github.com/vesoft-inc/nebula/archive/master.zip
# Unzip
[root@nebula ~]# unzip master.zip

Label Kubernetes Slave Nodes

The following is a list of the Kubernetes nodes. We need to set scheduling labels on the worker nodes, labeling 192.168.0.2, 192.168.0.3, and 192.168.0.4 with nebula: "yes".

Server IP     Kubernetes Role   nodeName
192.168.0.1   master            192.168.0.1
192.168.0.2   worker            192.168.0.2
192.168.0.3   worker            192.168.0.3
192.168.0.4   worker            192.168.0.4

Detailed operations are as follows:

Shell
[root@nebula ~]# kubectl label node 192.168.0.2 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.3 nebula="yes" --overwrite
[root@nebula ~]# kubectl label node 192.168.0.4 nebula="yes" --overwrite

Modify the Default Values for nebula helm chart

The following is the directory structure of the nebula helm chart:

YAML
master/kubernetes/
└── helm
    ├── Chart.yaml
    ├── templates
    │   ├── configmap.yaml
    │   ├── deployment.yaml
    │   ├── _helpers.tpl
    │   ├── ingress-configmap.yaml
    │   ├── NOTES.txt
    │   ├── pdb.yaml
    │   ├── service.yaml
    │   └── statefulset.yaml
    └── values.yaml

2 directories, 10 files

We need to adjust the value of MetadHosts in master/kubernetes/values.yaml, replacing the IP list with the IPs of the three Kubernetes workers in our environment.

YAML
MetadHosts:
  - 192.168.0.2:44500
  - 192.168.0.3:44500
  - 192.168.0.4:44500

Install Nebula Graph via Helm

Shell
# Installing
[root@nebula ~]# helm install nebula master/kubernetes/helm
# Checking
[root@nebula ~]# helm status nebula
# Checking the nebula deployment on the k8s cluster
[root@nebula ~]# kubectl get pod | grep nebula
nebula-graphd-579d89c958-g2j2c                   1/1     Running            0          1m
nebula-graphd-579d89c958-p7829                   1/1     Running            0          1m
nebula-graphd-579d89c958-q74zx                   1/1     Running            0          1m
nebula-metad-0                                   1/1     Running            0          1m
nebula-metad-1                                   1/1     Running            0          1m
nebula-metad-2                                   1/1     Running            0          1m
nebula-storaged-0                                1/1     Running            0          1m
nebula-storaged-1                                1/1     Running            0          1m
nebula-storaged-2                                1/1     Running            0          1m
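
Besides pod status, it can be useful to confirm the Services and StatefulSets created by the chart (the grep pattern assumes the release is named nebula, as above):

Shell
# graphd is exposed through a Service; metad and storaged are managed as StatefulSets
[root@nebula ~]# kubectl get svc,statefulset | grep nebula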

Deploy Ingress-controller

The ingress controller is one of the Kubernetes add-ons. Kubernetes exposes internally deployed services to external users through the ingress controller. The ingress controller also provides load balancing, distributing external traffic across the different application replicas in the cluster.

Select a Node to Deploy Ingress-Controller

Shell
[root@nebula ~]# kubectl get node
NAME              STATUS     ROLES    AGE   VERSION
192.168.0.1       Ready      master   82d   v1.16.1
192.168.0.2       Ready      <none>   82d   v1.16.1
192.168.0.3       Ready      <none>   82d   v1.16.1
192.168.0.4       Ready      <none>   82d   v1.16.1
[root@nebula ~]# kubectl label node 192.168.0.4 ingress=yes

Edit the ingress-nginx.yaml deployment file.

YAML
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      tolerations:
        - key: "node-role.kubernetes.io/master"
          operator: "Exists"
          effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - ingress-nginx
              topologyKey: "ingress-nginx.kubernetes.io/master"
      nodeSelector:
        ingress: "yes"
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller-amd64:0.26.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=default/graphd-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --http-port=8000
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

Deploy ingress-nginx:

Shell
# Deployment
[root@nebula ~]# kubectl create -f ingress-nginx.yaml
# View deployment
[root@nebula ~]# kubectl get pod -n ingress-nginx
NAME                             READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-mmms7   1/1     Running   2          1m
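
Because the controller is a DaemonSet pinned to nodes labeled ingress=yes, it can also be inspected at the DaemonSet level, and its logs checked if the pod does not become Ready (pod name taken from the listing above):

Shell
[root@nebula ~]# kubectl get daemonset -n ingress-nginx
[root@nebula ~]# kubectl logs -n ingress-nginx nginx-ingress-controller-mmms7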



Access Nebula Graph Cluster in Kubernetes

Check which node ingress-nginx is running on:

Shell
[root@nebula ~]# kubectl get node -l ingress=yes -owide
NAME            STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION        CONTAINER-RUNTIME
nebula.node23   Ready    <none>   1d    v1.16.1   192.168.8.23   <none>        CentOS Linux 7 (Core)   7.6.1810.el7.x86_64   docker://19.3.3

Access Nebula Graph Cluster:

Shell
[root@nebula ~]# docker run --rm -ti --net=host vesoft/nebula-console:nightly --addr=192.168.8.23 --port=3699
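
Once the console connects to graphd through the ingress node, a quick sanity check is to list the storage hosts that have registered with metad. SHOW HOSTS is standard nGQL; the exact output depends on your cluster:

nGQL
-- List the storaged hosts known to the metad service
SHOW HOSTS;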



FAQ

How to deploy Kubernetes cluster?

Please refer to the Official Doc on deployment of high-availability Kubernetes clusters.

You can also refer to Installing Kubernetes with Minikube on how to deploy a local Kubernetes cluster with minikube.

How to modify the Nebula Graph cluster parameters?

When using helm install, you can use --set to override the default variables in values.yaml. Please refer to the Helm documentation for details.
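
For example, a sketch of overriding a value at install time (the key below is hypothetical; check master/kubernetes/helm/values.yaml for the real keys):

Shell
# StorageReplicas is a made-up key used only to illustrate the --set syntax
[root@nebula ~]# helm install nebula master/kubernetes/helm --set StorageReplicas=3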

How to observe nebula cluster status?

You can use the kubectl get pod | grep nebula command, or check the status via the Kubernetes dashboard.

How to use other disk types?

Please refer to the Storage Classes doc.

If you have any questions, you are welcome to join the Nebula Graph Slack channel and talk with the community! Follow the official Twitter handle for more updates on Nebula Graph.
