
JobRunr + Kubernetes + Terraform

Deploy the JobRunr application to a Kubernetes cluster on the Google Cloud Platform (GCP) using Terraform.

By Ronald Dehuysser · May 11, 2020 · Tutorial

In this new tutorial, we build further upon our first tutorial, Easily process long-running jobs with JobRunr, and deploy the JobRunr application to a Kubernetes cluster on the Google Cloud Platform (GCP) using Terraform. We then scale it up to 10 instances to get a whopping 869% speed increase compared to a single instance!

This tutorial is a beginner's guide to cloud infrastructure management. Feel free to skip to the parts that interest you.

Kubernetes, also known as k8s, is the hot new DevOps tool for deploying highly available applications. Today, a lot of providers support Kubernetes, including the well-known Google Kubernetes Engine (GKE), Azure Kubernetes Service (AKS), and Amazon Elastic Kubernetes Service (EKS).

Although the world is currently going through difficult times because of COVID-19, Acme Corp (see the first tutorial) hired so many people that about 10,000 employees now work for them. Acme Corp's CEO insists that all employees get their weekly salary slip before Sunday 11 pm, but this has become impossible: generating that many salary slips simply takes too long.

Luckily, JobRunr is here to help, as it is a distributed background job processing framework. In this tutorial, we will:

  • Create a Docker image from our SalarySlipMicroservice JobRunr application using Jib by Google
  • Upload the Docker image to a private Docker registry at Google
  • Use Terraform to define our infrastructure as code, which includes a Google Cloud SQL instance
  • Deploy a Kubernetes cluster to Google Cloud using Terraform
  • Deploy one instance of the SalarySlipMicroservice JobRunr Docker image to the Kubernetes cluster
  • Start generating all the salary slips
  • Scale to 10 instances of the SalarySlipMicroservice JobRunr application, all without any change to our production Java code!
TL;DR: you can find the complete project on our GitHub repository: https://github.com/jobrunr/example-salary-slip/tree/kubernetes
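
If you want to follow along with the finished code, you can clone the repository and check out its kubernetes branch (the branch name is taken from the URL above); a minimal sketch, assuming the default clone directory name:

Shell
 
~$ git clone https://github.com/jobrunr/example-salary-slip.git
~$ cd example-salary-slip
~$ git checkout kubernetes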

Postgres as Database

In the first version of our application, we used an embedded H2 database. As we are now deploying to the Google Cloud Platform (GCP), we will use a Cloud SQL Postgres instance. To do so, we need to change our DataSource in the SalarySlipMicroService as follows:

Java
 
@Bean
public DataSource dataSource() {
    HikariConfig config = new HikariConfig();
    config.setJdbcUrl(String.format("jdbc:postgresql:///%s", System.getenv("DB_NAME")));
    config.setUsername(System.getenv("DB_USER"));
    config.setPassword(System.getenv("DB_PASS"));
    config.addDataSourceProperty("socketFactory", "com.google.cloud.sql.postgres.SocketFactory");
    config.addDataSourceProperty("cloudSqlInstance", System.getenv("CLOUD_SQL_INSTANCE"));
    return new HikariDataSource(config);
}

The DataSource now uses environment variables to connect to the Postgres Cloud SQL instance.

Dockerize It!

Since Kubernetes runs Pods, which are in fact one or more Docker containers, we first need to create a Docker image from our application. Jib is a tool from Google to easily create Docker images from your Java application using only Maven or Gradle.

In our build.gradle file, we add the following plugin:

Groovy
 
plugins {
    ...
    id 'com.google.cloud.tools.jib' version '2.2.0'
}

...

jib {
    from {
        image = "gcr.io/distroless/java:11"
    }
    to {
        image = "gcr.io/jobrunr-tutorial-kubernetes/jobrunr-${project.name}:1.0"
    }
    container {
        jvmFlags = ["-Duser.timezone=Europe/Brussels"]
        ports = ["8000", "8080"]
    }
}

We configure the Jib plugin and tell it to build upon the distroless Java 11 base image. We tag the image with gcr.io/jobrunr-tutorial-kubernetes/jobrunr-${project.name}:1.0 so that it will be available later in GCP, specify the timezone, and tell it to expose some ports.

If we now run the Gradle command ./gradlew jibDockerBuild, it will create a new Docker image for us, ready to run on Docker!
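
Before pushing the image to GCP, you can optionally smoke-test it locally with a plain docker run; a hedged sketch where the DB_* and CLOUD_SQL_INSTANCE values are placeholders (the DataSource above expects a reachable Cloud SQL instance, so job processing will only work once real credentials are supplied):

Shell
 
# run the Jib-built image locally, exposing the dashboard (8000) and REST API (8080);
# the database settings below are placeholders, not working values
~$ docker run -p 8000:8000 -p 8080:8080 \
     -e DB_NAME=jobrunr -e DB_USER=jobrunr -e DB_PASS=changeme \
     -e CLOUD_SQL_INSTANCE=<project>:<region>:<instance> \
     gcr.io/jobrunr-tutorial-kubernetes/jobrunr-example-paycheck:1.0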

Install the Necessary Tools

We now need to install all the necessary tools and create a Google Cloud account:

  • Google Cloud SDK: Google Cloud SDK is a set of tools that you can use to manage resources and applications hosted on Google Cloud Platform.
  • Kubectl: Kubectl is a command line tool for controlling Kubernetes clusters.
  • Terraform: Terraform is an open-source infrastructure as code software tool created by HashiCorp. It enables users to define and provision a data center infrastructure using a high-level configuration language known as Hashicorp Configuration Language.

The installation of these tools is well documented and differs per OS. Follow the installation guide for each tool and come back to this tutorial once you are done.
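
A quick way to confirm the three tools are installed correctly is to print their versions:

Shell
 
~$ gcloud version
~$ kubectl version --client
~$ terraform version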

We also need a Google Cloud account. Using your browser, navigate to https://console.cloud.google.com/. When you first log in to the Google Cloud Platform, you get €300 of free credit, which is more than enough for us. You can activate it at the top right.

The Google console dashboard with the free trial at the top


Create the GCP Project

In this tutorial, we will use the terminal as much as possible, so fire up a terminal and log in to gcloud using the command gcloud auth login. This allows you to log in only once for all future gcloud commands.

To deploy a Kubernetes cluster to GCP, we first need to create a new GCP project, add a billing account to it, enable the container and Cloud SQL APIs, and upload our Docker image:

Shell
 
~$ gcloud projects create jobrunr-tutorial-kubernetes --name="JobRunr K8s Tutorial" --set-as-default
~$ gcloud beta billing accounts list
~$ gcloud beta billing projects link jobrunr-tutorial-kubernetes --billing-account ${accountId}
~$ gcloud services enable container.googleapis.com
~$ gcloud services enable sqladmin.googleapis.com
~$ docker push gcr.io/jobrunr-tutorial-kubernetes/jobrunr-example-paycheck:1.0

The first command creates the GCP project and makes it the default project. The second command lists an account id, account name, and some other data. Use that account id in the third command to link billing to your GCP project. Next, some Google APIs need to be enabled. The last command uploads the Docker image to a private Docker registry at Google.
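
Depending on your local setup, Docker may not yet be allowed to push to gcr.io. If the docker push is rejected with an authentication error, registering gcloud as a Docker credential helper usually fixes it:

Shell
 
~$ gcloud auth configure-docker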

We also need a Terraform service account with the necessary rights to create the Kubernetes cluster in the GCP project.

Shell
 
~$ gcloud iam service-accounts create terraform --display-name "Terraform admin account"
~$ gcloud projects add-iam-policy-binding jobrunr-tutorial-kubernetes --member='serviceAccount:terraform@jobrunr-tutorial-kubernetes.iam.gserviceaccount.com' --role='roles/editor'
~$ gcloud projects add-iam-policy-binding jobrunr-tutorial-kubernetes --member='serviceAccount:terraform@jobrunr-tutorial-kubernetes.iam.gserviceaccount.com' --role='roles/resourcemanager.projectIamAdmin'
~$ gcloud projects add-iam-policy-binding jobrunr-tutorial-kubernetes --member='serviceAccount:terraform@jobrunr-tutorial-kubernetes.iam.gserviceaccount.com' --role='roles/cloudsql.client'
~$ gcloud iam service-accounts keys create ~/.config/gcloud/jobrunr-tutorial-kubernetes-terraform-admin.json --iam-account=terraform@jobrunr-tutorial-kubernetes.iam.gserviceaccount.com
~$ export TF_CREDS=~/.config/gcloud/jobrunr-tutorial-kubernetes-terraform-admin.json
~$ export GOOGLE_APPLICATION_CREDENTIALS=${TF_CREDS}

First, a service account for Terraform is created. It is given the roles editor, resourcemanager.projectIamAdmin, and cloudsql.client. Finally, a private key is created, saved to a JSON file, and exported so that it can be used by Terraform.
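
As a quick sanity check, you can verify that the Terraform service account exists before moving on:

Shell
 
~$ gcloud iam service-accounts list --project jobrunr-tutorial-kubernetes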

Terraform Deep Dive

Now that we're all set up, we can start defining our infrastructure as code using Terraform.

In Terraform, several concepts exist:

  • Providers: a provider is responsible for understanding API interactions and exposing resources. Providers are generally an IaaS (e.g., AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g., Heroku), or SaaS service
  • Resources: resources are the most important element in the Terraform language. Each resource block describes one or more infrastructure objects, such as virtual networks or compute instances
  • Variables: a variable can have a default value. If you omit the default value, Terraform will ask you to provide it when running a terraform command
  • Modules: a module is nothing more than a folder that groups related Terraform files
  • Outputs: sometimes a value is only known after Terraform has made a change on a cloud provider (think of IP addresses that are assigned to your application). An output takes that value and exposes it so it can be consumed elsewhere, for example as an input variable of another module

In Terraform, you can organize your code any way you like; Terraform itself figures out how to deploy it. In this tutorial, we will use two modules:

  • gke module: this module is responsible for setting up a Kubernetes cluster and a Postgres Cloud SQL instance.
  • k8s module: this module deploys our application to the Kubernetes cluster and exposes it to the internet via a service.

Our entry point in Terraform is the main.tf configuration file. Next to it are two directories: gke and k8s. The final directory layout is as follows:

  • gke
    • variables.tf
    • gcp.tf
    • cluster.tf
    • cloudsql.tf
  • k8s
    • variables.tf
    • k8s.tf
    • deployment.tf
    • service.tf
  • main.tf


Entrypoint for Terraform - main.tf

main.tf is the entry point in our infrastructure as code.

HCL
 
#####################################################################
# Variables
#####################################################################
variable "project" {
  default = "jobrunr-tutorial-kubernetes"
}
variable "region" {
  default = "europe-west1"
}
variable "username" {
  default = "admin"
}
variable "password" {
  default = "cluster-password-change-me"
}

#####################################################################
# Modules
#####################################################################
module "gke" {
  source = "./gke"
  project = var.project
  region = var.region
  username = var.username
  password = var.password
}

module "k8s" {
  source = "./k8s"
  host = module.gke.host
  username = var.username
  password = var.password

  client_certificate = module.gke.client_certificate
  client_key = module.gke.client_key
  cluster_ca_certificate = module.gke.cluster_ca_certificate
  cloudsql_instance = module.gke.cloudsql_db_instance
  cloudsql_db_name = module.gke.cloudsql_db_name
  cloudsql_db_user = module.gke.cloudsql_db_user
  cloudsql_db_password = module.gke.cloudsql_db_password
}

main.tf: in this file, the GCP project that was created earlier is reused. Other variables are also defined, like the region where the application will run and a username and password for the Kubernetes cluster. Next, two modules are defined which consume the variables. The k8s module reuses outputs from the gke module.
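
While writing the Terraform files in the next sections, terraform fmt keeps the formatting consistent and terraform validate catches syntax errors early (validate needs an initialized working directory, so run it after the terraform init step further below):

Shell
 
~/jobrunr/gcloud$ terraform fmt
~/jobrunr/gcloud$ terraform validate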

GKE Module

Our GKE module will create a container cluster on Google Cloud and provision a Postgres Cloud SQL instance. We start by defining some variables that can then be used in the other Terraform files.

HCL
 
#####################################################################
# GKE Variables
#####################################################################
variable "project" {}
variable "region" {}
variable "username" {}
variable "password" {}

gke/variables.tf: this file defines all the variables that are needed for the Kubernetes engine in Google Cloud. The values for the variables themselves are provided in the main.tf file.
HCL
 
#####################################################################
# GKE Provider
#####################################################################
provider "google" {
  project = var.project
  region  = var.region
}

gke/gcp.tf: the google provider allows us to create a container cluster and a Postgres Cloud SQL instance.
HCL
 
#####################################################################
# GKE Cluster
#####################################################################
resource "google_container_cluster" "jobrunr-tutorial-kubernetes" {
  name               = "jobrunr-tutorial-kubernetes"
  location           = var.region
  initial_node_count = 1

  master_auth {
    username = var.username
    password = var.password
  }

  node_config {
    machine_type = "n1-standard-2"
    oauth_scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
      "https://www.googleapis.com/auth/compute",
      "https://www.googleapis.com/auth/cloud-platform", //needed for sqlservice
      "https://www.googleapis.com/auth/sqlservice.admin"
    ]
  }
}

#####################################################################
# Output for K8S
#####################################################################
output "client_certificate" {
  value     = google_container_cluster.jobrunr-tutorial-kubernetes.master_auth[0].client_certificate
  sensitive = true
}

output "client_key" {
  value     = google_container_cluster.jobrunr-tutorial-kubernetes.master_auth[0].client_key
  sensitive = true
}

output "cluster_ca_certificate" {
  value     = google_container_cluster.jobrunr-tutorial-kubernetes.master_auth[0].cluster_ca_certificate
  sensitive = true
}

output "host" {
  value     = google_container_cluster.jobrunr-tutorial-kubernetes.endpoint
  sensitive = true
}

gke/cluster.tf: for the GKE cluster, a machine of type n1-standard-2 is defined, which equals 2 virtual CPUs. Various oauth_scopes are given; the important ones are compute, cloud-platform, and sqlservice.admin. They are needed to interact with the compute engine for our Kubernetes cluster and with the Postgres Cloud SQL instance. Some outputs are defined which will be consumed by the Terraform Kubernetes resources.
HCL
 
#####################################################################
# GKE Cloud SQL
#####################################################################
resource "google_sql_database_instance" "postgres" {
  database_version = "POSTGRES_11"

  settings {
    tier = "db-g1-small"
    database_flags {
      name = "max_connections"
      value = 100
    }
  }
  timeouts {
    delete = "10m"
  }
}

resource "google_sql_user" "users" {
  name = "jobrunr"
  instance = google_sql_database_instance.postgres.name
  password = "changeme"
}

resource "google_sql_database" "database" {
  name = "jobrunr"
  instance = google_sql_database_instance.postgres.name
}

#####################################################################
# Output for K8S
#####################################################################
output "cloudsql_db_name" {
  value = google_sql_database.database.name
  sensitive = true
}

output "cloudsql_db_user" {
  value = google_sql_user.users.name
  sensitive = true
}

output "cloudsql_db_password" {
  value = google_sql_user.users.password
  sensitive = true
}

output "cloudsql_db_instance" {
  value = "${var.project}:${var.region}:${google_sql_database_instance.postgres.name}"
  sensitive = true
}

gke/cloudsql.tf: a Postgres Cloud SQL instance is defined, together with a user and a database. Again, various outputs are defined which will be consumed by our k8s module.

k8s Module

The k8s module will deploy the Docker image we created earlier and provide it with the environment variables to connect to the Postgres Cloud SQL instance. It will also create a Kubernetes service of type LoadBalancer to expose the application to the internet.

We again start with the variables that can be used in the other Terraform files from the k8s module.

HCL
 
#####################################################################
# K8S Variables
#####################################################################
variable "username" {}
variable "password" {}
variable "host" {}
variable "client_certificate" {}
variable "client_key" {}
variable "cluster_ca_certificate" {}

variable "cloudsql_instance" {}
variable "cloudsql_db_name" {}
variable "cloudsql_db_user" {}
variable "cloudsql_db_password" {}

k8s/variables.tf: the values for these variables are all passed from the main.tf file, which acts as a bridge between the gke module and the k8s module.
HCL
 
#####################################################################
# K8S Provider
#####################################################################
provider "kubernetes" {
  host     = var.host
  username = var.username
  password = var.password

  client_certificate     = base64decode(var.client_certificate)
  client_key             = base64decode(var.client_key)
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
}

k8s/k8s.tf: the kubernetes provider allows us to interact with resources supported by Kubernetes. 
HCL
 
#####################################################################
# K8S Deployment
#####################################################################
resource "kubernetes_deployment" "jobrunr-tutorial" {
  metadata {
    name = "jobrunr"

    labels = {
      app = "jobrunr"
    }
  }

  spec {
    replicas = 1

    selector {
      match_labels = {
        app = "jobrunr"
      }
    }

    template {
      metadata {
        labels = {
          app = "jobrunr"
        }
      }

      spec {

        container {
          image = "gcr.io/jobrunr-tutorial-kubernetes/jobrunr-example-paycheck:1.0"
          name = "jobrunr"

          port {
            container_port = 8000
          }
          port {
            container_port = 8080
          }

          env {
            name = "CLOUD_SQL_INSTANCE"
            value = var.cloudsql_instance
          }

          env {
            name = "DB_NAME"
            value = var.cloudsql_db_name
          }

          env {
            name = "DB_USER"
            value = var.cloudsql_db_user
          }

          env {
            name = "DB_PASS"
            value = var.cloudsql_db_password
          }

          resources {
            limits {
              cpu = "0.5"
              memory = "1024Mi"
            }
            requests {
              cpu = "250m"
              memory = "512Mi"
            }
          }
        }
      }
    }
  }
}

k8s/deployment.tf: this is the deployment resource where our Docker image is provisioned on the Kubernetes cluster. Currently, only 1 replica or instance is requested. The important part is everything under the container attribute: it contains the Docker image which the pod should run and the ports that should be exposed, and it passes all the database credentials using environment variables. On top of that, resource limits and resource requests are defined.
HCL
 
#####################################################################
# K8S Service
#####################################################################
resource "kubernetes_service" "jobrunr-tutorial" {
  metadata {
    name = "jobrunr-tutorial"
  }
  spec {
    selector = {
      app = kubernetes_deployment.jobrunr-tutorial.spec.0.template.0.metadata[0].labels.app
    }
    port {
      name = "dashboard"
      port = 8000
      target_port = 8000
    }
    port {
      name = "rest-api"
      port = 8080
      target_port = 8080
    }

    type = "LoadBalancer"
  }
}

k8s/service.tf: the final piece of the puzzle. The Kubernetes service makes sure that both the dashboard and the REST API are available on the internet.

Deploy Time!

We can now use Terraform commands to provision our application to the Google Cloud Platform. Make sure you are in the directory that contains the main.tf file and the gke and k8s folders when issuing the following commands:

Shell
 
~/jobrunr/gcloud$ terraform init
~/jobrunr/gcloud$ terraform plan
~/jobrunr/gcloud$ terraform apply

The terraform init command downloads the necessary plugins (google and kubernetes) to execute the requested infrastructure changes. The second command, terraform plan, lists all the required infrastructure changes. The last command, terraform apply, makes the actual infrastructure changes.

After you run the terraform apply command, you have to wait... a typical deploy takes about 5 minutes.

After the deployment succeeds, we can query Kubernetes to find out the public IP address.

Shell
 
~/jobrunr/gcloud$ gcloud container clusters get-credentials jobrunr-tutorial-kubernetes --region europe-west1
~/jobrunr/gcloud$ kubectl get services
~/jobrunr/gcloud$ kubectl get pods

The first command downloads credentials and makes them available to the kubectl command. kubectl get services lists all the services and their public IP addresses. The last command, kubectl get pods, lists the pods; there should be one pod active.
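
If you only need the external IP of the service (we use it in the next step), a jsonpath query saves you from scanning the kubectl get services output; the service name comes from k8s/service.tf:

Shell
 
~/jobrunr/gcloud$ kubectl get service jobrunr-tutorial -o jsonpath='{.status.loadBalancer.ingress[0].ip}'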

Testing Time...

Since the salary slip microservice is now available on the internet, we can test it. First, we will create 10,000 employees in our database. To do so, fire up your favorite browser and go to the URL http://${public-ip-from-the-service}:8080/create-employees?amount=10000. This takes about 15 seconds.
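
If you prefer the terminal over the browser, the same endpoint can be called with curl, where ${PUBLIC_IP} is a placeholder for the external IP found in the previous step:

Shell
 
~$ curl "http://${PUBLIC_IP}:8080/create-employees?amount=10000"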

Now, visit the JobRunr dashboard; you can find it at http://${public-ip-from-the-service}:8000/dashboard. Navigate to the Recurring jobs tab and trigger the 'Generate and send salary slip to all employees' job. After about 15 seconds, you should have 10,000 enqueued jobs. Let's measure how long it takes to process them...

It takes 11,229 seconds, or about 3 hours and 7 minutes, to create all the salary slips.

Scale it up!

Now, let's add 10 instances of our application to the cluster by changing the replicas attribute in the deployment.tf file.

HCL
 
#####################################################################
# K8S Deployment
#####################################################################
resource "kubernetes_deployment" "jobrunr-tutorial" {
  metadata {
    ...
  }

  spec {
    replicas = 10
    ...
  }
}

k8s/deployment.tf: the replicas value is changed from 1 to 10 in the Kubernetes deployment resource.

We now apply this change by running the terraform apply command again:
~/jobrunr/gcloud$ terraform apply
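
After the apply completes, you can follow the rollout at the deployment level; the deployment is named jobrunr in k8s/deployment.tf:

Shell
 
~/jobrunr/gcloud$ kubectl get deployment jobrunr
~/jobrunr/gcloud$ kubectl rollout status deployment/jobrunr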

If you run kubectl get pods again, you will now see 10 pods running our JobRunr application. Let's trigger the 'Generate and send salary slip to all employees' recurring job again and wait for it to finish.

It only took 1,292 seconds, or about 21 minutes and 30 seconds!

To keep your free GCP credit, do not forget to issue the command terraform destroy. It will stop all pods, remove the Kubernetes cluster, and delete the Postgres Cloud SQL instance.
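
For completeness, here is the teardown in the same style as the earlier commands; deleting the whole GCP project afterwards is optional, but it also removes the pushed container image:

Shell
 
~/jobrunr/gcloud$ terraform destroy
# optional: remove the entire project, including the private Docker registry image
~$ gcloud projects delete jobrunr-tutorial-kubernetes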

Conclusion

JobRunr can easily scale horizontally and allows you to distribute all long-running background jobs over multiple instances without any change to the Java code. In an ideal world, we would have seen a 900% speed increase instead of the 869% we see now, as we added 9 extra pods. Since JobRunr performs each job exactly once, there is some overhead when pulling jobs from the queue, which explains the difference.

Learn more

I hope you enjoyed this tutorial and that you can see the benefits of JobRunr, Terraform, and Kubernetes: they allow you to easily scale horizontally and distribute all long-running background jobs over multiple instances without any change to the Java code.

To learn more, check out these guides:

  • JobRunr — Java batch processing made easy...
  • Terraform — Provision servers in the cloud with Terraform
  • Kubernetes — Getting started with Kubernetes
  • Jib — Create fast and easy docker images with Jib

Published at DZone with permission of Ronald Dehuysser. See the original article here.

Opinions expressed by DZone contributors are their own.
