End-to-End Automation for a Docker Hazelcast Cluster on Any Cloud (and How DCHQ Uses Hazelcast for Data Caching in DEV)
Learn how to achieve end-to-end automation of a Hazelcast cluster using DCHQ, which integrates with 12 different clouds.
Background
Containerizing enterprise applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision.

DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling. Once an application is provisioned, a user can monitor the CPU, memory, and I/O of the running containers, get notifications and alerts, and get access to scheduled backups, scale in/out policies, and continuous delivery.

In this blog, we will go over the end-to-end automation of a Hazelcast cluster. DCHQ not only automates application deployments, but it also integrates with 12 different clouds to automate the provisioning and auto-scaling of clusters with software-defined networking. DCHQ has also started using Hazelcast for data caching in development. We will cover:
- Building the application template for the Hazelcast Cluster that can be re-used on any Linux host running anywhere
- Provisioning & auto-scaling the underlying infrastructure on any cloud (with Rackspace being the example in this blog)
- Deploying the Hazelcast Cluster on a Rackspace Cloud Server
- Monitoring the CPU, Memory & I/O of the Running Containers
- Adding Hazelcast to the DCHQ application stack for data caching
Building the Application Template for the Hazelcast Cluster
Once logged in to DCHQ (either the hosted DCHQ.io or the on-premise version), a user can navigate to Manage > Templates and then click on the + button to create a new Docker Compose template. We have created an application template using the official Hazelcast image from Docker Hub. The template contains the Hazelcast Management Center and clustered Hazelcast nodes. The cluster_size parameter lets you specify the number of containers to launch (with the same application dependencies), and the host parameter lets you distribute containers across different hosts to achieve high availability. The host parameter specifies the host you would like to use for container deployments, so you can ensure high availability for your application server clusters across different hosts (or regions) and comply with affinity rules, for example ensuring that the database runs on a separate host. Here are the values supported for the host parameter (see the sketch after this list):
- host1, host2, host3, etc. – selects a host randomly within a data-center (or cluster) for container deployments
- <IP Address 1, IP Address 2, etc.> -- allows a user to specify the actual IP addresses to use for container deployments
- <Hostname 1, Hostname 2, etc.> -- allows a user to specify the actual hostnames to use for container deployments
- Wildcards (e.g. “db-*”, or “app-srv-*”) – to specify the wildcards to use within a hostname
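For illustration, here is a minimal sketch of what such a template might look like (Compose v1 syntax; the image tags and the Management Center port mapping are assumptions, so check them against the actual template in the Self-Service Library):

```yaml
# Hypothetical DCHQ application template for a Hazelcast cluster
Hazelcast-Management-Center:
  image: hazelcast/management-center:latest  # assumed image tag
  ports:
    - "8080:8080"        # Management Center UI (see /mancenter below)
  host: host1            # pick a host randomly within the data-center
Hazelcast:
  image: hazelcast/hazelcast:latest          # official Docker Hub image
  cluster_size: 2        # DCHQ extension: launch two identical nodes
  host: host1            # spread nodes across hosts for high availability
  environment:
    # DCHQ extension: resolved to each node's private IP at request time
    - HAZELCAST_IP={{Hazelcast | container_private_ip}}
```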
Additionally, a user can create cross-image environment variable bindings by making a reference to another image’s environment variable. In this case, we have made several bindings, including HAZELCAST_IP={{Hazelcast | container_private_ip}}, in which the private IPs of the server containers in the cluster are resolved dynamically at request time. Here is a list of supported environment variable values (an illustrative fragment follows this list):
- {{alphanumeric | 8}} – creates a random 8-character alphanumeric string. This is most useful for creating random passwords.
- {{<Image Name> | ip}} – allows you to enter the host IP address of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database.
- {{<Image Name> | container_ip}} – allows you to enter the internal IP of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
- {{<Image Name> | port_<Port Number>}} – allows you to enter the port number of a container as a value for an environment variable. The port number specified needs to be the internal port number, i.e. not the external port that is allocated to the container. For example, {{PostgreSQL | port_5432}} will be translated to the actual external port, allowing the middleware tier to establish a connection with the database.
- {{<Image Name> | <Environment Variable Name>}} – allows you to enter the value of one image’s environment variable into another image’s environment variable. The use cases here are endless, as most multi-tier applications will have cross-image dependencies.
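Put together, these bindings appear as ordinary environment entries in the template. Here is an illustrative fragment (the service names, images, and variable names are hypothetical; only the binding syntax itself is taken from the list above):

```yaml
# Hypothetical two-tier fragment showing each binding type
AppServer:
  image: tomcat:latest
  environment:
    - ADMIN_PASSWORD={{alphanumeric | 8}}       # random 8-character password
    - DB_HOST={{PostgreSQL | container_ip}}     # database container's internal IP
    - DB_PORT={{PostgreSQL | port_5432}}        # resolved from internal port 5432
    - DB_PASSWORD={{PostgreSQL | POSTGRES_PASSWORD}}  # cross-image variable value
PostgreSQL:
  image: postgres:latest
  environment:
    - POSTGRES_PASSWORD={{alphanumeric | 8}}
```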
Provisioning & Auto-Scaling the Underlying Infrastructure on Any Cloud
Once an application is saved, a user can register a Cloud Provider to automate the provisioning and auto-scaling of clusters on 12 different cloud end-points, including OpenStack, CloudStack, Amazon Web Services, Rackspace, Microsoft Azure, DigitalOcean, HP Public Cloud, IBM SoftLayer, Google Compute Engine, and others. First, a user can register a Cloud Provider for Rackspace (for example) by navigating to Manage > Repo & Cloud Provider and then clicking on the + button to select Rackspace. The Rackspace API Key needs to be provided; it can be retrieved from the Account Settings section of the Rackspace Cloud Control Panel. A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy and Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster. The Auto-Scale Policy in this example sets the maximum number of VMs (or Cloud Servers) to 10.
A user can now provision a number of Cloud Servers on the newly created cluster by navigating to Manage > Bare-Metal Server & VM and then clicking on the + button to select Rackspace. Once the Cloud Provider is selected, a user can select the region, size and image needed. A Data Center (or Cluster) is then selected and the number of Cloud Servers can be specified.
Deploying the Hazelcast Cluster on a Rackspace Cloud Server
Once the Cloud Servers are provisioned, a user can deploy the Hazelcast Cluster on the new Cloud Servers. This can be done by navigating to the Self-Service Library and then clicking on Customize to request a multi-tier application. A user can select an Environment Tag (like DEV or QE) and the Rackspace Cluster created earlier before clicking on Run. Once the Hazelcast Cluster is provisioned, you will notice that the Management Center and the two nodes of the cluster are all on completely different hosts. A user can then log in to the Hazelcast Management Center at http://<IP>:8080/mancenter. The user will be prompted to “assign a web URL dynamically” by adding at least one of the “private” IPs of the containers in the cluster. Once this is done, the cluster will be discovered and can be managed from the Management Center. A user could also have leveraged the plug-in framework to push the hazelcast.xml file to all the nodes in the cluster and then restart the containers from a single workflow. This can be done by clicking on the Actions menu and then selecting Plug-ins.
Lastly, if a user wishes to create a Docker image with the hazelcast.xml file already baked in, then the build automation feature can be used to automate the creation of this image from a GitHub project and push it to a Docker repository of the user’s choosing (e.g. Docker Hub, Quay, etc.). This can be done by navigating to the Builds page.
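As a lighter-weight alternative to baking the file into a new image, a custom hazelcast.xml could also be mounted into the containers with a standard Compose volume. A sketch, assuming the official image reads its configuration from /opt/hazelcast (verify the path against the image documentation):

```yaml
Hazelcast:
  image: hazelcast/hazelcast:latest
  cluster_size: 2
  volumes:
    # Overlay a custom config on the image default; the in-container
    # path is an assumption about the official image's layout.
    - ./hazelcast.xml:/opt/hazelcast/hazelcast.xml
```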
Monitoring the CPU, Memory & I/O Utilization of the Running Containers
Once the application is up and running, a user can monitor the CPU, memory, and I/O of the running containers and get alerts when these metrics exceed a pre-defined threshold. This is especially useful when our developers are performing functional and load testing. A user can perform historical monitoring analysis and correlate issues to container updates or build deployments. This can be done by clicking on the Actions menu of the running application and then on Monitoring. A custom date range can be selected to view CPU, memory, and I/O historically. Alerts and notifications are available for when containers or hosts are down or when the CPU and memory utilization of either hosts or containers exceeds a defined threshold.
Adding Hazelcast to the DCHQ Application Stack for Data Caching
DCHQ itself runs on containers, and we’ve recently added Hazelcast to our own template for data caching. Here’s how we modeled DCHQ using DCHQ itself. You will notice that we’re using the official Hazelcast Docker image. We’re also leveraging the environment variable bindings, e.g. spring.hazelcast.host={{DCHQ-Hazelcast | CONTAINER_PRIVATE_IP}}, to resolve the Hazelcast container’s private IP at request time so that our Tomcat application server can establish a connection.
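In template form, the relevant fragment might look roughly like the following (the service names and the Tomcat image are illustrative; only the spring.hazelcast.host binding is taken from our actual template):

```yaml
DCHQ-Tomcat:
  image: tomcat:latest   # illustrative app-server image for DCHQ
  environment:
    # Resolved at request time to the Hazelcast container's private IP
    - spring.hazelcast.host={{DCHQ-Hazelcast | CONTAINER_PRIVATE_IP}}
DCHQ-Hazelcast:
  image: hazelcast/hazelcast:latest          # official Hazelcast image
```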
Conclusion
Containerizing enterprise Java applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision. DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling.
Sign up for FREE on http://DCHQ.io to get access to this Hazelcast Cluster application template, along with application lifecycle management functionality like monitoring, container updates, scale in/out, and continuous delivery.
Published at DZone with permission of Amjad Afanah, DZone MVB.