End-to-End Automation for a Docker-based Couchbase Cluster with DCHQ
How to use DCHQ not only to automate Couchbase deployments, but also to integrate with 12 different clouds.
Background
Containerizing enterprise applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision. DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling. Once an application is provisioned, a user can monitor the CPU, Memory, & I/O of the running containers, get notifications & alerts, and get access to scheduled backups, scale in/out policies, and continuous delivery.

In this blog, we will go over the end-to-end automation of a Couchbase Cluster. DCHQ not only automates the application deployments, but also integrates with 12 different clouds to automate the provisioning and auto-scaling of clusters with software-defined networking. We will cover:
- Building the application template for the Couchbase Cluster that can be reused on any Linux host running anywhere
- Provisioning & auto-scaling the underlying infrastructure on any cloud (with Rackspace being the example in this blog)
- Deploying the Couchbase Cluster on a Rackspace Cloud Server
- Monitoring the CPU, Memory & I/O of the Running Containers
- Scaling out the Couchbase Cluster by Adding Additional Server Nodes
Building the Application Template for the Couchbase Cluster
Once logged in to DCHQ (either the hosted DCHQ.io or the on-premise version), a user can navigate to Manage > Templates and then click on the + button to create a new Docker Compose template. We have created an application template using the official Couchbase image from Docker Hub. Couchbase’s architecture does not have primary and secondary nodes, but for the purpose of simplifying the template we have two entries:
- Couchbase-Primary – this will be the container on which we will initialize the cluster and add the other server nodes
- Couchbase-Server – this will be for the server nodes that can be scaled in/out based on the parameter “cluster_size”
You will notice that Couchbase-Primary is invoking a BASH script plug-in that does the following:
- Initializes the cluster using “cluster-init”
- Adds a new bucket using “bucket-create”
- Adds the other server nodes to the cluster using “server-add”
- And finally, rebalances the load using “rebalance”
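As a minimal sketch of what such a plug-in might look like: the couchbase-cli flags shown here are from the standard Couchbase CLI, but the CLUSTER_USER, CLUSTER_PASS, and SERVER_IP variable names (and the RAM sizes) are illustrative assumptions, not the exact script shipped with the template. SERVER_IP is assumed to arrive as a comma-separated list of the resolved Couchbase-Server container IPs.

```shell
#!/bin/sh
# Hypothetical sketch of the Couchbase-Primary plug-in. CLUSTER_USER,
# CLUSTER_PASS, and SERVER_IP are assumed to be injected by DCHQ at
# request time; SERVER_IP is a comma-separated list of the resolved
# Couchbase-Server container IPs. CB can be overridden (e.g.
# CB="echo couchbase-cli") for a dry run.
init_couchbase_cluster() {
  CB=${CB:-couchbase-cli}
  ADMIN=${CLUSTER_USER:-Administrator}
  PASS=${CLUSTER_PASS:-password}

  # 1. Initialize the cluster on this (primary) node
  $CB cluster-init -c 127.0.0.1:8091 \
    --cluster-username="$ADMIN" --cluster-password="$PASS" \
    --cluster-ramsize=1024

  # 2. Create a bucket
  $CB bucket-create -c 127.0.0.1:8091 -u "$ADMIN" -p "$PASS" \
    --bucket=default --bucket-type=couchbase --bucket-ramsize=512

  # 3. Add each resolved server node to the cluster
  for ip in $(echo "$SERVER_IP" | tr ',' ' '); do
    $CB server-add -c 127.0.0.1:8091 -u "$ADMIN" -p "$PASS" \
      --server-add="$ip:8091" \
      --server-add-username="$ADMIN" --server-add-password="$PASS"
  done

  # 4. Rebalance the load across all nodes
  $CB rebalance -c 127.0.0.1:8091 -u "$ADMIN" -p "$PASS"
}
```

Because SERVER_IP is resolved by DCHQ at request time, the same script works regardless of how many server-node containers are launched.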
You will notice that the cluster_size parameter used in Couchbase-Server allows you to specify the number of containers to launch (with the same application dependencies). Additionally, a user can create cross-image environment variable bindings by making a reference to another image’s environment variable. In this case, we have made several bindings – including SERVER_IP={{Couchbase-Server | container_private_ip}} – in which the container IPs of the server nodes in the cluster are resolved dynamically at request time and are used to configure the Couchbase cluster. Here is a list of supported environment variable values:
- {{alphanumeric | 8}} – creates a random 8-character alphanumeric string. This is most useful for creating random passwords.
- {{<Image Name> | ip}} – allows you to enter the host IP address of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database.
- {{<Image Name> | container_ip}} – allows you to enter the internal IP of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a secure connection with the database (without exposing the database port).
- {{<Image Name> | port_<Port Number>}} – allows you to enter the port number of a container as a value for an environment variable. This is most useful for allowing the middleware tier to establish a connection with the database. In this case, the port number specified needs to be the internal port number, i.e. not the external port that is allocated to the container. For example, {{PostgreSQL | port_5432}} will be translated to the actual external port that will allow the middleware tier to establish a connection with the database.
- {{<Image Name> | <Environment Variable Name>}} – allows you to enter the value of an image’s environment variable into another image’s environment variable. The use cases here are endless, as most multi-tier applications will have cross-image dependencies.
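Putting these bindings together, an abridged, hypothetical version of the two template entries might look like the following. The image tag, the CLUSTER_PASS variable name, and the cluster_size value are illustrative; this is not the exact template from the Self-Service Library.

```yaml
Couchbase-Primary:
  image: couchbase:latest
  environment:
    # Random 8-character admin password, generated at request time
    - CLUSTER_PASS={{alphanumeric | 8}}
    # Comma-separated private IPs of all Couchbase-Server containers,
    # resolved dynamically and consumed by the BASH script plug-in
    - SERVER_IP={{Couchbase-Server | container_private_ip}}

Couchbase-Server:
  image: couchbase:latest
  # Number of server-node containers to launch; the basis for scale in/out
  cluster_size: 2
```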
Provisioning & Auto-Scaling the Underlying Infrastructure on Any Cloud
Once an application is saved, a user can register a Cloud Provider to automate the provisioning and auto-scaling of clusters on 12 different cloud endpoints including OpenStack, CloudStack, Amazon Web Services, Rackspace, Microsoft Azure, DigitalOcean, HP Public Cloud, IBM SoftLayer, Google Compute Engine, and many others.

First, a user can register a Cloud Provider for Rackspace (for example) by navigating to Manage > Repo & Cloud Provider and then clicking on the + button to select Rackspace. The Rackspace API Key needs to be provided, which can be retrieved from the Account Settings section of the Rackspace Cloud Control Panel.

A user can then create a cluster with an auto-scale policy to automatically spin up new Cloud Servers. This can be done by navigating to the Manage > Clusters page and then clicking on the + button. You can select a capacity-based placement policy and then Weave as the networking layer in order to facilitate secure, password-protected cross-container communication across multiple hosts within a cluster. The Auto-Scale Policy in this example sets the maximum number of VMs (or Cloud Servers) to 10.
A user can now provision a number of Cloud Servers on the newly created cluster by navigating to Manage > Bare-Metal Server & VM and then clicking on the + button to select Rackspace. Once the Cloud Provider is selected, a user can select the region, size and image needed. A Data Center (or Cluster) is then selected and the number of Cloud Servers can be specified.
Deploying the Couchbase Cluster on a Rackspace Cloud Server
Once the Cloud Servers are provisioned, a user can deploy the Couchbase Cluster on the new Cloud Server. This can be done by navigating to the Self-Service Library and then clicking on Customize to request a multi-tier application. A user can select an Environment Tag (like DEV or QE) and the Rackspace Cluster created earlier before clicking on Run.
Monitoring the CPU, Memory, and I/O Utilization of the Running Containers
Once the application is up and running, a user can monitor the CPU, Memory, & I/O of the running containers and get alerts when these metrics exceed a pre-defined threshold. This is especially useful when developers are performing functional & load testing. A user can perform historical monitoring analysis and correlate issues to container updates or build deployments. This can be done by clicking on the Actions menu of the running application and then on Monitoring. A custom date range can be selected to view CPU, Memory, and I/O historically.
Scaling out the Couchbase Cluster by Adding Additional Server Nodes
A user can scale out the Couchbase Cluster to meet increasing load. Moreover, a user can schedule the scale out during business hours and the scale in during weekends, for example. To scale out the cluster from one server node to two, a user can click on the Actions menu of the running application and then select Scale Out. A user can then specify the new size for the cluster and then click on Run Now.

We then used the BASH plug-in to add the new server node into the cluster using the “server-add” command. The beauty of this BASH script plug-in is that you do not need to copy & paste the new IPs; they are all resolved dynamically. The BASH script plug-ins can also be scheduled to accommodate use cases like cleaning up logs or updating configurations at defined frequencies.

To execute a plug-in on a running container, a user can click on the Actions menu of the running application and then select Plug-ins. A user can then select the Couchbase-Primary container and search for the plug-in that needs to be executed. The default argument for this plug-in will dynamically resolve all the container IPs of the running Couchbase servers and add them to the cluster.
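A sketch of what that scale-out plug-in might look like is shown below; as before, the variable names are hypothetical assumptions, and SERVER_IP is assumed to be re-resolved by DCHQ to the current list of server-node container IPs, including the newly added one.

```shell
#!/bin/sh
# Hypothetical scale-out plug-in executed on the Couchbase-Primary
# container after a Scale Out. SERVER_IP is assumed to be re-resolved
# by DCHQ to include the new Couchbase-Server container IPs; CB can be
# overridden (e.g. CB="echo couchbase-cli") for a dry run.
add_new_couchbase_nodes() {
  CB=${CB:-couchbase-cli}
  ADMIN=${CLUSTER_USER:-Administrator}
  PASS=${CLUSTER_PASS:-password}

  for ip in $(echo "$SERVER_IP" | tr ',' ' '); do
    # server-add is assumed to fail harmlessly for nodes that are
    # already in the cluster, so only genuinely new nodes are joined
    $CB server-add -c 127.0.0.1:8091 -u "$ADMIN" -p "$PASS" \
      --server-add="$ip:8091" \
      --server-add-username="$ADMIN" --server-add-password="$PASS"
  done

  # Rebalance so existing data is redistributed onto the new nodes
  $CB rebalance -c 127.0.0.1:8091 -u "$ADMIN" -p "$PASS"
}
```

Scheduling this script (or running it via the Plug-ins action) is what makes the scale-out hands-free: no IPs are ever copied by hand.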
An application time-line is available to track every change made to the application for auditing and diagnostics. This can be accessed from the expandable menu at the bottom of the page of a running application. Alerts and notifications are available for when containers or hosts are down or when the CPU & Memory Utilization of either hosts or containers exceed a defined threshold.
Conclusion
Containerizing enterprise applications is still a challenge, mostly because existing application composition frameworks do not address complex dependencies, external integrations, or auto-scaling workflows post-provision. DCHQ, available in hosted and on-premise versions, addresses all of these challenges and simplifies the containerization of enterprise applications through an advanced application composition framework that extends Docker Compose with cross-image environment variable bindings, extensible BASH script plug-ins that can be invoked at request time or post-provision, and application clustering for high availability across multiple hosts or regions with support for auto-scaling. Sign up for free on http://DCHQ.io to get access to this Couchbase Cluster application template along with application lifecycle management functionality like monitoring, container updates, scale in/out, and continuous delivery.
Published at DZone with permission of Amjad Afanah, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.