DevOps With Corus Process Manager — 5.0 Release

A new release of the Corus process manager tool, exploring its integrations with Docker and its interactions with the JVM.

Corus 5.0 is out, and we figured it was a milestone worth sharing with the community. The highlight of this release is integration with Docker: the distributed management capabilities of Corus can be fully leveraged to manage clusters of Docker daemons. Also noteworthy is NUMA integration, whereby Corus is able to load-balance the processes on a machine across its NUMA nodes.

Corus is open source software that fits into the "process manager" category. It lets you distribute applications over large numbers of machines, organized as clusters. More precisely, Corus takes over the lifecycle of applications, from deployment through execution to undeployment. During application execution, Corus performs periodic health checks and automatically restarts unresponsive applications (a behavior that can be disabled, but which proves handy, for example, with applications that suffer from memory leaks and leave the JVM in limbo).

In order to support flexible system administration and advanced automation needs, Corus can be controlled either through a command-line interface or through a REST API. The API offers graceful deployment features, such as deploying to only a subset of nodes at a time until the whole cluster has been upgraded, as well as diagnostic functionality used to automate post-deployment health checks of whole clusters (sparing deployment logic from having to connect to each application instance individually).
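
To give a feel for what scripting against the REST API could look like, here is a minimal sketch using Java's built-in HTTP client. The host, port, and resource path below are placeholders, not taken from the Corus documentation; refer to the official REST API reference for the actual endpoints.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch of querying a Corus node over HTTP from automation code.
// NOTE: the host, port, and resource path are placeholders (assumptions),
// not taken from the Corus documentation.
public class CorusRestSketch {

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://corus-host:33000/placeholder/path")) // placeholder URI
            .GET()
            .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("HTTP " + response.statusCode());
        System.out.println(response.body());
    }
}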

Furthermore, Corus does not force the use of containers. It currently also supports deploying Java/JVM-based applications as-is, starting the corresponding JVM processes cluster-wide. Since a JVM is already a sandbox, and is somewhat independent of the hardware and host OS (to the extent that your JRE/JDK of choice is installed and you are not using native libraries), containerization in this case might appear redundant to some. Integration with the JVM is quite tight, allowing for hot configuration, for example.

Deploying an application with Corus is quite simple: package your application resources in a zip file that contains a so-called Corus descriptor (expected to be located under the "META-INF" directory), and deploy that package (a "distribution" in Corus terminology) to a Corus cluster. Here is a sample descriptor for a Java application based on Spring Boot:

<distribution name="echo" version="1.0" xmlns="http://www.sapia-oss.org/xsd/corus/distribution-5.0.xsd">
    <process name="server" invoke="true">
        <java mainClass="org.sapia.dzone.example.EchoServer" libDirs="${user.dir}:${user.dir}/" profile="dev" vmType="server">
            <arg value="-Xms16M" />
        </java>
        <java mainClass="org.sapia.dzone.example.EchoServer" libDirs="${user.dir}:${user.dir}/" profile="prod" vmType="server">
            <arg value="-Xms128M" />
        </java>
    </process>
</distribution>

The above descriptor could be inserted into the jar of the Spring Boot application (the "libDirs" attribute has been configured so as to correspond to the predefined classpath structure of an executable jar). The descriptor also shows that you only need to specify your application's entry point (a class with a "main" method), and Corus will be able to start your application over a whole cluster. Also note that Corus supports the notion of "profile", which is hinted at in the descriptor. More on that further below.
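
For illustration, such an entry point might look like the hypothetical sketch below (the actual EchoServer used in this example is not shown in this article, and the sketch assumes the spring-boot-starter-web dependency). It is just a plain Spring Boot main class; no Corus-specific API is involved.

package org.sapia.dzone.example;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical entry point matching the mainClass attribute of the descriptor above.
// All Corus needs is a class with a standard "main" method.
@SpringBootApplication
@RestController
public class EchoServer {

    public static void main(String[] args) {
        SpringApplication.run(EchoServer.class, args);
    }

    // Trivial endpoint that echoes back whatever it receives.
    @GetMapping("/echo")
    public String echo(@RequestParam("msg") String msg) {
        return msg;
    }
}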

To deploy our jar to Corus, we can use either the REST API, or the Corus command-line interface (dubbed the CLI). In the CLI, we would type the following:

deploy echo-1.0.jar -cluster

The -cluster option indicates to Corus that the deployment should be performed cluster-wide (that is, to all the nodes in the cluster). An important point to note is that Corus' architecture is peer-to-peer in essence: we connect to a single node in order to perform clustered operations. That node will replicate the commands across the cluster, which shields the user from having to know the cluster's topology.

Now, to start the application across the cluster, we would type:

exec -d echo -v 1.0 -n server -p dev -cluster

Note the -p option: it specifies the profile "under" which the application should be started. Corus will pick the right Java class on which to invoke the "main" method based on the matching profile in the descriptor.

In addition, Corus allows starting multiple processes on a single host, for example:

exec -d echo -v 1.0 -n server -p dev -i 4 -w -cluster

The -i option specifies how many processes should be started (when the option is not specified, a single process is started). In cases where the processes actually correspond to servers that open one or more network ports, Corus also offers a port allocation capability, which comes in handy to avoid port conflicts.

To kill application processes, one would type the following:

kill -d echo -v 1.0 -n server -w -cluster

The -w switch indicates that the CLI should wait until all processes have been killed before returning control to the user (a timeout in seconds can be specified to the option). The CLI has productivity features, such as the command below, which amounts to killing all processes:

kill all -w -cluster

The same could also be expressed as follows:

kill -d * -v * -n * -w -cluster

And to close the loop, to undeploy, you'd type the following:

undeploy -d echo -v 1.0 -cluster

Note that the above interactions are the same in the context of the integration with Docker: one executes, kills, and restarts Docker containers, cluster-wide, in the same manner as plain JVM-based applications.

Corus offers a lot more features. For example, it allows storing so-called "process properties", which are passed to applications at runtime. In the following excerpt, we've modified our descriptor so that the value of the -Xms option can be configured dynamically, through a variable:

<java mainClass="org.sapia.dzone.example.EchoServer" libDirs="${user.dir}:${user.dir}/" profile="prod" vmType="server">
    <arg value="-Xms${echo.server.xms}" />
</java>

The value for that variable would be kept in Corus. Using the CLI, we'd add it to all the Corus nodes in the cluster:

conf add -p echo.server.xms=128M -cluster
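
As an aside, here is a hedged sketch of how application code might read such a property at runtime, assuming Corus surfaces process properties to JVM processes as system properties (check the Corus documentation for the exact mechanism). The property name is the one from the descriptor excerpt; the class itself is purely illustrative.

package org.sapia.dzone.example;

// Illustrative only: reads a process property, assuming it is exposed to the
// JVM as a system property. The property name matches the descriptor excerpt;
// the fallback value is arbitrary.
public class PropertyLookup {

    public static void main(String[] args) {
        String xms = System.getProperty("echo.server.xms", "64M");
        System.out.println("echo.server.xms resolved to: " + xms);
    }
}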

This brief article offers just a glimpse of Corus. The website has exhaustive documentation, covering both basic and advanced functionality. Give it a try.

