A Look Inside JBoss Microcontainer - The Virtual Deployment Framework


It's been a long time since our JBoss ClassLoading article. During this time we've been quite busy with the new Microcontainer 2.2.x version series, which is already included in the latest JBossAS6_M2.
In this article, I am going to talk about our new Virtual Deployment Framework (VDF).

One of the first things, besides the MBean-based modular JMX kernel, that users noticed (at least I did when I was still just a user) in previous versions of JBossAS (pre v5) was an elegant deployment extension mechanism in the form of deployers.

It was really easy to extend or enhance existing deployment behavior: simply extend one of the deployer helper classes, implement its deploy and undeploy methods with your custom logic, then register the new deployer as yet another service in the modular kernel.

While this was all nice and easy, there were some design issues. For example, you had to copy/paste the deployment structure recognition logic, it was hard to add small pieces of new deployment behavior or to change existing behavior, and too many implementation details were exposed. All of this was taken into account when re-writing the deployment layer.

We can sum up the new architecture in four features:

  • deployment-type-agnostic handling; e.g. no need for file-backed deployments
  • structure recognition split from actual deployment lifecycle logic
  • natural flow control in the form of attachments
  • separate client-, user- and server-side usage and implementation details

Let's now go over each feature in more detail.

Deployment-type-agnostic handling

Sometimes all we want to do is create a virtual deployment, based on programmatically described metadata; e.g. required classes already exist in some shared class-space/domain.

A common scenario is installing a new service into the server from your admin client.
So, instead of uploading a descriptor file, you simply pass over the bytes and deserialize them into a Deployment instance.

There are some limitations with this approach in the new VDF, but it should still be trivial to perform this task.
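As a rough illustration of that client-to-server round trip, here is a minimal sketch using plain Java serialization. The ServiceDescriptor class is a hypothetical stand-in for a programmatic deployment description; the real VDF Deployment API is not shown:

```java
import java.io.*;

public class VirtualDeploymentSketch {
    // Hypothetical stand-in for a programmatic deployment description;
    // the real Deployment interface lives in the VDF client API.
    static class ServiceDescriptor implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        ServiceDescriptor(String name) { this.name = name; }
    }

    // Client side: turn the descriptor into bytes to send over the wire.
    static byte[] toBytes(ServiceDescriptor d) {
        try {
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            try (ObjectOutputStream oos = new ObjectOutputStream(baos)) {
                oos.writeObject(d);
            }
            return baos.toByteArray();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Server side: deserialize the bytes back into an instance,
    // which would then be handed to the deployment layer.
    static ServiceDescriptor fromBytes(byte[] bytes) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (ServiceDescriptor) ois.readObject();
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } catch (ClassNotFoundException e) {
            throw new IllegalStateException(e);
        }
    }
}
```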

The other type of deployment (which, in terms of implementation classes, extends the first one) is a plain filesystem-based deployment, backed by our VFS (which was described in one of the previous articles).


Structure recognition split from actual deployment lifecycle logic

In order to do any real work on top of a deployment, we must first recognize its structure. By structure we mean its classpaths and metadata locations.

Metadata locations are where our configuration files reside; my-jboss-beans.xml, web.xml, ejb-jar.xml, ...
Classpaths are what constitute the deployment's classloader roots; WEB-INF/classes, myapp.ear/lib, ...

Only when we have successfully recognized the structure can we proceed to actual deployment handling - with the structure info in mind.

This is what a simple deployment lifecycle diagram looks like:

[ Deployment ] --> MainDeployer  <---[ DeploymentContext ]---> Deployers <--[ DeploymentUnit ]--> real Deployers per deployment stages
                     recognize structure
                      StructuralDeployers <--> set of StructureDeployers


As we can see, once we pass a Deployment instance to MainDeployer, it is fed to StructuralDeployers for recognition.

In the case of a virtual/programmatic deployment we require predetermined StructureMetaData to already be available - this is where we read the structure information from.

For VFS-based deployments we forward the structure recognition to a set of StructureDeployers.

For JEE-specification-defined structures we have matching StructureDeployer implementations:

  • EarStructure
  • WarStructure
  • JarStructure

In addition to that we also have DeclarativeStructure and FileStructure.

DeclarativeStructure looks for META-INF/jboss-structure.xml file inside your deployment, and parses it to construct a proper StructureMetaData.

FileStructure, on the other hand, simply recognizes known configuration files; e.g. -jboss-beans.xml, -service.xml, ...
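As a sketch of the kind of check FileStructure performs, the following matches file names against known suffixes. The suffix list here is illustrative, not the real configured set:

```java
import java.util.Arrays;
import java.util.List;

public class FileStructureSketch {
    // Illustrative suffixes; the real FileStructure is configured with its own set.
    static final List<String> KNOWN_SUFFIXES =
            Arrays.asList("-jboss-beans.xml", "-service.xml");

    // A file is recognized as a deployment if its name ends with a known suffix.
    static boolean isRecognized(String fileName) {
        for (String suffix : KNOWN_SUFFIXES)
            if (fileName.endsWith(suffix))
                return true;
        return false;
    }
}
```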

<structure>
   <context comparator="org.jboss.test.deployment.test.SomeDeploymentComparatorTop">
      <path name=""/>
      <path name="META-INF"/>
      <path name="lib" suffixes=".jar"/>
   </context>
</structure>

An example of jboss-structure.xml -- this is how you would describe the old JBoss .sar archive.

In the case of EarStructure we first recognize a top level deployment, then recursively process sub-deployments.

It's very easy to implement your own StructureDeployer, especially with the help of generic GroupingStructure.

At this point we have a recognized deployment structure, and it's time we feed this to real deployers.
It is the Deployers object (the concrete implementation of the Deployers interface) that knows how to deal with the real deployers - by using a chain of deployers per DeploymentStage.

public interface DeploymentStages
{
   /** The not installed stage - nothing is done here */
   DeploymentStage NOT_INSTALLED = new DeploymentStage("Not Installed");

   /** The pre parse stage - where pre parsing stuff can be prepared; altDD, ignore, ... */
   DeploymentStage PRE_PARSE = new DeploymentStage("PreParse", NOT_INSTALLED);

   /** The parse stage - where metadata is read */
   DeploymentStage PARSE = new DeploymentStage("Parse", PRE_PARSE);

   /** The post parse stage - where metadata can be fixed up */
   DeploymentStage POST_PARSE = new DeploymentStage("PostParse", PARSE);

   /** The pre describe stage - where default dependencies metadata can be created */
   DeploymentStage PRE_DESCRIBE = new DeploymentStage("PreDescribe", POST_PARSE);

   /** The describe stage - where dependencies are established */
   DeploymentStage DESCRIBE = new DeploymentStage("Describe", PRE_DESCRIBE);

   /** The classloader stage - where classloaders are created */
   DeploymentStage CLASSLOADER = new DeploymentStage("ClassLoader", DESCRIBE);

   /** The post classloader stage - e.g. aop */
   DeploymentStage POST_CLASSLOADER = new DeploymentStage("PostClassLoader", CLASSLOADER);

   /** The pre real stage - where work preceding the real deployments is done */
   DeploymentStage PRE_REAL = new DeploymentStage("PreReal", POST_CLASSLOADER);

   /** The real stage - where real deployment processing is done */
   DeploymentStage REAL = new DeploymentStage("Real", PRE_REAL);

   /** The installed stage - could be used to provide valve in future? */
   DeploymentStage INSTALLED = new DeploymentStage("Installed", REAL);
}

This is a set of preexisting deployment stages. These states are mapped to MC's built-in controller states. They provide a deployment-lifecycle-centric view around generic controller states.
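As the listing shows, each stage names its predecessor, so the stages form a chain; the relative order of two stages falls out of walking that chain. A simplified model of this idea, not the real DeploymentStage class:

```java
public class StageChainSketch {
    // Simplified model: each stage only knows the stage that precedes it.
    static class Stage {
        final String name;
        final Stage previous; // null for the first stage in the chain
        Stage(String name, Stage previous) { this.name = name; this.previous = previous; }
    }

    // Walk back through predecessors to compute a stage's position in the chain.
    static int depth(Stage stage) {
        int d = 0;
        for (Stage s = stage.previous; s != null; s = s.previous)
            d++;
        return d;
    }

    // A stage comes earlier in the lifecycle if it sits shallower in the chain.
    static boolean comesBefore(Stage a, Stage b) {
        return depth(a) < depth(b);
    }
}
```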

Inside Deployers we convert the deployment into MC's component - DeploymentControllerContext - and leave it to the MC's state machine to properly handle dependencies (among deployments and services).

We manually go over the matching deployment stages/states and their corresponding deployers, executing deployments in a breadth-first fashion - meaning that we handle all given deployments for a particular deployment stage first, and only then advance to the next stage.

For each deployer we handle the whole deployment hierarchy - the order depending on the deployer's parent-first property (which is true by default).

We can also specify which hierarchy level(s) our deployer handles -- all, just top level, components only, no components, ... (components are explained further on, in the "implementation details" section).
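The breadth-first traversal described above boils down to two nested loops - stages on the outside, deployments on the inside. A sketch with hypothetical stage and deployment names:

```java
import java.util.ArrayList;
import java.util.List;

public class BreadthFirstSketch {
    // Process every deployment through a stage before advancing to the next stage.
    static List<String> process(List<String> stages, List<String> deployments) {
        List<String> visits = new ArrayList<>();
        for (String stage : stages)                   // outer loop: stages in order
            for (String deployment : deployments)     // inner loop: all deployments at this stage
                visits.add(stage + ":" + deployment);
        return visits;
    }
}
```

Note the visit order: both deployments pass through "Parse" before either reaches "Real".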

Everything we learned about component models and MC's dependency handling holds here as well. If there are unresolved dependencies, the deployment will wait in that state, potentially reporting an error if the current state is not the required one.

Adding a new deployer is trivial: simply extend one of the many existing helper deployers. One thing to note - as mentioned in the first feature above - there are deployers that actually need a VFS-backed deployment, and there are those that can work off a general deployment. In most cases it's only the parsing deployers that need VFS backing.

Another important note is that deployers must 'short-circuit'. Every deployer is run against every deployment, sub-deployment, component, ... which can result in unnecessary processing if a deployer is not written correctly. It's important to determine as soon as possible whether the current deployment should actually be fully handled by the deployer.
public class StdioDeployer extends AbstractDeployer
{
   public void deploy(DeploymentUnit unit) throws DeploymentException
   {
      System.out.println("Deploying unit: " + unit);
   }

   public void undeploy(DeploymentUnit unit)
   {
      System.out.println("Undeploying unit: " + unit);
   }
}

A simple example of a deployer that prints info about the deployment it's handling.

<bean name="StdioDeployer" class="org.jboss.acme.StdioDeployer"/>

Simply add this description to one of the -jboss-beans.xml files in the deployers/ directory (in JBossAS), and our MainDeployerImpl bean will pick up this deployer via MC's IoC callback handling.


Natural flow control in the form of attachments

There needs to be some sort of a mechanism to facilitate the passing of information from one deployer to the next.

In VDF we call this mechanism 'attachments'; it's implemented as a slightly enhanced java.util.Map whose entries are the attachments.

The idea is that some deployers are producers while others are consumers (a deployer can of course be both). In our case this means some deployers create metadata or utility instances and put them into the attachments map, while other deployers declare their need for these attachments, get the data out of the map, and do additional work on it.
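In code terms the handoff is just a shared map keyed by attachment name. A conceptual sketch - the method and key names are hypothetical, and the real attachments API is richer than a bare Map:

```java
import java.util.HashMap;
import java.util.Map;

public class AttachmentsSketch {
    // Producer deployer: creates metadata and attaches it under a key.
    static void parsingDeployer(Map<String, Object> attachments) {
        attachments.put("BeanMetaData", "metadata parsed from my-jboss-beans.xml");
    }

    // Consumer deployer: declares its need by reading the same key.
    static Object beanMetaDataDeployer(Map<String, Object> attachments) {
        return attachments.get("BeanMetaData");
    }
}
```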

The 'natural flow' we mentioned refers to how deployers are ordered. A simple and common approach is to order things in relative terms (before/after). However, with the attachments mechanism already in place, we can instead order deployers by how they produce and/or consume attachments.

Each attachment has a key, and deployers advertise the keys of the attachments they work with. If a deployer produces an attachment, that key is called an output; if it consumes an attachment, that key is called an input.

Deployers have 'ordinary' inputs and 'required' inputs. Ordinary inputs are only used to help determine the natural order. Required inputs also help determine order, but additionally determine whether the deployer is relevant for a given deployment at all, by checking whether an attachment corresponding to that required input exists in the attachments map.
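Ordering by inputs and outputs is effectively a topological sort of the producer/consumer graph. A minimal sketch with hypothetical deployer names and attachment keys - the real ordering logic also has to cope with ordinary vs. required inputs, which this sketch ignores:

```java
import java.util.*;

public class NaturalOrderSketch {
    static class Deployer {
        final String name;
        final Set<String> inputs;   // attachment keys this deployer consumes
        final Set<String> outputs;  // attachment keys this deployer produces
        Deployer(String name, Set<String> inputs, Set<String> outputs) {
            this.name = name; this.inputs = inputs; this.outputs = outputs;
        }
        public String toString() { return name; }
    }

    // Repeatedly pick a deployer whose inputs have all already been produced.
    static List<String> order(List<Deployer> deployers) {
        List<Deployer> pending = new ArrayList<>(deployers);
        Set<String> produced = new HashSet<>();
        List<String> ordered = new ArrayList<>();
        while (!pending.isEmpty()) {
            boolean progressed = false;
            for (Iterator<Deployer> it = pending.iterator(); it.hasNext(); ) {
                Deployer d = it.next();
                if (produced.containsAll(d.inputs)) {
                    ordered.add(d.name);
                    produced.addAll(d.outputs);
                    it.remove();
                    progressed = true;
                }
            }
            if (!progressed) // a cycle, or an input nobody produces
                throw new IllegalStateException("Unsatisfiable inputs: " + pending);
        }
        return ordered;
    }
}
```

A parsing deployer that outputs "BeanMetaData" is automatically ordered before any deployer that takes "BeanMetaData" as an input, with no before/after declarations needed.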

While we still support relative ordering, it is considered bad practice, and could go away in the next major release.


Separate client, user, and server side usage, and implementation details

This set of changes was mostly done to hide implementation details, making usage less error-prone while at the same time making users' and developers' lives easier.

The idea is that clients only see the Deployment API and deployer developers see a DeploymentUnit, while server implementation details are contained in DeploymentContext. This way we only expose the information needed at a particular level of the deployment's lifecycle.

We already mentioned components in the deployer's hierarchy handling, but we didn't explain what they actually are, or how and why they are used. While the top level deployment and sub-deployments are a natural representation of the deployment's structure hierarchy, components are a somewhat new VDF concept.

The original idea of components is that they are 1-1 mappings with the ControllerContexts inside the MC.

There are a number of places that use that assumption, i.e. that the component unit's name is the same as the ControllerContext's name will be.

The two most obvious ones are:

  1. get*Scope() and get*MetaData(), which will return the same MDR context that will be used by MC for that instance.
  2. IncompleteDeploymentException (IDE).


In order for the IDE to print out what dependencies are missing for a deployment, it needs to know the ControllerContext names.

It does this by collecting the component DeploymentUnits' names in the component deployers that specify this; e.g. BeanMetaDataDeployer, or see setUseUnitName() in AbstractRealDeployer.


Hidden gems

I always like to mention how all of our MC components are handled by a single entry point - a single state machine - and, as we've learned, deployments are no exception.

So, let us now see how we can take advantage of this feature -- by using jboss-dependency.xml configuration file in our deployments.

jboss-dependency.xml is a simple generic description of our deployment's dependencies.

<dependency xmlns="urn:jboss:dependency:1.0">
   <item whenRequired="Real" dependentState="Create">TransactionManager</item> (1)
   <item>my-human-readable-deployment-alias</item> (2)
</dependency>

With (1) we see how to describe a dependency on another service. In this case we require 'TransactionManager' to reach the 'Create' state before our deployment enters the 'Real' stage.

Item (2) looks a bit more confusing, since we appear to be missing additional information. By default, deployment names inside MC are 'ugly' URI names, which makes typing them by hand error-prone.

So, in order to still be able to easily declare a dependency on another deployment, we need an aliasing mechanism that avoids these 'ugly' URI names. To keep this as simple as possible, just drop a plain text file named aliases.txt into your deployment, with one alias per line, thereby giving the deployment archive one or more simple names it can be referred to by.
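A sketch of what the server side has to do with such a file - collect one alias per line (skipping blank lines is an assumption of this sketch, not documented behavior):

```java
import java.util.ArrayList;
import java.util.List;

public class AliasesSketch {
    // One alias per line; blank lines are skipped (an assumption for this sketch).
    static List<String> readAliases(String fileContents) {
        List<String> aliases = new ArrayList<>();
        for (String line : fileContents.split("\\R")) { // split on any line terminator
            line = line.trim();
            if (!line.isEmpty())
                aliases.add(line);
        }
        return aliases;
    }
}
```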

Another interesting feature we just added is lazy handling of a deployment's ClassLoader -- via the jboss-deployment.xml configuration file.

<deployment xmlns="urn:jboss:deployment:1.0" required-stage="PreDescribe" lazy-resolve="true">
   <lazy-start-filter recurse="false">org.foo.bar</lazy-start-filter>
   <lazy-start-filter recurse="true">com.acme.somepackage</lazy-start-filter>
</deployment>

Setting the lazy-resolve attribute to true causes our deployment to wait in the required-stage (by default 'Describe') until some other deployment needs ours in order to resolve its ClassLoader (this functionality is integrated with MC ClassLoading).

If there are lazy-start-filters, or the lazy-start flag is set to true, our deployment will wait in the ClassLoader stage until some resource matching the declared filters is loaded from our deployment's ClassLoader. Only then will the deployment move to the Installed stage.

What that means in practical terms is that you can write, and deploy, a service that provides an API, but you don't have to instantiate the necessary runtime objects that provide a service at container start up time. They can get instantiated on-demand when some other running code first tries to load API classes provided by the service.
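The filter semantics can be sketched as a package-prefix match, where recurse controls whether sub-packages also match. This is my reading of the attributes in the example above, not the exact implementation:

```java
public class LazyStartFilterSketch {
    // recurse=false: only classes directly in the named package match.
    // recurse=true: classes in sub-packages match as well.
    static boolean matches(String className, String packageFilter, boolean recurse) {
        int lastDot = className.lastIndexOf('.');
        String pkg = lastDot < 0 ? "" : className.substring(0, lastDot);
        if (recurse)
            return pkg.equals(packageFilter) || pkg.startsWith(packageFilter + ".");
        return pkg.equals(packageFilter);
    }
}
```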

Current JEE specs reduced the number of configuration files, but they now require the container to do most of the job based on @annotations.

In order to get @annotation info, containers must scan classes, which often carries a performance penalty.

MC is no exception, when it comes to the need for scanning. But to reduce the amount of scanning, we introduced yet another descriptor hook -- jboss-scanning.xml.

<scanning xmlns="urn:jboss:scanning:1.0">
   <path name="myejbs.jar">
      <include name="com.acme.foo"/>
      <exclude name="com.acme.foo.bar"/>
   </path>
   <path name="my.war/WEB-INF/classes">
      <include name="com.acme.foo"/>
   </path>
</scanning>


Here we see a simple description of the relative paths we want to include or exclude when scanning for JEE5 annotation metadata. This information will then be used by our VDF's MCScan scanning framework (currently the old MCAnn is used; MCScan is still a work in progress).
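The include/exclude decision can be sketched as follows, under the assumption (mine, for this sketch) that excludes take precedence over includes and that a name matches a package if it lives in it or below it:

```java
import java.util.List;

public class ScanningFilterSketch {
    // Assumed semantics for this sketch: excludes win over includes.
    static boolean shouldScan(String className, List<String> includes, List<String> excludes) {
        for (String exclude : excludes)
            if (inPackage(className, exclude))
                return false;   // excluded sub-tree: skip scanning entirely
        for (String include : includes)
            if (inPackage(className, include))
                return true;    // inside an included package tree
        return false;           // not covered by any include
    }

    // True if the class lives in the named package or any sub-package of it.
    static boolean inPackage(String className, String pkg) {
        return className.startsWith(pkg + ".");
    }
}
```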

OK, we've now been properly introduced to Deployers and the overall VDF - something we already used quite extensively in previous articles, but never covered in enough detail. As you can see, it's easy to extend the existing deployment environment, as already proven by the elegant solutions used in JBoss's TorqueBox and Weld integration.

In the next article I'm going to talk about the work we're doing in our native OSGi framework - completely based on Microcontainer. I'll explain the new concepts of service mix - a new component model in MC - and how we're leveraging the full power of MC to write a new OSS OSGi framework.

P.S.: Again thanks to Marko for doing the editing of this article and Adrian for his feedback.


About the Author

Ales Justin was born in Ljubljana, Slovenia and graduated with a degree in mathematics from the University of Ljubljana. He fell in love with Java eight years ago and has spent most of his time developing information systems, ranging from customer service to energy management. He joined JBoss in 2006 to work full-time on the Microcontainer project, which he currently leads. He also contributes to JBoss AS and is a Seam, Weld and Spring integration specialist. He represents JBoss on OSGi expert groups.


Opinions expressed by DZone contributors are their own.
