Changes in the Application Server Market
There have been a number of significant developments in the enterprise software infrastructure market in the past several years. The application server market in particular has been impacted, and application server architectures are responding to new requirements. This article takes a look at the new breed of application server that is emerging.
It’s been about ten years since the Java application server emerged in response to the need for serving dynamic content on the web. During that time, Java has become ubiquitous: estimates put the number of Java developers worldwide at three to five million, and the Java EE standard, now ten years old, is complete and mature. Although the Java EE application server market has consolidated considerably, the remaining vendors compete aggressively, differentiating their products through quality of service and extensions to the standard.
While the Java EE standard and the application servers that support that standard have matured, the industry has continued to evolve. Java developers are beginning to experiment with new programming models including frameworks and dynamic languages. Developers have also started looking for lighter-weight containers that can improve developer productivity.
Service Oriented Architecture (SOA) has also driven a number of new requirements onto the application server, including increased demand for specialized, loosely coupled, coarse-grained service endpoints, and event driven architectures.
Hardware commoditization has driven demand for horizontally scaled and grid architectures. And finally, data center consolidation has driven demand for virtualization and elastic computing.
All of these trends have had an impact on enterprise software designs, but the maturation of the Java EE standard, SOA, and data center consolidation in particular have each had a profound effect on application server architectures. In the following sections we will take a look at how the convergence of these trends has influenced application server architectures and the application server market.
The Maturation of Java EE
As the Java EE standard has evolved and matured, the API has become increasingly rich and also increasingly complex. The application server vendors all participate in the standards process and constantly jockey for position by promoting standards that will create competitive advantage for their products. As a result, more “stuff” (APIs) keeps getting added to the spec, and the standard has become overloaded. In turn, the application servers that implement the standard have become bloated: WebLogic and WebSphere both have in-memory footprints of around one gigabyte!
As a result of the increasing complexity of the Java EE standard and the ever-growing footprint of the Java EE application servers, Java developers have recently begun looking at alternate programming models and containers. Frameworks such as Spring, Struts and Hibernate have become popular. The Apache servlet container Tomcat has also become very widely deployed. Tomcat is attractive to developers because it is open source and free, but also because it is lightweight, fast and easy to use.
In response to the demand for lighter-weight containers, Sun has proposed breaking up the Java EE standard into “profiles”. Three profiles are proposed for Java EE 6. Each profile will include a subset of the Java EE technology. For example, the Web Profile is a subset targeted at web application development. Certification will be available at each profile level.
Recently, many of the Java EE vendors have begun to tackle the problem of complexity and footprint. Presumably, they are also preparing for the Java EE 6 profiles. A common approach has been to modularize the code base and to make the server configurable to run only specific containers and services.
However, the challenges in getting from a one-gig monolithic architecture to a modular, configurable architecture are significant. Compounding the problem is the requirement for the servers to preserve backwards compatibility for the installed-base applications. We will take a look at some of the approaches that the various vendors are taking in the following sections.
The Impact of SOA
The impact of SOA on the enterprise software infrastructure market has been substantial. A variety of architectural components designed to orchestrate the creation, deployment and management of services have been brought to market in the past several years. However, application servers continue to play an important role as endpoints in a SOA environment. After all, the code still has to run somewhere! SOA has increased the importance of domain-specific containers such as web services servers, brokers and gateways. Frequently these domain-specific containers are implemented on top of the Java EE application servers.
Another significant impact is the increased importance of event driven architecture (EDA). With increasing frequency, code is executed as a result of an event trigger rather than as a result of a user interacting with a web browser.
What this all means for the application server is that there is an increasing demand on the server to support a larger number of container types and execution models. As the legacy Java EE application servers have continued to heap these containers onto their stack the footprint of the products has grown dramatically and the products have become increasingly unwieldy.
Data Center Consolidation
Data center consolidation has also had a substantial impact on the run-time aspects of the application server. The proliferation of data, code and hardware has driven data center utilization and cost to the point where CIOs have had to deal with the problem, and consolidation has been a common response. Data center consolidation is intended to reduce cost through a variety of approaches, but in the past few years the principal focus has been on real estate (the lab space required to host the hardware) and power consumption. We will examine each of these areas in more detail below.
Power consumption is a function of both the electricity that each server consumes to run it and the air conditioning needed to keep the servers cool. The power bill to run a typical server can exceed the cost of the hardware in about two years.
In order to address this problem CIOs are looking for technology that allows them to do more processing with less hardware. Server utilization has been studied and discussed extensively. Studies showing server utilization of less than 50% are common. A fully utilized server doesn’t use any more energy or take up any more space than a 50% utilized server. So, increasing utilization is generally recognized as a valid approach to the problem. Increasingly, the approach to increased server utilization has been through a combination of virtualization and elasticity.
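The consolidation arithmetic behind this is simple. The sketch below uses illustrative numbers (the utilization figures are assumptions for the example, not measurements from any study):

```java
// Illustrative-only consolidation arithmetic: how many servers are needed
// to carry the same aggregate load at a higher target utilization?
public class Consolidation {

    // Aggregate load expressed in "full servers", divided by the target
    // per-server utilization, rounded up to whole machines.
    static int serversNeeded(int servers, double avgUtilization, double targetUtilization) {
        double aggregateLoad = servers * avgUtilization;
        return (int) Math.ceil(aggregateLoad / targetUtilization);
    }

    public static void main(String[] args) {
        // Ten servers averaging 30% utilization carry the load of three fully
        // busy machines; at an 80% utilization target, four servers suffice.
        System.out.println(serversNeeded(10, 0.30, 0.80)); // prints 4
    }
}
```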
Virtualization and Elasticity
There are multiple variations and approaches to virtualization, but hardware virtualization is the most common. Hardware virtualization involves slicing a physical server into multiple virtual spaces, each of which provides a complete compute environment. Exploiting hardware virtualization fully drives additional requirements onto the application server.
In order to make efficient use of resources and keep down hardware footprint and power consumption, an application server design that is optimized for virtualized environments will be lightweight, fast and flexible.
A data center can be “right sized”, or built with minimal extra capacity if the application infrastructure can expand across additional compute resources elastically. Increasingly, administration tools must be able to interact with the virtualization layer to coordinate optimal resource utilization and expand to additional compute resources when needed. The tools must allow the administrator to define service level agreements, monitor resource utilization, provision additional servers and services to meet compute requirements and start and stop services on demand. The tools must also scale to large numbers of servers.
To get the workload to the right server, the system also requires a workload controller (a load balancing router that is payload-aware) and a service locator that knows how to find a server that has been provisioned to run the payload.
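As a rough sketch of that idea (all names are hypothetical; a real workload controller would also weigh current load, health and service levels):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of a payload-aware workload controller. The service
// locator records which servers have been provisioned for each payload type;
// the router then spreads requests across that pool round-robin.
public class WorkloadRouter {

    private final Map<String, List<String>> serversByPayload = new HashMap<>();
    private final Map<String, Integer> nextIndex = new HashMap<>();

    // Record that these servers have been provisioned to run a payload type.
    public void provision(String payloadType, List<String> servers) {
        serversByPayload.put(payloadType, servers);
        nextIndex.put(payloadType, 0);
    }

    // Route a unit of work to the next server provisioned for its payload type.
    public String route(String payloadType) {
        List<String> servers = serversByPayload.get(payloadType);
        if (servers == null || servers.isEmpty()) {
            throw new IllegalStateException("no server provisioned for " + payloadType);
        }
        int i = nextIndex.get(payloadType);
        nextIndex.put(payloadType, (i + 1) % servers.size());
        return servers.get(i);
    }

    public static void main(String[] args) {
        WorkloadRouter router = new WorkloadRouter();
        router.provision("orders", List.of("app-1", "app-2"));
        System.out.println(router.route("orders")); // app-1
        System.out.println(router.route("orders")); // app-2
        System.out.println(router.route("orders")); // app-1 again (round-robin)
    }
}
```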
The overall management domain architecture must be flexible and highly scalable. It must accommodate mixed topologies spanning physical hardware, virtualized environments and elastic environments.
The Squeeze Effect
We have discussed how several industry trends are driving change in application server designs. On the one hand, the maturity of Java EE and SOA are both driving increased footprint in the Java EE app servers. On the other hand, developers are looking for better productivity with lightweight containers. Data center consolidation is also driving requirements for lightweight, fast and flexible deployment architectures. The convergence of these diametrically opposed requirements creates a challenge for the traditional J2EE application servers.
The Java EE 6 profiles are intended to address this by providing Java EE certification and branding at multiple tiers. However, the traditional application servers were not architected for modularity. They are not designed to provision, start and stop services on demand, and their designs are not optimized for virtual and elastic environments. They are anything but lightweight. The challenges of evolving these architectures to meet the new requirements while maintaining backwards compatibility are substantial. This is creating an opportunity for new entrants in the market. We will examine some of the key characteristics of these new entrants in the following sections.
The New Breed of Application Server
A new breed of application server is emerging. This new application server has a small, in-memory footprint. It starts fast and runs fast. It has a highly modular architecture and it can host a variety of services and domain-specific containers. It can respond to directives from a “grid controller” and it can provision, deploy, start, stop and tear down services on the fly. It runs efficiently in an elastic topology that may span heterogeneous physical hardware and virtualized hardware. We’ll take a look at a few of the important characteristics that define the breed.
Micro-kernel Architecture
The next generation application server has the ability to host multiple, domain-specific services or container types on a common backplane or microkernel. The application server can be configured to execute different services. The microkernel manages the lifecycle of services and the interdependencies between the services. Configuration may be either static (require reboot) or dynamic. The microkernel usually has a small in-memory footprint.
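A minimal sketch of the micro-kernel idea, assuming a hypothetical `Service` lifecycle interface (a real kernel would also track versions and the interdependencies between services):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical micro-kernel sketch: the kernel itself only manages the
// lifecycle of named services. The web container, messaging engine, etc.
// are simply services it can start and stop on demand.
public class MicroKernel {

    public interface Service {
        void start();
        void stop();
    }

    private final Map<String, Service> installed = new LinkedHashMap<>();

    // Static configuration: register the service types this server may host.
    public void install(String name, Service service) {
        installed.put(name, service);
    }

    // Dynamic configuration: bring up only the containers a workload needs.
    public void start(String name) {
        installed.get(name).start();
    }

    public void stop(String name) {
        installed.get(name).stop();
    }
}
```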
OSGi is a framework that defines an application life cycle model, a component model, a service registry and an execution environment for services (modules). Applications are deployed against the OSGi backplane in the form of bundles. Bundles can be installed, started, stopped, updated and uninstalled without restarting the backplane. Services can detect the addition of new services and adapt appropriately.
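For illustration, a bundle is an ordinary jar whose manifest carries OSGi metadata. The header names below are standard OSGi manifest headers; the bundle and package names are made up for the example:

```text
Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.greeter
Bundle-Version: 1.0.0
Bundle-Activator: com.example.greeter.Activator
Import-Package: org.osgi.framework;version="[1.3,2.0)"
Export-Package: com.example.greeter.api;version="1.0.0"
```

The `Import-Package` and `Export-Package` headers are what let the framework resolve inter-bundle dependencies and support side-by-side versions.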
OSGi is widely deployed in embedded systems running in cell phones, automobiles and PDAs. It is also beginning to emerge in grid computing and application servers. The Eclipse IDE embeds OSGi.
A number of the micro-kernel projects in development today incorporate OSGi. Some architectures use OSGi internally while others expose OSGi to the end-user application.
OSGi is backed by an organization called the OSGi Alliance, in which a number of the major vendors participate. Although enterprise software adoption of OSGi is still in the early stage, it is conceivable that OSGi will one day attain the stature of Java EE as a component model for low-level Java infrastructure.
Distributed Data Cache
In order for distributed systems to execute efficiently, the data that each service needs to operate on needs to be readily accessible to that service. The data needs to either reside locally or be cached locally. Distributed data caches are designed to address this problem. Most of the application servers today already work with distributed data caches but over time we will see application servers use the cache more extensively for internal data stores such as replicated session state.
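The core pattern such caches implement can be sketched as a read-through cache (greatly simplified; real distributed caches add replication, eviction and coherence across nodes):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Sketch of the read-through pattern: a service reads through a local cache
// and only falls back to the backing store (or a peer cache node) on a miss.
// The "remote loader" here is a stand-in for that slower remote lookup.
public class ReadThroughCache<K, V> {

    private final Map<K, V> local = new ConcurrentHashMap<>();
    private final Function<K, V> remoteLoader;
    private int misses = 0;

    public ReadThroughCache(Function<K, V> remoteLoader) {
        this.remoteLoader = remoteLoader;
    }

    public V get(K key) {
        V value = local.get(key);
        if (value == null) {
            // Miss: fetch from the remote store and keep a local copy so
            // subsequent reads of the same key stay local.
            misses++;
            value = remoteLoader.apply(key);
            local.put(key, value);
        }
        return value;
    }

    public int misses() {
        return misses;
    }
}
```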
Event Driven Architecture
The next generation container doesn’t necessarily expect to be invoked by a browser. With increasing frequency, code execution is triggered by a process-generated event, which often arrives in the form of a message. The architecture is optimized for reacting to a variety of event stimuli.
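That dispatch model can be sketched as a minimal in-process event bus (hypothetical names; in a real system the events would arrive from a messaging layer such as JMS rather than a local `publish` call):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal event-driven sketch: handlers are registered per event type, and
// code runs when an event arrives, not when a browser requests a page.
public class EventBus {

    private final Map<String, List<Consumer<String>>> handlers = new HashMap<>();

    public void subscribe(String eventType, Consumer<String> handler) {
        handlers.computeIfAbsent(eventType, t -> new ArrayList<>()).add(handler);
    }

    // An event trigger (e.g. a message arriving) drives execution.
    public void publish(String eventType, String payload) {
        handlers.getOrDefault(eventType, List.of())
                .forEach(h -> h.accept(payload));
    }
}
```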
Components of enterprise software solutions are frequently loosely coupled and communication between components is implemented by a messaging system or an enterprise service bus (ESB). The next generation platform will support a variety of messaging formats and standards including JMS. Messaging will also continue to be used heavily in the practice of software integration.
Integration with pre-existing systems, data formats, message formats and 3rd-party software, will continue to be important. The next generation application server will support both standards-based and pattern-based integration.
The Service Fabric
Whereas many enterprise applications currently run on fixed clusters of servers, in the future, applications will frequently run across a dynamic, loosely-coupled and elastic configuration of servers. This configuration is sometimes referred to as a service fabric. This concept applies equally to service oriented architectures and next generation compute environments. The fabric is usually heterogeneous. Some or all of the servers included are frequently virtualized. The fabric is managed by a controller that has the ability to discover the state of the servers and can also interact with the virtualization or grid layer to coordinate resource availability. The system includes a router, or software load balancer that can route work to servers based on payload and resource availability. The fabric has the ability to expand by adding servers including servers that run in a compute cloud environment.
The new application servers are architected to run efficiently in a fabric environment. They can be discovered, and dynamically provisioned. They can receive payloads and execute chunks of processing independently and asynchronously. The output of a service may trigger a subsequent service request.
There is a broad spectrum between a container that runs efficiently in a virtualized environment and the fully developed service fabric architecture. It is likely that there will be implementations at various points on that spectrum.
The State of the Art in Next-Generation Application Servers
A number of vendors are working on products that incorporate the concepts discussed above. In addition, a number of end-user, enterprise-IT organizations are building their own platforms based on similar architectures.
Prior to the Oracle acquisition, BEA invested significantly in its micro-services architecture (mSA), which served as a blueprint for componentizing the BEA platform. WebLogic Event Server (now Oracle Complex Event Processing, a component of the Oracle SOA Suite) is an excellent example of a micro-kernel architecture that runs entirely on OSGi. Oracle also has substantial grid and virtualization initiatives underway.
IBM is also participating in the OSGi Alliance and using OSGi internally. WebSphere 7 runs on OSGi, but the server runs as a small number of very large OSGi bundles, so the benefits of this modularization are limited, and OSGi is not available to end-user applications. WebSphere XD, the high-end WebSphere offering, adds “grid-like” features including asymmetric clustering techniques, an in-memory database cache and a highly advanced load balancer.
JBoss has invested in micro-kernel technology for a number of years. JBoss implements a proprietary micro-kernel but also supports OSGi.
The SpringSource dm Server
The recently introduced SpringSource dm Server is a completely module-based Java application server that is designed to run next-generation enterprise Java applications and Spring applications. It is architected from the ground up to be an efficient platform for running Spring applications in a variety of environments including virtualized and elastic environments.
The heart of the dm Server is the SpringSource Dynamic Module Kernel™ (DM-Kernel) technology. The DM-Kernel provides a lightweight, fast, module-based backbone for the server, with a footprint of about 3 megabytes; the server, including embedded Tomcat, has a footprint of about 15 megabytes. It boots in under 10 seconds, does on-the-fly dynamic provisioning of services, and runs Tomcat WAR files and Spring applications out of the box. Let’s take a look at why that makes a difference.
At SpringSource, we don’t have a legacy application server code base to take forward, but we do have a very large number of Spring-based applications to consider. Our goal is to provide a seamless upgrade path for those applications. The advantage we have is that Spring is highly portable across all of the Java EE application servers and Tomcat, which greatly simplifies the problem of bringing applications forward. But the engineers went one step further.
Since the majority of Spring applications run on Tomcat today, dm Server embeds Tomcat and dm Server runs WAR files out of the box. Spring applications and web applications can upgrade from Tomcat to dm Server completely seamlessly. Although WAR files run unchanged on dm Server, dm Server also provides the option of running those applications as OSGi bundles so that they can take full advantage of the capabilities of OSGi.
In designing dm Server, our goal was to build a container following Spring’s standards of simplicity, flexibility and power. dm Server offers significant benefits to both developers and to the operations team.
For the development team, dm Server provides a flexible and resilient development server that resolves the tricky application dependency problems that frequently arise with Java EE application packaging. It also supports repeated incremental deployments without server restart, which can shorten the iterative development-test cycle.
For the operations team, dm Server features seamless application and resource library upgrades, side-by-side version deployment and advanced serviceability features such as application monitoring and analysis from URL throughput to query, cache and transaction statistics.
The SpringSource dm Server is the foundation of the SpringSource Application Platform, a collection of products designed to provide the optimal environment to build, run and manage enterprise Java applications that utilize Spring-based technologies. In addition to the SpringSource dm Server, the Platform includes SpringSource Enterprise, which delivers the software, services and technical support.
The design center for dm Server is for domains running across a mixed, elastic topology including physical and virtualized servers. Our vision is that Spring applications running on dm Server will be completely scalable and highly efficient running across these environments. When dm Server is deployed in a virtualized environment, the system initially only needs to boot the three-megabyte DM-Kernel. When the payload arrives and has been introspected, the required services are provisioned and started on the fly. Once the processing has been completed, the services are taken down and the resources immediately returned to the pool. Our clustering management architecture is designed from the ground up for this processing environment.
The SpringSource Enterprise Bundle Repository
SpringSource Bundle Repository provides OSGi-ready versions of hundreds of open source enterprise libraries that are commonly used when developing Spring applications for the dm Server. The repository contains jar files (bundles) and library definition (".libd") files. A library defines a collection of bundles that are often used together for some purpose (e.g. the "Spring Framework" library).
Summary
The application server market is in transition. The intersection of several trends is putting a squeeze on traditional Java EE application server architectures. On the one hand the maturation of the Java EE standard and the adoption of SOA is driving increasing footprint and complexity. On the other hand, developers are looking for lighter-weight containers that can help increase developer productivity. Virtualization and elastic computing are also driving requirements for lightweight, fast and flexible runtimes.
The Java EE 6 specification will provide profiles in response to these opposing requirements. Vendors are responding by moving towards more modular and flexible architectures. A new breed of application server is emerging. These application servers are lightweight, fast and flexible. They are frequently based on a micro-kernel architecture, can be configured to run multiple domain-specific services in combination, and are optimized for virtualized and elastic compute environments.
At SpringSource we are building a next-generation application server that we believe will address the requirements of the changing market.