Q&A: Joe Ottinger on What's New in GigaSpaces XAP 6.6
GigaSpaces has just released GigaSpaces XAP 6.6, which brings improved ease of use, stronger alignment with Java EE technologies, and remote service invocation. Today, I had a chance to catch up with Joseph Ottinger, former Editor-in-Chief of TheServerSide.com. Joseph is currently a software engineer at GigaSpaces.
Aslam Khan: Joe, thanks for taking the time to chat to us. For those of us that have not used GigaSpaces XAP, tell us a little bit about the product, the niche market that you are targeting, and who's using GigaSpaces.
Joseph Ottinger: First off, it's not a niche market! XAP is a scale-out application server - an application server designed from the ground up with scalability in mind. It scales out by distributing both processing capability and data across a cluster, co-locating data and processing modules so that processes have access to most of their data at the speed of RAM - even when they're using transactions. We've always provided these capabilities to customers with high and variable loads, such as financial firms - and with this release, we're adding servlet container integration so you can apply space-based concepts to your web applications as well.
You've always been able to do that with GigaSpaces, but the new integration makes it, well, integral to the product and much easier to use.
Khan: With GigaSpaces XAP 6.6 you have focused on ease of use (amongst other things) and brought Jetty into the picture. How has Jetty's inclusion made it easier for developers using GigaSpaces XAP?
Ottinger: Jetty is a servlet container and web server; what the Jetty integration gives you is the ability to deploy .war files directly into a GigaSpaces cluster, and have the cluster provide the .war not only with services like direct access to the space, but also with SLA-driven management capabilities - so you can get an instant clustered version of Jetty, just by deploying into GigaSpaces.
Direct access to GigaSpaces from your .war (as opposed to connecting to GigaSpaces as a "client JVM" would) is faster and more manageable from the GigaSpaces tools. It means you need fewer tools in your toolbox, without giving anything up if you don't want to.
Khan: ...and the inclusion of Maven support?
Ottinger: Maven, of course, is a standard for project management.
What usually prevents people from leveraging the power of a distributed platform like GigaSpaces is the perception that it's "hard," or "very different from what we're used to." I know that feeling - I used to have it myself!
So what GigaSpaces has done is create a set of archetypes for Maven 2 that generates a valid space-based application with a single command - so you instantly get a working, boilerplate project that can scale out as far as you can throw hardware at it, and that asynchronously writes your data into a database for archiving (if that's something you need). We also have a Mule archetype.
With this, you can be up and running, plugging your data model and your business logic into a standardized project layout within a few minutes, something anyone can do - and that means you can show your managers a working application with your data in no time.
Khan: The Service Virtualization Framework is also included in release 6.6, so massively parallel algorithms should be painless to implement. What pains have you taken care of for developers writing, for example, Map/Reduce-based applications?
Ottinger: SVF is designed to be as easy to use as Spring Remoting, with the added advantage that scaling a given call out is handled by a proxy. What this means is that all the pain - the work of creating the call, setting it up to run remotely, making sure it returns synchronously or asynchronously - is taken care of for you.
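To make the "proxy takes care of it" idea concrete, here's a minimal sketch using plain `java.lang.reflect.Proxy` - this is not the actual SVF API, and the `PriceService` interface and `remotingProxy` helper are hypothetical stand-ins. The point is that the caller codes against an ordinary interface, while the proxy decides how and where the call actually executes:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class RemotingSketch {
    // The service contract the caller codes against.
    public interface PriceService {
        double quote(String symbol);
    }

    // Stand-in for a service instance running on some cluster node.
    public static class LocalPriceService implements PriceService {
        public double quote(String symbol) {
            return symbol.length() * 10.0; // dummy pricing logic
        }
    }

    // The proxy hides where and how the call actually executes. A real
    // remoting proxy would serialize the call, route it to a partition,
    // and return the result synchronously or asynchronously.
    public static PriceService remotingProxy(PriceService target) {
        InvocationHandler handler =
                (proxy, method, args) -> method.invoke(target, args);
        return (PriceService) Proxy.newProxyInstance(
                PriceService.class.getClassLoader(),
                new Class<?>[] { PriceService.class },
                handler);
    }

    public static void main(String[] args) {
        PriceService svc = remotingProxy(new LocalPriceService());
        // Caller code is identical whether the call is local or remote.
        System.out.println(svc.quote("ACME"));
    }
}
```

Because the caller only ever sees the interface, swapping the local implementation for a scaled-out one is a deployment decision, not a code change.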
That's only half of the puzzle, of course - it doesn't matter if you scale out to 500 processors or 5000, if you can't get data to all of them simultaneously. It's Amdahl's Law in motion; if you can only feed data to one process at a time, you might as well use only one process, and handle your data serially.
That's where the colocation of data and processes comes in.
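Amdahl's Law can be put in numbers. If a fraction p of the work parallelizes across n workers, the speedup is 1 / ((1 - p) + p/n) - so serialized data access (low p) caps the benefit of adding processors, no matter how many you have. A small illustration:

```java
public class Amdahl {
    // Amdahl's Law: speedup for n workers when fraction p of the work
    // can run in parallel.
    public static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // If data can only be fed to one process at a time, little of the
        // work parallelizes, and 500 processors barely help:
        System.out.printf("p=0.10, n=500 -> %.2fx%n", speedup(0.10, 500));
        // Co-locating data with processing pushes p toward 1:
        System.out.printf("p=0.99, n=500 -> %.2fx%n", speedup(0.99, 500));
    }
}
```

With p = 0.10, 500 processors yield barely over a 1.1x speedup; with p = 0.99 the same hardware yields roughly an 83x speedup - which is why feeding data to every process at once matters as much as the processor count.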
One of my co-workers likes to use the analogy of sweet and savory popcorn: if you had a room full of people, and a bowl of sweet popcorn and another bowl of savory popcorn, you're going to see the most efficiency if people stay near the popcorn they like the most.
With data and process co-location, you tell the cluster how to route your data to a cluster node with a simple annotation, and the processes located on that instance can access the data - locally - with no network impact and no external transactions (with some exceptions, particularly around synchronization to backup servers). You get the best of both worlds: reliability, through hot backup instances, and speed, because everything happens as fast as your RAM can handle.
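A sketch of the routing idea - note that the `@Routing` annotation, the `Trade` class, and the `partitionFor` helper below are hypothetical stand-ins, not the GigaSpaces API. The annotated property supplies a routing value, and all objects sharing that value land in the same partition, next to the processing code deployed there:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class RoutingSketch {
    // Hypothetical marker, modeled loosely on an annotation-driven
    // routing scheme; the real API differs.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Routing {}

    public static class Trade {
        private final String symbol;
        public Trade(String symbol) { this.symbol = symbol; }

        @Routing
        public String getSymbol() { return symbol; }
    }

    // Hash-based routing: entries with the same routing value always map
    // to the same partition, so co-located processes see them locally.
    public static int partitionFor(Object routingValue, int partitions) {
        return Math.floorMod(routingValue.hashCode(), partitions);
    }

    public static void main(String[] args) {
        Trade a = new Trade("ACME");
        Trade b = new Trade("ACME");
        // Both trades route to the same node, so a processor on that node
        // reads them at RAM speed, with no network hop.
        System.out.println(partitionFor(a.getSymbol(), 4)
                == partitionFor(b.getSymbol(), 4));
    }
}
```

The design point: because routing is deterministic on one property, a process co-located with a partition knows every entry it cares about is already local.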
Khan: What other external grids, if any, are supported?
Ottinger: We can support any grid platform that can run a JVM. Most people using an external grid are likely to use Amazon's EC2, and we have some simple documentation on how to use EC2 for running cluster nodes.
Khan: How close are we to reaching the "ultimate" in cloud computing and how is GigaSpaces helping us get there?
Ottinger: Wow... "the ultimate." I'm not even sure I know what the ultimate would look like. I'm biased, of course, but I think GigaSpaces is heading in the right direction, because it addresses both processing and data, with a lot of APIs available - JMS, JCache, JDBC, JavaSpaces, and others - and because we're constantly focusing on alignment with the technologies developers are using, like Spring, Mule, etc.
In my humble opinion, I think the "winner" of the grid platform competition - as if it's a zero-sum game - is going to look a lot like GigaSpaces, if it's not GigaSpaces. The capabilities are just too common a need to ignore. You can't ignore data when you provide clustered processing capabilities; you can't ignore processing power when you provide distributed data.
Khan: In my opinion, we are moving away from "heavyweight" EJB containers to "lightweight" containers that seem to take on the personality of web application and integration servers. My observation is that GigaSpaces is moving in the same direction, but with a specific focus on scalability as well. What is your opinion of the state of the Java EE application server market, and where does GigaSpaces fit in?
Ottinger: I think it's amazingly healthy. There are three main "thrusts" of application servers now: the traditional Java EE platforms, which do it all (EJB, servlets, JMS, JCA, etc.); the lighter platforms, like the web containers (Jetty, Tomcat); and the outliers, like the SpringSource Application Platform and GigaSpaces.
The outliers are neat. S2AP offers core OSGi integration, so you can leverage its built-in web container to provide services backed by OSGi; I can't begin to say how cool this is. GigaSpaces does the same sort of thing, except instead of focusing on dependency provision like OSGi does, it provides scalability through a messaging platform.
I'm a results guy; I totally dig OSGi, but I'd rather deliver results than a technology platform. For me, XAP does exactly that.
None of this means that XAP is going to kill Java EE - that's never been the goal, and I think it's unrealistic to even try to undermine Java EE like that.
Java EE does extremely well in the middle-ground of applications: apps that need some reliability, but not too much reliability, and apps that need some scalability, but not too much scalability.
XAP is designed to provide any end-user application reliability and scalability, so I think that XAP and your traditional application servers work very well together; what the recent inclusion of the servlet technologies does is simply lower the barrier to seeing how XAP can help a web application leverage a messaging paradigm to scale up, up, up.
Khan: With almost every app server vendor hopping on the SOA bandwagon, do you see GigaSpaces playing in this space as well, now or in the foreseeable future? No pun intended :-)
Ottinger: We'll always play in every space... pun definitely intended. :-)
I don't see how SOA and GigaSpaces wouldn't fit together, actually. SOA is based around messaging between logical silos; GigaSpaces is based around messaging between logical silos. If you can't tell, there's a similarity there.
SOA and GigaSpaces definitely work together like chocolate and peanut butter; we provide direct Mule integration for this purpose, and there's no reason XAP wouldn't serve as an excellent container for any message-oriented architecture, whether it's SOA or not.
In fact, since the messages can be accessed in a variety of languages and environments - .Net, Java, any JSR-223-compatible scripting language, even C++ - you could enhance your SOA by using GigaSpaces XAP as the communications layer, by not having to rely on things like SOAP or some other protocol to transfer data.
Khan: What can we expect beyond release 6.6?
Ottinger: Well... I don't think we should say too much. You should expect it to be even easier to use XAP in the future, and we've recently joined the OSGi Alliance... I think you can look at our current direction of integrating Java EE technologies and our other actions and sort of figure out where we might be going. :)
Khan: Joe, once again thank you for sharing with us and we look forward to more great things from GigaSpaces.