How Java EE Can Get Its Groove Back
There is excitement and optimism over the Eclipse Foundation taking over stewardship of Java EE from Oracle. See what's in store for enterprise development.
One of the most intriguing developments in the Java landscape is the transition of governance of the Java EE platform from Oracle’s JCP process to Eclipse Foundation. We’re anticipating a reveal this summer of more details around the technical directions of Java EE from this new governance. To help DZone readers understand some of the key considerations ahead of how Java EE stays relevant in the new, distributed computing, cloud-native trends in enterprise computing, we caught up with Lightbend CTO, Akka creator and original author of the Reactive Manifesto, Jonas Bonér.
DZone: Other JVM languages like Scala — and the many frameworks that target distributed systems challenges, like Akka — saw an opportunity to tackle use cases in new ways beyond the classic Java / Java EE approach. What would you say are some of the key ways that the Java ecosystem needs to evolve to keep the Java / Java EE stack's capabilities moving forward?
JB: When it comes to high-level abstractions for distributed computing and concurrency, Java EE has fallen off the pace, and Java programmers either have to resort to quite low-level, primitive programming models, or they have to bring in a third-party library like Akka to tackle these challenges.
Java EE has also to a large extent missed the train on streaming data, the concept of data-in-motion. The good news is that there are new initiatives in front of the Eclipse Foundation—proposals trying to address these shortcomings, using the Reactive Streams specification—and it seems likely that some of these will make it through the Jakarta EE process.
There are also proposals in front of the JDK itself, for example, a Reactive Streams-based (available in the JDK as the java.util.concurrent.Flow API) version of java.util.stream. Having a native implementation of Reactive Streams in the JDK would make it easier to build reactive and stream-based JDK components on top, for example async HTTP, async JDBC, and support for streaming in WebSockets.
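The Flow API mentioned above has shipped in the JDK since Java 9. As a minimal sketch of what this native Reactive Streams shape looks like (the class and method names here are mine, not from any proposal), a subscriber can collect items from a `SubmissionPublisher` while pulling one element at a time:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    // Collects published items; requesting one at a time gives explicit backpressure.
    static class CollectingSubscriber implements Flow.Subscriber<String> {
        final List<String> received = new ArrayList<>();
        final CountDownLatch done = new CountDownLatch(1);
        private Flow.Subscription subscription;

        public void onSubscribe(Flow.Subscription s) {
            subscription = s;
            s.request(1);                 // pull the first item
        }
        public void onNext(String item) {
            received.add(item);
            subscription.request(1);      // signal readiness for the next item
        }
        public void onError(Throwable t) { done.countDown(); }
        public void onComplete()         { done.countDown(); }
    }

    public static List<String> publish(String... items) throws InterruptedException {
        CollectingSubscriber sub = new CollectingSubscriber();
        try (SubmissionPublisher<String> pub = new SubmissionPublisher<>()) {
            pub.subscribe(sub);
            for (String item : items) pub.submit(item);
        }                                 // closing the publisher completes the stream
        sub.done.await();
        return sub.received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(publish("a", "b", "c"));
    }
}
```

The same four-method contract (`onSubscribe`, `onNext`, `onError`, `onComplete`) is what async HTTP, async JDBC, or WebSocket streaming built on top of the JDK would plug into.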
We also have proposals—currently discussed in the MicroProfile group — that are trying to push Java more into the event-driven space. The JMS and Message Driven Beans specs are very outdated and there’s a need for a new messaging standard that more fully understands this new world of event-driven systems, real-time data, and data-in-motion. More details about these proposals can be found in this article.
DZone: There are a huge number of systems out there today running on Java—especially when you think about arenas like financial services and other major systems built for huge scale, etc. where the JVM offers so much stability. How would you describe the sorts of modernization efforts that you are seeing at big enterprises in how they keep trying to extend the lives of these systems and what their modernization projects typically look like?
JB: Most people that want to modernize their applications hit the ceiling when it comes to the monolith. The monolith can strangle productivity, time to market, development time, and getting features out to customers. You can reach a threshold where you have to coordinate too many things across too many teams in lock-step just to get features rolled out at all; when this happens, the whole development organization slows to a halt. This forces many organizations to move to microservices, where they can have autonomous teams delivering features independently of each other.
There are many ways you can do microservices. But the naïve way of chopping up services, turning method calls into synchronous RPC calls, and trying to maintain strong consistency of data across services, preserves the strong coupling that microservices can liberate you from. So, what's wrong with that? What's wrong with it is that you've now paid the technical cost associated with microservices—more expensive communication between components, higher chances and rates of failure—but you haven't gotten any of the technical benefits. From this perspective, you're now in a worse position than you were in with the monolith.
If you fully embrace the fact that you now have a distributed system, embrace eventual consistency and asynchronous communication/coordination, then you are in a position to take advantage of the benefits of moving to the cloud: loose coupling, system elasticity and scalability, and a higher degree of availability. Here, an event-driven and reactive design can really help, which is why I believe that Java EE needs to embrace event-driven messaging and reactive.
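The difference between blocking RPC and asynchronous coordination can be sketched with plain JDK `CompletableFuture`s. The service names and return values below are made up for illustration; the point is that the caller composes responses without blocking a thread on each call:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCoordination {
    // Two hypothetical, independent "services" responding asynchronously.
    static CompletableFuture<Integer> inventoryService(String sku) {
        return CompletableFuture.supplyAsync(() -> 42);   // stubbed stock count
    }
    static CompletableFuture<Double> pricingService(String sku) {
        return CompletableFuture.supplyAsync(() -> 9.99); // stubbed price
    }

    // Coordinate without blocking: both calls run concurrently and the
    // results are combined only when both have completed.
    public static String quote(String sku) {
        return inventoryService(sku)
            .thenCombine(pricingService(sku),
                (stock, price) -> stock + " in stock at $" + price)
            .join(); // block only at the very edge, for demonstration
    }
}
```

A synchronous design would instead call one service, wait, then call the next — coupling the caller's latency and availability to every downstream service in the chain.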
In cloud environments, it’s pay-as-you-go. The problem with operating a monolith is that the granularity at which you can operate your system is the whole system—a unit of “one”—which is too coarse-grained, making it very expensive and hard to scale. By splitting the system up into multiple independent and autonomous services, you can scale different services according to their different needs. For example, one service might need 10x the memory of another service, or 20x the CPU processing at peak times. A microservices-based design allows you to fine-tune each of these services to their specific resource needs, independently. Whereas with the monolith, you need to scale your infrastructure based on your highest-demand piece (with limited options for scaling down during low-traffic times), which means that you have to pay for much more hardware than you actually need. So we’re seeing scale and costs as the two main imperatives for modernization to the cloud for big enterprise Java shops.
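The cost argument can be made concrete with some back-of-the-envelope arithmetic (all the numbers below are made up): a monolith replica carries every component, so each replica costs the sum of all per-component peaks, while independently scaled services each pay only for their own peak:

```java
public class ScalingCost {
    // Monolith: every replica bundles all components, so each replica
    // must be provisioned for the sum of all per-service peak demands.
    public static int monolithCores(int[] peakPerService, int replicas) {
        int sum = 0;
        for (int p : peakPerService) sum += p;
        return sum * replicas;
    }

    // Microservices: each service scales independently, provisioning
    // only its own peak times its own replica count.
    public static int microservicesCores(int[] peakPerService, int[] replicasPerService) {
        int total = 0;
        for (int i = 0; i < peakPerService.length; i++)
            total += peakPerService[i] * replicasPerService[i];
        return total;
    }
}
```

With hypothetical peaks of 2, 2, and 20 cores, scaling the monolith to three replicas costs 72 cores, while scaling only the hot service to three replicas costs 64—and the gap widens as the demand skew grows.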
DZone: What do you think about the Eclipse Foundation taking over stewardship of Java EE from Oracle, and the opportunities that they have to rethink its governance and pace of innovation?
JB: I’m excited about it after talking to the Jakarta EE leads. It’s clear that what they want to do now is focus on innovation—which has been a problem in the JCP. And they want to do that by borrowing the great and proven ideas from Open Source models: focusing on code first—encouraging experimentation and working code that’s been tested and vetted by many contributors—and more of a focus on an open process—not the “design by committee” the JCP has practiced for the last 20 years, which we all know doesn’t work. Closed processes, working in a closed room with limited connection to reality, don’t get in touch with the realities of usage until the very end. This new proposed open governance model has made it really exciting not only for the users of Java EE, but for vendors like us that want to contribute.
DZone: You are very focused on streaming data and the path to systems that are always-on, built for data in motion. How is this changing the game for both the developers and the operators supporting those types of data-driven systems? How does this trend from batch to streaming likely change the JVM ecosystem ongoing?
JB: Most of the APIs for working with data in the JDK and in Java EE are designed around a world where data is at rest. The problem is that the world we live in today is radically different: most systems now need a way to manage massive amounts of data in motion, and often need to do so in a close-to-real-time fashion. In order to do this, we need to fully embrace streaming as a first-class concept. The APIs in Java need to evolve to treat streams as values, have high-level DSLs for managing streams (transforming, joining, splitting, etc.), and have APIs for consuming and producing streams of data, with mechanisms for flow control/backpressure.
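"Treating streams as values" can be illustrated even with today's `java.util.stream` (keeping in mind that it is pull-based, in-memory, and has no backpressure, so it is only a sketch of the composition idea, not of a full streaming DSL). A transformation can be held in a variable, passed around, and composed before any data flows:

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamAsValue {
    // The pipeline itself is a first-class value: it can be stored,
    // passed to other code, and composed before any element is processed.
    static final Function<Stream<Integer>, Stream<Integer>> evensDoubled =
        s -> s.filter(n -> n % 2 == 0).map(n -> n * 2);

    public static List<Integer> apply(List<Integer> input) {
        return evensDoubled.apply(input.stream()).collect(Collectors.toList());
    }
}
```

A streaming-first API would extend this same idea to unbounded, asynchronous sources, where joins, splits, and backpressure are part of the same composable vocabulary.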
With streaming, there are also huge implications for DevOps/Operations on things like monitoring, visualization, and general transparency into the system—how events flow, where bottlenecks arise, how to fine-tune the data pipelines, and so on. As soon as you have continuous streams that might never end, these things become a real challenge, and it's still an area where we as an industry—not just in Java—are lacking in standards and good tools.
Also, the move towards microservices and making Java applications more appealing for the microservices world—running JVMs in Docker containers, deployed by Kubernetes, and the like—requires a focus on reducing memory footprint. Relative to alternatives like Node.js and Golang, Java consumes too much memory, and that needs to be a target for improvement.
DZone: You had an interesting take that “streaming is the new integration”—what do you mean by that?
JB: Historically, approaches to enterprise integration all come from the legacy messaging tradition and products like Tibco and WebSphere MQ. They are mainly cast in terms of messages flowing between different systems as one-offs, one message at a time—a view that was maintained by ESBs and SOA. But that approach doesn’t really work well with streams of data, in particular when working with streams that potentially never end.
There are endless amounts of data we need to get into systems these days. Mobile users alone produce massive amounts of streaming data, and we have the upcoming wave of IoT just around the corner. How do you deal with all these streams of data? You need better tools for ingesting, joining, splitting, transforming, mining knowledge from, and passing streams of data on to other systems and users—which calls for a new type of integration DSLs and Enterprise Integration Patterns (EIPs). Viktor Klang and I recently wrote an article on the subject, discussing these challenges and opportunities in more detail.
One of the issues is flow control, support for backpressure between different producers of streaming data and their consumers. Here, Reactive Streams is an excellent protocol to lean on, giving us a standardized way for realizing and controlling backpressure between different systems, products, and libraries, and has proved to be a great foundation for doing integration in a fully stream-oriented and asynchronous fashion. One great example of this new approach to enterprise integration is the Alpakka project.
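The core of the backpressure protocol is that a publisher may only emit as many elements as the subscriber has requested. The toy publisher below (single-threaded, and deliberately ignoring several rules a spec-compliant Reactive Streams implementation must follow, such as bounding recursion and handling concurrency) illustrates just that demand handshake using the JDK's `Flow` interfaces:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Flow;

// A deliberately tiny, single-threaded Publisher that honors demand:
// it emits integers 0..count-1, but only as many as have been requested.
public class BackpressureDemo implements Flow.Publisher<Integer> {
    private final int count;
    public BackpressureDemo(int count) { this.count = count; }

    @Override
    public void subscribe(Flow.Subscriber<? super Integer> sub) {
        sub.onSubscribe(new Flow.Subscription() {
            int next = 0;
            boolean done = false;
            @Override public void request(long n) {
                while (n-- > 0 && next < count) sub.onNext(next++);
                if (next == count && !done) { done = true; sub.onComplete(); }
            }
            @Override public void cancel() { done = true; next = count; }
        });
    }

    // Drains the publisher, pulling `batch` items per request; returns what arrived.
    public static List<Integer> drain(int total, int batch) {
        List<Integer> out = new ArrayList<>();
        new BackpressureDemo(total).subscribe(new Flow.Subscriber<>() {
            Flow.Subscription s;
            public void onSubscribe(Flow.Subscription sub) { s = sub; s.request(batch); }
            public void onNext(Integer item) {
                out.add(item);
                if (out.size() % batch == 0) s.request(batch); // ask for the next batch
            }
            public void onError(Throwable t) {}
            public void onComplete() {}
        });
        return out;
    }
}
```

The same `request(n)` signal is what lets a slow consumer throttle a fast producer across process and product boundaries—which is exactly what projects like Alpakka build on.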
Opinions expressed by DZone contributors are their own.