Why Reactive Programming Is Not a Fad
Reactive Programming yields faster processing times and better use of hardware, which results in cheaper operating costs; many large-scale systems in use today are based on the principles of the Reactive Manifesto.
Architecture and system design have changed over the years, and most of the time those changes have tracked the hardware they run on. Where Software Architecture actually began is widely debated, as is what exactly constitutes an architecture, so this story will start with the rise of the monolithic application.
With all your resources on a single machine, putting all your code in the same place made sense and was the gold standard for software design. This pattern continued into the J2EE era with monolithic application containers. The J2EE architecture was designed to take advantage of Moore's law, because that was the correct way to design a system as single-core CPUs grew larger and faster.
"Moore's law" is the observation that, over the history of computing hardware, the number of transistors in a dense integrated circuit has doubled approximately every two years.
This architecture remained the gold standard for a few decades because when we needed to scale a system, we just "threw" more hardware at it: add a faster CPU and more RAM to make your application go faster. This is Moore's law in action.
The Rise of Multi-Core Processors
A "few" years ago, CPU manufacturers started hitting a wall with CPU design and speeds: they simply could not get a single-core CPU to go any faster. In response, chip manufacturers started "going wide," putting multiple cores on a single chip to gain more capacity. This means the old way of making a J2EE application faster, swapping in a CPU with a higher clock speed, is no longer possible. If CPUs are not getting faster, then how will applications scale on these next-generation multi-core processors as they did in the past? A shift in how applications are designed and run had to be made to stay competitive.
Furthermore, it has become clear that synchronous, I/O-blocking architectures like Java Enterprise cannot make use of all the cores on these new processors. The main reason is their "one-request-per-thread" model: those threads spend a significant amount of time parked in "I/O waiting" states due to blocking I/O calls, doing no useful work.
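The cost of that model is easy to demonstrate. The sketch below (illustrative, not from any real server) simulates a fixed pool of request threads, each making a blocking call; no matter how many cores the machine has, throughput is capped by the pool size, because every in-flight request pins a thread for its whole duration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BlockingModel {
    // Simulates a blocking I/O call: the thread does nothing useful
    // while it "waits" for the remote response.
    static String blockingFetch() throws InterruptedException {
        Thread.sleep(100); // thread is parked, exactly like an I/O wait
        return "response";
    }

    public static void main(String[] args) throws Exception {
        // One-request-per-thread: 4 worker threads can serve at most
        // 4 in-flight requests, regardless of how many CPU cores exist.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        long start = System.nanoTime();
        for (int i = 0; i < 8; i++) {
            pool.submit(() -> {
                try { blockingFetch(); } catch (InterruptedException ignored) { }
            });
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // 8 requests / 4 threads => at least two 100 ms "waves".
        System.out.println("elapsed ~" + elapsedMs + " ms");
    }
}
```

Doubling the core count here changes nothing; only adding threads (and their memory overhead) or removing the blocking wait does.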
This is where Amdahl's law starts to become important. Amdahl's law states that the maximum speedup from parallel execution is limited by the fraction of the work that must run serially, and in the current generation of processors it is driving today's new architectures. Since we have more cores to work with, we need a solution that lets us use all of the CPU we are paying for. To do that, we need to reduce the "I/O waiting" our applications perform by using non-blocking I/O calls. This is a radical change from how we have operated for the last several decades.
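To make the stakes concrete, here is a minimal sketch of Amdahl's law itself, where `p` is the parallelizable fraction of the work and `n` is the core count (the specific numbers are illustrative):

```java
public class Amdahl {
    // Amdahl's law: speedup(n) = 1 / ((1 - p) + p / n),
    // where p is the parallelizable fraction of the workload
    // and n is the number of cores.
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        // If half the work is serialized (e.g. threads stuck in I/O
        // waits), 16 cores buy barely a 2x speedup:
        System.out.printf("p=0.50, n=16: %.2fx%n", speedup(0.50, 16)); // ~1.88x
        // Shrinking the serial fraction (non-blocking I/O) pays off
        // far more than adding cores:
        System.out.printf("p=0.95, n=16: %.2fx%n", speedup(0.95, 16)); // ~9.14x
    }
}
```

The lesson: cutting I/O waits attacks the serial fraction directly, which is exactly where the scaling limit lives.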
Java Enterprise and the One-Thread-Per-Request Model
You can easily see that the Java Enterprise architecture was designed when single-core CPUs were the norm. It employs a "one-thread-per-request" mentality for every request that is sent to the server: once you have a thread, it is yours for the entirety of the request's processing. Popular libraries in this space, such as Hibernate and Spring Security, even rely on this model holding true. Both use "thread-local" variables to hold "session" state, since they know the same thread will be used throughout the lifecycle of the request. The big downside is that this behavior cannot be changed, as doing so would completely break the data persistence and application security code used by a majority of JEE applications in use today.
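A small sketch shows why the model is so sticky (the `CURRENT_USER` name is a made-up stand-in for the session state Hibernate or Spring Security stashes): the value is only visible on the thread that set it, so handing a request off to another thread mid-flight silently loses the "session."

```java
public class ThreadLocalSession {
    // Per-request state stashed in a ThreadLocal, retrievable anywhere
    // on the same thread without being passed explicitly.
    private static final ThreadLocal<String> CURRENT_USER = new ThreadLocal<>();

    static void handleRequest(String user) {
        CURRENT_USER.set(user);
        try {
            // Deep inside the call stack, library code reads the "session":
            System.out.println("processing for " + currentUser());
        } finally {
            CURRENT_USER.remove(); // avoid leaking state into pooled threads
        }
    }

    static String currentUser() {
        return CURRENT_USER.get(); // only valid on the request's own thread
    }

    public static void main(String[] args) throws InterruptedException {
        Thread requestThread = new Thread(() -> handleRequest("alice"));
        requestThread.start();
        requestThread.join();
        // A different thread never sees the value, which is exactly why
        // moving a request between threads breaks this model.
        System.out.println("other thread sees: " + currentUser()); // null
    }
}
```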
Lightbend and the Reactive Manifesto
Lightbend (formerly Typesafe) published the Reactive Manifesto to capture the changes needed in how we design software for the future, and for scalability in the new world of multi-core CPUs. The real gap between these two architectural styles cannot be easily explained because of this paradigm shift; it is like comparing apples to oranges. The shift has caused some disruption in our industry and will continue to do so until we complete the transition to leveraging multi-core CPUs to their full potential.
This manifesto sets out four principles (responsive, resilient, elastic, and message-driven) that should be heavily considered when architecting a system so that it will scale to the levels of processing these new systems need. Two concepts are directly applicable to the problems Java Enterprise apps have: non-blocking I/O and asynchronous processing. When both are done well, the application can get more work done with fewer CPUs and lower memory requirements, enabling a single process to scale higher than a Java Enterprise application on the same hardware. Here is a chart that shows the benefits of this parallelism.
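As a minimal illustration of asynchronous, non-blocking composition on the JVM, the sketch below chains two hypothetical service calls with `CompletableFuture` (the `fetchUser`/`fetchProfile` names are invented for this example; a real reactive stack would use non-blocking network clients):

```java
import java.util.concurrent.CompletableFuture;

public class AsyncPipeline {
    // Hypothetical service calls: each returns a future immediately
    // instead of holding the caller's thread until the result arrives.
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static CompletableFuture<String> fetchProfile(String user) {
        return CompletableFuture.supplyAsync(() -> user + "/profile");
    }

    public static void main(String[] args) {
        // The calling thread registers callbacks and moves on; no thread
        // sits in an I/O wait between the two service calls.
        String result = fetchUser(42)
                .thenCompose(AsyncPipeline::fetchProfile) // chain without blocking
                .thenApply(String::toUpperCase)           // transform the response
                .join(); // block only here, at the program's edge, for the demo

        System.out.println(result); // USER-42/PROFILE
    }
}
```

Because the waiting happens in the future machinery rather than in a parked request thread, a small pool of threads can keep many such pipelines in flight at once.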
Faster, Better, Cheaper
This new approach to software architecture yields faster processing times and better use of hardware, which results in cheaper operating costs. Many of the large-scale systems in use today are based on the principles of the Reactive Manifesto and its tenets. The systems at LinkedIn, Twitter, Facebook, and many others are built on asynchronous, non-blocking I/O technology stacks so that their applications make the most of the hardware they run on. This is the new way to build scalable apps, and it is growing rapidly. The "Reactive Way" is NOT a fad; it is the future of how we write software.