We Should Write Java Code Differently
This article explores how we can change the way we write Java code to use the language to its fullest.
For the last few years, I've been writing articles that describe a new, more functional way to write Java code. But the question of why we should use this new coding style remains largely unanswered. This article is an attempt to fill that gap.
Just like any other language, Java evolves over time, and so does the style in which Java code is written. Code written around Y2K is significantly different from code written after 2004-2006, when Java 5 and then Java 6 were released. Generics and annotations are so widespread now that it's hard to even imagine Java code without them.
Then came Java 8 with lambdas and Optional<T>. Those functional elements should have revolutionized Java code, but largely they didn't. In a sense, they definitely affected how we write Java code, but there was no revolution - rather, a slow evolution. Why? Let's try to find the answer.
I think that there were two main reasons.
The first reason is that even the Java authors felt uncertain about how the new functional elements fit into the existing Java ecosystem. To see this uncertainty, it's enough to read the API note for Optional:
API Note: Optional is primarily intended for use as a method return type where there is a clear need to represent “no result,” and where using null is likely to cause errors.
The API itself shows the same: the presence of the get() method (which throws NoSuchElementException when the value is absent), as well as a couple of orElseThrow() methods, are clear references to the traditional imperative Java coding style.
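These escape hatches can be seen in a few lines. The sketch below uses only standard JDK Optional behavior; the values involved are made up for illustration:

```java
import java.util.NoSuchElementException;
import java.util.Optional;

public class OptionalEscapeHatches {
    public static void main(String[] args) {
        Optional<String> empty = Optional.empty();

        // Functional style: the absence case is handled explicitly.
        String safe = empty.orElse("default");
        System.out.println(safe); // prints "default"

        // Imperative escape hatch: get() throws NoSuchElementException
        // when the value is absent, bringing us right back to
        // exception-driven control flow.
        try {
            empty.get();
        } catch (NoSuchElementException e) {
            System.out.println("get() threw NoSuchElementException");
        }

        // orElseThrow() is an explicit bridge back to exceptions.
        try {
            empty.orElseThrow(IllegalStateException::new);
        } catch (IllegalStateException e) {
            System.out.println("orElseThrow() threw IllegalStateException");
        }
    }
}
```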
The second reason is that existing Java code, especially libraries and frameworks, was incompatible with functional approaches: null and business exceptions were idiomatic Java.
Fast-forward to the present: Java 17 was released a few weeks ago, and Java 11 is quickly gaining wide adoption, replacing Java 8, which was ubiquitous a couple of years ago. Yet our code looks almost the same as it did 7 years ago, when Java 8 was released.
Perhaps it's worth stepping back and answering another important question: do we need to change the way we write Java code at all? It has served us well enough for a long time; we have skills, guides, best practices, and tons of books that teach us how to write code in this style. Do we actually need to change that?
I believe the answer to this question could be derived from the answer to another question: do we need to improve the development performance?
I bet we do. Business pushes developers to deliver apps faster. Ideally, projects we’re working on should be written, tested, and deployed before the business even realizes what actually needs to be implemented. Just kidding, of course, but delivery date “yesterday” is a dream of many business people.
So, we definitely need to improve development performance. Every single framework, IDE, methodology, and design approach focuses on improving the speed at which software (with the necessary quality standards, of course) is implemented and deployed. Nevertheless, despite all this, there are no visible breakthroughs in development performance.
Of course, there are many elements that define the pace at which software is delivered. This article focuses only on development performance.
From my perspective, most attempts to improve development performance assume that writing less code (and having less code in general) automatically means better performance. Popular libraries and frameworks like Spring, Lombok, and Feign all try to reduce the amount of code. Even Kotlin was created with an obsession with brevity, as opposed to Java's "verbosity". History has proven this assumption wrong many times (Perl and APL are perhaps the most notable examples); nevertheless, it's still alive and drives most efforts.
Any developer knows that writing code is a tiny portion of development activities. Most of the time, we're reading code. Is reading less code more productive? The first instinct is to say yes, but in practice, the amount of code and its readability are barely related. Reading and writing the same code often have different "impedance" in the form of mental overhead.
Probably the best example of this difference in "impedance" is regular expressions. Regular expressions are quite compact and in most cases rather easy to write, especially using the countless dedicated tools. But reading regular expressions is usually painful and consumes much more time. Why? The reason is the lost context. When we're writing a regular expression, we know the context: what we want to match, which cases should be considered, what possible input may look like, and so on. The expression itself is a compressed representation of this context. But when we're reading it, the context is lost or, to be precise, squeezed and packed into a very compact syntax. Attempting to "decompress" it from the regular expression is quite time-consuming. In some cases, rewriting from scratch takes significantly less time than attempting to understand existing code.
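This can be made concrete in Java. The two patterns below (a made-up date-matching example) accept exactly the same input, but only the second one, written with named groups and the standard Pattern.COMMENTS free-spacing mode, carries its context with it:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexContext {
    // Compact form: correct, but the intent is "compressed away".
    static final Pattern TERSE =
            Pattern.compile("^(\\d{4})-(\\d{2})-(\\d{2})$");

    // Same pattern with named groups and free-spacing mode:
    // the lost context is written back into the expression.
    static final Pattern DOCUMENTED = Pattern.compile(
            """
            ^(?<year>\\d{4})   # four-digit year
            -(?<month>\\d{2})  # two-digit month (range not enforced here)
            -(?<day>\\d{2})$   # two-digit day of month
            """,
            Pattern.COMMENTS);

    public static void main(String[] args) {
        System.out.println(TERSE.matcher("2021-09-14").matches()); // true

        Matcher m = DOCUMENTED.matcher("2021-09-14");
        if (m.matches()) {
            // Named groups keep the reader out of the "count the
            // parentheses" business.
            System.out.println(m.group("year")); // prints "2021"
        }
    }
}
```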
The example above gives one important hint: reducing the amount of code is meaningful only to the point where context remains preserved. As soon as reducing code causes loss of context, it starts to be counterproductive and harms development performance.
So, if code size is not so relevant, how can we really improve productivity?
Obviously, by preserving and/or restoring lost context. But when and why does context get lost?
Context eaters are coding practices or approaches that result in context loss. Idiomatic Java code has several such context eaters, and popular frameworks often add their own. Let's take a look at the two most ubiquitous ones.

Nullable Variables

Yes, you read it correctly. Nullable variables hide part of the context: the cases when a variable value might be missing. Look at this code example:
String value = service.method(parameter);
Just by looking at this code, you can't tell whether value can be null or not. In other words, part of the context is lost. To restore it, one needs to look into the code of service.method() and analyze it. Navigating to that method, reading its code, and returning are all distractions from the current task. And the constant need to keep in mind that a variable might be null causes mental overhead. Experienced developers are good at keeping such things in mind, but that does not mean this mental overhead does not affect their development performance.
Let’s sum up:
Nullable variables are context eaters, development performance killers, and a source of run-time errors.
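One way to see the difference is to put the possibility of absence into the return type itself. In the sketch below, Service, method(), and findValue() are hypothetical names used only for illustration:

```java
import java.util.Optional;

public class NullableContext {
    // Hypothetical service used only for illustration.
    interface Service {
        // The signature alone cannot tell the caller whether
        // the result may be missing.
        String method(String parameter);

        // Here the possibility of absence is part of the type,
        // so the context travels with the value.
        Optional<String> findValue(String parameter);
    }

    static String describe(Service service, String parameter) {
        // The caller decides what "missing" means at the call site -
        // no navigation into the service implementation required.
        return service.findValue(parameter)
                      .map(String::toUpperCase)
                      .orElse("<no value>");
    }

    public static void main(String[] args) {
        Service service = new Service() {
            public String method(String p) { return null; }
            public Optional<String> findValue(String p) {
                return p.isEmpty() ? Optional.empty() : Optional.of(p);
            }
        };
        System.out.println(describe(service, ""));      // prints "<no value>"
        System.out.println(describe(service, "value")); // prints "VALUE"
    }
}
```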
Business Exceptions

Idiomatic Java uses business exceptions for error propagation and handling. There are two types of exceptions: checked and unchecked. The use of checked exceptions is usually discouraged and often considered an antipattern because they cause deep code coupling - although the initial intent behind introducing checked exceptions was, by the way, to preserve context, and the compiler even helps to preserve it. Nevertheless, over time, we've switched to unchecked exceptions. Unchecked exceptions were designed for technical errors: accessing a null variable, attempting to access a value outside array bounds, and so on.
Think about this for a moment: we’re using technical unchecked exceptions for business error handling and propagation.
Using a language feature outside the area it was designed for results in loss of context and in issues similar to the ones described for nullable variables. Even the reasons are the same: unchecked exceptions require navigating and reading code (often quite deep in the call chain). They also require switching back and forth between the current task and error handling. And just like nullable variables, exceptions can be a source of run-time errors if not handled correctly.
Business exceptions are context eaters, development performance killers, and a source of bugs.
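The alternative is to make business errors part of the return type. Here is a deliberately minimal Result type in the spirit of that idea; real libraries are much richer, and all names in this sketch (parseQuantity, Success, Failure) are invented for illustration:

```java
public class ResultExample {
    // Minimal sealed Result: either a success value or a failure cause.
    sealed interface Result<T> permits Success, Failure {}
    record Success<T>(T value) implements Result<T> {}
    record Failure<T>(String cause) implements Result<T> {}

    // The error path is visible in the signature - no hidden throws,
    // no navigation into the call chain to discover what can go wrong.
    static Result<Integer> parseQuantity(String raw) {
        try {
            int n = Integer.parseInt(raw);
            return n >= 0 ? new Success<>(n)
                          : new Failure<>("quantity must be non-negative");
        } catch (NumberFormatException e) {
            return new Failure<>("not a number: " + raw);
        }
    }

    public static void main(String[] args) {
        for (String raw : new String[] {"3", "-1", "abc"}) {
            Result<Integer> result = parseQuantity(raw);
            // Pattern matching for instanceof (Java 16+) keeps both
            // outcomes in front of the reader at the call site.
            if (result instanceof Success<Integer> s) {
                System.out.println("ok: " + s.value());
            } else if (result instanceof Failure<Integer> f) {
                System.out.println("error: " + f.cause());
            }
        }
    }
}
```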
Frameworks

Since frameworks are usually specific to a particular project, the issues they cause are also project-specific. Nevertheless, once you grasp the idea of context loss and preservation, you might notice that popular frameworks like Spring and others - which use classpath scanning, the "convention over configuration" idiom, and other "magic" - intentionally remove a large part of the context and replace it with implicit knowledge of the default setup (i.e., mental overhead). With this approach, the application gets broken into a set of loosely related classes; without IDE support, it's hard even to navigate between components, so disconnected are they. Besides the loss of a huge part of the context, there is another significant problem that negatively impacts productivity: a significant number of errors are shifted from compile time to run time. The consequences are devastating:
- More tests are necessary. The famous contextLoads() test is a clear sign of this problem.
- Software support and maintenance require significantly more time and effort.
So, by saving ourselves a few lines of typing, we get a lot of headaches and decreased development performance. This is the real price of the "magic".
Pragmatic Functional Java

Pragmatic Functional Java is an attempt to solve some of the problems mentioned above. While the initial intent was just to preserve context by encoding special states into variable types, practical use revealed a number of other benefits of this approach:
- Significantly reduced navigation.
- A number of errors shifted from run-time to compile-time, which, in turn, improved reliability and reduced the number of necessary tests.
- A significant portion of boilerplate and even type declarations removed: less typing, less code to read, and business logic less cluttered with technical details.
- Sensibly less mental overhead and less need to keep in mind technical details unrelated to the current task.
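A small before/after sketch gives a flavor of what these benefits look like in code. All names here (User, findUser, emailDomain) are invented for illustration, with an empty Optional encoding "not found":

```java
import java.util.Optional;

public class PipelineSketch {
    record User(String email) {}

    // Hypothetical lookup; empty Optional encodes "not found".
    static Optional<User> findUser(String id) {
        return "42".equals(id) ? Optional.of(new User("user@example.com"))
                               : Optional.empty();
    }

    // Traditional style: the absence cases live in the reader's head,
    // spread across early returns and null checks.
    static String emailDomainImperative(String id) {
        User user = findUser(id).orElse(null);
        if (user == null) return "unknown";
        String email = user.email();
        int at = email.indexOf('@');
        if (at < 0) return "unknown";
        return email.substring(at + 1);
    }

    // Pipeline style: each special state is encoded in the chain,
    // so the happy path and the fallback read top to bottom.
    static String emailDomain(String id) {
        return findUser(id)
                .map(User::email)
                .filter(email -> email.indexOf('@') >= 0)
                .map(email -> email.substring(email.indexOf('@') + 1))
                .orElse("unknown");
    }

    public static void main(String[] args) {
        System.out.println(emailDomain("42")); // prints "example.com"
        System.out.println(emailDomain("7"));  // prints "unknown"
    }
}
```

Both methods behave identically; the difference is that in the second, nothing about the missing-value handling has to be reconstructed by the reader.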
Published at DZone with permission of Sergiy Yevtushenko. See the original article here.