Low Latency?
In computing, latency is defined as the length of time to perform some task. This could be the time it takes to respond to an interrupt from hardware or the time it takes for a message sent by one component to be available to its recipient. In many cases, latency is not seen as a primary non-functional concern when designing an application, even when considering performance. Most of the time, after all, computers seem to do their work at speeds that are well beyond human perception, typically using scales of milliseconds, microseconds, or even nanoseconds. The focus is often more on throughput - a measure of how many events can be handled within a given time period.
However, basic arithmetic tells us that if a service can handle an event with low latency (for example, microseconds), then it will be able to handle far more events within a given time period, say 1 second, than a service with millisecond event handling latency. This can allow us to avoid, in many cases, the need to implement horizontal scaling (starting new instances) of a service, a strategy that introduces significant complexity into an application and may not even be possible for some workloads.
Additionally, there are many application domains where consistently low latency is a critical element of an application’s success, for example:
- Electronic trading systems must be able to respond to changes in event loads based on market conditions fast enough to take advantage of these before competitors in the market - huge sums of money may be gained by being able to do this (or lost by missing such opportunities). There is not enough time to respond to these load “spikes” by scaling horizontally — which could take up to a second — before opportunities are lost.
- Systems that monitor equipment, such as those found in the IoT space, need to be able to react to indications from that equipment with minimal delays. Alarms, for example, security or environmental alarms, must be notified and responded to as quickly as possible. The overhead introduced by monitoring itself must be minimal to avoid becoming a factor that affects the data being recorded.
- Some machine learning or AI algorithms need to react to input data as it arrives or as near to it as possible, making them more effective in areas such as pricing, threat detection, sentiment analysis, or buy/sell decisions.
- Online gaming software must be able to react to input from potentially large numbers of users, adjusting feedback and strategies in as near real-time as possible.
At Chronicle Software, our primary focus is to develop software that minimizes latency. It’s often felt that Java is not a suitable language to use for such software; however, it is possible to achieve latency figures that approach those of lower-level languages such as C++ and Rust.
Challenges in Building Low-Latency Software
Modern applications tend to be implemented using architectural approaches based on loosely coupled components (microservices) that interact with each other based on asynchronous message passing. Several toolkits and frameworks exist that help in implementing such microservices in Java. However, it is not straightforward to build truly low-latency software that follows this approach. Latency creeps in at many different levels. Existing microservice toolkits tend to focus on the quality of the abstractions provided in order to protect their users from lower-level APIs and features.
This higher level of abstraction often comes at the price of the creation of large numbers of intermediate objects, placing a significant load on the memory management subsystem of the JVM — something that is anathema to low-latency coding. Other approaches lean towards stripping away almost all abstractions, exposing developers to the lowest level of detail. While this clearly dispenses with overhead, it pushes more complexity into the application-level code, making it more error-prone and significantly more difficult to maintain and evolve. Even at this level of detail, however, it is often necessary to understand and be able to tune operating system level parameters to achieve consistent low latency. Chronicle Tune is a product that can be used to perform this level of analysis and configuration based on Chronicle’s extensive knowledge and experience in this area.
Introducing Chronicle Services
Over many years, Chronicle Software has been involved in building libraries, applications, and systems that operate in environments where low latency is critical, primarily in the financial sector. Based on the experience gained in this work, we have developed an architectural approach for constructing low-latency applications based on event-driven microservices. We have created the Chronicle Services framework to support this approach, taking care of necessary software infrastructure and enabling developers to focus on implementing business logic based on their functional requirements. Chronicle Services presents an opinionated view of several of the specialized libraries we have developed to support low-latency applications.
Philosophy
A key requirement in achieving the strict requirements of minimal latency is the elimination of accidental complexity. Frameworks such as Spring Boot, Quarkus, and Micronaut offer rich sets of abstractions to support the construction of microservices and patterns such as event sourcing and CQRS. These are useful parts of frameworks that are necessarily designed to support general-purpose applications, but they can introduce complexity that should be avoided when building highly focused, low-latency components. Chronicle Services offers a smaller set of abstractions, leading to considerable simplification in the framework, less load on the underlying JVM, and, hence, much smaller overhead in processing events. This leads to a throughput of 1 million events per second for a single service. We have also been able to help customers refactor systems that were required to be run on multiple servers to run on a single server (plus one server for continuity in the event of failure).
How It Works
There are two key concepts in Chronicle Services: Services and Events. A Service is a self-contained processing component that accepts input from one or more sources and outputs to a single sink. Service input and output are in the form of Events, where an Event is an indication that something has happened. By default, events are transmitted between services using Chronicle Queue, a persisted low-latency messaging framework offering the ability to send messages with latencies of under 1 microsecond. Events are transmitted in a compact proprietary binary format. Encoding and decoding are extremely efficient in terms of both time and space and require no additional code generation on the sending or receiving side.
Building a Service
The public interface of a Service is defined by the types of Events it expects as input and the types of Events that it outputs.
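To make this concrete, here is a hand-rolled sketch in plain Java of what such a pair of event contracts could look like for a simple summing service. The interface and event names below are invented for illustration and are not the actual Chronicle Services API; they merely show the "one handler method per event type" style described here.
Java
// Input events the service consumes: one handler method per event type (illustrative only).
interface SumServiceIn {
    void add(int value);   // an "add" event carrying a value
    void reset();          // a "reset" event with no payload
}

// Output events the service publishes to its single sink (illustrative only).
interface SumServiceOut {
    void sum(long currentSum);  // emitted after each update
}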
The Service implementation itself provides implementations of handlers for each of the input events. There is a clean separation of the Service from the underlying infrastructure for event delivery, so the developer can focus on implementing the business logic encapsulated in the event handlers. A Service handles all incoming events in a single thread, removing the need for dealing with concurrency, another common source of accidental complexity. Detailed functional testing is available through a powerful testing framework, where input events are supplied as YAML, together with expected output events.
Configuration of Services is available through APIs or using a declarative approach based on external files, or even dynamic configuration updates through events. An example of the static configuration file for a simple Services application is shown below:
YAML
!ChronicleServicesCfg {
  queues: {
    sumServiceIn: { path: data/sumServiceIn },
    sumServiceOut: { path: data/sumServiceOut },
    sumServiceSink: { path: data/sumServiceSink },
  },
  services: {
    sumService: {
      inputs: [ sumServiceIn ],
      output: sumServiceOut,
      implClass: !type software.chronicle.services.ex1.services.SumServiceImpl,
    },
    sumUpstream: {
      inputs: [ ],
      output: sumServiceIn,
      implClass: !type software.chronicle.services.ex1.services.SumServiceUpstream,
    },
    sumDownstream: {
      inputs: [ sumServiceOut ],
      output: sumServiceSink,
      implClass: !type software.chronicle.services.ex1.services.SumServiceDownstream,
    }
  }
}
Each Service is defined in terms of its implementation class and the Chronicle Queues that are used for the transmission of Events. There is enough information here for the Chronicle Services runtime to create and start each service. Diagrammatically, the application described in the above file would appear like this:
Deploying a Service
Chronicle Services supports many options for deploying Services. Multiple services can share a single thread, can be run on multiple threads, or spread across multiple processes. Chronicle Queue is a shared memory-based IPC mechanism, so message exchange between Services in different processes is extremely fast. Services can be further packaged into containers, which can simplify deployment, especially in Cloud environments.
Enterprise Class Features
Chronicle Services is based on the Enterprise edition of Chronicle Queue, which offers cluster-based replication of event storage, along with other Enterprise features. Replication is based on the single leader/multiple followers model, with both Active/Passive and Active/Active approaches to high availability available in the event of failure of the cluster leader. Chronicle Services applications can also integrate with industry-standard monitoring and observability components such as Prometheus and Grafana to provide visualizations of their operation. For example, we can have a snapshot of the overall state of an application, or specific latency statistics from individual services. Monitoring solutions are described in more detail in this article.
Conclusion
In order to achieve the best latency figures from an application, it is often necessary to depart from idiomatic techniques for developing in the chosen language. It takes time to acquire the skills to do this effectively, and even if it can be done in the business logic layers of code, supporting frameworks do not always provide the same level of specialization.
Chronicle Services is a highly opinionated framework that leverages concepts implemented in libraries that have been developed by Chronicle Software to support the development of asynchronous message-passing applications with market-leading latency performance. It does not aim to compete with general-purpose microservice frameworks like Spring Boot or Quarkus. Instead, it provides a low-latency platform on which business logic can be layered using a simple computational model, bringing the benefits of low latency without the pain.
Parallel garbage collector (Parallel GC) is one of the oldest Garbage Collection algorithms introduced in the JVM to leverage the processing power of modern multi-core systems. Parallel GC aims to reduce the impact of GC pauses by utilizing multiple threads to perform garbage collection in parallel. In this article, we will delve into the realm of Parallel GC tuning specifically. However, if you want to learn more basics of Garbage Collection tuning, you may watch this JAX London conference talk.
When To Use Parallel GC
You can consider using Parallel GC for your application if you have any one of the following requirements:
- Throughput emphasis: If your application has high transactional throughput requirements and can tolerate occasional long pauses for garbage collection, Parallel GC can be a suitable choice. It focuses on maximizing throughput by performing garbage collection work on multiple threads in parallel during collection pauses.
- Batch processing: Applications that involve batch processing or data analysis tasks can benefit from Parallel GC. These types of applications often perform extensive computations, and Parallel GC helps minimize the impact of garbage collection on overall processing time.
- Heap size considerations: Parallel GC is well-suited for applications with moderate to large heap sizes. If your application requires a substantial heap to accommodate its memory needs, Parallel GC can efficiently manage memory and reduce the impact of garbage collection pauses.
How To Enable Parallel GC
To explicitly configure your application to use Parallel GC, you can pass the following argument when launching your Java application:
-XX:+UseParallelGC
This JVM argument instructs the JVM to use the Parallel GC algorithm for garbage collection. However, please note that if you don’t explicitly specify a garbage collection algorithm, in all the server class JVMs until Java 8, the default garbage collector is set to Parallel GC.
Most Used Parallel GC JVM Arguments
In the realm of Java Parallel GC tuning, there are a few key JVM arguments that provide control over crucial aspects of the garbage collection process. We have grouped those JVM arguments into three buckets:
a. Heap and generation size parameters
b. Goal-based tuning parameters
c. Miscellaneous parameters
Let’s get on to the details:
A. Heap and Generation Size Parameters
Garbage collection (GC) tuning for the Parallel Collector involves achieving a delicate balance between the size of the entire heap and the sizes of the Young and Old Generations. While a larger heap generally improves throughput, it also leads to longer pauses during GC. Consequently, finding the optimal size for the heap and generations becomes crucial. In this section, we will explore key JVM arguments that enable the adjustment of heap size and generation sizes to achieve an efficient GC configuration.
-Xmx: This argument sets the maximum heap size, which establishes the upper limit for memory allocation. By carefully selecting an appropriate value for -Xmx, developers can control the overall heap size to strike a balance between memory availability and GC performance.
-XX:NewSize and -XX:MaxNewSize or -XX:NewRatio: These arguments govern the size of the Young Generation, where new objects are allocated. -XX:NewSize sets the initial size, while -XX:MaxNewSize or -XX:NewRatio control the upper limit or the ratio between the young and tenured generations, respectively. Adjusting these values allows for fine-tuning the size and proportion of the Young Generation.
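For instance, to see how these sizing flags fit together on an actual command line (the values here are purely illustrative, not recommendations), an application could be started with:
java -XX:+UseParallelGC -Xmx4g -XX:NewSize=1g -XX:MaxNewSize=1g -jar my-app.jar
This pins the Young Generation at 1 GB inside a 4 GB heap; alternatively, -XX:NewRatio=3 would express the same proportion as a ratio of the tenured generation to the young generation.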
Here is a success story of a massive technology company that reduced its young generation size and saw significant improvement in its overall application response time.
-XX:YoungGenerationSizeIncrement and -XX:TenuredGenerationSizeIncrement: These arguments define the size increments for the Young and Tenured Generations, respectively. The size increments of the young and tenured generations are crucial factors in memory allocation and garbage collection behavior. Growing and shrinking are done at different rates. By default, a generation grows in increments of 20% and shrinks in increments of 5%. The percentage for growth is controlled by the command-line option -XX:YoungGenerationSizeIncrement=<Y> for the young generation and -XX:TenuredGenerationSizeIncrement=<T> for the tenured generation.
-XX:AdaptiveSizeDecrementScaleFactor: This argument determines the scale factor used when decrementing generation sizes during shrinking. The percentage by which a generation shrinks is adjusted by the command-line flag -XX:AdaptiveSizeDecrementScaleFactor=<D>. If the growth increment is X percent, then the decrement for shrinking is X/D percent.
B. Goal-Based Tuning Parameters
To achieve optimal performance in garbage collection, it is crucial to control GC pause times and optimize the GC throughput, which represents the amount of time dedicated to garbage collection compared to application execution. In this section, we will explore key JVM arguments that facilitate goal-based tuning, enabling developers to fine-tune these aspects of garbage collection.
-XX:MaxGCPauseMillis: This argument enables developers to specify the desired maximum pause time for garbage collection in milliseconds. By setting an appropriate value, developers can regulate the duration of GC pauses, ensuring they stay within acceptable limits.
-XX:GCTimeRatio: This argument sets the ratio of garbage collection time to application time using the formula 1 / (1 + N), where N is a positive integer value. The purpose of this parameter is to define the desired allocation of time for garbage collection compared to application execution time, for optimizing the GC throughput. For example, let’s consider the scenario where -XX:GCTimeRatio=19. Using the formula, the goal is to allocate 1/20th or 5% of the total time to garbage collection. This means that for every 20 units of time (e.g., milliseconds) of combined garbage collection and application execution, approximately 1 unit of time will be allocated to garbage collection, while the remaining 19 units will be dedicated to application execution. The default value is 99, which sets a goal of 1% of the time for garbage collection.
-XX:GCTimePercentage: This argument allows developers to directly specify the desired percentage of time allocated to garbage collection in relation to application execution time (i.e., GC throughput). For instance, setting ‘-XX:GCTimePercentage=5’ represents a goal of allocating 5% of the total time to garbage collection, with the remaining 95% dedicated to application execution.
Note: Developers can choose to use either ‘-XX:GCTimeRatio‘ or ‘-XX:GCTimePercentage‘ as alternatives to each other. Both options provide flexibility in expressing the desired allocation of time for garbage collection. I would prefer using ‘-XX:GCTimePercentage’ over ‘-XX:GCTimeRatio’ because of its ease of understanding.
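As a concrete illustration of the goal-based flags (again, the numbers are arbitrary and only for illustration), a throughput-oriented service could be launched with:
java -XX:+UseParallelGC -XX:MaxGCPauseMillis=200 -XX:GCTimeRatio=19 -jar my-app.jar
This asks the collector to keep individual pauses under roughly 200 ms while spending no more than about 1/(1+19), i.e., 5%, of total time in garbage collection; the JVM treats both values as goals rather than guarantees, and the pause-time goal is given priority over the throughput goal.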
C. Miscellaneous Parameters
In addition to the previously discussed JVM arguments, there are a few other parameters that can be useful for tuning the Parallel GC algorithm. Let’s explore them.
-XX:ParallelGCThreads: This argument allows developers to specify the number of threads used for garbage collection in the Parallel GC algorithm. By setting an appropriate value based on the available CPU cores, developers can optimize throughput by leveraging the processing power of multi-core systems. It’s important to strike a balance by avoiding too few or too many threads, as both scenarios can lead to suboptimal performance.
-XX:-UseAdaptiveSizePolicy: By default, the ‘UseAdaptiveSizePolicy’ option is enabled, which allows for dynamic resizing of the young and old generations based on the application’s behavior and memory demands. However, this dynamic resizing can lead to frequent “Full GC – Ergonomics” garbage collections and increased GC pause times. To mitigate this, we can pass the -XX:-UseAdaptiveSizePolicy argument to disable the resizing and reduce GC pause times. Here is a real-world example and discussion around this JVM argument.
Tuning Parallel GC Behavior
Studying the performance characteristics of Parallel GC is best achieved by analyzing the GC log. The GC log contains detailed information about garbage collection events, memory usage, and other relevant metrics. There are several tools available that can assist in analyzing the GC log, such as GCeasy, IBM GC and Memory Visualizer, HP Jmeter, and Google Garbage Cat. By using these tools, you can visualize memory allocation patterns, identify potential bottlenecks, and assess the efficiency of garbage collection. This allows for informed decision-making when fine-tuning Parallel GC for optimal performance.
Conclusion
In conclusion, optimizing the Parallel GC algorithm through fine-tuning JVM arguments and studying its behavior enables developers to achieve efficient garbage collection and improved performance in Java applications. By adjusting parameters such as heap size, generation sizes, and goal-based tuning parameters, developers can optimize the garbage collection process. Continuous monitoring and adjustment based on specific requirements are essential for maintaining optimal performance. With optimized Parallel GC tuning, developers can maximize memory management, minimize GC pauses, and unlock the full potential of their Java applications.
In the video below, we'll cover the newly released Hibernate 6.3. With its annotation processing capabilities, it offers alternative approaches to frameworks like Spring Data JPA, and we'll explore those with a bit of live coding.
What’s in the Video?
We'll start off with a tiny story about how this webinar came about. I read the new "Introduction to Hibernate 6" written by Gavin King, which includes many opinions on how to do data persistence with Java in general. I thought it might make sense to not only have a theoretical discussion about this, but take an existing Spring Boot/Spring Data JPA project, and replace its bits and pieces one by one with the new approach offered by Hibernate 6.3.
Hence, we'll set the baseline for this video by quickly going over my Google Photos Clone project, which lets you create thumbnails for directories on your hard drive, for example, and display them on a (not yet nice-looking) HTML page. There are just a couple of data queries the application currently executes, mainly to select all photos, check if they already exist in a database, and save them to a database. So we'll go about replacing those one by one.
Let's start with the select query. We'll use the newly introduced @HQL annotation to replace the Spring Data JPA select query with it. Along the way, we'll learn that we don't need to encode the query into the method name itself and that we also have the flexibility to use helper objects like Order or Page to customize our queries.
Once we've restarted our application to confirm it is still working, we'll take care of the "exists" query. It needs a bit of custom-written HQL, but along the way, we'll learn about compile-time validation of our queries - the Hibernate annotation processor does that out of the box.
Once the exists query is working, we'll take care of the last query, saving new images to the database. That gives us room to discuss architectural questions, like "Do we need another abstraction on top of our annotated queries?" and "How do we manage and structure queries in bigger projects?"
In the last quarter of the livestream, we'll discuss other popular questions that arise with Hibernate on a day-to-day basis:
- Should you use sessions or stateless sessions?
- Should you use fetch profiles extensively?
- Is it OK to use plain SQL with Hibernate?
- Is it OK to use Hibernate-specific annotations as opposed to JPA ones?
- And many more
All in all, the livestream should be of huge value for anyone using Hibernate in their projects (which the majority of Java projects likely do). Enjoy!
Video
A new LTS version has always been the big news in the Java world, and JDK 21 is no exception. It was moved to Rampdown Phase One on June 16, meaning that the feature set has been frozen. An extensive set of 15 JEPs includes new, enhanced, and finalized functionality. Let’s take a dive into the upcoming LTS release — it might be just the version you will stick to for the years ahead!
Novelties
JEP 430: String Templates (Preview)
String templates are a preview language feature and API aimed at facilitating the expression of strings that contain values computed at run time. The existing Java mechanisms of concatenating literal text and expressions produce hard-to-read code or are associated with verbosity. Other languages utilize string interpolation, which allows for conciseness but brings potential security risks. Template expressions help to achieve clarity of interpolation without introducing security vulnerabilities. Take a look at the code snippet with a template expression on the second line:
Java
String name = "Joan";
String info = STR."My name is \{name}";
assert info.equals("My name is Joan"); // true
The expression consists of a template processor STR, a dot, and a template with an embedded expression \{name}. Embedded expressions can be strings, perform arithmetic operations, invoke methods, access fields, and even spread over multiple lines. Template processors can use only the values in the embedded expressions, and execute at run time only. In addition, it is impossible to use the templates without the template processor responsible for safe interpolation and validation of a result, which increases the safety of operations.
JEP 431: Sequenced Collections
Sequenced collections introduce collections with a defined encounter order and uniform APIs for accessing the first and last elements, and processing the elements in forward and reverse order. Three new interfaces — sequenced collections, sets, and maps — will be retrofitted into the existing collections type hierarchy. All three interfaces have new methods facilitating the development process:
- A sequenced collection has a reversed() method to view the collection in reversed order, process the elements in both directions, and perform all the usual iteration operations such as forEach(), stream(), etc.
- A sequenced set includes addFirst(E) and addLast(E) methods that can move the elements to the appropriate position if it is already present in the set.
- A sequenced map has the put*(K, V) methods whose functionality is similar to add*(E) methods of sequenced sets.
JEP 443: Unnamed Patterns and Variables (Preview)
The unnamed patterns and variables will improve the readability and maintainability of code by:
- Eliding the unnecessary type and name of a record component in pattern-matching
- Identifying variables that must be declared but will not be used.
Both are denoted by the underscore character _. Unnamed patterns enable the developers to omit the components in record patterns that are not used; for instance:
... instanceof Point(int x, _)
case Point(int x, _)
Unnamed variables substitute the names of variables, which are not used (for example, in try-with-resources or try-catch blocks), e.g.:
int _ = q.remove();
... } catch (NumberFormatException _) { ...
(int x, int _) -> x + x
JEP 445: Unnamed Classes and Instance Main Methods (Preview)
Students embarking on a Java development journey may find some enterprise-level features too difficult.
The new feature is aimed at giving them the opportunity to write single-class programs and gradually expand them as their knowledge grows. It will also be useful for experienced developers who want to write simple, concise applications without programming-in-the-large composition of program components (i.e., when enterprise-level features interact with each other through well-defined protocols, but hide internal implementation details). For instance, the basic HelloWorld program contains several features that are hard to comprehend, but unnecessary for novices:
Java
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}
It can be simplified to:
Java
class HelloWorld {
    void main() {
        System.out.println("Hello, World!");
    }
}
This program can be made more complex with time as students learn the necessary concepts. But at the same time, there's no need to introduce a simplified Java dialect for educational purposes.
JEP 451: Prepare To Disallow the Dynamic Loading of Agents
In JDK 21, the users will receive warnings when agents are loaded dynamically into a running JVM. JEP 451 lays the ground for a future release that disallows the dynamic loading of agents by default in line with the ongoing process of enhancing Java integrity. Agents are components that can alter the code when the application is running. They are commonly used by serviceability tools such as profilers and troubleshooting instruments, but the developer must grant approval to alter the application. However, some libraries that use agents can bypass this requirement and attach to the running JVM silently, thus increasing the security risks. The proposal is to make the user explicitly allow dynamic loading with the -XX:+EnableDynamicAgentLoading option on the command line. Luckily, most serviceability tools do not use dynamic agent loading, and therefore, will not be affected. As for the libraries, they must load the agent at startup with the -javaagent/-agentlib options: the maintainers are encouraged to update their documentation with an explanation of how to load agents at startup.
JEP 452: Key Encapsulation Mechanism API
The Key Encapsulation Mechanism (KEM) API introduces an encryption technique for securing symmetric keys with asymmetric (public key) cryptography. KEM uses public key properties to derive a related symmetric key without padding. Right now, Java doesn’t have a standard KEM API. However, it is an important modern technique for defending against cyberattacks and will likely be part of the next generation of standard public key cryptography algorithms.
Finalized Features
These features were introduced in previous releases and, after a series of improvements and follow-up changes, have taken a final form in this LTS version.
JEP 440: Record Patterns
Record patterns, which are used to deconstruct record values to improve pattern matching, were first introduced in JDK 19. JEP 440 finalizes the feature with several enhancements based on the feedback. The most significant change is the removal of support for record patterns appearing in the header of an enhanced for statement.
JEP 441: Pattern Matching for switch
Pattern matching for switch expressions and statements was proposed in JDK 17 and refined in the following releases. The aim of the functionality is to enhance the expressiveness, applicability, and safety of switch statements.
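To give a flavor of the kind of code this enables, here is a small sketch that works on JDK 21 (the sealed interface and record types are invented for illustration and are not taken from the JEP itself):
Java
sealed interface Shape permits Circle, Square {}
record Circle(double radius) implements Shape {}
record Square(double side) implements Shape {}

class Areas {
    // The compiler checks that the switch covers every permitted subtype of Shape,
    // so no default branch is needed.
    static double area(Shape shape) {
        return switch (shape) {
            case Circle c -> Math.PI * c.radius() * c.radius();
            case Square s -> s.side() * s.side();
        };
    }
}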
With pattern matching for switch, developers can test the expressions against specific patterns, thus making complex data queries more concise and reliable. The finalized feature includes the following improvements:
- The removal of parenthesized patterns
- Allowing for qualified enum constants as case constants in switch
JEP 444: Virtual Threads
Virtual threads enhance the concurrent programming in Java by providing a mechanism to create thousands of lightweight threads depending on the tasks at hand, which can be monitored, managed, and debugged like the usual platform threads. Virtual threads were included as a preview API in JDK 19. JEP 444 finalizes the feature and includes a few improvements based on the feedback:
- Virtual threads now always support thread-local variables belonging to the ThreadLocal API, which enables the developers to store data accessible for a specific thread only.
- Virtual threads created directly with the Thread.Builder API are now monitored during their lifetime by default. They can also be observed via the new thread dump, which will group plentiful virtual threads in a meaningful way.
Improved Features
JEP 439: Generational ZGC
ZGC is a scalable low-latency garbage collector that has consistently low pause times (measured in microseconds) regardless of the heap size. However, the current non-generational ZGC stores young and old objects together and has to collect all objects every time it operates. As most young objects die young, and old objects tend to stick around, collecting young objects requires fewer resources and yields more memory. Therefore, a Generational ZGC will maintain young and old objects separately and collect young objects more frequently, thus reducing the GC CPU overhead and heap memory overhead.
JEP 442: Foreign Function and Memory API (Third Preview)
Foreign function and memory API enables Java applications to safely interact with code and data outside of the Java runtime. The FFM API is aimed at replacing the Java Native Interface with a more reliable, pure Java development model. This is the third preview of the FFM API with the following amendments:
- Centralized management of the lifecycle of native segments through the Arena interface
- Enhanced layout paths with a new element to dereference address layouts
- A new linker option to optimize calls to short-lived functions that will not upcall to Java
- A new fallback native linker implementation, based on libffi, to facilitate porting
- Removed VaList class
JEP 446: Scoped Values (Preview)
Scoped values enable the developers to share immutable data within and across threads with the aim of more reliable and manageable data management in concurrent applications. Scoped values should be preferred to thread-local variables, because, unlike them, scoped values are immutable and are associated with smaller footprint and complexity. This feature was incubated in JDK 20 and is now a preview API.
JEP 448: Vector API (Sixth Incubator)
Vector API increases the performance of vector computations that compile reliably at run time to optimal vector instructions. The feature was first introduced in JDK 16. This is a sixth incubator with the following notable enhancements apart from bug fixes:
- Addition of the exclusive or (xor) operation to vector masks.
- Improved performance of vector shuffles, especially when used to rearrange the elements of a vector and when converting between vectors.
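For readers who have not used the Vector API yet, here is a minimal element-wise array addition to show what code written against the incubating API looks like. This is an illustrative sketch, not an example from the JEP; it must be compiled and run with --add-modules jdk.incubator.vector.
Java
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorAdd {
    static final VectorSpecies<Float> SPECIES = FloatVector.SPECIES_PREFERRED;

    // c[i] = a[i] + b[i], vectorized where possible, with a scalar loop for the tail.
    static void add(float[] a, float[] b, float[] c) {
        int i = 0;
        int upperBound = SPECIES.loopBound(a.length);
        for (; i < upperBound; i += SPECIES.length()) {
            FloatVector va = FloatVector.fromArray(SPECIES, a, i);
            FloatVector vb = FloatVector.fromArray(SPECIES, b, i);
            va.add(vb).intoArray(c, i);
        }
        for (; i < a.length; i++) {
            c[i] = a[i] + b[i];
        }
    }
}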
JEP 453: Structured Concurrency (Preview)
Structured concurrency enables the reliable coordination of virtual threads and improves observability and maintainability of concurrent code. The feature was included in JDK 19 as an incubating API and reincubated in subsequent releases. This is a preview API with one notable change: the StructuredTaskScope::fork(...) method returns a Subtask instead of a Future as before. The Future is more useful when multiple tasks are treated as individual tasks, and not as a single unit of work (which is the goal of structured concurrency). The Future involves calling a get() method, which blocks until a result is available. This is counterproductive in the case of StructuredTaskScope, which will now use a resultNow() method that never blocks.
Deprecated Functionality
JEP 449: Deprecate the Windows 32-Bit x86 Port for Removal
The last Windows OS that supports 32-bit operation (Windows 10) will reach end of life in October 2025. At the same time, the usage of virtual threads on 32-bit Windows doesn’t bring the expected benefits. Therefore, the Windows 32-bit x86 port becomes redundant and will be removed in a future release.
To Upgrade or Stay Put?
Early-access builds are already available. If you are going to upgrade to the new LTS version, you can test the new functionality and start planning the migration strategy now.
Having spent a few years in the technology industry, mostly developing software, I’ve been accustomed to the reality that most of my time is spent working with different teams and reviewing code written by other software developers. Over the years, I’ve had diverse experiences working with code written by different developers. These experiences have fundamentally reinforced my appreciation for clean code.
“Indeed, the ratio of time spent reading versus writing is well over 10 to 1. We are constantly reading old code as part of the effort to write new code. …[Therefore,] making it easy to read makes it easier to write.” - from Clean Code: A Handbook of Agile Software Craftsmanship (Uncle Bob)
Whether you have been a software developer for a while or just started, I’m confident that you can relate to the experience and effort required when inheriting a new codebase. Inheriting a codebase often involves familiarizing yourself with the code, its structure, functionality, programming language, libraries, framework, and any other technology used. Navigating unfamiliar code or code you wrote a while ago can be daunting. Whether you want to fix bugs, enhance functionality, improve performance, or join an ongoing project, the time and effort required depends on the state of the code. Code written with clean Java code principles can save you countless hours and frustrations. On the other hand, when working with messy code, you’re likely to spend most of your time deciphering tangled logic, uncommented code, and poorly named variables.
What Clean Java Code Is and Its Benefits
Clean code is the practice of writing straightforward, readable, testable, and easy-to-understand code. Other features of clean code include adherence to good conventions and best practices that promote expressiveness, conciseness, organization, and maintainability. Clean code should also be free from bugs, unjustifiable complexity, code smells, and redundant code.
Robert C. Martin, popularly known as Uncle Bob, has written extensively on the subject of clean code. Programmers of all levels of experience who desire to write clean code can benefit from his book on clean code, articles, and a series of talks. In his book Clean Code: A Handbook of Agile Software Craftsmanship, he says, “Even bad code can function. But if code isn’t clean, it can bring a development organization to its knees. Every year, countless hours and significant resources are lost because of poorly written code. But it doesn’t have to be that way."
The importance of writing clean Java code cannot be overstated. Some benefits that can be immediately realized from clean code include:
- Maintainability – Clean code is easy to modify and update.
- Debugging – Clean code is less prone to errors. It is also easier to isolate and fix issues within the code.
- Scalability – Clean code is modular, reusable, and accommodative of future changes.
- Collaboration – Clean code allows teammates to understand each other’s code.
- Documentation – Clean code is self-explanatory and reduces the need for excessive comments.
- Efficiency – Clean code removes code duplication and unnecessary complexity, which improves performance.
- Readability – Clean code is easy to read, reduces confusion, and improves maintainability.
How To Write Clean Java Code
Java is still a popular programming language.
Since it's an established language, legacy Java codebases are still critical in running important business software and infrastructure developed over a decade ago and are still in use by thousands of users. Due to the longevity of Java codebases, it’s important to write clean Java code that can be easily maintained by the developers that come after you. Here are the best practices to help you write clean Java code.
1. Use a Standard Project Structure
Project structure outlines how to arrange various components in your project, such as Java source files, test files, documentation files, build files, and configuration files. A clear project structure makes it easy to understand, navigate and modify a project codebase. On the other hand, a poor project structure can cause confusion, especially when working with projects with many files. Although Java doesn’t enforce a specific project structure, build tools such as Maven suggest a project structure you can follow.
src
├── main
│   ├── java          Application/Library sources
│   ├── resources     Application/Library resources
│   ├── filters       Resource filter files
│   └── webapp        Web application sources
│
└── test
    ├── java          Test sources
    ├── resources     Test resources
    └── filters       Test resource filter files
2. Stick to Java Naming Conventions
Java naming conventions are a set of rules that dictate how Java developers should name identifiers. The Java Specification Document includes rules for naming variables, packages, classes, and methods. These naming conventions allow developers to keep things in order when writing code. Good naming improves code readability, consistency, and maintainability. Some of the Java naming conventions include:
- Class and interface names should be nouns and have the first letter capitalized.
- Method names should be verbs.
- Variable names should be short and meaningful.
- Package names should be lowercase.
- Constant names should be capitalized.
Java
package com.example.project;

public class Person {
    private String firstName;
    private String lastName;

    public Person(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public String getFullName() {
        return firstName + " " + lastName;
    }

    public static final int MAX_AGE = 100;

    public boolean hasValidName() {
        return firstName != null && lastName != null;
    }
}
You can find more information about Java naming conventions here.
3. Readability Over Reusability
Reusability is one of the most advocated concepts in software development. It lessens development time and reduces the effort required to maintain software when developers understand the reusable components very well. While the concept of reusability sounds great and has many benefits, it also has many potential pitfalls, especially when working on an unfamiliar codebase. When working with large applications, code reusability can reduce readability, usability, and maintainability if a proper design is not in place. Code reusability affects the readability of code when it makes it difficult to understand the logical flow of the code without tracing execution. Poor code readability makes debugging difficult and increases the effort required to maintain the codebase. This can be challenging, especially when trying to onboard new developers into your project. Therefore, as you develop software, ensure that you don’t prioritize reusability over readability.
4. Lint Your Code With Static and Dynamic Analysis Tools
Static and dynamic code analysis tools complement each other.
Both dynamic and static analysis tools can help in writing clean Java code. Static analysis tools allow you to inspect your application source code and ensure adherence to coding standards, spot vulnerabilities, and detect bugs during development. On the other hand, dynamic analysis allows you to test your application at runtime. It allows you to measure your application’s performance, behavior, and functionality and identify runtime errors, memory leaks, and resource consumption, reducing the chances of running into issues in production.
5. Use Meaningful Comments and Documentation
Obsessive commenting is something I struggled with, especially early on in my software development career. This is something that most developers struggle with. Improper use of comments is a symptom of bad programming. Proper use of comments and documentation can serve an important role in writing clean Java code. While code should be readable and self-explanatory, sometimes it’s impossible to avoid complex logic. However, using strategic comments within the code, you can explain the logic behind some parts of the code that are not straightforward. In Java, developers can leverage two types of comments: documentation comments and implementation comments. Documentation comments target codebase users, while implementation comments are meant for developers working on the codebase.
Java
/**
 * This class represents a RESTful controller for managing user resources.
 * It provides endpoints for creating, retrieving, updating, and deleting users.
 */
@RestController
@RequestMapping("/api/users")
public class UserController {

    /**
     * Retrieves a user by ID.
     *
     * @param id The ID of the user to retrieve.
     * @return The user with the specified ID.
     */
    @GetMapping("/{id}")
    public ResponseEntity<User> getUserById(@PathVariable("id") Long id) {
        // Implementation omitted for brevity
    }

    /**
     * Creates a new user.
     *
     * @param user The user object to create.
     * @return The created user.
     */
    @PostMapping
    public ResponseEntity<User> createUser(@RequestBody User user) {
        // Implementation goes here
    }

    // Rest of the code
6. Use Consistent and Proper Code Formatting: Whitespace and Indentation
Code formatting may not seem like a big issue when you’re working on a personal project whose code may never be maintained by another developer. However, consistent code formatting and style are critical when working with a team of other developers. Maintaining a consistent formatting and coding style in your team and codebase is important if you wish to write clean Java code as a team. Whitespace and indentation are essential in maintaining a consistent coding style. Good usage of whitespace between operators, commas, and around flow statements enhances code readability. For instance, you can organize code into logical groups using whitespace, enhancing readability and visual clarity. Indentation refers to using tabs or spaces within loops, methods, and control structures. Although there is no enforced convention for code indentation in Java, you can choose to adopt a popular convention and use it consistently.
Java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    // Rest of the code goes here
    // ...
}
7. Restrict the Number of Method Parameters
Parameters are necessary when working with methods.
However, care should be taken to avoid using too many parameters in one method. Too many parameters may indicate that your method is addressing more than one concern and violates the single responsibility principle. Too many method parameters make your code less readable since keeping track of their types and meanings is difficult. To write clean Java code, you should limit the number of method parameters and use objects or data structures instead of individual parameters, or group related parameters into objects. Here is an example of a Java method with too many method parameters.
Java
public void processOrder(String customerName, String shippingAddress, String billingAddress,
                         String productName, int quantity, double price, boolean isExpressShipping) {
    // Method implementation
}
Here is how we can refactor the code above by grouping related parameters into an object to improve readability.
Java
public class Order {
    private String customerName;
    private String shippingAddress;
    private String billingAddress;
    private String productName;
    private int quantity;
    private double price;
    private boolean isExpressShipping;

    // Constructors, getters, and setters
    // Other methods related to an order
}

public void processOrder(Order order) {
    // Method implementation
}
8. Leverage Unit Tests and Test Driven Development (TDD)
Unit testing and TDD are very common practices in software development. Unit testing involves writing tests for individual functions, methods, and classes to ensure that they work in isolation. TDD is also a popular development practice that involves writing tests before code. Both unit tests and a TDD approach can propel your efforts toward writing clean Java code. Unit tests allow you to verify correctness, detect bugs early, and write more modular code. TDD provides immediate feedback and increases your confidence in writing reliable and maintainable code.
9. SOLID Principles
The SOLID principles by Robert C. Martin (Uncle Bob) are very popular; every developer should know them. These principles can also help you write clean, maintainable, and scalable code. Let’s discuss how each of the principles in the SOLID acronym can help you write clean Java code.
- Single responsibility principle (SRP): The single responsibility principle states that a class should only have one responsibility. Following the SRP guarantees that we write concise, readable, and maintainable code.
- Open-closed principle (OCP): The OCP states that classes should be open for extension but closed for modification except when fixing bugs. This principle allows you to add new features without introducing bugs or breaking existing functionality. In Java, you can use interfaces or abstract classes to extend the functionality of existing classes.
- Liskov substitution principle (LSP): The LSP states that we should be able to use superclasses with their respective subclasses interchangeably without breaking the functionality of our program. Using this principle allows you to use inheritance correctly and write decoupled clean Java code.
- Interface segregation principle (ISP): This principle states that we should opt for smaller, focused interfaces instead of large monolithic ones. Using this principle allows us to write modular and clean Java code where the implementing classes focus only on methods that concern them.
- Dependency inversion principle (DIP): This principle emphasizes loose coupling, guaranteeing that components such as classes only depend on abstractions and not on their concrete implementations, as the sketch below illustrates.
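Here is a minimal sketch of DIP in Java; the interface and class names are invented for illustration. The high-level class depends only on an abstraction, and a concrete implementation is injected from outside.
Java
// Abstraction the business code depends on (hypothetical example).
interface PaymentGateway {
    void charge(String customerId, double amount);
}

// One possible concrete implementation; others can be swapped in without touching CheckoutService.
class CardPaymentGateway implements PaymentGateway {
    @Override
    public void charge(String customerId, double amount) {
        // Call the external payment provider here.
    }
}

// High-level class depends only on the abstraction, injected via the constructor.
class CheckoutService {
    private final PaymentGateway gateway;

    CheckoutService(PaymentGateway gateway) {
        this.gateway = gateway;
    }

    void checkout(String customerId, double total) {
        gateway.charge(customerId, total);
    }
}
Wiring the concrete implementation then happens at the edge of the application (or via a dependency injection container), e.g., new CheckoutService(new CardPaymentGateway()).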
DIP helps us enforce clean Java code using Inversion of Control (IoC) and dependency injection.
10. KISS and DRY Principles
KISS and DRY are very fundamental concepts in software development and can help you write clean Java code. The DRY principle states that as a developer, you should ensure that your code is not duplicated multiple times across the system. Eliminating code duplication improves your code maintainability and makes finding and fixing bugs easier. The KISS principle emphasizes that we should strive to keep the design and development of the software we build simple and straightforward. By following this principle, you can avoid unnecessary complexity in your code and instead opt to write simple and understandable code. The KISS principle promotes the maintainability of your code and makes it more readable. Maintainable and readable code improves collaboration and eases the onboarding of developers into the project.
11. The Source File Structure
A typical source file in Java contains different elements that are key in running any Java program. To maintain the readability of your code, you should enforce a consistent source file structure. Although there is no one-size-fits-all approach to structuring your source file, there are popular style guides that you can follow. Normally, a typical ordering of a source file in Java begins with the package statement, followed by static and non-static import statements, followed by one primary top-level class.
Java
public class MyClass {

    // Class variables
    private static int count;
    private String name;

    // Instance variables
    private int age;
    private List<String> hobbies;

    // Constructors
    public MyClass() {
        // Constructor implementation
    }

    public MyClass(String name, int age) {
        // Constructor implementation
    }

    // Methods
    public void setName(String name) {
        // Method implementation
    }

    public String getName() {
        // Method implementation
    }

    public void addHobby(String hobby) {
        // Method implementation
    }

    // Other methods
}

// Additional classes (if any)
class MyStaticClass {
    // Class implementation
}
12. Avoid Hard Coding Values
Hardcoding refers to embedding values directly into your program’s source code instead of using variables. Altering the source code of your program is the only way to change values hardcoded into your programs. Hardcoded values limit reusability and testability and can also lead to duplication and undesired behavior from your program. To improve code reusability, testability, and maintainability, which are key features of clean Java code, it is important that you avoid hard coding values in your program source code. Instead, replace hard-coded values with abstractions such as constant variables or enums. Here’s an example of a Java program with hardcoded values.
Java
@RestController
public class HelloWorldController {

    @GetMapping("/hello")
    public String sayHello() {
        return "Hello, World!";
    }

    @GetMapping("/user")
    public String getUser() {
        // hardcoded values
        String username = "John";
        int age = 30;
        return "Username: " + username + ", Age: " + age;
    }

    // Other controller methods
}
Conclusion
Writing clean Java code is key to developing high-quality software. In this article, we have shared some of the best and most common practices that can help you write clean Java code. However, it is important to note that this isn’t a definitive list of what it takes to write clean Java code. Other key factors that contribute to writing clean Java code include the culture, tools available, and goals of the team you’re working with.
You can write readable, testable, extensible, and modular code by embracing these principles.
FAQ
What Is Clean Code?
Clean code refers to readable and maintainable code that is written based on best practices and conventions. It is code that is easy to understand, modify, and extend by both the original author and other developers who inherit the code.
Why Is It Important To Write Clean Java Code?
Clean code improves readability, enhances collaboration, reduces the chances of introducing bugs, and makes maintenance easier.
Are There Tools That Can Help in Writing Clean Java Code?
There are many tools that aid the process of writing clean Java code. Some include static code analysis tools such as SonarQube, FindBugs, and Digma. IDEs such as IntelliJ and Eclipse can also be helpful.
Which Resources Can I Use To Learn More About Writing Clean Java Code?
Plenty of blogs and online tutorials exist, such as Baeldung. I would also recommend books such as “Clean Code: A Handbook of Agile Software Craftsmanship” by Robert C. Martin and “Effective Java” by Joshua Bloch.
I tweet technical content that I consider interesting, but the funny tweets are the ones that get the most engagement. I attended the JavaLand conference in March, stumbled upon the Gradle booth, and found this gem: Of course, at some point, a fanboy hijacked the thread and claimed the so-called superiority of Gradle. In this post, I'd like to shed some light on my stance, so I can direct people to it instead of debunking the same "reasoning" repeatedly. To manage this, I need to get back in time. Software development is a fast-changing field, and much of our understanding is based on personal experience. So here's mine.
My First Build Tool: Ant
I started developing in Java in 2002. At the time, there were no build tools: we compiled and built through the IDE. For the record, I first used Visual Age for Java; then, I moved to Borland JBuilder. Building with an IDE has a huge issue: each developer has dedicated settings, so artifact generation depends on the developer-machine combination. Non-repeatable builds are an age-old problem. My first experience with repeatable builds is Apache Ant:
Apache Ant is a Java library and command-line tool whose mission is to drive processes described in build files as targets and extension points dependent upon each other. The main known usage of Ant is the build of Java applications. Ant supplies a number of built-in tasks allowing to compile, assemble, test and run Java applications. Ant can also be used effectively to build non Java applications, for instance C or C++ applications. More generally, Ant can be used to pilot any type of process which can be described in terms of targets and tasks. - Apache Ant website
Ant is based on three main abstractions:
- A task is an atomic unit of work, e.g., javac to compile Java files, war to assemble a Web Archive, etc. Ant provides lots of tasks out-of-the-box but allows adding custom ones.
- A target is a list of tasks. You can define dependencies between targets, such as package depending on compile. In this regard, you can see Ant as a workflow execution engine.
- A project, defined in a build file (build.xml by default), groups the targets that make up a build.
I soon became "fluent" in Ant. As a consultant, I went from company to company, project to project. Initially, I mostly set up Ant, but Ant became more widespread as time passed, and I encountered existing Ant setups. I was consistent in my projects, but other projects were very different from each other. Every time, when arriving at a new project, you had to carefully read the Ant setup to understand the custom build. Moreover, each project's structure was different. Some put their sources in src, some in sources, some in a nested structure, etc. I remember once a generic build file that tried accommodating the whole of an organization's project needs. It defined over 80 targets in over 2,000 lines of XML. It took me a non-trivial amount of time to understand how to use it with help and even more time to be able to tweak it without breaking projects.
My Second Build Tool: Maven
The above project got me thinking a lot. I wanted to improve the situation as the maintainers had already pushed Ant's limits. At the time, I was working with my friend Freddy Mallet (of Sonar fame). We talked, and he pointed me to Maven. I had once built a project with Maven but had no other prior experience. I studied the documentation for hours, and through trial-and-error attempts, under the tutelage of Freddy, migrated the whole Ant build file to a simple parent POM. In Ant, you'd need to define everything in each project.
For example, Ant requires configuring the location of the Java files to compile; Maven assumes they are under src/main/java, though it's possible to override it. Maven did revolutionize the Java build field with its Convention over Configuration approach. Nowadays, lots of software offers sensible configuration by default. For developers who go from project to project, as I did, it means there's much less cognitive load when joining a new project. I expect Java sources to be located under src/main/java. Maven's conventions go beyond the project's structure. They also define the project's lifecycle, from compilation to uploading the artifact to a remote registry, via unit and integration testing. Finally, junior developers tend to be oblivious to it, but Maven defined the term dependency management. It introduced the idea of artifact registries, from which one can download immutable dependencies and to which one can push artifacts. Before that time, each project had to store dependencies in its own dedicated repository. For the record, the abovementioned project stored a couple of dependencies that way. When I migrated from Ant to Maven, I had to find the exact version of each dependency. For most, it was straightforward, as it was in the filename or the JAR's manifest. One, however, had been updated with additional classes. So much for immutability. Maven had a profound influence on all later build tools: they defined themselves in reference to Maven. No Build Tool of Mine: Gradle Gradle's primary claim was to fix Maven's shortcomings, or at least what it perceived as such. While Maven is not exempt from reproach, Gradle assumed the most significant issue was its lack of flexibility. It's a surprising assumption because that was precisely what Maven improved over Ant. Maven projects have similar structures and use the same lifecycle: the principle of least surprise in effect. Conversely, Gradle allows customizing nearly every build aspect, including the lifecycle. Before confronting the flexibility argument, let me acknowledge two great original Gradle features that Maven implemented afterward: the Gradle daemon and the Gradle wrapper. Maven and Gradle are both Java applications that run on the JVM. Starting a JVM is expensive in terms of time and resources. The benefit is that a long-running JVM will optimize the JIT-compiled code over time. For short-lived tasks, the benefit is zero, and even negative if you take the JVM startup time into account. Gradle came up with the Gradle daemon. When you run Gradle, it looks for a running daemon; if there is none, it starts a new one. The command-line app delegates everything to the daemon. As its name implies, the daemon doesn't stop when the command line has finished, so it reaps the benefits of a long-running JVM. Chances are that your application will outlive your current build tool. What happens when you need to fix a bug five years from now, only to notice that the project's build tool isn't available online? The idea behind Gradle's wrapper is to keep the exact Gradle version along with the project, plus just enough code to download the full version over the Internet. As a side effect, developers don't need to install Gradle locally; they all use the same version, avoiding any discrepancy. Debunking Gradle's Flexibility Gradle brought the two great features above, which Maven integrated, proving that competition is good. Despite this, I still find no benefit in Gradle. I'll try to push the emotional side away.
In its early days, Gradle's marketing tried to put down Maven on every possible occasion, published crazy comparison charts, and was generally very aggressive in its communication. Let's say this phase lasted far longer than would be acceptable for a young company trying to find its place in the market. You could say that Gradle was very Oedipal in its approach: trying to kill its Maven "father." Finally, after all those years, it seems it has wised up and now "loves Maven." Remember that before Maven took over, every Ant project was ad hoc. Maven put an end to that. It brought law to the Wild West of custom projects. You can disagree with the law, but it's the law anyway, and everybody needs to stand by it. Maven standards are so entrenched that even though it's possible to override some parameters (e.g., the source location), nobody ever does it. I did experience two symptoms of Gradle's flexibility. I suspect far more exist. Custom Lifecycle Phases Maven manages integration testing in four phases, run in order: pre-integration-test: set up anything the tests need. integration-test: execute the tests. post-integration-test: clean up the resources, if any. verify: act upon the results of the tests. I never used the pre- and post- phases, as each test had dedicated setup and teardown logic. On the other hand, Gradle has no notion of integration tests whatsoever. Yet, Gradle fanboys will happily explain that you can add the phases you want. Indeed, Gradle allows lifecycle "customization": you can add as many extra phases into the regular lifecycle as you want. It's a mess, for each project will need to come up with both the number of phases required and their names: integration-test, integration-tests, integration-testing, it (for the lazy), etc. The options are endless. The Snowflake Syndrome Maven treats every project as a regular, standard project. And if you have specific needs, it's possible to write a plugin for that. Writing a Maven plugin is definitely not fun; hence, you only write one when it's necessary, not just because you have decided that the law doesn't apply to you. Gradle claims that lack of flexibility is an issue; hence, it wants to fix it. I stand by the opposite: lack of flexibility in my build tool is a feature, not a bug. Gradle makes it easy to hack the build. Hence, anybody who thinks their project is a special snowflake and deserves customization will happily do so. Reality check: it's rarely the case; when it is, it's for frameworks, not regular projects. Gradle proponents say that it still offers standards while allowing easy configuration. The heart of the matter is that it's not a standard if it can be changed at anybody's whim. Gradle is the de facto build tool for Android projects. In one of the companies I worked for, somebody wrote custom Groovy code in the Gradle build to run Sonar and send the metrics to the internal Sonar instance. There was no out-of-the-box Sonar plugin at the time, or, I assume, it didn't cut it. So far, so good. When another team created the company's second Android project, they copy-pasted the first project's structure and the build file. The intelligent thing to do at that point would have been to make an internal Gradle plugin out of the Sonar-specific code. But they didn't do it, because Gradle made it so easy to hack the build. And I, the Gradle hater, took it upon myself to create the plugin. The developer experience could have been better, to say the least.
Lacking quality documentation and working in a dynamically typed language (Groovy), I had to print out objects' structures to the console to make progress. Conclusion Competition is good, and Gradle brought new ideas that Maven later integrated: the wrapper and the daemon. However, Gradle is built on the premise that flexibility is good, while my experience has shown me the opposite. Ant was very flexible, and the cognitive load of going from one project to the next was high. We developers are human beings: we like to think our projects are different from others. Most of the time, they are not. Customization is only a way to satisfy our ego. Flexible build tools allow us to implement such customization, whether warranted or not. Irrelevant customizations bring no benefit and are easy to develop but expensive to maintain. If managing software assets is part of my responsibilities, I'll always choose stability over flexibility in my build tool.
Cucumber is the leading Behavior-Driven Development (BDD) framework. It is language-agnostic and integrates with other frameworks. You write the specification/feature, then write the glue code, then write the test code. With Smart BDD, you write the code first using best practices, and this generates the following: Interactive feature files that serve as documentation Diagrams to better document the product The barrier to entry is super low. You start with one annotation or add a file to resources/META-INF! That's it. You're generating specification/documentation. Please note I will use specifications, features, and documentation interchangeably throughout. If you haven't seen Smart BDD before, here's an example: The difference in approach leads Smart BDD to having less code and higher-quality code; therefore, less complexity; therefore, a lower cost of maintaining and adding tests; therefore, increased productivity. Oh, and you get sequence diagrams (see the picture above), plus many new features are in the pipeline. In a nutshell, both have the same goal: specifications that can be read by anyone and tests that are exercised. Implementing BDD with Cucumber will give you benefits. However, there is a technical cost to adding and maintaining feature files, which means extra work has to be done. There are three main layers: feature file, glue code, and test code. You write the feature file, then the glue code, then the test code. This approach, with extra layers and workarounds for limitations and quirks, leads Cucumber (we'll explore this in more detail with code below) to have more code and lower quality, because you have to work around limitations and quirks; therefore, more complexity; therefore, an increased cost of maintaining and adding tests; therefore, decreased productivity; therefore, decreased coverage. The quality of code can be measured by its ability to change! Hence, best practices and less code fulfill this brief. It's time to try and back these claims up. Let's check out the latest examples from Cucumber. Below, I created a repo based on one small example, calculator-java-junit5, and copied and pasted it into a new project. First, Let’s Implement the Cucumber Solution Feature file: Gherkin Feature: Shopping Scenario: Give correct change Given the following groceries: | name | price | | milk | 9 | | bread | 7 | | soap | 5 | When I pay 25 Then my change should be 4 Java source code: Java public class ShoppingSteps { private final RpnCalculator calc = new RpnCalculator(); @Given("the following groceries:") public void the_following_groceries(List<Grocery> groceries) { for (Grocery grocery : groceries) { calc.push(grocery.price.value); calc.push("+"); } } @When("I pay {}") public void i_pay(int amount) { calc.push(amount); calc.push("-"); } @Then("my change should be {}") public void my_change_should_be_(int change) { assertEquals(-calc.value().intValue(), change); } // omitted Grocery and Price class } Mapping for test input: Java public class ParameterTypes { private final ObjectMapper objectMapper = new ObjectMapper(); @DefaultParameterTransformer @DefaultDataTableEntryTransformer @DefaultDataTableCellTransformer public Object transformer(Object fromValue, Type toValueType) { return objectMapper.convertValue(fromValue, objectMapper.constructType(toValueType)); } } Test runner: Java /** * Workaround. Surefire does not use JUnit's Test Engine discovery * functionality. Alternatively execute the * org.junit.platform.console.ConsoleLauncher with the maven-antrun-plugin.
*/ @Suite @IncludeEngines("cucumber") @SelectClasspathResource("io/cucumber/examples/calculator") @ConfigurationParameter(key = GLUE_PROPERTY_NAME, value = "io.cucumber.examples.calculator") public class RunCucumberTest { } build.gradle.kts showing the Cucumber config: Kotlin dependencies { testImplementation("io.cucumber:cucumber-java") testImplementation("io.cucumber:cucumber-junit-platform-engine") } tasks.test { // Workaround. Gradle does not include enough information to disambiguate // between different examples and scenarios. systemProperty("cucumber.junit-platform.naming-strategy", "long") } Secondly, We Will Implement the Smart BDD Solution Java source code: Java @ExtendWith(SmartReport.class) public class ShoppingTest { private final RpnCalculator calculator = new RpnCalculator(); @Test void giveCorrectChange() { givenTheFollowingGroceries( item("milk", 9), item("bread", 7), item("soap", 5)); whenIPay(25); myChangeShouldBe(4); } public void whenIPay(int amount) { calculator.push(amount); calculator.push("-"); } public void myChangeShouldBe(int change) { assertThat(-calculator.value().intValue()).isEqualTo(change); } public void givenTheFollowingGroceries(Grocery... groceries) { for (Grocery grocery : groceries) { calculator.push(grocery.getPrice()); calculator.push("+"); } } // omitted Grocery class } build.gradle.kts showing the Smart BDD config: Kotlin dependencies { testImplementation("io.bit-smart.bdd:report:0.1-SNAPSHOT") } This generates: Turtle Scenario: Give correct change (PASSED) Given the following groceries "milk" 9 "bread" 7 "soap" 5 When I pay 25 My change should be 4 Notice how simple Smart BDD is, with far fewer moving parts: 1 test class vs 4 files. We removed the Cucumber feature file. The feature file has a few main drawbacks: It adds the complexity of mapping between itself and the source code. As an abstraction, it will leak into the bottom layers. It is very hard to keep feature files consistent. When developing, your IDE needs to support the feature file; frequently, you'll be left with no support. You don't have these drawbacks in Smart BDD. In fact, it promotes best practices and productivity. The usual counterargument for feature files is that they allow non-devs to create user stories and/or acceptance criteria. The reality is that when a product owner writes a user story and/or acceptance criteria, it will almost certainly be modified by the developer. Using Smart BDD, you can still write user stories and/or acceptance criteria in your backlog. They're a good starting point to help you write the code. In time you'll end up with more consistency. In the Next Section, I’ll Try To Demonstrate the Complexity of Cucumber Let's dive into something more advanced: A dollar is 2 of the currency below Visa payments take 1 currency processing fee Gherkin When I pay 25 "Dollars" Then my change should be 29 It is reasonable to think that we can add this method: Java @When("I pay {int} {string}") public void i_pay(int amount, String currency) { calc.push(amount * exchangeRate(currency)); calc.push("-"); } However, this is the output: Plain Text Step failed io.cucumber.core.runner.AmbiguousStepDefinitionsException: "I pay 25 "Dollars"" matches more than one step definition: "I pay {int} {string}" in io.cucumber.examples.calculator.ShoppingSteps.i_pay(int,java.lang.String) Here is where the tail starts to wag the dog. You embark on investing time and more code to work around the framework.
We should always strive for simplicity: additional code, and in a broader sense additional features, will always make code harder to maintain. We have three options: 1. Mutate the i_pay method to handle a currency. If we had tens or hundreds of occurrences of When I pay .., this would be risky and time-consuming. If we add a "Visa" payment method, we start adding complexity to an existing method. 2. Create a new method that doesn't start with I pay. It could be With currency I pay 25 "Dollars". Not ideal, as this isn't really what I wanted, and it loses discoverability. How would we add a "Visa" payment method? 3. Use multiple steps: I pay and with currency. This is the most maintainable solution. For discoverability, you'd need a consistent naming convention. With a large codebase, good luck with discoverability, as the steps are loosely coupled in the feature file but coupled in code. Option 1 is the one I have seen the most: God glue methods with very complicated regular expressions. With Cucumber Expressions, it's the cleanest code I have seen. According to the Cucumber documentation, conjunction steps are an anti-pattern. If I added a payment method, I pay 25 "Dollars" with "Visa", I don't know if this constitutes the conjunction step anti-pattern. If we get another requirement, "Visa" payments doubled on a "Friday," setting the day surely constitutes another step. Option 3 is really a thin layer on a builder. Below is one possible implementation of a builder. With this approach, adding the day of the week would be trivial (as we've chosen to use the builder pattern). Gherkin When I pay 25 And with currency "Dollars" Java public class ShoppingSteps { private final ShoppingService shoppingService = new ShoppingService(); private final PayBuilder payBuilder = new PayBuilder(); @Given("the following groceries:") public void the_following_groceries(List<Grocery> groceries) { for (Grocery grocery : groceries) { shoppingService.calculatorPush(grocery.getPrice().getValue()); shoppingService.calculatorPush("+"); } } @When("I pay {int}") public void i_pay(int amount) { payBuilder.withAmount(amount); } @When("with currency {string}") public void i_pay_with_currency(String currency) { payBuilder.withCurrency(currency); } @Then("my change should be {}") public void my_change_should_be_(int change) { pay(); assertThat(-shoppingService.calculatorValue().intValue()).isEqualTo(change); } private void pay() { final Pay pay = payBuilder.build(); shoppingService.calculatorPushWithCurrency(pay.getAmount(), pay.getCurrency()); shoppingService.calculatorPush("-"); } // builders and classes omitted } Let’s Implement This in Smart BDD: Java @ExtendWith(SmartReport.class) public class ShoppingTest { private final ShoppingService shoppingService = new ShoppingService(); private PayBuilder payBuilder = new PayBuilder(); @Test void giveCorrectChange() { givenTheFollowingGroceries( item("milk", 9), item("bread", 7), item("soap", 5)); whenIPay(25); myChangeShouldBe(4); } @Test void giveCorrectChangeWhenCurrencyIsDollars() { givenTheFollowingGroceries( item("milk", 9), item("bread", 7), item("soap", 5)); whenIPay(25).withCurrency("Dollars"); myChangeShouldBe(29); } public PayBuilder whenIPay(int amount) { return payBuilder.withAmount(amount); } public void myChangeShouldBe(int change) { pay(); assertEquals(-shoppingService.calculatorValue().intValue(), change); } public void givenTheFollowingGroceries(Grocery...
groceries) { for (Grocery grocery : groceries) { shoppingService.calculatorPush(grocery.getPrice()); shoppingService.calculatorPush("+"); } } private void pay() { final Pay pay = payBuilder.build(); shoppingService.calculatorPushWithCurrency(pay.getAmount(), pay.getCurrency()); shoppingService.calculatorPush("-"); } // builders and classes omitted } Let's count the number of lines for the solution of optionally paying with dollars: Cucumber: ShoppingSteps 123 ParameterTypes 21 RunCucumberTest 16 shopping.feature 20 Total: 180 lines Smart BDD: ShoppingTest 114 lines Total: 114 lines Hopefully, I have demonstrated the simplicity and productivity of Smart BDD. Example of Using Diagrams With Smart BDD This is the source code: Java @ExtendWith(SmartReport.class) @SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.RANDOM_PORT) public class BookControllerIT { // skipped setup... @Override public void doc() { featureNotes("Working progress for example of usage Smart BDD"); } @BeforeEach void setupUml() { sequenceDiagram() .addActor("User") .addParticipant("BookStore") .addParticipant("ISBNdb"); } @Order(0) @Test public void getBookBy13DigitIsbn_returnsTheCorrectBook() { whenGetBookByIsbnIsCalledWith(VALID_13_DIGIT_ISBN_FOR_BOOK_1); thenTheResponseIsEqualTo(BOOK_1); } private void whenGetBookByIsbnIsCalledWith(String isbn) { HttpHeaders headers = new HttpHeaders(); headers.setAccept(singletonList(MediaType.APPLICATION_JSON)); response = template.getForEntity("/book/" + isbn, String.class, headers); generateSequenceDiagram(isbn, response, headers); } private void generateSequenceDiagram(String isbn, ResponseEntity<String> response, HttpHeaders headers) { sequenceDiagram().add(aMessage().from("User").to("BookStore").text("/book/" + isbn)); List<ServeEvent> allServeEvents = getAllServeEvents(); allServeEvents.forEach(event -> { sequenceDiagram().add(aMessage().from("BookStore").to("ISBNdb").text(event.getRequest().getUrl())); sequenceDiagram().add(aMessage().from("ISBNdb").to("BookStore").text( event.getResponse().getBodyAsString() + " [" + event.getResponse().getStatus() + "]")); }); sequenceDiagram().add(aMessage().from("BookStore").to("User").text(response.getBody() + " [" + response.getStatusCode().value() + "]")); } // skipped helper classes... } In my opinion, the above does a very good job of documenting the Book Store. Smart BDD is being actively developed. I'll try to reduce the code required for diagrams, potentially using annotations, striking a balance between magic and declarative code. I use the method whenGetBookByIsbnIsCalledWith in the example above, as this is the most appropriate abstraction. If we had more requirements, then the code could look more like the example below. This is at the other end of the spectrum: work has gone into a test API to make testing super easy. With this approach, notice how consistent the generated documentation will be. It will make referring to the documentation much easier. Java public class GetBookTest extends BaseBookStoreTest { @Override public void doc() { featureNotes("Book Store example of usage Smart BDD"); } @Test public void getBookWithTwoAuthors() { given(theIsbnDbContains(aBook().withAuthors("author", "another-author"))); when(aUserRequestsABook()); then(theResponseContains(aBook().withAuthors("author", "another-author"))); } } Smart BDD allows me to choose the abstraction/solution that I feel is right without a framework getting in the way or adding to my workload. Anything you do or don't like, please comment below.
I encourage anybody to contact me if you want to know more; contact details are on GitHub. All source code can be found here. Please check it out.
I dedicate this article to László Fekete, my former boss and director at T-Mobile Hungary. He plays a significant role in this story, as he was the one who made the decision to cancel our contract. I must acknowledge that he made the right call; it was the correct course of action. However, I also remember some instances where he seemed less concerned about his health, disregarding his blood pressure and cholesterol levels despite my concerns, which we discussed a few times. Sadly, László passed away in 2017 at the young age of 57 due to a heart attack. It’s a stark reminder of the importance of taking care of our well-being and not neglecting warning signs. Now, as I find myself at the same age László was when he left us, it serves as a poignant reminder of the fragility of life and the need to prioritize our health and well-being. Introduction, Topic I am 57, I recently made some bad moves, and my back aches. I cannot sit for a long time, and I suddenly had ample time on my hands watching YouTube videos. During my exploration, I stumbled upon an impressive channel called ThePrimeTime. The creator of this channel is a remarkable young individual who possesses wisdom beyond his years. His videos exhibit a profound understanding of technology, which captivates me. I appreciate how he simply sits and discusses other videos or articles without feeling the need to over-explain things. It’s a "take it or leave it" approach. Those who comprehend his content gain valuable insights, and those who don’t: sorry. I very much enjoy it when I understand what he says and feel that probably not many do. It is a smug and somewhat arrogant feeling that one should be careful with. Also, I could hardly find any of his statements I would strongly disagree with. Sometimes I feel we could have some discussion, but generally, I can agree with or accept his points. Go and watch him! Recently I saw a video where he was commenting on an article about how someone almost accidentally corrupted PayPal in the early days. I will not talk about that: you can view it. It is a story with lots of technical details you can learn from. Being 57 does not only mean backache. It also means that I have seen and done a few things that I sometimes tell younger people about in the office. Why not write articles about them? So I decided to write a few articles about things that I have seen and done and that I think are worth sharing. And here we go. Disclaimer Most of the story is true and based on real events. Stopping Threads As I said, I have time to watch YouTube videos. I came across a short, one-minute video about how to stop a thread, which you should not do. It does say you should not, and gives a one-sentence reason why, but one minute is too little to explain the reasons. I know why you should not stop a thread, and not only because of what the documentation says. It cost me $20,000 in lost revenue in 2006, when the GDP per capita per year in my country was less than that. Background I started programming in 1980. My father was a professor at TU Budapest in Hungary and could access a TI-30 calculator. It was a programmable calculator. I remember I tried to write a program to crack an RSA-encoded text published to be cracked. Although the prime numbers were only 10 digits long, the calculator had only a 1024-step program memory and registers that were perhaps 16-bit integers, so I had to implement multi-precision arithmetic in my code. I never succeeded with this one, but the exposure to programming "infected" me. I was 14.
Later I programmed the Swedish ABC80, the Hungarian C64 clone, the Hungarian VT-1080z that resembled the Enterprise computer, the Sinclair ZX Spectrum, and many others. At that time, we programmed whatever we could get our hands on. My Unix exposure was minimal because the chair I was volunteering at had VAX/VMS machines. I finished TU Budapest as an electrical engineer and started to work as a sales rep for Digital Equipment Corporation in Hungary in 1991; that does not fit a programming career, does it? At the time, paid programming in Hungary mainly meant crafting bookkeeping applications in dBase, and it did not pay well. I was already married and had a child, with twins on the way, so I needed a respectable wage. You can live your hobby as a profession only if you can afford it. My priorities were different. I kept programming in C and Perl at that time as a hobby. I even wrote a small book in Hungarian about Perl, the first of its kind, and many people learned Perl programming at that time from my book. So much so that when Larry Wall visited the Budapest Perl conference in the late 90s, I was invited as a keynote speaker. The title of my talk was "Forbid Perl," and I talked about how Perl makes you so productive that using it eliminates the need for too many other programmers, and therefore it has to be forbidden for real applications. I said that in front of the father of Perl, sitting in the first row. I intended it as humor, but after a few decades, I see that I was right. At the time, I did not see the benefit of professional software development overhead versus hacking something together in Perl. It is not a trait of the language per se, but Perl was usually used to script things in a hacky way. I left DEC in 1999 and joined index.hu as CIO. It was a small startup, the first online-only news site, founded by a few university friends of mine. We wanted to make history and get rich. We achieved the first one. I also programmed the advertisement engine of the site, which is a story of its own. When the dot-com bubble burst, we had to lay off people and restructure the operation from investment-oriented growth to sustainable operation. There were a lot of things I learned there, but those were management lessons, not programming. The last step was to hand in my own notice, and I left the company in 2001. Then I started to work for T-Mobile, but they did not hire me as a programmer. I had no prior professional experience, and "hobby programming" did not count. I was hired as a project manager. Working in that position, I even ignited the development of a reformed project management methodology, but this was not my cup of tea. Five years later, my brother suggested we create our own company. He owned one-sixth of a small company that was doing software development, and the other five developers were moving in the SQL and stored-procedure direction. My brother thought that Java development was more interesting and more promising, so he wanted to start a new company. Why we decided to go in the Java direction and not Microsoft is again another topic that deserves an article of its own. It was more a political/philosophical decision than a technical one. I will write an article about that later, as well as about why we chose to trade in our old Linux and Windows machines for MacBooks with macOS. These are interesting topics because people approach such decisions based on belief, which can lead to heated discussions. Not now. We started the company in 2006.
One of our first clients was T-Mobile. We knew the people there, they knew me, and they needed an advertisement engine. I had written the one for index.hu, and it was still in production six years later, delivering millions of HTTP responses per day. Not only was it by far the highest-traffic web server in the country, but it was also the most reliable one. Much later, at a conference, a speaker said that back in the day, they checked their Internet connection by pinging the ad server of index.hu. Other sites could be down, but if the ad server was not reachable, then it was more likely they had a connection problem. He did not know I was sitting there in the audience. It was a great feeling to hear that. That ad server ran for nine years uninterrupted and without any code modification. Thread-Stopping Ad Server So we got the contract to develop an ad server for T-Mobile. The contract size was around $30,000. I did not know any Java at that time. I had limited OOP experience. I had mainly been programming in C and Perl, and not commercially. But I was a good programmer, or at least I thought so. We created the application in Java while I was learning the language. The users were authenticated, and we had a backing database with user data. The ad engine had to select the ads based on the mobile subscription, the number of used minutes, the phone type, and other parameters. We used PostgreSQL as the database in the dev environment and Hibernate on Tomcat. An advertisement had to be displayed within two seconds. If the selection process ran longer, a default ad was displayed. To achieve this, we executed the selection logic in a separate thread using an ExecutorService and waited on a Future object. We also used the database connection pool available from the Hibernate library. We manually tested the application, and it worked fine. We ran some load tests, and it worked fine. But I wanted to deliver perfect software, so I decided to play a bit with the case when the selection times out. In that case, the request-serving thread sends a response, but the selection thread is still running, putting useless load on an already overloaded system. We can call 'stop' on the thread. We tested this scenario, and it worked fine. The connection pool realized that the thread was stopped, closed the connection, and created a new one in these cases. I knew that production would use an Oracle database and that the connection pool would also be the one provided by Oracle. We did not have a test environment with these components; therefore, I decided not to use this performance-saving trick in the production system. But I was proud of my code, and I did not want to delete the line stopping the thread. Instead, I put it into an if statement that was never true, with a comment something like: Java // this 'if' is always false but I keep it here to show that I know how to stop a thread if( true ){ thread.stop(); } Now you already have a clue, especially if you skim over the line, reading it without realizing that the ACTUAL value is 'true'. The code went into production and worked fine. It worked fine for a while, except when the load went up. When the load went up, the application started to deliver the default ad. The weird thing was that after the load went down, the application still delivered the default ad. Operations had to restart the application to make it work again. We did not have a clue what was going on, and we responded by suggesting an increase in hardware capacity.
More hardware was clearly needed to handle the peak load, but there was evidently another problem as well. We tried to ignore it. Being a small company, we were already occupied with the next project. Putting new hardware into service in a large corporation does not happen from one day to the next. The service needed to be restarted a few times every day. The back-and-forth between us and the project manager went on until he escalated the issue, and we could not ignore it anymore. We had the log files, and we started to investigate. The log clearly showed that the application allocated a connection from the pool when a selection started. The log also showed that the connection was returned to the pool when the selection finished, even when the selection timed out. I strongly believed that this could not be the problem, especially because we did not stop the threads in the case of a timeout. At least, that was what I thought. We added more logging to the code and deployed it to production, which made the application a bit slower and the client even less happy, but it was needed. There were log items for each request and response; we knew when a request timed out, the connection ID, the thread ID, and so on. The log was huge, and I wrote Perl scripts to analyze it. It took a week and a lot of diagrams until I realized that whenever a thread timed out, that connection ID never appeared later in the log. The connection never returned to the pool, even though the library falsely reported that it had. But why? We did not stop the threads, and the log showed that these threads always stopped a few milliseconds after the selection timed out. This was the first clue. It seemed fishy. When a selection running a few SQL selects timed out, why was it always only a little bit late? The fact that we first tried to increase the timeout from two seconds to two and a half seconds shows how clueless we were. It made the timed-out threads finish in two and a half seconds plus a few milliseconds. Always the timeout plus a few milliseconds. "Didn’t you leave the code in that stops the thread?" asked my brother. "Sure I didn’t; see, it is in an if statement that is never true." "No. That is what the comment says," he replied. "But the code is there, and it stops the thread." I had looked at that code hundreds of times, blindly, during the previous two weeks. I read the comment and skipped the code. I read what I wanted to be there and not what was really there. This time I deleted the line and the comment, and we deployed the code. It worked fine, unlike our relationship with the client. They canceled our contract for the further development of the ad server. We had lost a $20,000 contract, and we were told that we would never get any contract from them again. I could not blame them. This "never" lasted three years, when, partnering with another company, we delivered a system they used to electronically sign four million invoices every month. Do you remember what my very first program was on that TI-30 calculator? That delivery I am not ashamed of. I learned a lot during those three years. Conclusion There are many things to learn from this story. Don’t Stop Threads Even though you technically can stop threads, you MUST not. And if you MUST not, then why experiment with it? You can tell the thread that it may stop if it feels like it. You can use some shared state for the thread to check periodically and stop when it can do so safely. Calling interrupt() on a thread is a good way to tell the thread that it can stop.
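To make this concrete, here is a minimal sketch (not the original ad-server code; the class and method names are invented for illustration) of how a selection timeout can be handled with an ExecutorService, a Future, and cooperative interruption instead of Thread.stop():
Java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class AdSelector {

    // Thread pool that runs the (potentially slow) ad selection logic.
    private final ExecutorService executor = Executors.newFixedThreadPool(8);

    public Ad selectAd(String userId) {
        Future<Ad> future = executor.submit(() -> doSelection(userId));
        try {
            // Wait at most two seconds for the selection to finish.
            return future.get(2, TimeUnit.SECONDS);
        } catch (TimeoutException e) {
            // Politely ask the worker to stop: this sets its interrupted flag
            // instead of killing the thread and leaking the pooled connection.
            future.cancel(true);
            return Ad.DEFAULT;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return Ad.DEFAULT;
        } catch (ExecutionException e) {
            return Ad.DEFAULT;
        }
    }

    private Ad doSelection(String userId) throws InterruptedException {
        // The worker checks the interrupted flag between steps and abandons
        // its work cleanly, so the connection goes back to the pool normally.
        for (int step = 0; step < 10; step++) {
            if (Thread.currentThread().isInterrupted()) {
                throw new InterruptedException("selection cancelled after timeout");
            }
            // ... run one SQL select / scoring step for this user here ...
        }
        return new Ad("selected-for-" + userId);
    }

    public record Ad(String name) {
        public static final Ad DEFAULT = new Ad("default");
    }
}
With this pattern, a timed-out selection ends by cooperating with the interrupt, so the pool gets its connection back instead of silently losing it.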
The documentation lists a lot of things that may happen when you call stop() on a thread, but reading about it is one thing, and having it happen to you is another. Everybody has to burn their hands a few times. The cleverer you are, the less you need to burn your hands. There are some Mucius Scaevolas out there who never learn from their mistakes. Do not be one. Logs Are Only Logs Logs contain the messages that the application writes about what it does, not what really happens. Programmers create bugs, including misleading logs. Even when you use a high-reputation library, you can still face bugs. Comments Can Be Dangerous Comments can be dangerous. Comments are in English, and no matter how much of a nerd you are, your eyes will read the human text first. In this case, non-native English speakers may have a slight advantage. If the comment is outdated, misleading, or plain wrong, it may lead the maintainers' eyes away from the code. A good comment does not explain what the code does; the code precisely describes that. You should explain why it does what it does and how other parts of the code should use and interface with it. In this case, not having any comment before the if statement, or just: Java // we can switch experimental thread stopping on and off here if( true ){ thread.stop(); } would have been better. My wisdom today says to delete both the line and the comment. If you want to keep the line as a legacy, do it in a separate branch or tag it in the version control system. You Do Not Know When You Are Stupid At that point, writing my first commercial application, I was at the peak of my Dunning-Kruger curve. You do not know when you are there. If you feel you are an expert, that you know everything, and that you are the best: be very careful. You are probably at that dangerous peak. Don’t stay there: climb down on the right side and start climbing the long, peak-less slope to the right, always with a healthy level of self-doubt. The Customer Is Always Right When the customer says that you are wrong, you are wrong. They complained that the application did not come back from the overloaded state, and our first response was to ask for more hardware. Technically, we were right. If the system never gets into the overloaded state, then there is no problem with not getting back to normal from it. However, you see how arrogant this standpoint was. It was probably the number one reason we lost the contract. We learned from this mistake. We learned from many more mistakes after that, and this is a process I have not finished yet. Learning from mistakes may be the most perpetual thing in my life, and I think it is important for everyone. I have many similar stories, and if you liked this one, then leave a comment and give some feedback to let me know that I should write more.
I don’t know anyone who is still using the Oracle JDK. It has been my recommendation for quite a while to just switch to an OpenJDK distribution, as they are roughly drop-in replacements for Oracle’s official JDK. I’ve repeated that advice quite frequently, but I guess I glossed over a lot of details that might be insignificant for hackers but can become a pretty big deal in an enterprise setting. Following the review from Bazlur, I chose to also pick up Simon Ritter's “OpenJDK Migration for Dummies." This book has two things going against it: For Dummies: I’ve never read one of these before and never considered reading one. While it does use overly simplified language, I think the Dummies brand hurts this book. The subject matter is sophisticated and geared towards developers (and DevOps) who can follow the nuances. I think it might deter some developers from reading it, which is a shame. It's a corporate book: Simon is the Deputy CTO at Azul. This creates the justified concern that the book is a promotion for Azul products. It has some of that. But having read through it, the material seems objective and valuable. It does give one advantage: we're getting the book for free. Unique Analysis There are many Java books, but this is the first time I have read a book that explains these specific subjects. The first chapter discusses licensing, the TCK (Technology Compatibility Kit), and similar issues. I’m familiar with all of them since I worked for Sun Microsystems and Oracle. I had a team composing TCKs for the mobile platform at Sun Microsystems. However, even experienced engineers outside of Sun might be unfamiliar with these tests. The TCK is how we verify that a port of OpenJDK is still compatible with Java. The book illustrates why a reputable OpenJDK distribution can be trusted thanks to the TCK. This is knowledge that’s probably not available elsewhere if you aren’t deeply involved in the JVM. Simon nicely explained the scope of the current TCK (139k tests for Java 11), but I think he missed one important aspect: the TCK isn’t enforced. Oracle doesn’t know whether you ran the TCK “properly.” It can’t verify that. This is why OpenJDK vendors must have a good reputation and an understanding of the underlying QA process. This is just the beginning, but pretty much every chapter covered material that I haven’t seen in other books. As a side note, the whole TCK creation process is pretty insane. The engineers in my team would go over the Javadoc like religious scholars and fill up Excel sheets with every statement made or implied by the Javadoc, then devise tests to verify in isolation that every statement is indeed true. In that sense, the TCK doesn’t test quality. It tests compliance with a uniform, consistent standard. A JDK can fail after running for a week, and we might not be able to tell from running the TCK alone. Early releases of JDK 8 did exactly that at the time… Learning From a 'For Dummies' Book I mentioned at the top of this post that I treat OpenJDK migration casually, as a drop-in replacement. This book convinced me that this is not always the case. There are some nuances. I was casually aware of most of them, e.g., I worked a lot with Pisces back in the day, but I had never seen all of these nuances in a single place. This is an important list for anyone considering a migration of this type. One should comb over these and verify the risks before embarking on such a migration.
As a startup, you might not care about exact fonts or NTLM support, but in an enterprise environment, there are still projects that rely on those. In a later chapter comparing the various OpenJDK distributions, Simon includes a great chart illustrating the differences. Take into consideration that Simon works for Azul, and it is obvious in the chart. Still, the content of the chart is pretty accurate. I miss the Microsoft VM in the comparison, but I guess it’s a bit too new to register as a major vendor. Business-Related Aspects I often did consulting work for major organizations, such as banks, insurance companies, etc. In these organizations, commercial support is crucial. I used to scoff at that notion, but as I ran into some of the edge cases those organizations run into, I get it. We had a senior engineer from IBM debug AIX and WebSphere issues. Similarly, a bank I worked with was having issues with RTL support in newer versions of Swing. As the older JDKs were nearing the end of their life cycle, they were forced to migrate but had no way of addressing these issues. Oracle’s support for those issues was a dud in that case. Commercial support for the JVM isn’t something I ever needed or wanted to buy, but I understand the motivation. At the end of the book, Simon goes into more detail on the extra value that can be layered on top of an OpenJDK distribution. This was interesting to me, as I often don’t understand the “it’s free” business model. It helped me understand the motivation for offering (and maintaining) an OpenJDK release, and it’s also valuable when I work with larger organizations: I can advise better on the value vendors can deliver for Java (e.g., fast response to zero-days, etc.). Who Should Read This Book? It’s not a book for everyone. If you’re using the Oracle JDK, then you need to pick this up and review it. Make sure the reasons you picked the Oracle JDK are still valid. They probably aren’t. If your job includes picking the JDKs for provisioning or development, then you should make sure you’re familiar with the material in the book. If you’re just learning Java or using it in a hobbyist capacity, then there’s an appendix on Java’s history that might be interesting to you. But the book as a whole is probably more targeted at developers who handle production. In that regard, it’s useful for both server and desktop developers. BTW, if you or someone you know is interested in learning Java, please check out my new book on learning Java.
Event-Driven Architecture (EDA) is a design principle focused on the creation, detection, and reaction to events. Renowned for its resilience and low latency, EDA is a reliable choice for developing robust, high-performing microservices. Moreover, this approach can help improve productivity and make the process of cloud migration smoother. In this article, we will outline six key considerations and tactics for developing such services. Crafting Event-Based Microservices Within EDA, microservices interact with each other through events. An event is simply an immutable indication that something has happened. Microservices register their interest in a subset of events and perform their processing by reacting to these events when they occur. On completing the handling of an event, a microservice will usually post one or more events reflecting the result of this processing, which in turn trigger further downstream microservices. For simplicity, we treat all inputs as recorded, replayable events. These inputs include the wall clock, reference information, configuration details, commands, and queries. For instance, timestamps are derived from the most recent wall clock event, so they are replayable, and a command or query is modeled as an event signifying that such a command or query has been requested. The EDA environment manages events using an immutable, ever-growing journal or log. This methodology means that microservices become less reliant on each other's internal operation (loosely coupled), making systems more flexible in many ways, facilitating different deployment options, and improving scalability. Microservices developed within an event-driven framework are inherently simpler to design, test, and reason about. Each microservice is a function of its code and all the events it has ever processed. This simplifies the creation of behavior-driven tests, essentially boiling them down to a data-in, data-out scenario, which in turn simplifies the maintenance of the software. Implementing Application Logic Within an Event-Driven Context In an EDA application, events are defined to model those in your business domain. Application components react to these events in ways that model the activities of your business processes. Data associated with an event, encapsulated within the event's payload, can be implemented in the application as a Data Transfer Object (DTO). Representing events in a single, immutable event stream has the additional advantage of providing an audit trail of all the state changes that have occurred during the execution of the application, making it easier to analyze unexpected behavior, generate test environments that mirror production environments, and satisfy regulatory requirements. The event stream becomes the single source of truth throughout the application. Adopting a lightweight, comprehensive recording strategy eliminates the need for extensive logging, minimizing overhead and latency. To replicate the application's state, retrieve the event journal and replay it through the microservices to the desired point. This approach allows you to debug and verify issue resolutions in the application proactively rather than waiting for the issues to recur. Optimising Microservice Performance Using high-performance, low-latency messaging, microservices can communicate as fast as threads in a monolith while still maintaining the key benefits of microservices.
These include distinct contracts between components, independent testing and development, a comprehensive record of all interactions, and independence in deployment strategies. Despite a system being distributed across numerous data centers globally, the efficiency of these microservices means that a single machine can effectively handle the critical, most latency-sensitive processing tasks. We generally conduct latency benchmarks for single-threaded services at one hundred thousand events per second. A service requiring higher throughput can handle loads exceeding a million events per second. Moreover, each component will operate fastest when event processing is performed in a single thread, since this eliminates the significant overhead of lock contention, as there is no concurrent access to mutable state within the component. Event Replication, Deterministic Services, and Live Upgrades We use Chronicle Queue as an event store, with total ordering and replication of this journal from leader to followers. Followers will see exactly the same data in the same order, with the same identifier for each message. Chronicle Services is a Java-based microservices framework that provides features that can be used to ensure that services are deterministic. You can be sure that the follower services will be in the same state as the leader and ready to take over from it. We are seeing an increasing demand for support for live upgrades. Using this framework allows us to build services that can seamlessly transition between instances running different software versions and revert if necessary.
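As an illustration of the idea that a microservice's state is a function of its code and the events it has processed, here is a minimal, framework-free sketch (plain Java, not the Chronicle Services API; the event and handler names are invented) showing how state can be rebuilt by replaying an immutable event journal:
Java
import java.util.List;

public class ReplayExample {

    // An event is an immutable record of something that has happened.
    public record OrderPlaced(String orderId, long quantity) {
    }

    // The service's state is derived purely from the events it has processed.
    public static class OrderCounter {
        private long totalQuantity;

        public void onOrderPlaced(OrderPlaced event) {
            totalQuantity += event.quantity();
        }

        public long totalQuantity() {
            return totalQuantity;
        }
    }

    public static void main(String[] args) {
        // The journal is an append-only, totally ordered log of events.
        List<OrderPlaced> journal = List.of(
                new OrderPlaced("A-1", 5),
                new OrderPlaced("A-2", 3));

        // Replaying the same journal into a fresh instance always yields the
        // same state, which makes testing, debugging, and failover predictable.
        OrderCounter live = new OrderCounter();
        journal.forEach(live::onOrderPlaced);

        OrderCounter replica = new OrderCounter();
        journal.forEach(replica::onOrderPlaced);

        System.out.println(live.totalQuantity() == replica.totalQuantity()); // prints true
    }
}
In a deterministic setup of this kind, a follower that replays the same journal ends up in the same state as the leader, which is what makes failover and live upgrades practical.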
Nicolas Fränkel
Head of Developer Advocacy,
Api7
Shai Almog
OSS Hacker, Developer Advocate and Entrepreneur,
Codename One
Marco Behler
Ram Lakshmanan
Chief Architect, yCrash