Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

Latest Refcards and Trend Reports
  • Trend Report: Low Code and No Code
  • Trend Report: Modern Web Development
  • Refcard #024: Core Java

DZone's Featured Java Resources

How To Best Use Java Records as DTOs in Spring Boot 3

By Denis Magda CORE
With the Spring 6 and Spring Boot 3 releases, Java 17+ became the baseline framework version. So now is a great time to start using compact Java records as Data Transfer Objects (DTOs) for various database and API calls. Whether you prefer reading or watching, let's review a few approaches for using Java records as DTOs that apply to Spring Boot 3 with Hibernate 6 as the persistence provider.

Sample Database

Follow these instructions if you'd like to install the sample database and experiment yourself. Otherwise, feel free to skip this section:

1. Download the Chinook Database dataset (music store) for the PostgreSQL syntax.

2. Start an instance of YugabyteDB, a PostgreSQL-compliant distributed database, in Docker:

```shell
mkdir ~/yb_docker_data

docker network create custom-network

docker run -d --name yugabytedb_node1 --net custom-network \
  -p 7001:7000 -p 9000:9000 -p 5433:5433 \
  -v ~/yb_docker_data/node1:/home/yugabyte/yb_data --restart unless-stopped \
  yugabytedb/yugabyte:latest \
  bin/yugabyted start \
  --base_dir=/home/yugabyte/yb_data --daemon=false
```

3. Create the chinook database in YugabyteDB:

```shell
createdb -h 127.0.0.1 -p 5433 -U yugabyte -E UTF8 chinook
```

4. Load the sample dataset:

```shell
psql -h 127.0.0.1 -p 5433 -U yugabyte -f Chinook_PostgreSql_utf8.sql -d chinook
```

Next, create a sample Spring Boot 3 application:

1. Generate an application template using Spring Boot 3+ and Java 17+ with Spring Data JPA as a dependency.

2. Add the PostgreSQL driver to the pom.xml file:

```xml
<dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <version>42.5.4</version>
</dependency>
```

3. Provide YugabyteDB connectivity settings in the application.properties file:

```properties
spring.datasource.url=jdbc:postgresql://127.0.0.1:5433/chinook
spring.datasource.username=yugabyte
spring.datasource.password=yugabyte
```

All set! Now, you're ready to follow the rest of the guide.
Data Model

The Chinook Database comes with many relations, but two tables will be more than enough to show how to use Java records as DTOs. The first table is Track, and below is a definition of a corresponding JPA entity class:

```java
@Entity
public class Track {
    @Id
    private Integer trackId;

    @Column(nullable = false)
    private String name;

    @ManyToOne(fetch = FetchType.LAZY)
    @JoinColumn(name = "album_id")
    private Album album;

    @Column(nullable = false)
    private Integer mediaTypeId;

    private Integer genreId;

    private String composer;

    @Column(nullable = false)
    private Integer milliseconds;

    private Integer bytes;

    @Column(nullable = false)
    private BigDecimal unitPrice;

    // Getters and setters are omitted
}
```

The second table is Album and has the following entity class:

```java
@Entity
public class Album {
    @Id
    private Integer albumId;

    @Column(nullable = false)
    private String title;

    @Column(nullable = false)
    private Integer artistId;

    // Getters and setters are omitted
}
```

In addition to the entity classes, create a Java record named TrackRecord that stores short but descriptive song information:

```java
public record TrackRecord(String name, String album, String composer) {}
```

Naive Approach

Imagine you need to implement a REST endpoint that returns a short song description. The API needs to provide song and album names, as well as the author's name. The previously created TrackRecord class can fit the required information. So, let's create a record using the naive approach that gets the data via JPA entity classes:

1. Add the following JPA repository:

```java
public interface TrackRepository extends JpaRepository<Track, Integer> {
}
```

2. Add Spring Boot's service-level method that creates a TrackRecord instance from the Track entity class.
The latter is retrieved via the TrackRepository instance:

```java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    Track track = repository.findById(trackId).get();

    TrackRecord trackRecord = new TrackRecord(
        track.getName(),
        track.getAlbum().getTitle(),
        track.getComposer());

    return trackRecord;
}
```

The solution looks simple and compact, but it's very inefficient because Hibernate needs to instantiate two entities first: Track and Album (see the track.getAlbum().getTitle() call). To do this, it generates two SQL queries that request all the columns of the corresponding database tables:

```sql
Hibernate: select t1_0.track_id, t1_0.album_id, t1_0.bytes, t1_0.composer,
    t1_0.genre_id, t1_0.media_type_id, t1_0.milliseconds, t1_0.name, t1_0.unit_price
    from track t1_0 where t1_0.track_id=?
Hibernate: select a1_0.album_id, a1_0.artist_id, a1_0.title
    from album a1_0 where a1_0.album_id=?
```

Hibernate selects 12 columns across two tables, but TrackRecord needs only three columns! This is a waste of memory, computing, and networking resources, especially if you use distributed databases like YugabyteDB, which scatter data across multiple cluster nodes.

TupleTransformer

The naive approach can be easily remediated if you query only the columns the API requires and then transform the query result set into a respective Java record. The Spring Data module of Spring Boot 3 relies on Hibernate 6. That version of Hibernate split the ResultTransformer interface into two interfaces: TupleTransformer and ResultListTransformer. The TupleTransformer interface supports Java records, so the implementation of the public TrackRecord getTrackRecord(Integer trackId) method can be optimized this way:

```java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    org.hibernate.query.Query<TrackRecord> query = entityManager.createQuery(
        """
        SELECT t.name, a.title, t.composer FROM Track t
        JOIN Album a ON t.album.albumId=a.albumId
        WHERE t.trackId=:id
        """).
        setParameter("id", trackId).
        unwrap(org.hibernate.query.Query.class);

    TrackRecord trackRecord = query.setTupleTransformer((tuple, aliases) -> {
        return new TrackRecord(
            (String) tuple[0],
            (String) tuple[1],
            (String) tuple[2]);
    }).getSingleResult();

    return trackRecord;
}
```

entityManager.createQuery(...) creates a JPA query that requests the three columns needed for the TrackRecord class. query.setTupleTransformer(...) takes advantage of the fact that the TupleTransformer supports Java records, which means a TrackRecord instance can be created in the transformer's implementation.

This approach is more efficient than the previous one because you no longer need to create entity classes and can easily construct a Java record with the TupleTransformer. Plus, Hibernate generates a single SQL request that returns only the required columns:

```sql
Hibernate: select t1_0.name, a1_0.title, t1_0.composer
    from track t1_0 join album a1_0 on t1_0.album_id=a1_0.album_id
    where t1_0.track_id=?
```

However, there is one very visible downside to this approach: the implementation of the public TrackRecord getTrackRecord(Integer trackId) method became longer and wordier.

Java Record Within JPA Query

There are several ways to shorten the previous implementation. One is to instantiate a Java record instance within a JPA query.
First, expand the implementation of the TrackRepository interface with a custom query that creates a TrackRecord instance from the requested database columns:

```java
public interface TrackRepository extends JpaRepository<Track, Integer> {
    @Query("""
        SELECT new com.my.springboot.app.TrackRecord(t.name, a.title, t.composer)
        FROM Track t
        JOIN Album a ON t.album.albumId=a.albumId
        WHERE t.trackId=:id
        """)
    TrackRecord findTrackRecord(@Param("id") Integer trackId);
}
```

Next, update the implementation of the TrackRecord getTrackRecord(Integer trackId) method this way:

```java
@Transactional(readOnly = true)
public TrackRecord getTrackRecord(Integer trackId) {
    return repository.findTrackRecord(trackId);
}
```

So, the method implementation became a one-liner that gets a TrackRecord instance straight from the JPA repository: as simple as possible.

But that's not all. There is one more small issue. The JPA query that constructs a Java record requires you to provide a full package name for the TrackRecord class:

```sql
SELECT new com.my.springboot.app.TrackRecord(t.name, a.title, t.composer)...
```

Let's find a way to bypass this requirement. Ideally, the Java record needs to be instantiated without the package name:

```sql
SELECT new TrackRecord(t.name, a.title, t.composer)...
```

Hypersistence Utils

The Hypersistence Utils library comes with many goodies for Spring and Hibernate. One feature allows you to create a Java record instance within a JPA query without the package name. Let's enable the library and this Java records-related feature in the Spring Boot application:

1. Add the library's Maven artifact for Hibernate 6.

2. Create a custom IntegratorProvider that registers the TrackRecord class with Hibernate:

```java
public class ClassImportIntegratorProvider implements IntegratorProvider {
    @Override
    public List<Integrator> getIntegrators() {
        return List.of(new ClassImportIntegrator(List.of(TrackRecord.class)));
    }
}
```

3. Update the application.properties file by adding this custom IntegratorProvider:

```properties
spring.jpa.properties.hibernate.integrator_provider=com.my.springboot.app.ClassImportIntegratorProvider
```

After that, you can update the JPA query of the TrackRepository.findTrackRecord(...) method by removing the Java record's package name from the query string:

```java
@Query("""
    SELECT new TrackRecord(t.name, a.title, t.composer)
    FROM Track t
    JOIN Album a ON t.album.albumId=a.albumId
    WHERE t.trackId=:id
    """)
TrackRecord findTrackRecord(@Param("id") Integer trackId);
```

It's that simple!

Summary

The latest versions of Java, Spring, and Hibernate have a number of significant enhancements to simplify coding in Java and make it more fun. One such enhancement is built-in support for Java records, which can now be easily used as DTOs in Spring Boot applications. Enjoy!
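Stripped of Spring and Hibernate, the core benefit of records as DTOs can be shown in plain Java. The sketch below is illustrative (TrackRecordDemo and fromColumns are names invented here, not part of the article's code); it mimics what the TupleTransformer does with its tuple array and shows the value-based equality that makes record DTOs easy to test.

```java
// Minimal, dependency-free sketch of a record used as a DTO.
public class TrackRecordDemo {
    // Same shape as the article's DTO: song name, album title, composer.
    public record TrackRecord(String name, String album, String composer) {}

    // Stand-in for a mapping step: build the DTO from raw column values,
    // as a TupleTransformer or a JPQL constructor expression would.
    public static TrackRecord fromColumns(Object[] row) {
        return new TrackRecord((String) row[0], (String) row[1], (String) row[2]);
    }

    public static void main(String[] args) {
        TrackRecord t = fromColumns(
            new Object[] {"The Trooper", "Piece of Mind", "Steve Harris"});

        // Generated accessors use the component names directly.
        System.out.println(t.name() + " / " + t.album() + " / " + t.composer());

        // Records compare by component values, which is handy in tests.
        System.out.println(
            t.equals(new TrackRecord("The Trooper", "Piece of Mind", "Steve Harris")));
    }
}
```

Because equals(), hashCode(), and toString() are generated from the components, asserting that a mapped DTO matches an expected record is a one-liner.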
Kubernetes-Native Development With Quarkus and Eclipse JKube

By Eric Deandrea
This article explains what Eclipse JKube Remote Development is and how it helps developers build Kubernetes-native applications with Quarkus.

Introduction

As mentioned in my previous article, microservices don't exist in a vacuum. They typically communicate with other services, such as databases, message brokers, or other microservices. Because of this distributed nature, developers often struggle to develop (and test) individual microservices that are part of a larger system. The previous article examines some common inner-loop development cycle challenges and shows how Quarkus, combined with other technologies, can help solve some of them. Eclipse JKube Remote Development was not one of the technologies mentioned because it did not exist when the article was written. Now that it does exist, it certainly deserves to be mentioned.

What Is Eclipse JKube Remote Development?

Eclipse JKube provides tools that help bring Java applications to Kubernetes and OpenShift. It is a collection of plugins and libraries for building container images and generating and deploying Kubernetes or OpenShift manifests. Eclipse JKube Remote Development is a preview feature first released as part of Eclipse JKube 1.10. This new feature is centered around Kubernetes, allowing developers to run and debug Java applications from a local machine while connected to a Kubernetes cluster. It is logically similar to placing a local development machine inside a Kubernetes cluster: requests from the cluster can flow into a local development machine, while outgoing requests can flow back onto the cluster. Remember this diagram from the first article using the Quarkus Superheroes?

Figure 1: Local development environment logically inserted into a Kubernetes cluster.

We previously used Skupper as a proxy to connect a Kubernetes cluster to a local machine.
As part of the 1.10 release, Eclipse JKube removes the need to use Skupper or install any of its components on the Kubernetes cluster or your local machine. Eclipse JKube handles all the underlying communication to and from the Kubernetes cluster by mapping Kubernetes Service ports to and from the local machine.

Eclipse JKube Remote Development and Quarkus

The new Eclipse JKube Remote Development feature can make the Quarkus Superheroes example very interesting. If we wanted to reproduce the scenario shown in Figure 1, all we'd have to do is re-configure the rest-fights application locally a little bit and then run it in Quarkus dev mode. First, deploy the Quarkus Superheroes to Kubernetes. Then, add the Eclipse JKube configuration into the <plugins> section in the rest-fights/pom.xml file:

```xml
<plugin>
  <groupId>org.eclipse.jkube</groupId>
  <artifactId>openshift-maven-plugin</artifactId>
  <version>1.11.0</version>
  <configuration>
    <remoteDevelopment>
      <localServices>
        <localService>
          <serviceName>rest-fights</serviceName>
          <port>8082</port>
        </localService>
      </localServices>
      <remoteServices>
        <remoteService>
          <hostname>rest-heroes</hostname>
          <port>80</port>
          <localPort>8083</localPort>
        </remoteService>
        <remoteService>
          <hostname>rest-villains</hostname>
          <port>80</port>
          <localPort>8084</localPort>
        </remoteService>
        <remoteService>
          <hostname>apicurio</hostname>
          <port>8080</port>
          <localPort>8086</localPort>
        </remoteService>
        <remoteService>
          <hostname>fights-kafka</hostname>
          <port>9092</port>
        </remoteService>
        <remoteService>
          <hostname>otel-collector</hostname>
          <port>4317</port>
        </remoteService>
      </remoteServices>
    </remoteDevelopment>
  </configuration>
</plugin>
```

Version 1.11.0 of the openshift-maven-plugin was the latest version as of the writing of this article. You may want to check if there is a newer version available.
This configuration tells OpenShift (or Kubernetes) to proxy requests going to the OpenShift Service named rest-fights on port 8082 to the local machine on the same port. Additionally, it forwards the local machine ports 8083, 8084, 8086, 9092, and 4317 back to the OpenShift cluster and binds them to various OpenShift Services.

The code listing above uses the JKube OpenShift Maven Plugin. If you are using other Kubernetes variants, you could use the JKube Kubernetes Maven Plugin with the same configuration. If you are using Gradle, there are also a JKube OpenShift Gradle Plugin and a JKube Kubernetes Gradle Plugin available.

Now that the configuration is in place, you need to open two terminals in the rest-fights directory. In the first terminal, run ./mvnw oc:remote-dev to start the remote dev proxy service. Once that starts, move to the second terminal and run:

```shell
./mvnw quarkus:dev \
  -Dkafka.bootstrap.servers=PLAINTEXT://localhost:9092 \
  -Dmp.messaging.connector.smallrye-kafka.apicurio.registry.url=http://localhost:8086
```

This command starts up a local instance of the rest-fights application in Quarkus dev mode. Requests from the cluster will come into your local machine. The local application will connect to other services back on the cluster, such as the rest-villains and rest-heroes applications, the Kafka broker, the Apicurio Registry instance, and the OpenTelemetry collector.

With this configuration, Quarkus Dev Services will spin up a local MongoDB instance for the locally-running application, illustrating how you could combine local services with other services available on the remote cluster. You can make live code changes to the local application while requests flow through the Kubernetes cluster, down to your local machine, and back to the cluster. You could even enable continuous testing while you make local changes to ensure your changes do not break anything.
The main difference between Quarkus Remote Development and Eclipse JKube Remote Development is that, with Quarkus Remote Development, the application runs in the remote Kubernetes cluster and local changes are synchronized between the local machine and the remote environment. With JKube Remote Development, the application runs on the local machine, and traffic flows from the cluster into the local machine and back out to the cluster.

Wrap-Up

As you can see, Eclipse JKube Remote Development complements the Quarkus Developer Joy story quite well. It allows you to easily combine the power of Quarkus with Kubernetes to help create a better developer experience, whether local, distributed, or somewhere in between.
Java REST API Frameworks
By Farith Jose Heras García
Readability in the Test: Exploring the JUnitParams
By Otavio Santana CORE
Leverage Lambdas for Cleaner Code
By Karol Kisielewski
How To Create a Failover Client Using the Hazelcast Viridian Serverless

Failover is an important feature of systems that rely on near-constant availability. In Hazelcast, a failover client automatically redirects its traffic to a secondary cluster when the client cannot connect to the primary cluster. Consider using a failover client with WAN replication as part of your disaster recovery strategy.

In this tutorial, you'll update the code in a Java client to automatically connect to a secondary, failover cluster if it cannot connect to its original, primary cluster. You'll also run a simple test to make sure that your configuration is correct and then adjust it to include exception handling. You'll learn how to collect all the resources that you need to create a failover client for a primary and secondary cluster, create a failover client based on the sample Java client, test failover, and add exception handling for operations.

Step 1: Set Up Clusters and Clients

Create two Viridian Serverless clusters that you'll use as your primary and secondary clusters, and then download and connect sample Java clients to them.

1. Create the Viridian Serverless cluster that you'll use as your primary cluster.
2. When the cluster is ready to use, the Quick Connection Guide is displayed. Select the Java icon and follow the on-screen instructions to download, extract, and connect the preconfigured Java client to your primary cluster.
3. Create the Viridian Serverless cluster that you'll use as your secondary cluster.
4. Follow the instructions in the Quick Connection Guide to download, extract, and connect the preconfigured Java client to your secondary cluster.

You now have two running clusters, and you've checked that both Java clients can connect.

Step 2: Configure a Failover Client

To create a failover client, update the configuration and code of the Java client for your primary cluster. Start by adding the keystore files from the Java client of your secondary cluster.
1. Go to the directory where you extracted the Java client for your secondary cluster and then navigate to src/main/resources.
2. Rename the client.keystore file to client2.keystore and rename the client.truststore file to client2.truststore to avoid overwriting the files in your primary cluster keystore.
3. Copy both files over to the src/main/resources directory of your primary cluster.

Next, update the code in the Java client (ClientwithSsl.java) of your primary cluster to include a failover class and the connection details for your secondary cluster. You can find these connection details in the Java client of your secondary cluster.

1. Go to the directory where you extracted the Java client for your primary cluster and then navigate to src/main/java/com/hazelcast/cloud/.
2. Open the Java client (ClientwithSsl.java) and make the following updates. An example failover client is also available for download.

Step 3: Verify Failover

Check that your failover client automatically connects to the secondary cluster when your primary cluster is stopped.

1. Make sure that both Viridian Serverless clusters are running.
2. Connect your failover client to the primary cluster in the same way as you did in Step 1.
3. Stop your primary cluster: from the dashboard of your primary cluster, in Cluster Details, select Pause.

In the console, you'll see the following messages in order as the client disconnects from your primary cluster and reconnects to the secondary cluster:

CLIENT_DISCONNECTED
CLIENT_CONNECTED
CLIENT_CHANGED_CLUSTER

If you're using the nonStopMapExample in the sample Java client, your client stops. This is expected because write operations are not retryable when a cluster is disconnected. The client has sent a put request to the cluster but has not received a response, so the result of the request is unknown. To prevent the client from overwriting more recent write operations, this write operation is stopped and an exception is thrown.
Step 4: Exception Handling

Update the nonStopMapExample() function in your failover client to trap the exception that is thrown when the primary cluster disconnects.

1. Add the following try-catch block to the while loop in the nonStopMapExample() function. This code replaces the original map.put() call:

```java
try {
    map.put("key-" + randomKey, "value-" + randomKey);
} catch (Exception e) {
    // Captures the exception thrown while the client is disconnected
    e.printStackTrace();
}
```

2. Verify your code again (repeat Step 3). This time the client continues to write map entries after it connects to the secondary cluster.
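Catching and printing the exception keeps the example simple, but a production client might retry the write after the failover completes. Since a put of a fixed value for a fixed key is idempotent, repeating it after a cluster switchover cannot corrupt data. The helper below is a hypothetical sketch of that idea (RetryingWrite and withRetries are names invented here, not Hazelcast API), shown with a simulated flaky operation instead of a real cluster:

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper for idempotent operations such as the map.put above.
public class RetryingWrite {
    // Runs op up to maxAttempts times (assumed >= 1), returning the first
    // successful result; rethrows the last exception if every attempt fails.
    public static <T> T withRetries(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e; // e.g., thrown while the client reconnects to the secondary cluster
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated flaky write: fails twice (as during a switchover), then succeeds.
        int[] calls = {0};
        String result = withRetries(() -> {
            if (++calls[0] < 3) {
                throw new IllegalStateException("cluster unavailable");
            }
            return "value-42";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

In a real client you would also bound the retries with a backoff delay so the loop does not hammer a cluster that is still failing over.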

By Fawaz Ghali, PhD
Java Sealed Classes: Building Robust and Secure Applications

Java sealed classes were introduced in Java 15 as a way to restrict the inheritance hierarchy of a class or interface. A sealed class or interface restricts the set of classes that can implement or extend it, which can help prevent bugs and make code more maintainable.

Suppose you're building an e-commerce application that supports different payment methods, such as credit cards, PayPal, and Bitcoin. You could define a sealed class called PaymentMethod that permits various subclasses for each payment method:

```java
public sealed class PaymentMethod permits CreditCard, PayPal, Bitcoin {
    // Class members
}
```

In this example, PaymentMethod is a sealed class that permits CreditCard, PayPal, and Bitcoin to extend it. A sealed class can permit any number of classes to extend it by specifying them in a comma-separated list after the permits keyword. There are plenty of other use cases where sealed classes can make our lives easier. So let's go for a deep dive!

Creating a Closed-Type Hierarchy

Sealed classes can create a closed-type hierarchy: a limited set of classes that cannot be extended or implemented outside a particular package. This ensures that only a specified set of classes can be used and prevents unwanted extensions or implementations.

```java
package ca.bazlur;

public sealed class Animal permits Cat, Dog {
    // Class definition
}

public final class Cat extends Animal {
    // Class definition
}

public final class Dog extends Animal {
    // Class definition
}
```

In this example, Animal is a sealed class that only permits Cat and Dog to extend it. Any other attempt to extend Animal will result in a compilation error.

Creating a Limited Set of Implementations

Sealed classes can also create a limited set of implementations for a specific interface or abstract class. This ensures that the interface or abstract class owners can control and change the set of implementations.
```java
public sealed interface Shape permits Circle, Square {
    double getArea();
}

public final class Circle implements Shape {
    // Class definition
}

public final class Square implements Shape {
    // Class definition
}
```

In this example, Shape is a sealed interface that only permits Circle and Square to implement it. This ensures that any other implementations of Shape cannot be created.

Enhancing Pattern Matching in switch Statements

Sealed classes can also be used to enhance pattern matching in switch statements. By limiting the set of subtypes that can extend a sealed class, developers can use pattern matching with exhaustive checks, ensuring that all possible subtypes are covered.

```java
public sealed abstract class PaymentMethod permits CreditCard, DebitCard, PayPal {
    // Class definition
}

public class PaymentProcessor {
    public void processPayment(PaymentMethod paymentMethod, double amount) {
        switch (paymentMethod) {
            case CreditCard cc -> {
                // Process credit card payment
            }
            case DebitCard dc -> {
                // Process debit card payment
            }
            case PayPal pp -> {
                // Process PayPal payment
            }
        }
    }
}
```

In this example, PaymentMethod is a sealed class that permits CreditCard, DebitCard, and PayPal to extend it. The processPayment method in the PaymentProcessor class uses a switch statement with pattern matching to process different payment methods. Using a sealed class ensures that all possible subtypes are covered in the switch statement, making it less error-prone.

Implementing a State Machine

Sealed classes can be used to implement a state machine: a computational model that defines the behavior of a system in response to a series of inputs. In a state machine, each state is represented by a sealed class, and the transitions between states are modeled by methods that return a new state.
```java
public sealed class State permits IdleState, ActiveState, ErrorState {
    public State transition(Input input) {
        // Transition logic
    }
}

public final class IdleState extends State {
    // Class definition
}

public final class ActiveState extends State {
    // Class definition
}

public final class ErrorState extends State {
    // Class definition
}
```

In this example, State is a sealed class that permits IdleState, ActiveState, and ErrorState to extend it. The transition method is responsible for transitioning between states based on the input provided. The use of sealed classes ensures that the state machine is well-defined and can only be extended with a limited set of classes.

Creating a Limited Set of Exceptions

Sealed classes can also create a limited set of exceptions that can be thrown by a method. This can help enforce a consistent set of error conditions and prevent the creation of arbitrary exception types.

```java
public sealed class DatabaseException extends Exception permits ConnectionException, QueryException {
    // Class definition
}

public final class ConnectionException extends DatabaseException {
    // Class definition
}

public final class QueryException extends DatabaseException {
    // Class definition
}
```

In this example, DatabaseException is a sealed class that permits ConnectionException and QueryException to extend it. This ensures that any exception thrown by a method related to a database operation is a well-defined type and can be handled consistently.

Controlling Access to Constructors

Sealed classes can also control access to constructors, which can help enforce a specific set of invariants for the class.
```java
public sealed class Person {
    private final String name;
    private final int age;

    private Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    public static final class Child extends Person {
        public Child(String name, int age) {
            super(name, age);
            if (age >= 18) {
                throw new IllegalArgumentException("Children must be under 18 years old.");
            }
        }
    }

    public static final class Adult extends Person {
        public Adult(String name, int age) {
            super(name, age);
            if (age < 18) {
                throw new IllegalArgumentException("Adults must be 18 years old or older.");
            }
        }
    }
}
```

In this example, Person is a sealed class with two subclasses: Child and Adult. The constructors for Child and Adult are marked as public, but the constructor for Person is marked as private, ensuring all Person instances are created through its subclasses. This enables Person to enforce the invariant that children must be under 18 years old and adults must be 18 years old or older.

Improving Code Security

Sealed classes can also improve code security by ensuring that only trusted code can extend or implement them. This can help prevent unauthorized access to sensitive parts of the codebase.

```java
public sealed class SecureCode permits TrustedCode {
    // Class definition
}

// Trusted code
public final class TrustedCode extends SecureCode {
    // Class definition
}

// Untrusted code: not permitted, so this does not compile
public final class UntrustedCode extends SecureCode {
    // Class definition
}
```

In this example, SecureCode is a sealed class that only permits TrustedCode to extend it. This ensures that only trusted code can access the sensitive parts of the codebase.

Enabling Polymorphism With Exhaustive Pattern Matching

Sealed classes can also be used to enable polymorphism with exhaustive pattern matching. By using sealed classes, developers can ensure that all possible subtypes are covered in a pattern-matching statement, enabling safer and more efficient code.
```java
public sealed class Shape permits Circle, Square {
    // Class definition
}

public final class Circle extends Shape {
    // Class definition
}

public final class Square extends Shape {
    // Class definition
}

public void drawShape(Shape shape) {
    switch (shape) {
        case Circle c -> c.drawCircle();
        case Square s -> s.drawSquare();
    }
}
```

In this example, Shape is a sealed class that permits Circle and Square to extend it. The drawShape method uses pattern matching to draw the shape, ensuring that all possible subtypes of Shape are covered in the switch statement.

Enhancing Code Readability

Sealed classes can also be used to enhance code readability by clearly defining the set of possible subtypes. By limiting the set of possible subtypes, developers can more easily reason about the code and understand its behaviour.

```java
public sealed class Fruit permits Apple, Banana, Orange {
    // Class definition
}

public final class Apple extends Fruit {
    // Class definition
}

public final class Banana extends Fruit {
    // Class definition
}

public final class Orange extends Fruit {
    // Class definition
}
```

In this example, Fruit is a sealed class that permits Apple, Banana, and Orange to extend it. This clearly defines the set of possible fruits and enhances code readability by making the code easier to understand.

Enforcing API Contracts

Sealed classes can also be used to enforce API contracts, which are the set of expectations that consumers of an API have regarding its behavior. By using sealed classes, API providers can ensure that the set of possible subtypes is well-defined and documented, improving the API's usability and maintainability.
```java
public sealed class Vehicle permits Car, Truck, Motorcycle {
    // Class definition
}

public final class Car extends Vehicle {
    // Class definition
}

public final class Truck extends Vehicle {
    // Class definition
}

public final class Motorcycle extends Vehicle {
    // Class definition
}
```

In this example, Vehicle is a sealed class that permits Car, Truck, and Motorcycle to extend it. By using a sealed class to define the set of possible vehicle types, API providers can ensure that the API contract is well-defined and enforceable.

Preventing Unwanted Subtype Extensions

Finally, sealed classes can also be used to prevent unwanted subtype extensions. By limiting the set of possible subtypes, developers can prevent the creation of arbitrary subclasses that do not conform to the class's intended behavior.

```java
public sealed class PaymentMethod {
    // Class definition
}

final class CreditCard extends PaymentMethod {
    // Class definition in the same file
}

final class DebitCard extends PaymentMethod {
    // Class definition in the same file
}

// In another file: this does not compile
public class StolenCard extends PaymentMethod {
    // Class definition
}
```

The PaymentMethod class is declared sealed without a permits clause, which means it can only be extended by classes defined within the same file. Two final classes, CreditCard and DebitCard, are defined as subtypes of PaymentMethod, so they are the only allowed extensions of the PaymentMethod class. However, a third class, StolenCard, is defined as a subtype of PaymentMethod outside the file. Since the PaymentMethod class is sealed, it cannot be extended by this unauthorized subtype, and a compiler error is generated.

Enhancing Type Safety of Collections

Sealed classes can also enhance the type safety of collections, which are a fundamental part of the Java language. By using sealed classes to define the set of possible elements in a collection, developers can ensure that the collection is type-safe and can enforce certain invariants.
```java
public sealed interface Animal permits Dog, Cat, Bird {
    // Interface definition
}

public final class Dog implements Animal {
    // Class definition
}

public final class Cat implements Animal {
    // Class definition
}

public final class Bird implements Animal {
    // Class definition
}
```

In this example, Animal is a sealed interface that permits Dog, Cat, and Bird to implement it. By using sealed types to define the set of possible animals, developers can ensure that a collection of animals is type-safe and can enforce certain invariants.

```java
List<Animal> animals = List.of(new Dog(), new Cat(), new Bird());
```

In this example, animals is a List of elements that implement the Animal interface. Because Animal is a sealed interface, the set of possible elements in the list is well-defined and type-safe.

Facilitating API Evolution

Sealed classes can also facilitate API evolution, which is the process of updating an API to add or remove features. By using sealed classes to define the set of classes that can extend or implement a specific class or interface, developers can ensure that API changes are deliberate and visible.

```java
public sealed class Animal permits Dog, Cat {
    // Class definition
}

public final class Dog extends Animal {
    // Class definition
}

public final class Cat extends Animal {
    // Class definition
}

// Does not compile until Bird is added to the permits clause
public final class Bird extends Animal {
    // Class definition
}
```

In this example, Animal is a sealed class that permits Dog and Cat to extend it. Because Animal is a sealed class, adding a new subtype such as Bird would be a breaking change and would require an API version bump. This ensures that API changes are made deliberately and can help maintain the stability of the codebase.

Here are a few more concrete and real-life examples of how sealed classes can be used in Java development:

Representing Different Types of Messages

In many distributed systems, messages pass data between different components or services.
Sealed classes can represent different types of messages and ensure that each type is well-defined and type-safe.

```java
public sealed interface Message permits RequestMessage, ResponseMessage {
    // Interface definition
}

public final class RequestMessage implements Message {
    // Class definition
}

public final class ResponseMessage implements Message {
    // Class definition
}
```

In this example, Message is a sealed interface that permits RequestMessage and ResponseMessage to implement it. By using sealed classes, developers can ensure that each message type is well-defined and type-safe, which can help prevent bugs and improve the maintainability of the code.

Defining a Set of Domain Objects

In domain-driven design, domain objects represent the concepts and entities in a business domain. Sealed classes can define a set of domain objects and ensure that each object type is well-defined and has a limited set of possible subtypes.

```java
public sealed interface OrderItem permits ProductItem, ServiceItem {
    // Interface definition
}

public final class ProductItem implements OrderItem {
    // Class definition
}

public final class ServiceItem implements OrderItem {
    // Class definition
}
```

In this example, OrderItem is a sealed interface that permits ProductItem and ServiceItem to implement it. By using sealed classes, developers can ensure that each domain object is well-defined and has a limited set of possible subtypes, which can help prevent the introduction of bugs and make the code more maintainable.

Representing Different Types of Users

In many systems, users represent individuals who interact with the system in some way. Sealed classes can represent different types of users and ensure that each type is well-defined and type-safe.
```java
public sealed class User permits Customer, Employee, Admin {
    // Class definition
}

public final class Customer extends User {
    // Class definition
}

public final class Employee extends User {
    // Class definition
}

public final class Admin extends User {
    // Class definition
}
```

In this example, User is a sealed class that permits Customer, Employee, and Admin to extend it. By using sealed classes, developers can ensure that each user type is well-defined and type-safe, which can help prevent bugs and make the code more maintainable.

Defining a Limited Set of Error Types

In many systems, errors signal that something has gone wrong during the execution of a program. Sealed classes can define a limited set of error types and ensure that each type is well-defined and has a limited set of possible subtypes.

```java
public sealed class Error permits NetworkError, DatabaseError, SecurityError {
    // Class definition
}

public final class NetworkError extends Error {
    // Class definition
}

public final class DatabaseError extends Error {
    // Class definition
}

public final class SecurityError extends Error {
    // Class definition
}
```

In this example, Error is a sealed class that permits NetworkError, DatabaseError, and SecurityError to extend it. (Note that this Error class shadows java.lang.Error; in real code, a less ambiguous name is preferable.) By using sealed classes to define a limited set of error types, developers can ensure that each error type is well-defined and has a limited set of possible subtypes, which can help make the code more maintainable and easier to reason about.

Defining a Limited Set of HTTP Methods

In many web applications, HTTP methods interact with web resources such as URLs. Sealed classes can define a limited set of HTTP methods and ensure that each method is well-defined and has a limited set of possible subtypes.
```java
public sealed class HttpMethod permits GetMethod, PostMethod, PutMethod {
    // Class definition
}

public final class GetMethod extends HttpMethod {
    // Class definition
}

public final class PostMethod extends HttpMethod {
    // Class definition
}

public final class PutMethod extends HttpMethod {
    // Class definition
}
```

In this example, HttpMethod is a sealed class that permits GetMethod, PostMethod, and PutMethod to extend it. By using sealed classes to define a limited set of HTTP methods, developers can ensure that each method is well-defined and has a limited set of possible subtypes. This can help make the code more maintainable and easier to reason about.

Defining a Limited Set of Configuration Parameters

In many systems, configuration parameters are used to control the behavior of a program. Sealed classes can define a limited set of configuration parameters and ensure that each parameter is well-defined and has a limited set of possible subtypes.

```java
public sealed class ConfigurationParameter permits DebugMode, LoggingLevel {
    // Class definition
}

public final class DebugMode extends ConfigurationParameter {
    // Class definition
}

public final class LoggingLevel extends ConfigurationParameter {
    // Class definition
}
```

In this example, ConfigurationParameter is a sealed class that permits DebugMode and LoggingLevel to extend it. By using sealed classes to define a limited set of configuration parameters, developers can ensure that each parameter is well-defined and has a limited set of possible subtypes. This can help make the code more maintainable and easier to reason about.

Defining a Limited Set of Database Access Strategies

In many systems, databases are used to store and retrieve data. Sealed classes can define a limited set of database access strategies and ensure that each strategy is well-defined and has a limited set of possible subtypes.
```java
public sealed class DatabaseAccessStrategy permits JdbcStrategy, JpaStrategy, HibernateStrategy {
    // Class definition
}

public final class JdbcStrategy extends DatabaseAccessStrategy {
    // Class definition
}

public final class JpaStrategy extends DatabaseAccessStrategy {
    // Class definition
}

public final class HibernateStrategy extends DatabaseAccessStrategy {
    // Class definition
}
```

In this example, DatabaseAccessStrategy is a sealed class that permits JdbcStrategy, JpaStrategy, and HibernateStrategy to extend it. By using sealed classes to define a limited set of database access strategies, developers can ensure that each strategy is well-defined and has a limited set of possible subtypes, which can help make the code more maintainable and easier to reason about.

Defining a Limited Set of Authentication Methods

In many systems, authentication is used to verify the identity of users. Sealed classes can define a limited set of authentication methods and ensure that each method is well-defined and has a limited set of possible subtypes.

```java
public sealed class AuthenticationMethod permits PasswordMethod, TokenMethod, BiometricMethod {
    // Class definition
}

public final class PasswordMethod extends AuthenticationMethod {
    // Class definition
}

public final class TokenMethod extends AuthenticationMethod {
    // Class definition
}

public final class BiometricMethod extends AuthenticationMethod {
    // Class definition
}
```

In this example, AuthenticationMethod is a sealed class that permits PasswordMethod, TokenMethod, and BiometricMethod to extend it. By using sealed classes to define a limited set of authentication methods, developers can ensure that each method is well-defined and has a limited set of possible subtypes. This can help make the code more maintainable and easier to reason about.
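Hierarchies like the ones above pay off when combined with pattern matching: because the compiler knows every permitted subtype, a switch over a sealed type needs no default branch, and adding a new subtype later produces a compile error at every switch that does not handle it. A minimal, self-contained sketch, condensing the AuthenticationMethod hierarchy into records for brevity (the describe method is invented for illustration; pattern matching for switch requires Java 21):

```java
public class AuthenticationDemo {

    // Condensed version of the AuthenticationMethod hierarchy, using records
    // as the permitted implementations (records are implicitly final).
    sealed interface AuthenticationMethod
            permits PasswordMethod, TokenMethod, BiometricMethod {}

    record PasswordMethod(String username) implements AuthenticationMethod {}
    record TokenMethod(String token) implements AuthenticationMethod {}
    record BiometricMethod(String kind) implements AuthenticationMethod {}

    // No default branch needed: the compiler proves this switch is exhaustive.
    // Adding a new permitted subtype makes this method fail to compile until
    // the new case is handled.
    static String describe(AuthenticationMethod method) {
        return switch (method) {
            case PasswordMethod p -> "password login for " + p.username();
            case TokenMethod t -> "token login";
            case BiometricMethod b -> b.kind() + " biometric login";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(new PasswordMethod("joe")));
        System.out.println(describe(new BiometricMethod("fingerprint")));
    }
}
```

The exhaustiveness check is the practical benefit of sealing: the "limited set of possible subtypes" is enforced not only at the declaration site but also at every place the hierarchy is consumed.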
Conclusion

In conclusion, Java sealed classes are a powerful feature that can help you create more robust and maintainable code by restricting the inheritance hierarchy of your classes and interfaces. By limiting the set of permitted subclasses or implementers, you can prevent bugs and ensure that your code is more secure and easier to maintain. By mastering Java sealed classes, you can take your programming skills to the next level and build better software.

By A N M Bazlur Rahman CORE
The Right Feature at the Right Place

Before moving to Developer Relations, I transitioned from software architect to solution architect long ago. It's a reasonably common career move. The problem in this situation is two-fold: you know software libraries perfectly well, but you don't know infrastructure components well. It seems logical that people in this situation try to solve problems with the solutions they are most familiar with. However, it doesn't mean it's the best approach. It's a bad one in most cases.

A Concrete Example

Imagine an API application. It runs on the JVM, and it's written in the "Reactive" style with the help of the Spring Boot framework. One of the requirements is to limit the number of calls a user can make in a timeframe. In the API world, such a rate-limiting feature is widespread. With my software architect hat on, I'll search for a JVM library that does it. Because I have a bit of experience, I know of the excellent Bucket4J library: a Java rate-limiting library based on the token-bucket algorithm. It's just a matter of integrating the library into my code:

```kotlin
val beans = beans {
    bean {
        val props = ref<BucketProperties>()                       //1
        BucketFactory().create(                                   //2
            props.size,
            props.refresh.tokens,
            props.refresh.duration
        )
    }
    bean {
        coRouter {
            val handler = HelloHandler(ref())                     //3
            GET("/hello") { handler.hello(it) }
            GET("/hello/{who}") { handler.helloWho(it) }
        }
    }
}

class HelloHandler(private val bucket: Bucket) {                  //3

    private suspend fun rateLimit(                                //4
        req: ServerRequest,
        f: suspend (ServerRequest) -> ServerResponse
    ) = if (bucket.tryConsume(1)) f.invoke(req)
        else ServerResponse.status(429).buildAndAwait()

    suspend fun hello(req: ServerRequest) = rateLimit(req) {      //5
        ServerResponse.ok().bodyValueAndAwait("Hello World!")
    }
}
```

1. Get configuration properties from a @ConfigurationProperties-annotated class.
2. Create a properly-configured bucket.
3. Pass the bucket to the handler.
4. Create a reusable rate-limiting wrapper based on the bucket.
5. Wrap the call.
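Under the hood, Bucket4J implements the token-bucket algorithm. To make the mechanics concrete, here is a minimal, self-contained sketch of the idea in plain Java; this is an illustration of the algorithm, not the library's actual implementation:

```java
import java.time.Duration;

public class TokenBucket {
    private final long capacity;       // maximum number of tokens the bucket holds
    private final double refillPerNano; // tokens added per elapsed nanosecond
    private double tokens;             // current token count (may be fractional)
    private long lastRefill;           // timestamp of the last refill, in nanos

    TokenBucket(long capacity, long refillTokens, Duration period) {
        this.capacity = capacity;
        this.refillPerNano = (double) refillTokens / period.toNanos();
        this.tokens = capacity;        // start full
        this.lastRefill = System.nanoTime();
    }

    // Atomically refill based on elapsed time, then try to take n tokens.
    synchronized boolean tryConsume(long n) {
        long now = System.nanoTime();
        tokens = Math.min(capacity, tokens + (now - lastRefill) * refillPerNano);
        lastRefill = now;
        if (tokens >= n) {
            tokens -= n;
            return true;               // request allowed
        }
        return false;                  // request should be rejected (HTTP 429)
    }

    public static void main(String[] args) {
        // 2 requests allowed per second
        TokenBucket bucket = new TokenBucket(2, 2, Duration.ofSeconds(1));
        System.out.println(bucket.tryConsume(1)); // true
        System.out.println(bucket.tryConsume(1)); // true
        System.out.println(bucket.tryConsume(1)); // false: bucket is empty
    }
}
```

The library adds much more (bandwidth combinations, distributed buckets, lock-free implementations), but the per-user requirement discussed next amounts to keeping one such bucket per authenticated user.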
At this point, the bucket is for the whole app. If we want a dedicated bucket per user, as per the requirements, we need to:

- Bring in Spring Security to authenticate users (or write our own authentication mechanism).
- Create a bucket per user.
- Store the bucket server-side and bind it to the user session.

While it's perfectly acceptable, it's a lot of effort for a feature that one can implement more cheaply elsewhere.

The Golden Case for API Gateways

"A place for everything, everything in its place." This quote is associated with Samuel Smiles, Mrs. Isabella Beeton, and Benjamin Franklin. In any case, cross-cutting features don't belong in the application but in infrastructure components. Our application is an API, so it's a perfect use-case for an API Gateway. We can simplify the code by removing Bucket4J and configuring an API Gateway in front of the application. Here's how to do it with Apache APISIX:

```yaml
consumers:
  - username: joe
    plugins:
      key-auth:               #1
        key: joe
  - username: jane
    plugins:
      key-auth:               #1
        key: jane
routes:
  - uri: /hello*
    upstream:
      type: roundrobin
      nodes:
        "resilient-boot:8080": 1
    plugins:
      limit-req:              #2
        rate: 1
        burst: 0
        key: consumer_name    #3
        rejected_code: 429
      key-auth: ~             #1
```

1. We use a simple HTTP header for authentication for demo purposes. Real-world apps would use OAuth 2.0 or OpenID Connect, but the principle is the same.
2. Rate limiting plugin.
3. Configure a bucket per consumer.

Discussion: What Belongs Where?

Before answering the question, let me take a detour first. The book Thinking, Fast and Slow makes the case that the brain has two "modes": the book's main thesis is that of a dichotomy between two modes of thought, where "System 1" is fast, instinctive, and emotional, and "System 2" is slower, more deliberative, and more logical. Also, System 2 is much more energy-consuming. Because we are lazy, we tend to favor System 1, fast and instinctive.
Hence, as architects, we will generally favor the following:

- Solutions we are familiar with, e.g., libraries for former software architects
- Rules to apply blindly: as a side comment, this is the main reason for herd mentality in the tech industry, such as "microservices everywhere."

Hence, take the following advice as guidelines and not rules. Now that this has been said, here's my stance. First, you need to categorize whether the feature is purely technical. For example, classical rate-limiting to prevent DDoS is purely technical. Such technical features belong in the infrastructure: every reverse proxy worth its salt offers this kind of rate-limiting. The more business-related a feature is, the closer it must be to the application. Our use-case is slightly business-related because rate-limiting is per user. Still, the API Gateway provides the feature out of the box.

Then, know your infrastructure components. It's impossible to know all the components, but you should have a passing knowledge of the elements available inside your org. If you're using a cloud provider, get a map of all their proposed services. Regarding the inability to know all the components, talk to your SysAdmins. My experience has shown me that most organizations fail to utilize their SysAdmins effectively. The latter would like to be more involved in the overall system architecture design but are rarely asked to be. Most SysAdmins love to share their knowledge!

You also need to think about configuration. If you need to configure each library component on each instance, that's a huge red flag: prefer an infrastructure component. Some libraries offer a centralized configuration solution, e.g., Spring Cloud Config. Carefully evaluate the additional complexity of such a component and its failure rate compared to other dedicated infrastructure components. Organizations influence choice a lot. The same problem in two different organizational contexts may result in two opposite solutions.
Familiarity with a solution generally trumps another solution's better fit. Finally, as I mentioned in the introduction, your experience will influence your choices: former software architects prefer app-centric solutions, and former sysadmins prefer infrastructure solutions. One should be careful to limit one's bias toward one's preferred solution, which might not be the best fit in a different context.

Conclusion

In this post, I've taken the example of per-user rate limiting to show how one can implement it both in a library and in an infrastructure component. Then, I generalized this example and gave a couple of guidelines. I hope they will help you make better choices regarding where to place a feature in your system. The complete source code for this post can be found on GitHub.

To Go Further

Apache APISIX rate limiting plugin

By Nicolas Fränkel CORE
I Don’t TDD: Pragmatic Testing With Java

We're building a Google Photos clone, and testing is damn hard! How do we test that our Java app spawns the correct ImageMagick processes or that the resulting thumbnails are the correct size and indeed thumbnails, not just random pictures of cats? How do we test different ImageMagick versions and operating systems?

What’s in the Video

00:00 Intro

We start the video with a general overview of what makes testing our Google Photos clone so tricky. In the last episode, we started extracting thumbnails from images, and we now need a way to test that. As this is done via an external ImageMagick process, we are in for a ride.

01:05 Setting Up JUnit and Writing the First Test Methods

First off, we will set up JUnit 5. As we're not using a framework like Spring Boot, it serves as a great exercise to add the minimal set of libraries and configuration that gets us up and running with JUnit. Furthermore, we will write some test method skeletons while thinking about how we would approach testing our existing code, taking care of test method naming, etc.

04:19 Implementing ImageMagick Version Detection

In the last episode, we noticed that running our Java app on different systems leads to unexpected results or just plain errors. That is because different ImageMagick versions offer a different set of APIs that we need to call. Hence, we need to adjust our code to detect the installed ImageMagick version and also add a test method that checks that ImageMagick is indeed installed before running any tests.

10:32 Testing Trade-Offs

As is apparent with detecting ImageMagick versions, the real problem is that to reach 100% test coverage with a variety of operating systems and installed ImageMagick versions, you would need a pretty elaborate CI/CD setup, which we don't have in the scope of this project. So we are discussing the pros and cons of our approach.
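The version detection discussed at 04:19 boils down to running `convert -version` (IM6) or `magick -version` (IM7) and parsing the output. The process-spawning part is hard to test portably, but the parsing part is easy to unit-test in isolation. A hedged sketch of that parsing step; the class and method names are mine, not necessarily those used in the video:

```java
public class ImageMagickVersion {

    // Parses typical `convert -version` / `magick -version` output, e.g.
    //   "Version: ImageMagick 7.1.0-57 Q16-HDRI x86_64 ..."
    // and returns the major version, or -1 if ImageMagick does not appear
    // to be installed (e.g. the shell printed "command not found").
    static int detectMajorVersion(String versionOutput) {
        for (String line : versionOutput.split("\n")) {
            int idx = line.indexOf("ImageMagick ");
            if (idx >= 0) {
                String rest = line.substring(idx + "ImageMagick ".length());
                String major = rest.split("\\.")[0].trim();
                try {
                    return Integer.parseInt(major);
                } catch (NumberFormatException e) {
                    return -1; // unexpected format
                }
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        System.out.println(detectMajorVersion("Version: ImageMagick 7.1.0-57 Q16-HDRI"));
        System.out.println(detectMajorVersion("Version: ImageMagick 6.9.12-93 Q16"));
        System.out.println(detectMajorVersion("zsh: command not found: magick"));
    }
}
```

Separating "run the external process" from "interpret its output" is exactly what makes this testable without a matrix of operating systems: the parser gets fast unit tests, and only the thin process-spawning shell needs the real binary.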
12:00 Implementing @EnabledIfImageMagickIsInstalled

What we can do, however, is make sure that the rest of our test suite only runs if ImageMagick is installed. Thus, we will write a custom JUnit 5 annotation called @EnabledIfImageMagickIsInstalled that you can add to any test method or even whole classes to enable said behavior. If ImageMagick is not installed, the tests simply will not run instead of displaying an ugly error message.

16:05 Testing Successful Thumbnail Creation

The biggest problem to tackle is: How do we properly assert that thumbnails were created correctly? We will approach this question by testing for ImageMagick's exit code, estimating file sizes, and also loading the image and making sure it has the correct amount of pixels. All of this with the help of AssertJ and its SoftAssertions, to easily combine multiple assertions into one.

23:59 Still Only Works on My Machine

Even after having tested our whole workflow, we still need to make sure to call a different ImageMagick API for different versions. We can quickly add that behavior to support IM6 as well as IM7, and we are done.

25:53 Deployment

Time to deploy the application to my NAS. And this time around, everything works as expected!

26:20 Final Testing Thoughts

We did a fair amount of testing in this episode. Let's sum up all the challenges and pragmatic testing strategies that we learned about.

27:31 What’s Next

We'll finish the episode by having a look at what's next: multithreading issues! See you in the next episode.

By Marco Behler CORE
Hidden Classes in Java 15

Java has had anonymous classes from the very start. (Well, actually, they came in version 1.1.) However, anonymous classes were not truly anonymous. You did not need to name them, but under the hood, they were named by the Java compiler. If you are familiar with the command javap, you can "disassemble" a JAR file and see the names the compiler generated for the anonymous classes.

Java 15 introduced hidden classes, which do not have a name. Almost, as you will see. It is not part of the language but part of the JDK. There is no language element to create hidden classes, but JDK methods and classes come to the rescue. In this article, we will discuss what hidden classes are, the reason to have them, how you can use them, how to load hidden classes using the JDK methods, and, finally, how to easily create and load hidden classes using SourceBuddy.

Note: I created SourceBuddy, an Apache v2.0 licensed open-source program. While creating the code, I learned a few things about hidden classes I wanted to share with you. You may also look at this article as a SourceBuddy promotion, which is ok if you do that. Nevertheless, I hope to successfully add extra value to this article so that this is not simply a promo.

What Are Hidden Classes?

There is an easy-to-read introductory article about hidden classes on Baeldung. If you are impatient and do not care about some intricate details, go there and read that article. Baeldung articles are always short, focused on the most important points, and correct. They give a good starting point, which there would be no reason to repeat.

Hidden classes were proposed in JEP 371, which reads: "hidden … classes that cannot be used directly by the bytecode of other classes." It is a bit short and may not be easy to understand. A hidden class is loaded into the JVM. When a class is in source code or byte code format, it cannot be "hidden." This term can refer only to loaded classes.
Calling them secretly loaded classes could be more appropriate. A class gets hidden when it is loaded in a particular way so that it remains secret in front of other code parts. Remaining hidden does not mean that other code cannot use this class. It can, as long as it "knows" about the secret. The big difference is that this class is not "advertised" because you cannot find it using its name.

When you load a class the hidden way, creating a hidden class, you get a reference to this class. Using the reflective methods, you can instantiate the class many times, and then you can invoke methods and set and get fields. If the class implements an interface or extends a class, you can cast the instance reference to the interface or class and invoke the methods without reflection.

The class is hidden for two reasons: it does not have a name other classes could reference, and there is no reference from the class loader to the class. When you call getName() or getSimpleName() on a variable referencing a hidden class, you will get some string. These are names in messages for humans and are irrelevant for other classes. When a class refers to another class, it needs the canonical name, and getCanonicalName() returns null. The canonical name is the actual name of the class, which is non-existent in the case of hidden classes.

Since the class cannot be found through the class loader without the canonical name, there is no reason for the loader to keep a reference to the class. Why would it keep a reference when it cannot give the class to anyone? Keeping a reference would have only one side effect: preventing the GC from unloading the class as long as the class loader is alive. Since there is no reference from the class loader, the GC can unload the class object as soon as it is out of use.

What Is the Use of Hidden Classes?

JEP 371 describes the reason for hidden classes.
It says: "Allow frameworks to define classes as non-discoverable implementation details of the framework, so that they cannot be linked against by other classes nor discovered through reflection."

Many frameworks use dynamically created classes. They are proxy classes in most cases. A proxy class implements an interface or extends another class, and when invoked, it calls an instance of the interface or the original class. It usually does something extra as well, or else there would be no reason for the proxy class and instance.

An example is the Spring framework, when your code requires injecting a request bean into a session bean. (Or any other shorter-lifecycle bean into a longer-lived one.) Several threads can serve different requests at the same time, all belonging to the same session. All these threads will see the same session bean, but they magically will see their own request beans. The magic is a proxy object extending the request bean’s class. When you call a method on the request bean, you invoke the proxy instance. It checks the thread and the request it serves and forwards the call to the appropriate request bean.

Another example is JPA lazy loading. You can have an SQL table where each row references the previous one. When you try to load the last record, it will automatically load the previous one, which in turn will load the one before, and so on: it will load the whole table. This happens unless you annotate the field as lazy, which means that the actual data from the database has to be loaded only when it is needed. When you load the record, you get a proxy object. This proxy object knows which record it refers to and will load that record from the database only when a method is called.

The same mechanism is used for Aspect Oriented Programming and many other cases. You can create a proxy class using only the JDK reflection API, as long as the target class implements the interface you want to proxy. You can use the ByteBuddy library if there is no such interface.
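As a concrete illustration of the JDK-only approach mentioned above, here is a minimal dynamic proxy built with java.lang.reflect.Proxy; the Greeter interface and the logging behavior are invented for this sketch:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyDemo {

    // A hypothetical interface that the proxy will implement.
    interface Greeter {
        String greet(String name);
    }

    static class SimpleGreeter implements Greeter {
        public String greet(String name) {
            return "Hello, " + name;
        }
    }

    // Wraps any Greeter in a proxy that does "something extra" (here: logging)
    // before forwarding the call to the original instance.
    static Greeter loggingProxy(Greeter target) {
        InvocationHandler handler = (proxy, method, args) -> {
            System.out.println("calling " + method.getName()); // the extra behavior
            return method.invoke(target, args);                // forward to the target
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[]{Greeter.class},
                handler);
    }

    public static void main(String[] args) {
        Greeter greeter = loggingProxy(new SimpleGreeter());
        System.out.println(greeter.greet("World")); // Hello, World
    }
}
```

Note that the generated proxy class does have a discoverable name (something like com.sun.proxy.$Proxy0 or jdk.proxy1.$Proxy1, depending on the JDK), which is exactly the kind of leaky detail that hidden classes were designed to eliminate.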
Note: The cglib library is widely used and well-known in many frameworks, but it has been deprecated recently.

When you create such classes, you do not need any name for them. You get the reference to the class and the reference to the instance. The framework injects the reference into the field it has to, and then the code uses them as any object. It does not need to know the name of the class. All it needs to know is that it is an instance of the target class or interface. However, some code may discover the name. These classes have names that reflection can discover. Some "clever" junior may discover one and play some neat trick that you may later have issues maintaining. Would it be better if there were no name at all? Probably yes, it would be cleaner. Hence: hidden classes.

Note: The proxy classes may also cause issues when you implement the equals(Object other) method. The usual implementation of the equals method compares the other object’s class to the actual class. This comparison will be false when the other object is a proxy instance. What the equals() method should check is assignability, whenever there is a possibility that the other object is a proxy.

In addition to that, there is another reason to have hidden classes. As soon as a class has a name, it is possible to discover it by that name. The class loader has to keep the class alive to keep it discoverable: the class loader has a reference to the loaded classes. It means the garbage collector will not be able to collect the class, even when it is no longer in use. If a class has no name, the class loader does not need to keep a reference to it. Class loaders do not keep references to hidden classes unless you explicitly instruct them to do so. When all instances of a hidden class are collected, and there is no reference to the class, the garbage collector will recognize it as garbage. The class loader will not keep the class in memory.
That way, the frameworks will not over-consume memory when long-running code creates a lot of classes. Frameworks that let unused classes be collected do not need to create separate class loaders for these ephemeral classes. There is no need to create a short-living, disposable class loader just to make the class disposable as well.

"Support extending an access control nest with non-discoverable classes."

This is the second bullet point in the list of goals in JEP 371. The JVM can load hidden classes in a way that they become a member of a nest. What is a nest?

Note: If you know what a nest host is and are impatient, jump to the following quote.

Once upon a time, Java version 1.0 did not have inner classes. (You had better stop reading here if you have to ask me what inner classes are.) Then, Java version 1.1 introduced inner classes but did not change the JVM structure. The JVM did not know anything about inner classes. The Java compiler created regular (almost) top-level classes from the inner classes. It invented some funny names, like A$B, when there was a class B inside A.

Note: You can try to define an A$B top-level class in the same package where the class A containing the class B is. A$B is a valid name. You will see what the compiler does.

There was some hacking with the visibility, though. An inner class has the same visibility as the top-level class: anything private inside one compilation unit (file) is visible. Visibility, however, is also enforced by the JVM, but the JVM sees two top-level classes. The compiler generated bridge methods in the classes wherever needed to overcome this issue. They are package level for the JVM, and when called, they pass on the call to the private method.

Then, more than 20 years later, came Java 11, and it introduced nest control. Since Java 11, every class has a relation to another class or to itself, which is the nest host of the class. Classes having the same nest host can see each other's private members.
The JVM does not need the bridge methods anymore. When you load a class hidden, you can specify it to become a member of the same nest (having the same nest host) as the class that created the lookup object.

Note: We have not yet discussed what a lookup object is and how to load a class hidden. It will come. As for now: a lookup object is something that can load a byte array as a hidden class into the JVM memory. When a lookup object is created from inside a method of a class, the lookup object will belong to that class. When a class is loaded as hidden using the lookup object, it is possible to pass an option to make the new hidden class belong to the nest in which the code created the lookup object.

Without the hidden class functionality, I do not know any other possibility to load a class that will belong to an already existing nest. If you know of any possibility, write it in a comment.

The following bullet point reads: "Support aggressive unloading of non-discoverable classes, so that frameworks have the flexibility to define as many as they need."

It is an important point. When you create a class, it remains in memory as long as the class loader is alive. Class loaders keep references to all the classes they loaded. These references exist because some code may ask the class loader to return the loaded class object by name. The application logic may have long forgotten the class; nobody will ever need it. Still, the garbage collector cannot collect it because there is a reference in the class loader. A solution is to create a new class loader for every new non-hidden dynamically created class, but that is overkill.

Class loaders loading hidden classes do not keep a reference to the hidden class by default. As with nest membership, it is possible to provide an option to differ. I do not see any reason: there is no name, the class is not discoverable, yet you would keep an extra reference so the GC cannot throw it away? If you see any reasonable use case, again: comment.
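The nest-membership and keep-a-reference options discussed above are expressed through MethodHandles.Lookup.ClassOption (NESTMATE and STRONG) when defining the hidden class. A minimal, self-contained sketch follows; for demonstration only, it feeds defineHiddenClass() the byte code of an already compiled class read from the classpath, since generating byte code at run time is a separate topic:

```java
import java.io.InputStream;
import java.lang.invoke.MethodHandles;
import java.lang.invoke.MethodHandles.Lookup.ClassOption;

public class HiddenNestmateDemo {

    // Defines a hidden class that joins this class's nest. A framework would
    // pass dynamically generated bytes; here we reuse this class's own byte
    // code purely to have valid bytes in the same package as the lookup class.
    static Class<?> defineHiddenNestmate() throws Exception {
        byte[] byteCode;
        try (InputStream in = HiddenNestmateDemo.class
                .getResourceAsStream("HiddenNestmateDemo.class")) {
            if (in == null) {
                throw new IllegalStateException("class file not found on the classpath");
            }
            byteCode = in.readAllBytes();
        }
        return MethodHandles.lookup()
                // true = initialize; NESTMATE = join the lookup class's nest.
                // Adding ClassOption.STRONG would instead pin the class to the loader.
                .defineHiddenClass(byteCode, true, ClassOption.NESTMATE)
                .lookupClass();
    }

    public static void main(String[] args) throws Exception {
        Class<?> hidden = defineHiddenNestmate();
        System.out.println(hidden.getName());          // something like HiddenNestmateDemo/0x...
        System.out.println(hidden.getCanonicalName()); // null: no canonical name
        System.out.println(hidden.getNestHost() == HiddenNestmateDemo.class); // true
    }
}
```

The NESTMATE option gives the hidden class access to the private members of its nest, and omitting STRONG leaves it eligible for unloading as soon as it is unreachable, which is exactly the "aggressive unloading" behavior quoted above.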
Deprecate the non-standard API sun.misc.Unsafe::defineAnonymousClass, with the intent to deprecate it for removal in a future release.

Very well. Yes. Absolutely. That deserves separate articles, and many of them.

Do not change the Java programming language in any way.

Nice point. Sure.

With these, we have discussed what hidden classes are. You should have a firm understanding of their nature and why they are essential. We also derailed a bit to nest hosting, or host nesting, nesting hosting… whatever. I hope it was of some value. In the following, I will discuss how we create hidden classes using the JDK API and then using SourceBuddy.

Creating Hidden Classes

Articles and tutorials showing how to load hidden classes usually use precompiled Java classes that are part of the running application. The tutorial calculates the path to the .class file and reads the byte code. Technically, this is correct, but it does not demonstrate the basic need for hidden class loading: loading dynamically created classes hidden. Those classes are not dynamically created and could be loaded the usual way. In this article, we will create a class from text, Java source, on the fly during run-time, and then load the resulting byte code as a hidden class.

Code Sample Disclaimer The code samples are available on GitHub in the project directory. Each article has a project directory named YYYY-MM-DD-article-title where the project code files are. For this article, it is 2023-01-05-hidden-classes. The samples are automatically copied from the project directory into the article using Jamal. No manual copy, no outdated, stale samples.

The sample project for this article contains only unit test files. The class is TestHiddenClassLoader. We have the source code for the hidden class stored in a field variable.

Java

private static final String CODE1 = """
        package com.javax0.blog.hiddenclasses;

        public class MySpecialClass implements TestHiddenClassLoader.Hello {

            @Override
            public void hello() {
                System.out.println("Hello, from the hidden class.");
            }
        }
        """;

The interface is also inside the same class.

Java

interface Hello {
    void hello();
}

The following code is from one of the unit tests:

Java

final var byteCode = Compiler.java().from(CODE1).compile().get();
final var lookup = MethodHandles.lookup();
final var classLookup = lookup.defineHiddenClass(byteCode, true);
final var helloClass = (Class<Hello>) classLookup.lookupClass();

final var hello = helloClass.getConstructor().newInstance();
hello.hello();

We use the SourceBuddy library (version 2.1.0) in this code to compile the Java source to byte code; the first line of the sample does that. We need a lookup object to load the compiled byte code as a hidden class. This object is created on the second line and then used to load the class hidden: the call to defineHiddenClass loads the class into the JVM. The second argument, true, initializes the class; that is when the static{} blocks execute. The last line invokes the interface-defined method hello(). The local variable hello now holds an instance of a hidden class.

What are a hidden class's name, simple name, and canonical name? Let's print them out.

Java

System.out.println("1. " + hello.getClass());
System.out.println("2. " + hello.getClass().getClassLoader());
System.out.println("3. " + this.getClass().getClassLoader());
System.out.println("4. " + hello.getClass().getSimpleName());
System.out.println("5. " + hello.getClass().getName());
System.out.println("6. " + hello.getClass().getCanonicalName());
System.out.println("7. " + lookup.getClass());
System.out.println("8. " + lookup.getClass().getClassLoader());

Output Disclaimer The output in the unit tests is redirected by calling System.setOut(). The output is collected in a file, and then this file is included in the article by calling the include [verbatim] Jamal macro.
Hello, from the hidden class.
1. class com.javax0.blog.hiddenclasses.MySpecialClass/0x00000008011b0c00
2. jdk.internal.loader.ClassLoaders$AppClassLoader@5b37e0d2
3. jdk.internal.loader.ClassLoaders$AppClassLoader@5b37e0d2
4. MySpecialClass/0x00000008011b0c00
5. com.javax0.blog.hiddenclasses.MySpecialClass/0x00000008011b0c00
6. null

7. class java.lang.invoke.MethodHandles$Lookup
8. null

You can see the output from calling hello(), then the name as printed by the implicit toString() of the class object (1), the class loader that loaded the hidden class (2), the class loader of the test class (3), the simple name (4), the name (5), and the canonical name (6). The canonical name is interesting: it is null, showing that there is no class name. It is hidden.

The class, although hidden, has a reference to the class loader that loaded it. It is needed when there is anything to resolve during the execution of the code. The difference is that the class loader does not have a reference to the class. The direction from the class to the loader exists, but the direction from the loader to the class does not. The class loader is the same as the one that loaded the class calling MethodHandles.lookup(); you can see that because we printed out the class loader of the this object in the test (3). Finally, we also print out the class of the lookup object (7) and its class loader (8). The latter is null, which means the bootstrap class loader loaded it. (For more information on class loaders, I recommend the class loaders article on the Baeldung blog.)

You should also note that the interface Hello is package private. It is still visible to the dynamically created code because it is in the same package and module.

Note Starting with Java 9, there is a module system in Java. Many developers I meet say they are not interested in JPMS; they do not need to use it. The fact is that you DO use it, whether you want to or not. It is the same as concurrent programming.
Java is concurrent; there are at least three threads in a JVM, so your code runs in a concurrent environment whether you want it or not. You may not have trouble understanding the details for a long time. However, when you start digging deeper and creating code that uses some "tricks" or does something special, you will almost certainly face some weird errors. You must know and understand the underlying theory to understand the errors, handle them, mitigate the cause, and fix the bug. Loading dynamically created classes hidden is precisely such a trick. You should learn Java Modules.

When the hidden class is loaded, it is in the same package as the one where the interface is defined. That is not enough, however, as we will see in an example in the next section. It is also a requirement that the same class loader loads the interface and the hidden class. That way, the interface and the hidden class are in the same module, in this case, the same unnamed module. Different class loaders load classes into different modules; thus, when you load a class using a different class loader, it may not see the package-private fields, methods, interfaces, etc., even if they are in a package with the same name.

It is not only a requirement that the lookup object come from the same module; it must also come from the same package as the class to be loaded. We must stop here to clarify things, to be painfully precise, because it is easy to confuse them at this point. The lookup object is an instance of a class in the java.lang.invoke package. The class loader of this class is printed as null in the output, which means the bootstrap class loader. The bootstrap class loader is implemented in C/C++ and not in Java. No corresponding Java object represents this class loader; thus, there cannot be a reference to it. This is solved by returning null from getClassLoader(). There is, however, a module, package, and class that "belongs" to the lookup object.
These are the module, package, and class of the code that called the MethodHandles.lookup() method. You cannot create a hidden class from one package for another. Try it with the following sample code, still from the test class com.javax0.blog.hiddenclasses.TestHiddenClassLoader:

Java

try {
    final var byteCode = Compiler.java()
            .from("package B; class A{}").compile().get();
    MethodHandles.lookup().defineHiddenClass(byteCode, true);
} catch (Throwable t) {
    System.out.println(t);
}

The class to be loaded is NOT in the same package as the caller of MethodHandles.lookup(). It results in the printout:

java.lang.IllegalArgumentException: B.A not in same package as lookup class

Creating Hidden Classes the Easy Way

In the previous section, we created a new class dynamically and loaded it hidden. The loading was done using lookup objects we acquired from the MethodHandles class. In this section, we will see how we can do the same by calling the fluent API of SourceBuddy. The code creating a class saying hello is the following:

Java

final var hello = Compiler.java()
        .from(CODE1.replaceAll("\\.Hello", ".PublicHello")).hidden()
        .compile().load().newInstance(PublicHello.class);
hello.hello();

In this code, we replaced the interface Hello with PublicHello, which, as you may guess, is:

Java

public interface PublicHello {
    void hello();
}

It is essentially the same as the previous interface, but it is public. The process is much more straightforward than before: we specify the source code, declare that it is a hidden class by calling hidden(), and then compile, load, and ask for an instance cast to PublicHello. If we try to use the package-private interface instead (not replacing Hello with PublicHello):

Java

Assertions.assertThrows(IllegalAccessError.class, () ->
        Compiler.java().from(CODE1).hidden().compile().load().newInstance(PublicHello.class));

we will get an error.
java.lang.IllegalAccessError: class com.javax0.blog.hiddenclasses.MySpecialClass/0x00000008011b1c00 cannot access its superinterface com.javax0.blog.hiddenclasses.TestHiddenClassLoader$Hello (com.javax0.blog.hiddenclasses.MySpecialClass/0x00000008011b1c00 is in unnamed module of loader com.javax0.sourcebuddy.ByteClassLoader @4e5ed836; com.javax0.blog.hiddenclasses.TestHiddenClassLoader$Hello is in unnamed module of loader 'app')

The reason is explained clearly in the error message. The interface and the class implementing it are in two different modules. Both are unnamed modules, but they are not the same one. Starting with Java 9, there are modules in Java, and when the application does not use modules, its classes end up in unnamed modules. The JDK classes are still in named modules, like java.base. The hidden class creation, as performed above, uses a separate class loader to load the dynamically written Java class, and that separate class loader loads classes into its own unnamed module. Code in one module cannot see classes from other modules unless they are public. Although SourceBuddy does a little trick to load a class hidden, it cannot overcome this restriction.

Loading a hidden class needs a lookup object. The application usually provides this object. The calls above do not specify any lookup object, but SourceBuddy still needs one, so it creates one. The lookup object remembers the class that called MethodHandles.lookup() to create it. When loading a class hidden, the lookup object must "belong" to the class's package: it was created by calling MethodHandles.lookup() from a class in that package, so it "belongs" to that class and hence to the class's package. To have a lookup object that comes from a specific package, we need a class in that package that can give us one. If there is none in the code, we must create one dynamically. That is exactly what SourceBuddy does.
It creates the Java source code for the class, compiles and loads it, instantiates it, and calls the Supplier<MethodHandles.Lookup>-defined get() method the class implements. It is a kind of trick that seems to violate the access control built into Java: we seem to get a new hidden class into a package that was not prepared for it. A package is protected from external access in Java (trivially): only public and protected members and classes can be used from outside the package. The package can be accessed using reflection from the outside, but only from the same module, or the module has to be opened explicitly. Similarly, a class loaded using a lookup object should be in the same package, with access to the package's internal members and whatnot, only if a class in the package provided that lookup object. As we can see from the error message, it only seems to be the same package. In reality, the new hidden class is in a package with the same name but in a different module.

If you want a hidden class in the same package, and not only in a package with the same name, you need a lookup object from that package. In our example, it is simple. Our Hello interface is in the same package as the test code, so we can create the lookup object ourselves:

Java

final var hi = Compiler.java().from(CODE1).hidden(MethodHandles.lookup()).compile()
        .load().newInstance(Hello.class);
hi.hello();

Access to a lookup object may be a bit more complex in real-life examples. When the code calling SourceBuddy is in a different package than the generated code, the lookup object creation cannot happen in the SourceBuddy-calling code. The following example shows how that can be done. We have a class OuterClass in the package com.javax0.blog.hiddenclasses.otherpackage.

Java

package com.javax0.blog.hiddenclasses.otherpackage;

import java.lang.invoke.MethodHandles;

public class OuterClass {

    public static MethodHandles.Lookup lookup() {
        return MethodHandles.lookup();
    }
}

Note Some lines are skipped from the class. We will use those later.

This class has a method, lookup(). It creates a lookup object and returns it. If we call this method from our code, we will have a proper lookup object. Note that this class is in a different package than our test code. Our test code is in com.javax0.blog.hiddenclasses, and OuterClass is a package deeper: essentially, a different package. We also have another class for the demonstration.

Java

package com.javax0.blog.hiddenclasses.otherpackage;

class MyPackagePrivateClass {

    void sayHello(){
        System.out.println("Hello from package private.");
    }

}

It is a package-private class with a package-private method in it. If we dynamically create a hidden class, as in the following example:

Java

final var hidden = Compiler.java().from("""
        package com.javax0.blog.hiddenclasses.otherpackage;

        public class AnyName_ItWillBeDropped_Anyway {
            public void hi(){
                new MyPackagePrivateClass().sayHello();
            }
        }""").hidden(OuterClass.lookup()).compile().load().newInstance();
final var hi = hidden.getClass().getDeclaredMethod("hi");
hi.invoke(hidden);

it will work.

There is one topic that we have not touched on yet: how to create a nestmate. When you have a binary class file, you can load it as a nestmate of a class that provides a lookup object. The JVM does not care how that class file was created. When we compile Java sources, however, we have only one possibility: the class has to be an inner class. When you use SourceBuddy, you have to provide your source code as an inner class of the one you want the hidden class to be a nestmate of. But the source code of that class was already provided and compiled; it is not possible to insert any new inner class into THAT source code. We have to fool the compiler: we provide a class with the same name as the original and put our inner class inside it.
When the compilation is done, we have both the outer class and the inner class. We tell the class loading to forget the outer class and load only the inner one, hidden. That is what we will do. This time, we display the whole outer class used for the demonstration, including the previously skipped lines.

Java

package com.javax0.blog.hiddenclasses.otherpackage;

import java.lang.invoke.MethodHandles;

public class OuterClass {

    // skip lines
    private int z = 55;

    public int getZ() {
        return z;
    }
    // end skip
    public static MethodHandles.Lookup lookup() {
        return MethodHandles.lookup();
    }
}

As you can see, it has a private field and a getter so that we can effectively test the changed value. It also has the before-mentioned lookup() method. The code dynamically creating an inner class is the following:

Java

final var inner = Compiler.java().from("""
        package com.javax0.blog.hiddenclasses.otherpackage;

        public class OuterClass
        {
            private int z;

            public static class StaticInner {
                public OuterClass a(){
                    final var outer = new OuterClass();
                    outer.z++;
                    return outer;
                }
            }

        }""").nest(MethodHandles.Lookup.ClassOption.NESTMATE).compile().load()
        .newInstance("StaticInner");
final var m = inner.getClass().getDeclaredMethod("a");
final var outer = (OuterClass)m.invoke(inner);
Assertions.assertEquals(56, outer.getZ());

There is an OuterClass in the source, but it is only there to help the compilation and to tell SourceBuddy the name of the nest host. When we call the method nest() with the option NESTMATE, SourceBuddy knows that the class OuterClass is the nest host. It also marks the outer class so that the class loader never actually loads it. The inner class compiles to different byte code, and when it is loaded, it becomes a nestmate of OuterClass. If you have paid attention to the intricate details of Java access control discussed in this article, you will notice that we do not provide a lookup object.
And the example above still works. How is that possible? There is no magic. When you call nest(), SourceBuddy looks for the already loaded version of OuterClass and fetches the lookup object using reflection. For this to work, the outer class has to have a static field or method of type MethodHandles.Lookup. OuterClass has such a method, so SourceBuddy calls it to get the lookup object.

The example above creates a static inner class. You can create a non-static inner class the same way.

Note The difference between static and non-static inner classes in Java is that non-static inner class instances have a reference to an outer class instance, while static inner classes do not. That is where the name comes from: static inner class instances belong to the class; non-static instances belong to an instance of the outer class. To get the reference to the outer class instance, the inner class's constructor is modified. When you specify a constructor for an inner class, the compiler adds an extra parameter in front of the parameters specified in the Java source code. This extra first parameter is the reference to the outer class instance. The reference is stored in a field that is not available at the source level but is used by the code to access the fields and methods of the outer instance.

The creation of a non-static inner class looks very much the same as the creation of a static inner class:

Java

final var outer = new OuterClass();
final var inner = Compiler.java().from("""
        package com.javax0.blog.hiddenclasses.otherpackage;

        public class OuterClass {
            private int z;

            public class Inner {
                public void a(){
                    z++;
                }
            }

        }""").nest(MethodHandles.Lookup.ClassOption.NESTMATE).compile().load()
        .newInstance("Inner", classes(OuterClass.class), args(outer));
final var m = inner.getClass().getDeclaredMethod("a");
m.invoke(inner);

Assertions.assertEquals(56, outer.getZ());

We need an instance of the outer class to instantiate the inner class: that is the variable outer. We must pass this variable to the constructor through the newInstance() API of SourceBuddy. This method has a version that accepts a Class[] and an Object[] array specifying the constructor argument types and values. In the case of an inner class, these are the outer class and an instance of it.

Summary

This article discussed some details of the hidden classes introduced in Java 15. We went a little deeper than the usual introductory articles. Now you understand how hidden classes work and how to use them in your projects.

By Peter Verhas
The Beauty of Java Optional and Either

Many of us Java developers, particularly beginners, often overlook Java's functional programming capabilities. In this article, we'll look at how to chain Optional and Either to write concise and beautiful code.

To illustrate, let's assume we have a bank where a user can have zero or more accounts. The entities look as below:

Java

record User(
    int id,
    String name
) {}

record Account(
    int id,
    User user // a user can have zero or more accounts
) {}

For fetching the entities, the repositories look as below:

Java

interface UserRepository {
    Optional<User> findById(int userId);
}

interface AccountRepository {
    Optional<Account> findAnAccountOfUser(User user); // given a user, find any account of it, if it has one
}

Now, get ready for a couple of assignments!

First Assignment

Let's code a method Optional<Account> findAnAccountByUserId(Integer userId) that would:

Given a userId, return any one account of the user, if there is one
If either there is no user with the given id, or there is no account of the user, return an empty Optional

A novice solution could be as follows:

Java

public Optional<Account> findAccountByUserId(int userId) {
    Optional<User> possibleUser = userRepository.findById(userId);
    if (possibleUser.isEmpty())
        return Optional.empty();
    var user = possibleUser.orElseThrow();
    return accountRepository.findAnAccountOfUser(user);
}

But then the map method of Optional strikes our mind! Instead of checking possibleUser.isEmpty(), we could just map the user, if present, to an account:

Java

public Optional<Account> findAccountByUserId(int userId) {
    return userRepository
        .findById(userId)
        .map(accountRepository::findAnAccountOfUser);
}

We end up with a compilation error, because accountRepository.findAnAccountOfUser(user) returns an Optional<Account>, whereas the map method above needs an Account. For exactly this use case, Optional provides a flatMap method, which flattens nested Optionals. So, changing map to flatMap would work.
Java public Optional<Account> findAccountByUserId(int userId) { return userRepository .findById(userId) .flatMap(accountRepository::findAnAccountOfUser); } Cool! Get ready for a more complex assignment. Second Assignment When a user/account is not found, instead of returning an empty optional, how about indicating exactly what was not found: user or account? We could approach this problem in a few ways: Throw Exceptions We could define some custom exceptions, viz. UserNotFoundException and AccountNotFoundException, and throw those: Java public Account findAccountByUserIdX(int userId) { var user = userRepository.findById(userId).orElseThrow(UserNotFoundException::new); return accountRepository.findAnAccountOfUser(user).orElseThrow(AccountNotFoundException::new); } However, using exceptions for expected cases is considered an anti-pattern: Googling will get you numerous articles about the subject. So let's avoid that. Use a Result Interface Another approach would be returning a custom Result object instead of returning Optional; i.e., Result findAnAccountByUserId(Integer userId). The result would be an interface that would be implemented by custom error classes, as well as Account and User. Use Either A third approach, which I find simpler, is to return an Either instead of Optional. Whereas an Optional holds zero or one value, an Either holds either of two values. It's typically used to hold either an error or a success value. Unlike Optional, you don't get a Java Either implementation out of the box. There are quite a few libraries. I prefer using jbock-java/either because it's lightweight and simple. So, let's first define the error interface and classes: Java interface Error {} record UserNotFound() implements Error {} record AccountNotFound() implements Error {} Let's now attempt coding: Java public Either<Error, Account> findAccountByUserId(int userId) { ... 
} Did you notice above that we used Error as the left generic parameter, whereas Account as the right one? That wasn't accidental: the convention when using Either is that the left is used for errors whereas the right is used for success values. Either has a similar functional API like Optional. For example, we have map and flatMap for mapping the success values, whereas we have mapLeft and flatMapLeft for mapping the errors. We also have utility methods like Either.left(value) and Either.right(value) to create Either objects. Do have a look at its API. It has many interesting features for functional programming. So, continuing our journey, we could first create an either having the user or error as below: Java public Either<Error, Account> findAccountByUserId(int userId) { var eitherErrorOrUser = userRepository .findById(userId) .map(Either::<Error, User>right) .orElse(Either.left(new UserNotFound())) ... } Lines 4 and 5 above convert an Optional<User> to Either<UserNotFound, User>. Because converting an Optional to an Either would be a common use case, let's code a utility method for it: Java public class EitherUtils { public static <L, R> Either<L, R> of(Optional<R> possibleValue, Supplier<L> errorSupplier) { return possibleValue.map(Either::<L, R>right).orElseGet(() -> Either.<L, R>left(errorSupplier.get())); } } It takes the optional and an errorSupplier. The errorSupplier is used for composing the error if the optional is empty. Using it, our code now looks like this: Java public Either<Error, Account> findAccountByUserId(int userId) { var eitherErrorOrUser = EitherUtils .<Error, User>of(userRepository.findById(userId), UserNotFound::new) ... } Next, as above, eitherErrorOrUser could be mapped for an account in a similar way. 
The complete solution would then look like this:

Java

public Either<Error, Account> findAccountByUserId(int userId) {
    return EitherUtils
        .<Error, User>of(userRepository.findById(userId), UserNotFound::new)
        .flatMap(user -> of(accountRepository.findAnAccountOfUser(user), AccountNotFound::new));
}

Looks cute, doesn't it?

Summary

Consider using the functional programming capabilities of Java wherever possible, and make your code cute and concise!
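As a closing aside: if you would rather not add a dependency, sealed interfaces and records make a minimal Either easy to sketch yourself. This is illustrative only; it is not the jbock-java/either API:

```java
import java.util.Optional;
import java.util.function.Function;
import java.util.function.Supplier;

// A minimal Either: by convention, Left carries the error, Right the success value.
sealed interface Either<L, R> permits Either.Left, Either.Right {

    record Left<L, R>(L error) implements Either<L, R> {}
    record Right<L, R>(R value) implements Either<L, R> {}

    // Convert an Optional to an Either, supplying the error lazily.
    static <L, R> Either<L, R> of(Optional<R> possibleValue, Supplier<L> errorSupplier) {
        return possibleValue.<Either<L, R>>map(Right::new)
                .orElseGet(() -> new Left<>(errorSupplier.get()));
    }

    default <R2> Either<L, R2> flatMap(Function<? super R, Either<L, R2>> mapper) {
        if (this instanceof Right<L, R> right) {
            return mapper.apply(right.value());
        }
        return new Left<>(((Left<L, R>) this).error());
    }
}

class EitherDemo {
    public static void main(String[] args) {
        Either<String, Integer> ok = Either.of(Optional.of(21), () -> "user not found");
        System.out.println(ok.flatMap(x -> new Either.Right<>(x * 2)));  // Right[value=42]
        System.out.println(Either.of(Optional.empty(), () -> "user not found"));
    }
}
```

A real library adds many more combinators (mapLeft, fold, etc.), but the core idea fits in a few lines.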

By Sanjay Patel
3 Ways That You Can Operate Record Beyond DTO [Video]

The record feature has arrived in Java 17, the latest LTS version! Records allow making an immutable class without boilerplate. That is awesome! The question is: how can we use it? In general, we have seen a couple of samples with DTOs, but we can do more than that. In this tutorial and video, we'll explore design capabilities of records beyond DTOs.

DTO

We won't focus on it here, but it is worth mentioning that a DTO is a good sample use of a record, not a unique case. It does not matter whether you use Spring, MicroProfile, or Jakarta. Currently, we have several sample cases, listed below:

MapStruct
Jakarta JSON-B
Spring

Value Objects or Immutable Types

In DDD, value objects represent a concept from your problem domain. Those classes are immutable, such as Money, Email, etc. Since value objects are immutable, records fit them perfectly. In our first sample, we'll create an Email that only needs validation:

Java

public record Email(String value) {
}

As with any value object, you can add methods and behavior, but the result should be a different instance. Imagine we create a Money type and want an add operation. We'll add a method that checks that both amounts have the same currency and then returns a new instance:

Java

public record Money(Currency currency, BigDecimal value) {

    Money add(Money money) {
        Objects.requireNonNull(money, "Money is required");
        if (currency.equals(money.currency)) {
            BigDecimal result = this.value.add(money.value);
            return new Money(currency, result);
        }
        throw new IllegalStateException("You cannot sum money with different currencies");
    }
}

Money here is only a sample, mainly because Java has a money specification, JavaMoney, and a famous library, Joda-Money, that you can use. The point is that when you need a value object or an immutable type, a record fits perfectly.

Immutable Entities

But wait: did you say immutable entities? Is that possible?
It is not usual, but it happens, such as when the entity holds a historic, transitional point. Can an entity be immutable? Check Eric Evans's definition of an entity in Chapter 5 of Domain-Driven Design:

An entity is anything that has continuity through a life cycle and distinctions independent of attributes essential to the application's user.

Being an entity is not about being mutable or not; it is about the relation to the domain. Thus, we can have immutable entities, although, again, it is not usual. There is a discussion on Stack Overflow about this question.

Let's create an entity, Book, with an ID, a title, and a release year. What happens if you want to edit a book? We don't: we need to create a new edition. Therefore, we'll also add an edition field.

Java

public record Book(String id, String title, Year release, int edition) {}

OK, but we also need validation; otherwise, this book will have inconsistent data in it. It does not make sense to have null values for the id, title, or release, or a non-positive edition. With a record, we can use the compact constructor and put the validations in it:

Java

public Book {
    Objects.requireNonNull(id, "id is required");
    Objects.requireNonNull(title, "title is required");
    Objects.requireNonNull(release, "release is required");
    if (edition < 1) {
        throw new IllegalArgumentException("Edition cannot be negative");
    }
}

We can overwrite the equals, hashCode, and toString methods if we wish. Indeed, let's overwrite the equals and hashCode contracts to operate on the id field:

Java

@Override
public boolean equals(Object o) {
    if (this == o) {
        return true;
    }
    if (o == null || getClass() != o.getClass()) {
        return false;
    }
    Book book = (Book) o;
    return Objects.equals(id, book.id);
}

@Override
public int hashCode() {
    return Objects.hashCode(id);
}

To make creating this class easier, especially when you have more complex objects, you can either create a factory method or define a builder.
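A hand-written builder is straightforward. The following is an illustrative sketch; the BookBuilder class and its defaults are our assumptions, not necessarily the article's exact code:

```java
import java.time.Year;
import java.util.Objects;

public record Book(String id, String title, Year release, int edition) {

    public Book {
        Objects.requireNonNull(id, "id is required");
        Objects.requireNonNull(title, "title is required");
        Objects.requireNonNull(release, "release is required");
        if (edition < 1) {
            throw new IllegalArgumentException("Edition cannot be negative");
        }
    }

    public static BookBuilder builder() {
        return new BookBuilder();
    }

    // A plain fluent builder; each setter returns this for chaining.
    public static class BookBuilder {
        private String id;
        private String title;
        private Year release;
        private int edition = 1; // first edition by default

        public BookBuilder id(String id) { this.id = id; return this; }
        public BookBuilder title(String title) { this.title = title; return this; }
        public BookBuilder release(Year release) { this.release = release; return this; }
        public BookBuilder edition(int edition) { this.edition = edition; return this; }

        public Book build() {
            // The record's canonical constructor runs the validations above.
            return new Book(id, title, release, edition);
        }
    }
}
```

In a real project, you might generate such a builder with a library instead of writing it by hand.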
The code below shows the builder usage on the Book record:

Java

Book book = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();

At the end of our immutable entity with a record, we'll also include the change method, where we need to move the book to a new edition. In the next step, we'll create the second edition of Effective Java. We cannot change the fact that there once was a first edition of this book; this historical part belongs to our library business.

Java

Book first = Book.builder().id("id").title("Effective Java").release(Year.of(2001)).build();
Book second = first.newEdition("id-2", Year.of(2009));

Currently, JPA cannot support immutable entities for compatibility reasons, but we can explore them with NoSQL APIs such as Eclipse JNoSQL and Spring Data MongoDB. We covered many of those topics; therefore, let's move on to another design pattern that shapes our code design.

State Implementation

There are circumstances where we need to implement a flow or a state inside the code. We'll explore the state design pattern in an e-commerce context where we have an order and need to keep the chronological flow of that order. Naturally, we want to know when it was ordered, delivered, and finally received by the user.

The first step is creating the interface. To keep it simple, we'll use String to represent products, but in real code you would need a whole object for it:

Java

public interface Order {

    Order next();

    List<String> products();
}

With this interface ready to use, let's create implementations that follow the flow and return the products. We want to avoid any change to the products; thus, we'll override the products() method from the record to return a read-only list.
Java

public record Ordered(List<String> products) implements Order {

    public Ordered {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        return new Delivered(products);
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}

public record Delivered(List<String> products) implements Order {

    public Delivered {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        return new Received(products);
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}

public record Received(List<String> products) implements Order {

    public Received {
        Objects.requireNonNull(products, "products is required");
    }

    @Override
    public Order next() {
        throw new IllegalStateException("We finished our journey here");
    }

    @Override
    public List<String> products() {
        return Collections.unmodifiableList(products);
    }
}

Now that the states are implemented, let's change the Order interface. First, we'll create a static method to start an order. Then, to ensure that no intruder state can sneak in, we'll block new Order implementations and only allow the ones we have; therefore, we'll use the sealed interface feature.

Java

public sealed interface Order permits Ordered, Delivered, Received {

    static Order newOrder(List<String> products) {
        return new Ordered(products);
    }

    Order next();

    List<String> products();
}

We made it! We'll test the code with a list of products. As you can see, we have our flow exploring the capabilities of records.

Java

List<String> products = List.of("Banana");
Order order = Order.newOrder(products);
Order delivered = order.next();
Order received = delivered.next();
Assertions.assertThrows(IllegalStateException.class, () -> received.next());

The state with an immutable class allows you to reason about transactional moments, such as an entity, or to generate an event in an event-driven architecture.
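Since Order is sealed, the compiler knows every possible state, which also enables exhaustive branching over them. The sketch below assumes Java 21+ switch pattern matching; the describe helper is a hypothetical addition to illustrate the point, not part of the article's design:

```java
import java.util.Collections;
import java.util.List;

sealed interface Order permits Ordered, Delivered, Received {

    static Order newOrder(List<String> products) {
        return new Ordered(products);
    }

    Order next();

    List<String> products();

    // No default branch needed: the compiler verifies all permitted states are covered
    static String describe(Order order) {
        return switch (order) {
            case Ordered o -> "Ordered: " + o.products();
            case Delivered d -> "Delivered: " + d.products();
            case Received r -> "Received: " + r.products();
        };
    }
}

record Ordered(List<String> products) implements Order {
    public Order next() { return new Delivered(products); }
    public List<String> products() { return Collections.unmodifiableList(products); }
}

record Delivered(List<String> products) implements Order {
    public Order next() { return new Received(products); }
    public List<String> products() { return Collections.unmodifiableList(products); }
}

record Received(List<String> products) implements Order {
    public Order next() { throw new IllegalStateException("We finished our journey here"); }
    public List<String> products() { return Collections.unmodifiableList(products); }
}
```

If a fourth state were ever added to the permits clause, this switch would stop compiling until it handled the new case, which is exactly the safety the sealed hierarchy buys us.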
Video Check out the video below to learn more about records:

By Otavio Santana CORE
Null Safety: Kotlin vs. Java

Last week, I was at the FOSDEM conference. FOSDEM is distinctive in that it has multiple rooms, each dedicated to a different theme and organized by its own team. I had two talks: Practical Introduction to OpenTelemetry Tracing, in the Monitoring and Observability devroom, and What I miss in Java, the perspective of a Kotlin developer, in the Friends of OpenJDK devroom. The second talk is based on an earlier post. Martin Bonnin tweeted a single slide from it, and it created quite a stir, even attracting Brian Goetz. In this post, I'd like to expand on the problem of nullability, how it's solved in Kotlin and Java, and add my comments to the Twitter thread.

Nullability

I guess that everybody in software development with more than a couple of years of experience has heard the following quote:

I call it my billion-dollar mistake. It was the invention of the null reference in 1965. At that time, I was designing the first comprehensive type system for references in an object oriented language (ALGOL W). My goal was to ensure that all use of references should be absolutely safe, with checking performed automatically by the compiler. But I couldn't resist the temptation to put in a null reference, simply because it was so easy to implement. This has led to innumerable errors, vulnerabilities, and system crashes, which have probably caused a billion dollars of pain and damage in the last forty years. -- Tony Hoare

The basic idea behind null is that one can define an uninitialized variable. If one calls a member on such a variable, the runtime locates the memory address of the variable... and fails to dereference it because there's nothing behind it. Null values are found in many programming languages under different names: Python has None; JavaScript has null, and so do Java, Scala, and Kotlin; Ruby has nil; etc. Some languages, such as Rust, do not allow uninitialized values at all.

Null Safety in Kotlin

As I mentioned, Kotlin does allow null values. However, they are baked into the type system.
In Kotlin, every type X actually comes in two variants:

X, which is non-nullable. No variable of type X can be null; the compiler guarantees it.

Kotlin

val str: String = null

The code above won't compile.

X?, which is nullable.

Kotlin

val str: String? = null

The code above does compile.

If Kotlin allows null values, why do its proponents tout its null safety? The compiler refuses to call members on possibly null values, i.e., nullable types.

Kotlin

val str: String? = getNullableString()
val int: Int? = str.toIntOrNull() // Doesn't compile

The way to fix the above code is to check whether the variable is null before calling its members:

Kotlin

val str: String? = getNullableString()
val int: Int? = if (str == null) null else str.toIntOrNull()

The above approach is pretty boilerplate-y, so Kotlin offers the null-safe operator to achieve the same:

Kotlin

val str: String? = getNullableString()
val int: Int? = str?.toIntOrNull()

Null Safety in Java

Now that we have described how Kotlin manages null values, it's time to check how Java does it. First, there are neither non-nullable types nor null-safe operators in Java. Thus, every variable can potentially be null and should be treated as such.

Java

MyString str = getMyString(); // String has no toIntOrNull() method, so let's pretend
                              // MyString is a wrapper type that delegates to String
Integer anInt = null;         // A mutable reference is necessary
if (str != null) {
    anInt = str.toIntOrNull();
}

If you chain multiple calls, it's even worse, as every return value can potentially be null. To be on the safe side, we need to check whether the result of each method call is null. The following snippet may throw a NullPointerException:

Java

var baz = getFoo().getBar().getBaz();

Here's the fixed but much more verbose version:

Java

var foo = getFoo();
Bar bar = null;
Baz baz = null;
if (foo != null) {
    bar = foo.getBar();
    if (bar != null) {
        baz = bar.getBaz();
    }
}

For this reason, Java 8 introduced the Optional type.
Optional is a wrapper around a possibly null value. Other languages call it Maybe, Option, etc. The Java language's designers advise that a method return:

Type X, if the value cannot be null
Type Optional<X>, if the value can be null

If we change the return types of all the above methods to Optional, we can rewrite the code in a null-safe way - and get immutability on top:

Java

final var baz = getFoo().flatMap(Foo::getBar)
                        .flatMap(Bar::getBaz)
                        .orElse(null);

My main issue with this approach is that the Optional itself could be null: the language doesn't guarantee that it isn't. Also, it's not advised to use Optional for method input parameters. To cope with this, annotation-based libraries have popped up:

JSR 305: javax.annotation: @Nonnull / @Nullable
Spring: org.springframework.lang: @NonNull / @Nullable
JetBrains: org.jetbrains.annotations: @NotNull / @Nullable
FindBugs: edu.umd.cs.findbugs.annotations: @NonNull / @Nullable
Eclipse: org.eclipse.jdt.annotation: @NonNull / @Nullable
Checker Framework: org.checkerframework.checker.nullness.qual: @NonNull / @Nullable
JSpecify: org.jspecify: @NonNull / @Nullable
Lombok: lombok: @NonNull / (no nullable annotation)

However, different libraries work in different ways: Spring produces WARNING messages at compile time; FindBugs requires a dedicated execution; Lombok generates code that adds a null check but throws a NullPointerException anyway if the value is null; etc. Thanks to Sébastien Deleuze for mentioning JSpecify, which I didn't know previously. It's an industry-wide effort to deal with the current mess. Of course, the famous XKCD comic about competing standards immediately comes to mind. I still hope it will work out!

Conclusion

Java was conceived when null safety was not a big concern; hence, NullPointerException occurrences are common. The only safe solution is to wrap every method call in a null check. It works, but it's boilerplate-y and makes the code harder to read.
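To make the flatMap chain above concrete, here is a self-contained sketch; Foo, Bar, and Baz are hypothetical types standing in for whatever a real domain would return:

```java
import java.util.Optional;

public class OptionalChain {

    record Baz(String value) {}

    record Bar(Baz baz) {
        Optional<Baz> getBaz() { return Optional.ofNullable(baz); }
    }

    record Foo(Bar bar) {
        Optional<Bar> getBar() { return Optional.ofNullable(bar); }
    }

    static Optional<Foo> getFoo() {
        // In real code this would come from a repository or service call
        return Optional.of(new Foo(new Bar(new Baz("hello"))));
    }

    public static void main(String[] args) {
        // Each flatMap short-circuits to an empty Optional instead of throwing an NPE
        final var baz = getFoo().flatMap(Foo::getBar)
                                .flatMap(Bar::getBaz)
                                .map(Baz::value)
                                .orElse(null);
        System.out.println(baz); // prints "hello"
    }
}
```

If any link in the chain returned Optional.empty(), the whole expression would quietly evaluate to the orElse fallback rather than failing at runtime.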
Multiple alternatives are available, but they have issues: they aren't bulletproof, they compete with each other, and they work very differently. Developers praise Kotlin for its null safety: it's the result of a null-handling mechanism baked into the language design. Java will never be able to compete with Kotlin in this regard, as Java's language architects value backward compatibility over code safety. It's their decision, and it's probably a good one when one remembers the pain of the migration from Python 2 to Python 3. However, as a developer, it makes Kotlin a much more attractive option than Java to me. To go further: Are there languages without "null"?; Kotlin nullable types and non-null types; JSpecify. Originally published at A Java Geek on February 12th, 2023

By Nicolas Fränkel CORE
Connection Pooling With BoneCP, DBCP, and C3PO [Video Tutorials]

This series of video tutorials will help readers better understand connection pooling with BoneCP, DBCP, and C3P0 with Oracle, Tomcat, and Java Servlets.

Video Tutorials

BoneCP Connection Pooling

The video below goes into detail about connection pooling with BoneCP, including: Oracle database; MySQL.

C3P0 Connection Pooling

The video below goes into detail about connection pooling with C3P0, including: Naming services; Configuring databases; MySQL.

BoneCP, DBCP, and C3P0 Connection Pooling

The video below covers connection pooling with BoneCP, DBCP, and C3P0, including: ContextListener; MySQL; Data sources; JNDI lookup.

Conclusion

By now, you should have a better understanding of how connection pooling works in each implementation. Hopefully, you have taken away some valuable information about connection pooling with BoneCP, DBCP, and C3P0 with Oracle, Tomcat, and Java Servlets.
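The libraries covered in the videos differ in features, but the core idea they share is small: keep a fixed set of expensive objects (database connections) and hand them out and take them back, instead of opening and closing a connection for every request. A minimal, hypothetical sketch of that idea (this is not the API of BoneCP, DBCP, or C3P0) could look like this:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.function.Supplier;

// A toy fixed-size pool; real pools add connection validation, timeouts,
// eviction of stale connections, and statistics
public class SimplePool<T> {

    private final BlockingQueue<T> idle;

    public SimplePool(int size, Supplier<T> factory) {
        this.idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(factory.get()); // expensive objects are created once, up front
        }
    }

    public T borrow() throws InterruptedException {
        return idle.take(); // blocks until another caller releases a resource
    }

    public void release(T resource) {
        idle.offer(resource); // hand the resource back for reuse
    }
}
```

With JDBC, T would be java.sql.Connection and the factory would call DriverManager.getConnection(...); borrowing from a warm pool then avoids the TCP handshake and authentication cost of a fresh connection on every request.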

By Ram N
Spring Cloud: How To Deal With Microservice Configuration (Part 1)

Configuring a software system built in a monolithic approach does not pose particular problems: to make the configuration properties available to the system, we can store them in a file inside an application folder, somewhere in the filesystem, or as OS environment variables. Microservice configuration is a more complex subject. We have to deal with a potentially huge number of totally independent services, each with its own configuration. We could even face a scenario in which several instances of the same service need different configuration values. In such a situation, a way to centralize and simplify configuration management is of great importance. Spring Cloud has its own module to solve these problems, named Spring Cloud Config. This module provides an implementation of a server that exposes an API to retrieve the configuration information, usually stored in some remote repository like Git, and, at the same time, gives us the means to implement the client side meant to consume the services of that API. In the first part of this article, we will discuss the basic features of this Spring Cloud module and store the configuration in the configuration server classpath. In part two, we will show how to use other, more effective repository options, like Git, and how to refresh the configuration without restarting the services. Then, in later posts, we will show how the centralized configuration can be coupled with service discovery features to set a solid base for the whole microservice system. Microservice Configuration—Spring Cloud Config Server Side The first component we need in a distributed configuration scenario is a server meant to provide the configuration information for the services.
To implement such a server component with Spring Cloud Config, we have to use the right Spring Boot “starter” dependency, as in the following configuration fragment:

XML

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>

Then, we have to annotate the Spring Boot main class with the @EnableConfigServer annotation:

Java

@SpringBootApplication
@EnableConfigServer
public class AppMain {
    public static void main(String[] args) {
        new SpringApplicationBuilder(AppMain.class).run(args);
    }
}

The Spring Cloud Config server, according to the auto-configuration features of Spring Boot, runs on the default 8080 port like all Spring Boot applications. If we want to customize it, we can do so in the application.properties or application.yml file:

YAML

server:
  port: ${PORT:8888}
spring:
  application:
    name: config-server

If we run the application with the above configuration, it will use the 8888 port by default. We can override the default by launching the application with a different port, through the PORT placeholder:

java -jar -DPORT=8889 sample-server-1.0-SNAPSHOT.jar

In any case, if we launch the application with the spring.config.name=configserver argument instead, the default port will be 8888. This is due to a configserver.yml default file embedded in the spring-cloud-config-server library. As a matter of fact, it is very convenient to launch the config server application on the 8888 port, either by explicitly configuring it through the server.port parameter, as in the example above, or by passing spring.config.name=configserver in the startup Java command, because 8888 happens to be the default port used by the client side. Important note: the spring.config.name=configserver option only works if passed in the startup command and seems to be ignored, for some reason, if set in the configuration file.
We can see below an example of how to start the config server with a Java command:

java -jar -Dspring.config.name=configserver spring-cloud-config-native-server-1.0-SNAPSHOT.jar

By default, the Spring Cloud Config server uses Git as a remote repository to store the configuration data. To simplify the discussion, we will focus on a more basic approach based on files stored on the application classpath. We will describe this option in the next section. It must be stressed that, in a real scenario, this would be far from ideal, and Git would surely be a better choice.

Enforcing Basic Authentication on the Server Side

We can provide the server with a basic security layer in the form of an authentication mechanism based on username and password. To do that, we must first add the following security starter to the POM:

XML

<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>

Then, we add the following piece of configuration to the application.yml file (with Spring Boot 2, the in-memory user lives under the spring.security prefix):

YAML

spring:
  security:
    user:
      name: myusername
      password: mypassword

With the above, the client side should be configured accordingly to be able to connect to the server, as we will see in the section related to the client side. We will discuss more advanced securing mechanisms in later articles.

Spring Cloud Config Backend Storing Options

The Spring Cloud Config server can store the configuration data in several ways: in a remote Git repository, which is the default; in other Version Control Systems (VCS), like SVN; in Vault, a tool by HashiCorp specialized in storing passwords, certificates, and other entities as secrets; or in some place in the filesystem or the classpath. Below, we will describe the filesystem/classpath option. Spring Cloud Config has a profile named native that covers this scenario. In order to run the config server with a filesystem/classpath backend storage, we have to start it with the spring.profiles.active=native option.
In the native scenario, the config server searches by default in the following places:

classpath:/
classpath:/config
file:./
file:./config

So, we can simply store the configuration files inside the application jar file. If we want to use an external filesystem directory instead, or customize the above classpath options, we can set the spring.cloud.config.server.native.searchLocations property accordingly.

Config Server API

The config server can expose the configuration properties of a specific application through an HTTP API with the following endpoints:

/{application}/{profile}[/{label}]: returns the configuration data as JSON, for the specific application, profile, and an optional label parameter.
/{label}/{application}-{profile}.yml: returns the configuration data in YAML format, for the specific application, profile, and an optional label parameter.
/{label}/{application}-{profile}.properties: returns the configuration data as raw text, for the specific application, profile, and an optional label parameter.

The application part represents the name of the application configured by the spring.application.name property, and the profile part represents the active profile. A profile is a feature to segregate sets of configurations related to specific environments, such as development, test, and production. The label part is optional and is used, when Git serves as the backend repository, to identify a specific branch.

Microservice Configuration—Spring Cloud Config Client Side

If we want our services to obtain their own configuration from the server, we must provide them with a dependency named spring-cloud-starter-config:

XML

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>

Clearly, the configuration must be obtained as the first step during startup. To deal with this requirement, Spring Cloud introduces a bootstrap context.
The bootstrap context can be seen as the parent of the application context. It serves the purpose of loading configuration data retrieved from some external source and making it available to the application context. In earlier versions of Spring Cloud, we could provide the configuration properties for the bootstrap context through a bootstrap.yml file. This is deprecated in the new versions. Now, we simply have to provide a spring.config.import=optional:configserver: property in the standard application.yml:

YAML

spring:
  config:
    import: "optional:configserver:"

With the optional:configserver: value, the config client service will use the default http://localhost:8888 address to contact the config server. If we exclude the optional part, an error will be raised during startup if the server is unreachable. If we want to set a specific address and port, we can add the address to the value like this:

YAML

spring:
  config:
    import: "optional:configserver:http://myhost:myport"

Configuring Security on the Client Side

If we have secured the server with basic authentication, we must provide the necessary configuration to the client.
Adding the following piece of configuration to the application.yml will be enough (spring.cloud.config.username and spring.cloud.config.password are the properties the config client uses for basic authentication):

YAML

spring:
  cloud:
    config:
      username: myusername
      password: mypassword

Putting the Pieces Together in a Simple Demo

Using the notions described above, we can build a simple demo with a configuration server and a single client service.

Server Side Implementation

To implement the server side, we create a Spring Boot application with the required Spring Cloud release train and the Spring Cloud Config starter:

XML

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
</dependencies>

Then, we write the application.yml file, setting the port to the conventional 8888 value, the active profile to native, and, finally, the application name:

YAML

server:
  port: ${PORT:8888}
spring:
  profiles:
    active: native
  application:
    name: config-server

Since we have set spring.profiles.active to the native value, this config server's storage will be based on the filesystem/classpath. In our example, we choose to store the configuration file of the client service in the classpath, in a config subdirectory of the “/resources” folder. We name the client service configuration file client-service.yml and fill it with the following content:

YAML

server:
  port: ${PORT:8081}
myproperty: value
myproperties:
  properties:
    - value1
    - value2

The myproperty and myproperties parts will be used to test this minimal demo: we will expose them through some REST services on the client, and if all works as expected, the above values will be returned.
Client Side Implementation

We configure the client application with the same release train as the server. As dependencies, we have the spring-cloud-starter-config starter and also spring-boot-starter-web, because we want our application to expose some HTTP REST services:

XML

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>

The application.yml properties will be consumed by a specific component class through the @Value and @ConfigurationProperties annotations:

Java

@Component
@ConfigurationProperties(prefix = "myproperties")
public class DemoClient {

    private List<String> properties = new ArrayList<String>();

    public List<String> getProperties() {
        return properties;
    }

    @Value("${myproperty}")
    private String myproperty;

    public String getMyproperty() {
        return myproperty;
    }
}

Then, a controller class will implement two REST services, /getProperties and /getProperty, returning the above class properties:

Java

@RestController
public class ConfigClientController {

    private static final Logger LOG = LoggerFactory.getLogger(ConfigClientController.class);

    @Autowired
    private DemoClient demoClient;

    @GetMapping("/getProperties")
    public List<String> getProperties() {
        LOG.info("Properties: " + demoClient.getProperties().toString());
        return demoClient.getProperties();
    }

    @GetMapping("/getProperty")
    public String getProperty() {
        LOG.info("Property: " + demoClient.getMyproperty());
        return demoClient.getMyproperty();
    }
}

Compiling and Running the Config Server and Client Service

After compiling the
two applications with Maven, we can take the resulting jars and run first the server part and then the client from the command line:

java -jar spring-cloud-config-native-server-1.0-SNAPSHOT.jar
...
java -jar spring-cloud-config-native-client-1.0-SNAPSHOT.jar

We can test the correct behavior by calling the following address in the browser:

http://localhost:8081/getProperties

If all works as expected, we will have the following values printed on the screen:

[ "value1", "value2" ]

The Maven projects related to the demo described above are available on GitHub at the following addresses: Client Service; Config Server.

Conclusion

In this article, we have covered the basic notions required to configure a microservice system based on remote configuration. We have used the native approach here, with the classpath as a storage repository. In part two, we will show how to use a remote Git repository and how to refresh the configuration at runtime without restarting the services.

By Mario Casari

Top Java Experts


Nicolas Fränkel

Developer Advocate,
Api7

Developer Advocate with 15+ years of experience consulting for many different customers in a wide range of contexts (such as telecoms, banking, insurance, large retail, and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, testing, CI/CD, and DevOps. Currently working for Hazelcast. Also doubles as a trainer and triples as a book author.

Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur,
Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.

Marco Behler

Hi, I'm Marco. Say hello, I'd like to get in touch! twitter: @MarcoBehler

Ram Lakshmanan

Architect,
yCrash

In pursuit of answering the beautiful question 'Why Crash?' before this life ends.

The Latest Java Topics

How to Use Buildpacks to Build Java Containers
This article will look under the hood of buildpacks to see how they operate and give tips on optimizing the default settings to reach better performance outcomes.
March 30, 2023
by Dmitry Chuyko
· 1,902 Views · 1 Like
Reconciling Java and DevOps with JeKa
This article takes the reader through the JeKa capabilities and how to use a single language for everything from dev to delivery.
March 29, 2023
by jerome angibaud
· 2,067 Views · 2 Likes
Java Concurrency: LockSupport
Learn more about LockSupport in Java concurrency.
March 29, 2023
by Emmanouil Gkatziouras CORE
· 1,778 Views · 2 Likes
gRPC vs REST: Differences, Similarities, and Why to Use Them
This article compares gRPC and REST client-server architectures for communication and compares their strengths and weaknesses.
March 29, 2023
by Shay Bratslavsky
· 5,056 Views · 1 Like
Synchronized Method: BoyFriend Threads and GirlFriend Object [Video]
How does the synchronized method work in Java? Find out in this tutorial while taking a closer look at BoyFriend threads and GirlFriend objects.
March 29, 2023
by Ram Lakshmanan CORE
· 1,538 Views · 1 Like
Rapid Debugging With Proper Exception Handling
In this article, you will learn when to use and when NOT to use exception handling using concrete examples.
March 29, 2023
by Akanksha Gupta
· 1,517 Views · 1 Like
Automate Long Running Routine Task With JobRunr
This article will explain a simple way to deploy, manage and monitor long-running background tasks, especially ETL systems.
March 28, 2023
by MORRISON IDIASIRUE
· 1,453 Views · 2 Likes
Introduction Garbage Collection Java
Understand the trade-off of each approach and how this can impact the current application.
March 28, 2023
by Marcelo Palma
· 2,059 Views · 1 Like
Using Swagger for Creating a PingFederate Admin API Java Wrapper
Get started with PingFederate by exploring a side utility and two sample applications that demonstrate Authorization Code Flow.
March 27, 2023
by Raghuraman Ramaswamy CORE
· 2,511 Views · 3 Likes
Implementing PEG in Java
Continue exploring PEG implementations in this look into the basic implementation of scanner-less PEG parsers.
March 24, 2023
by Vinod Pahuja
· 3,339 Views · 2 Likes
CRUD REST API With Jakarta Core Profile Running on Java SE
In this blog post, I want to explore running a Jakarta EE Core profile application on Java SE without an application server.
March 24, 2023
by Omos Aziegbe
· 1,457 Views · 1 Like
Stop Using Spring Profiles Per Environment
This article discusses Spring's feature, Profiles, which some may consider a bad practice. Learn an alternative way to solve this issue.
March 24, 2023
by Bukaj Sytlos
· 4,498 Views · 2 Likes
What Are the Benefits of Java Module With Example
Discover the advantages of using Java modules and how they are implemented with examples. Get a clear view of this key component in modern Java development.
March 23, 2023
by Janki Mehta
· 3,443 Views · 2 Likes
How To Perform Local Website Testing Using Selenium And Java
In this article on local website testing, we will learn more about local page testing and its advantages in software development and the testing cycle.
March 23, 2023
by Vipul Gupta
· 2,838 Views · 1 Like
Spring Boot, Quarkus, or Micronaut?
REST API frameworks play a crucial role in developing efficient and scalable microservices. Compare three frameworks, their features, and their pros and cons.
March 23, 2023
by Farith Jose Heras García
· 5,493 Views · 4 Likes
Along Came a Bug
Learn more about innovative ways of understanding and solving bugs.
March 22, 2023
by Stelios Manioudakis
· 13,139 Views · 5 Likes
How To Build a Spring Boot GraalVM Image
In this article, readers will use a tutorial to learn how to build a Spring Boot GraalVM image and use reflection, including guide code block examples.
March 21, 2023
by Gunter Rotsaert CORE
· 3,267 Views · 1 Like
Master Spring Boot 3 With GraalVM Native Image
This article covers the intricacies associated with Spring Boot Native Image development.
March 21, 2023
by Dmitry Chuyko
· 2,775 Views · 1 Like
DevOps for Developers: Continuous Integration, GitHub Actions, and Sonar Cloud
When done badly, the CI process can turn this amazing tool into a nightmare. Continuous Integration should make our lives easier, not the other way around.
March 21, 2023
by Shai Almog CORE
· 4,506 Views · 5 Likes
Introduction to Spring Cloud Kubernetes
In this article, we will explore the various features of Spring Cloud Kubernetes, its benefits, and how it works.
March 21, 2023
by Aditya Bhuyan
· 5,663 Views · 3 Likes
