Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.


DZone's Featured Java Resources

Spring Cloud: How To Deal With Microservice Configuration (Part 1)

By Mario Casari
Configuring a software system built as a monolith does not pose particular problems: to make the configuration properties available to the system, we can store them in a file inside an application folder, somewhere in the filesystem, or as OS environment variables.

Microservice configuration is a more complex subject. We have to deal with a potentially huge number of totally independent services, each with its own configuration, and we could even face a scenario in which several instances of the same service need different configuration values. In such a situation, a way to centralize and simplify configuration management is of great importance.

Spring Cloud has its own module to solve these problems, named Spring Cloud Config. This module provides an implementation of a server that exposes an API to retrieve the configuration information, usually stored in some remote repository like Git, and, at the same time, it gives us the means to implement the client side meant to consume that API.

In the first part of this article, we will discuss the basic features of this Spring Cloud module and store the configuration in the configuration server classpath. In part two, we will show how to use other, more effective repository options, like Git, and how to refresh the configuration without restarting the services. Then, in later posts, we will show how the centralized configuration can be coupled with service discovery features to set a solid base for the whole microservice system.

Microservice Configuration—Spring Cloud Config Server Side

The first component we need in a distributed configuration scenario is a server meant to provide the configuration information for the services.
To implement such a server component with Spring Cloud Config, we have to use the right Spring Boot "starter" dependency, as in the following configuration fragment:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-config-server</artifactId>
</dependency>
```

Then, we have to annotate the Spring Boot main class with the @EnableConfigServer annotation:

```java
@SpringBootApplication
@EnableConfigServer
public class AppMain {
    public static void main(String[] args) {
        new SpringApplicationBuilder(AppMain.class).run(args);
    }
}
```

According to the auto-configuration features of Spring Boot, the Spring Cloud Config server would run on the default 8080 port, like all Spring Boot applications. If we want to customize the port, we can do it in the application.properties or application.yml file:

```yaml
server:
  port: ${PORT:8888}
spring:
  application:
    name: config-server
```

If we run the application with the above configuration, it will use port 8888 by default. We can override the default at launch time through the PORT placeholder:

```shell
java -jar -DPORT=8889 sample-server-1.0-SNAPSHOT.jar
```

In any case, if we launch the application with the spring.config.name=configserver argument instead, the default port will be 8888. This is due to a configserver.yml default file embedded in the spring-cloud-config-server library. As a matter of fact, it is very convenient to launch the config server application on port 8888, either by configuring it explicitly with the server.port parameter, as in the example above, or by passing spring.config.name=configserver in the startup Java command, because 8888 happens to be the default port used by the client side.

Important note: the spring.config.name=configserver option only works if passed in the startup command and seems to be ignored, for some reason, if set in the configuration file.
We can see below an example of how to start the config server with a Java command:

```shell
java -jar -Dspring.config.name=configserver spring-cloud-config-native-server-1.0-SNAPSHOT.jar
```

By default, the Spring Cloud Config server uses Git as a remote repository to store the configuration data. To simplify the discussion, we will focus on a more basic approach based on files stored on the application classpath. We will describe this option in the next section. It must be stressed that, in a real scenario, this would be far from ideal, and Git would surely be a better choice.

Enforcing Basic Authentication on the Server Side

We can provide the server with a basic security layer in the form of an authentication mechanism based on user and password. To do that, we must first add the following security starter to the POM:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-security</artifactId>
</dependency>
```

And then add the following piece of configuration in the application.yml file:

```yaml
spring:
  security:
    user:
      name: myusername
      password: mypassword
```

With the above, the client side must be configured accordingly to be able to connect to the server, as we will see in the section related to the client side. We will discuss more advanced securing mechanisms in later articles.

Spring Cloud Config Backend Storing Options

The Spring Cloud Config server can store the configuration data in several ways:

- In a remote Git repository, which is the default.
- In other version control systems (VCS), like SVN.
- In Vault, a tool by HashiCorp specialized in storing passwords, certificates, and other secrets.
- In some place in the filesystem or on the classpath.

Below, we describe the filesystem/classpath option. Spring Cloud Config has a profile named native that covers this scenario. In order to run the config server with a filesystem/classpath backend storage, we have to start it with the spring.profiles.active=native option.
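As a sketch, the server's application.yml for the native scenario could look like the fragment below. The spring.cloud.config.server.native.searchLocations property overrides the default search locations; the external directory /opt/config-repo used here is a hypothetical path chosen for illustration, not part of the article's demo:

```yaml
spring:
  profiles:
    active: native
  cloud:
    config:
      server:
        native:
          # Look in the jar's /config directory first, then in a
          # hypothetical external directory on the local filesystem.
          searchLocations: classpath:/config, file:///opt/config-repo
```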
In the native scenario, the config server searches the following locations by default: classpath:/, classpath:/config, file:./, and file:./config. So, we can simply store the configuration files inside the application jar file. If we want to use an external filesystem directory instead, or customize the above classpath options, we can set the spring.cloud.config.server.native.searchLocations property accordingly.

Config Server API

The config server can expose the configuration properties of a specific application through an HTTP API with the following endpoints:

- /{application}/{profile}[/{label}]: returns the configuration data as JSON for the given application and profile, with an optional label parameter.
- /{label}/{application}-{profile}.yml: returns the configuration data in YAML format for the given application, profile, and optional label.
- /{label}/{application}-{profile}.properties: returns the configuration data in properties format, as raw text, for the given application, profile, and optional label.

The application part represents the name of the application configured by the spring.application.name property, and the profile part represents the active profile. A profile is a feature used to segregate sets of configuration related to specific environments, such as development, test, and production. The label part is optional and identifies a specific branch when Git is used as the backend repository.

Microservice Configuration—Spring Cloud Config Client Side

If we want our services to obtain their own configuration from the server, we must provide them with a dependency named spring-cloud-starter-config:

```xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-config</artifactId>
</dependency>
```

Clearly, each service must obtain its configuration as the first step during startup. To deal with this requirement, Spring Cloud introduces a bootstrap context.
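To make the endpoint patterns above concrete, here is a small plain-Java sketch that composes the three URL forms. The class, method names, and the localhost:8888 base address are illustrative assumptions, not part of Spring Cloud itself:

```java
// Hypothetical helper composing Spring Cloud Config server endpoint URLs
// following the three patterns described above.
public class ConfigServerUrls {

    private final String baseUrl;

    public ConfigServerUrls(String baseUrl) {
        this.baseUrl = baseUrl;
    }

    // /{application}/{profile}[/{label}] -> JSON
    public String json(String application, String profile, String label) {
        String url = baseUrl + "/" + application + "/" + profile;
        return label == null ? url : url + "/" + label;
    }

    // /{label}/{application}-{profile}.yml -> YAML
    public String yaml(String application, String profile, String label) {
        return baseUrl + "/" + label + "/" + application + "-" + profile + ".yml";
    }

    public static void main(String[] args) {
        ConfigServerUrls urls = new ConfigServerUrls("http://localhost:8888");
        // prints http://localhost:8888/client-service/default
        System.out.println(urls.json("client-service", "default", null));
        // prints http://localhost:8888/main/client-service-default.yml
        System.out.println(urls.yaml("client-service", "default", "main"));
    }
}
```

Fetching one of these URLs (with curl or a browser) against a running config server returns the assembled configuration for that application and profile.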
The bootstrap context can be seen as the parent of the application context. It serves the purpose of loading configuration data retrieved from some external source and making it available to the application context. In earlier versions of Spring Cloud, we could provide the configuration properties for the bootstrap context in a bootstrap.yml file. This is deprecated in recent versions. Now, we simply have to provide a spring.config.import=optional:configserver: property in the standard application.yml:

```yaml
spring:
  config:
    import: "optional:configserver:"
```

With the optional:configserver: value, the config client service uses the default http://localhost:8888 address to contact the config server. If we exclude the optional part, an error is raised during startup if the server is unreachable. If we want to set a specific address and port, we can append the address to the value, like this:

```yaml
spring:
  config:
    import: "optional:configserver:http://myhost:myport"
```

Configuring Security on the Client Side

If we have secured the server with basic authentication, we must provide the corresponding credentials to the client.
Adding the following piece of configuration to the client's application.yml will be enough:

```yaml
spring:
  cloud:
    config:
      username: myusername
      password: mypassword
```

Putting the Pieces Together in a Simple Demo

Using the notions described above, we can build a simple demo with a configuration server and a single client service.

Server Side Implementation

To implement the server side, we create a Spring Boot application with the required Spring Cloud release train and the Spring Cloud Config starter:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-config-server</artifactId>
    </dependency>
</dependencies>
```

Then, we write the application.yml file, setting the port to the conventional 8888 value, the active profile to native, and, finally, the application name:

```yaml
server:
  port: ${PORT:8888}
spring:
  profiles:
    active: native
  application:
    name: config-server
```

Since we have set spring.profiles.active to native, this config server's storage will be based on the filesystem/classpath. In our example, we choose to store the configuration file of the client service on the classpath, in a config subdirectory of the resources folder. We name the client service's configuration file client-service.yml and fill it with the following content:

```yaml
server:
  port: ${PORT:8081}
myproperty: value
myproperties:
  properties:
    - value1
    - value2
```

The myproperty and myproperties entries will be used to test this minimal demo: we will expose them through REST services on the client and, if all works as expected, the above values will be returned.
Client Side Implementation

We configure the client application with the same release train as the server. As dependencies, we have the spring-cloud-starter-config starter and also spring-boot-starter-web, because we want our application to expose some HTTP REST services:

```xml
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-dependencies</artifactId>
            <version>2021.0.5</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>
<dependencies>
    <dependency>
        <groupId>org.springframework.cloud</groupId>
        <artifactId>spring-cloud-starter-config</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
</dependencies>
```

The application.yml properties are consumed by a specific component class through the @Value and @ConfigurationProperties annotations:

```java
@Component
@ConfigurationProperties(prefix = "myproperties")
public class DemoClient {

    private List<String> properties = new ArrayList<String>();

    public List<String> getProperties() {
        return properties;
    }

    @Value("${myproperty}")
    private String myproperty;

    public String getMyproperty() {
        return myproperty;
    }
}
```

Then, a controller class implements two REST services, /getProperties and /getProperty, returning the above class properties:

```java
@RestController
public class ConfigClientController {

    private static final Logger LOG = LoggerFactory.getLogger(ConfigClientController.class);

    @Autowired
    private DemoClient demoClient;

    @GetMapping("/getProperties")
    public List<String> getProperties() {
        LOG.info("Properties: " + demoClient.getProperties().toString());
        return demoClient.getProperties();
    }

    @GetMapping("/getProperty")
    public String getProperty() {
        LOG.info("Property: " + demoClient.getMyproperty());
        return demoClient.getMyproperty();
    }
}
```

Compiling and Running the Config Server and Client Service

After compiling the two applications with Maven, we can take the resulting jars and run first the server and then the client from the command line:

```shell
java -jar spring-cloud-config-native-server-1.0-SNAPSHOT.jar
...
java -jar spring-cloud-config-native-client-1.0-SNAPSHOT.jar
```

We can test the correct behavior by opening the following address in the browser: http://localhost:8081/getProperties

If all works as expected, we will have the following values printed on the screen:

```
[ "value1", "value2" ]
```

The Maven projects for the demo described above are available on GitHub: Client Service, Config Server.

Conclusion

In this article, we have covered the basic notions required to configure a microservice system based on remote configuration. We have used the native approach here, with the classpath as a storage repository. In part two, we will show how to use a remote Git repository and how to refresh the configuration at runtime without restarting the services.
Deploying Java Serverless Functions as AWS Lambda


By Nicolas Duminil CORE
The AWS console allows the user to create and update cloud infrastructure resources in a user-friendly manner. Despite all the advantages such a high-level tool might have, using it is repetitive and error-prone. For example, each time we create a Lambda function using the AWS console, we need to repeat the same operations again and again and, even if these operations are intuitive and as easy as graphical widget manipulations, the whole process is time-consuming and laborious. This working mode is convenient for rapid prototyping, but as soon as we have to work on a real project with a relatively large scope and duration, it doesn't meet the team's goals and wishes anymore.

In such a case, the preferred solution is IaC (Infrastructure as Code). IaC essentially consists of using a declarative notation to specify infrastructure resources. In the case of AWS, this notation, expressed in a JSON or YAML syntax, is captured in configuration files and submitted to the CloudFormation IaC service. CloudFormation is a vast topic that couldn't be detailed in a single blog post. The important point to retain here is that this service is able to process input configuration files and guarantee the creation and update of the associated AWS cloud infrastructure resources.

While the benefits of the CloudFormation IaC approach are obvious, this tool has a reputation for being verbose, unwieldy, and inflexible. Fortunately, AWS Lambda developers have the choice of using SAM, a superset of CloudFormation which includes special commands and shortcuts aimed at easing the development, testing, and deployment of Java serverless code.

Installing SAM

Installing SAM is very simple: one only has to follow the guide. For example, installing it on Ubuntu 22.04 LTS is as simple as shown below:

```shell
$ sudo apt-get update
...
$ sudo apt-get install awscli
...
```
```shell
$ aws --version
aws-cli/2.9.12 Python/3.9.11 Linux/5.15.0-57-generic exe/x86_64.ubuntu.22 prompt/off
```

Creating AWS Lambda Functions in Java With SAM

Now that SAM is installed on your workstation, you can write and deploy your first Java serverless function. Of course, we assume here that your AWS account has been created and that your environment is configured so that you can run AWS CLI commands.

Like CloudFormation, SAM is based on the notion of a template, which is a YAML-formatted text file that describes an AWS infrastructure. This template file, named template.yaml by default, has to be authored manually so that it aligns with the SAM template anatomy (complete specifications can be found here). But writing a template.yaml file from scratch is difficult; hence the idea of generating it automatically. Enter CookieCutter.

CookieCutter is an open-source project for automatic code generation. It is widely used in the Python world, but here we'll use it in the Java world. Its modus operandi is very similar to that of Maven archetypes, in the sense that it is able to generate full Java projects, including but not limited to packages, classes, and configuration files. The generation process is highly customizable and can replace string occurrences, flagged by placeholders in a dedicated syntax, with values defined in an external JSON-formatted file.

This GitHub repository provides such a CookieCutter-based generator, able to generate a simple but complete Java project, ready to be deployed as an AWS Lambda serverless function. The listing below shows how:

```shell
$ sam init --location https://github.com/nicolasduminil/sam-template
You've downloaded /home/nicolas/.cookiecutters/sam-template before. Is it okay to delete and re-download it? [yes]:
project_name [my-project-name]: aws-lambda-simple
aws_lambda_resource_name [my-aws-lambda-resource-name]: AwsLambdaSimple
java_package_name [fr.simplex_software.aws.lambda.functions]:
java_class_name [MyAwsLambdaClassName]: AwsLambdaSimple
java_handler_method_name [handleRequest]:
maven_group_id [fr.simplex-software.aws.lambda]:
maven_artifact_id [my-aws-function]: aws-lambda-simple
maven_version [1.0.0-SNAPSHOT]:
function_name [AwsLambdaTestFunction]: AwsLambdaSimpleFunction
Select architecture:
    1 - arm64
    2 - x86_64
Choose from 1, 2 [1]:
timeout [10]:
Select tracing:
    1 - Active
    2 - Passthrough
Choose from 1, 2 [1]:
```

The sam init command above mentions the location of the CookieCutter-based template used to generate a new Java project. The generation process takes the form of a dialog where the utility asks questions and accepts answers. Each question has a default response and, in order to accept it, the user just needs to press Enter. Everything starts with the project name, for which we chose aws-lambda-simple. Further information to be entered is:

- AWS resource name
- Maven GAV (GroupId, ArtifactId, Version)
- Java package name
- Java class name
- Processor architecture
- Timeout value
- Tracing profile

As soon as the command terminates, you may open the new project in your preferred IDE and inspect the generated code. Once finished, you may proceed with a first build, as follows:

```shell
$ cd aws-lambda-simple/
$ mvn package
[INFO] Scanning for projects...
[INFO]
[INFO] ----------< fr.simplex-software.aws.lambda:aws-lambda-simple >----------
[INFO] Building aws-lambda-simple 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ aws-lambda-simple ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/nicolas/sam-test/aws-lambda-simple/src/main/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ aws-lambda-simple ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/nicolas/sam-test/aws-lambda-simple/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ aws-lambda-simple ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/nicolas/sam-test/aws-lambda-simple/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ aws-lambda-simple ---
[INFO] No sources to compile
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ aws-lambda-simple ---
[INFO] No tests to run.
[INFO]
[INFO] --- maven-jar-plugin:2.4:jar (default-jar) @ aws-lambda-simple ---
[INFO] Building jar: /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple-1.0.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-shade-plugin:3.2.1:shade (default) @ aws-lambda-simple ---
[INFO] Replacing /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple.jar with /home/nicolas/sam-test/aws-lambda-simple/target/aws-lambda-simple-1.0.0-SNAPSHOT-shaded.jar
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.604 s
[INFO] Finished at: 2023-01-12T19:09:23+01:00
[INFO] ------------------------------------------------------------------------
```

Our new Java project has been built and packaged as a JAR (Java ARchive).
The generated template.yaml file defines the required AWS cloud infrastructure, as shown below:

```yaml
AWSTemplateFormatVersion: 2010-09-09
Transform: AWS::Serverless-2016-10-31
Description: aws-lambda-simple
Resources:
  AwsLambdaSimple:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: AwsLambdaSimpleFunction
      Architectures:
        - arm64
      Runtime: java11
      MemorySize: 128
      Handler: fr.simplex_software.aws.lambda.functions.AwsLambdaSimple::handleRequest
      CodeUri: target/aws-lambda-simple.jar
      Timeout: 10
      Tracing: Active
```

This file has been created based on the values entered during the generation process. Things like the AWS template version and the transformation version are constants and should be used as such. All the other elements are known, as they mirror the input data. Special consideration has to be given to the CodeUri element, which specifies the location of the JAR to be deployed as the Lambda function. It contains the class AwsLambdaSimple below:

```java
public class AwsLambdaSimple {

    private static Logger log = Logger.getLogger(AwsLambdaSimple.class.getName());

    public String handleRequest(Map<String, String> event) {
        log.info("*** AwsLambdaSimple.handleRequest: Have received: " + event);
        return event.entrySet().stream()
            .map(e -> e.getKey() + "->" + e.getValue())
            .collect(Collectors.joining(","));
    }
}
```

A Lambda function in Java can be run in the following two modes:

- A synchronous or RequestResponse mode, in which the caller waits for whatever response the Lambda function returns.
- An asynchronous or Event mode, in which the caller receives an immediate reply from the Lambda platform itself, while the function proceeds with the request processing without returning any further response.

In both cases, the handleRequest() method above processes the request, as its name implies. This request is an event implemented as a Map<String, String>. All right!
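The event-to-string mapping performed by handleRequest() can be exercised outside AWS in plain Java. The sketch below reimplements the same stream pipeline without any AWS dependency; the class and method names here are illustrative, not the generated project's code:

```java
import java.util.Map;
import java.util.stream.Collectors;

// Minimal sketch reproducing handleRequest()'s event-to-string mapping,
// runnable without any AWS dependency.
public class HandleRequestSketch {

    static String handleRequest(Map<String, String> event) {
        // Each entry becomes "key->value"; entries are joined with commas.
        return event.entrySet().stream()
                .map(e -> e.getKey() + "->" + e.getValue())
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        // A single-entry event mirrors the payload used later in the article.
        System.out.println(handleRequest(Map.of("Hello", "Dude"))); // prints Hello->Dude
    }
}
```

This is exactly the string we expect to find in outputfile.txt after the synchronous invocation shown later.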
Now our new Java project is generated and, while the class AwsLambdaSimple presented above (which will ultimately be deployed as an AWS Lambda function) doesn't do much, it is sufficiently complete to demonstrate our use case. So let's deploy our cloud infrastructure. But first, we need to create an AWS S3 bucket to store our Lambda function in. The simplest way to do that is shown below:

```shell
$ aws s3 mb s3://bucket-$$
make_bucket: bucket-18468
```

Here we just created an S3 bucket named bucket-18468. AWS S3 bucket names are constrained to be unique across regions and, since it's difficult to guarantee the uniqueness of a name, we use the shell's $$ special parameter, which expands to the current shell's process ID, to obtain a reasonably unique suffix.

```shell
$ sam deploy --s3-bucket bucket-18468 --stack-name simple-lambda-stack --capabilities CAPABILITY_IAM
Uploading to 44774b9ed09001e1bb31a3c5d11fa9bb 4031 / 4031 (100.00%)

Deploying with following values
===============================
Stack name           : simple-lambda-stack
Region               : eu-west-3
Confirm changeset    : False
Disable rollback     : False
Deployment s3 bucket : bucket-18468
Capabilities         : ["CAPABILITY_IAM"]
Parameter overrides  : {}
Signing Profiles     : {}

Initiating deployment
=====================
Uploading to 3af7fb4a847b2fea07d606a80de2616f.template 555 / 555 (100.00%)
Waiting for changeset to be created..
```
```shell
CloudFormation stack changeset
---------------------------------------------------------------------------
Operation   LogicalResourceId     ResourceType              Replacement
---------------------------------------------------------------------------
+ Add       AwsLambdaSimpleRole   AWS::IAM::Role            N/A
+ Add       AwsLambdaSimple       AWS::Lambda::Function     N/A
---------------------------------------------------------------------------

Changeset created successfully. arn:aws:cloudformation:eu-west-3:495913029085:changeSet/samcli-deploy1673620369/0495184e-58ca-409c-9554-ee60810fec08

2023-01-13 15:33:00 - Waiting for stack create/update to complete

CloudFormation events from stack operations (refresh every 0.5 seconds)
---------------------------------------------------------------------------
ResourceStatus       ResourceType                LogicalResourceId     ResourceStatusReason
---------------------------------------------------------------------------
CREATE_IN_PROGRESS   AWS::IAM::Role              AwsLambdaSimpleRole   -
CREATE_IN_PROGRESS   AWS::IAM::Role              AwsLambdaSimpleRole   Resource creation Initiated
CREATE_COMPLETE      AWS::IAM::Role              AwsLambdaSimpleRole   -
CREATE_IN_PROGRESS   AWS::Lambda::Function       AwsLambdaSimple       -
CREATE_IN_PROGRESS   AWS::Lambda::Function       AwsLambdaSimple       Resource creation Initiated
CREATE_COMPLETE      AWS::Lambda::Function       AwsLambdaSimple       -
CREATE_COMPLETE      AWS::CloudFormation::Stack  simple-lambda-stack   -
---------------------------------------------------------------------------

Successfully created/updated stack - simple-lambda-stack in eu-west-3
```

Our Java class has been successfully deployed as an AWS Lambda function. Let's test it using the two invocation modes presented above.

```shell
$ aws lambda invoke --function-name AwsLambdaSimpleFunction --payload $(echo "{\"Hello\":\"Dude\"}" | base64) outputfile.txt
{
    "StatusCode": 200,
    "ExecutedVersion": "$LATEST"
}
$ cat outputfile.txt
"Hello->Dude"
```

The listing above demonstrates the synchronous or RequestResponse invocation. We pass a JSON-formatted payload as the input event and, since the expected encoding is Base64, we need to convert it first. Since the invocation is synchronous, the caller waits for the response, which is captured in the file outputfile.txt. The returned status code is HTTP 200, as expected, meaning that the request has been correctly processed.

Let's see the asynchronous or Event invocation:

```shell
$ aws lambda invoke --function-name AwsLambdaSimpleFunction --payload $(echo "{\"Hello\":\"Dude\"}" | base64) --invocation-type Event outputfile.txt
{
    "StatusCode": 202
}
```

This time the --invocation-type is Event and, consequently, the returned status code is HTTP 202, meaning that the request has been accepted but not yet processed. The file outputfile.txt is empty, as there is no result. This concludes our use case showing AWS Lambda function deployment in Java via the SAM tool.
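The Base64 conversion performed above with `echo … | base64` can also be sketched in plain Java using the standard java.util.Base64 API. The payload string mirrors the one used in the article; the class name is illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of the Base64 payload preparation that the AWS CLI invocation requires.
public class PayloadEncoding {

    static String encode(String json) {
        return Base64.getEncoder()
                .encodeToString(json.getBytes(StandardCharsets.UTF_8));
    }

    static String decode(String b64) {
        return new String(Base64.getDecoder().decode(b64), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String payload = "{\"Hello\":\"Dude\"}";
        String encoded = encode(payload);
        System.out.println(encoded);
        // Round-trip check: decoding gives back the original JSON payload.
        System.out.println(decode(encoded).equals(payload)); // prints true
    }
}
```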
Don't forget to clean up your environment before leaving by running:

```shell
$ aws s3 rm --recursive s3://bucket-18468
$ aws s3 rb --force s3://bucket-18468
$ aws cloudformation delete-stack --stack-name simple-lambda-stack
```

Enjoy!
Build CRUD RESTful API Using Spring Boot 3, Spring Data JPA, Hibernate, and MySQL Database
By Ramesh Fadatare
Upgrade Guide To Spring Data Elasticsearch 5.0
By Arnošt Havelka CORE
Implementing Infinite Scroll in jOOQ
By Anghel Leonard CORE
Watch Area and Renderers

This is it. The debugging book is now live. I would really appreciate reviews and feedback! I also finished recording and editing the entire course. There are 50 videos in total, adding up to seven hours... I also recorded additional videos for the two other free courses, for beginners and for modern Java. So keep an eye on those.

Renderers

In today's video, we discuss one of my favorite obscure IDE features: renderers. Very few people are aware of them. I explained them in the past, but I feel I didn't properly explain why they are so much better than any other alternative. This time I think I got the explanation right. If you work with JPA or any elaborate API, you should check this out; I think the demo is revolutionary. If you provide a complex library to developers, this can also be an amazing tool.

Transcript

Welcome back to the sixth part of Debugging at Scale, where the bugs come to die. In this section, we discuss the watch area. The watch is one of the most important areas in the debugging process, yet we don't give it nearly as much attention as we should. In this segment, we'll discuss a few of the powerful things we can do in the watch area and how we can extend it to support some fantastic capabilities.

Mute Renderers

Let's start with mute renderers, which let us improve the performance of the watch area. Before we discuss that, I'd like to talk about the watch area itself. This is the watch area. In it, we can see most of the visible variables for the current stack frame and their values. We can expand entries within the watch area. Every variable value or expression we have in the watch is an entry. We can add arbitrary elements to the watch and even embed watch expressions directly into the IDE user interface. Notice the values on the right-hand side. These are typically the results of the toString() method. In IntelliJ, we can customize these via renderers, which we will discuss further. But there's more to it, as we'll see later on.
For now, just consider this. Every time I step over a line of code, the IDE needs to collect all the variables in scope and invoke toString() on every one of them; in some cases, even more elaborate code runs. This is expensive and slow… In the right-click menu, we have the mute renderers option. By checking this option, we can disable that behavior and potentially speed up the step-over speed significantly. Once we select it, you will notice that all the values turn into three dots followed by the word toString. This means the renderers don’t fetch the value anymore; they show this placeholder instead. This removes the overhead of the renderers completely and can speed up your step-over performance. If we want to see a value, we can click the toString label, and the value is extracted dynamically. Notice that this only impacts objects. Primitives, arrays, etc., are unaffected by this feature.

Customize Rendering

Rendering is the process of drawing the element values in the watch. To get started with renderers, we need to customize them through the right-click menu here. This launches the renderer customization dialog, which lets us do amazing things in IntelliJ. For the most basic customization, we can toggle multiple features within this dialog, then press apply to instantly see them in the variables view below. We can see the declared type of the field, include fully qualified class names, and see the static field values. We can include hex values for primitives, a feature I always enable because it’s so useful. This is an amazing view that’s worth exploring and customizing to fit your own preference; here you can tune the verbosity level in the watch area. But the real power of this dialog is in the second tab: the Java type renderers, which are our next subject.

Data Rendering

We can go so much further with renderers. You might recall the visit objects I’ve shown before.
This is from a standard Spring Boot demo called pet clinic. Spring Boot has the concept of a Repository, which is an interface that represents a data source. Often a repository is just a table; it can do more, but it has a strong relation to an underlying SQL table, and it helps to think about it in these terms. If you look at the visitRepository and petRepository objects at the bottom of the screen, you’ll notice that we don’t have much to go on. These are just object IDs with no data that’s valuable to the person debugging. I didn’t expand them, but there’s nothing under the variables here, either. Let's fix that in the customize data view as we did before. We add a renderer that applies to JpaRepository, which is the base interface of this instance. Then we just write the expression to represent the rendering here. This renderer will apply to JpaRepository and its sub-interfaces or implementing classes. Next, instead of using the default renderer, I use an expression to indicate what we will show. JpaRepository includes a method called count(), which queries the database and counts the number of elements within it. I simply invoke it; notice that I don’t provide an object instance, because the expression implicitly runs in the context of the object being rendered. You can also use this to refer to the rendered object, and notice I don’t need to cast it to a JpaRepository. The expression will be rendered directly in the watch without changing the toString() method, which, in this case, I obviously can’t change and usually might not want to. The toString() method is useful in production; I wouldn’t want expensive code in there. But in the renderer, I can just go wild and do things that don’t make sense in the repository itself. Notice the on-demand checkbox. If we check it, the expression will act like a muted renderer by default: you will need to click it to see the value.
Let’s apply this change to the code, and you’ll notice the visitRepository instantly changes to use the new expression we defined. We can now immediately see that the repository has four elements, which is pretty cool. Right? Notice that petRepository hasn’t changed; it is a repository too, but it isn’t a JpaRepository. So far, we did things that could theoretically be done by toString() methods. It might be hacky, but it’s not unique. Let’s take this up a notch. The "When expanding node" option lets us define the behavior when a user expands the entry. The findAll() method of JpaRepository returns all the entities in the repository; it will be invoked when we expand the entry. We can optionally check whether there’s a reason to show the expand widget at all. In this case, I use the count() method, which is faster than calling findAll(). Once we apply the changes, we can see the elements from the repository listed. All four elements are here, and since they are objects, we can see all their attributes like any other object in the watch. This is truly spectacular, and you can’t fake it with a toString() call…

Doing It for Everyone

That was a cool feature, right? But it’s annoying to configure all of that for every project. Here we see the same renderer from before; everything looks exactly the same. The numbering, the list of entities, etc. But when we open the list of renderers, it’s blank: there are no renderers! How does this suddenly work without a renderer? What’s going on? We use code annotations to represent the renderer. This way, you can commit the renderer to the project repository, and you don’t need to configure individual IDE instances. This is pretty simple: we add a dependency on the JetBrains annotation library to the POM. It’s an annotation-only library, so the code doesn’t change in a significant way. It’s just markers.
Since it only hints to the debugger, it’s ignored at runtime and doesn’t have any implications or overhead. We add an import for the renderer annotation, then scroll down and add the renderer annotation itself. Notice that this is pretty much the code I typed in the dialog, but this time it’s used in an annotation. This way, our entire team can benefit from the improved view of repository objects! If you’re building libraries or frameworks, you can integrate this to make the debugging experience easier for your users without impacting the behavior of the toString() methods or similar semantics.

Finally

In the next video, we’ll discuss threading issues. Their reputation as “hard to debug” isn’t always justified. If you have any questions, please use the comments section. Thank you!
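As a rough sketch of what such an annotation looks like: IntelliJ reads the @Debug.Renderer annotation from the org.jetbrains:annotations dependency. The repository and entity names below (VisitRepository, Visit) are illustrative stand-ins, not taken from the video, and the snippet assumes Spring Data JPA on the classpath:

```java
import org.jetbrains.annotations.Debug;
import org.springframework.data.jpa.repository.JpaRepository;

// The three expressions mirror what was typed into the renderer dialog:
// the watch value, the children shown on expand, and the expander check.
@Debug.Renderer(
        text = "\"count = \" + count()",        // value column in the watch
        childrenArray = "findAll().toArray()",  // entries shown when expanded
        hasChildren = "count() > 0")            // whether to show the expand widget
public interface VisitRepository extends JpaRepository<Visit, Integer> {
}
```

Because the annotation has source-level retention semantics for the debugger only, committing it costs nothing at runtime.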

By Shai Almog
Express Hibernate Queries as Type-Safe Java Streams

As much as the JPA Criteria builder is expressive, JPA queries are often equally verbose, and the API itself can be unintuitive to use, especially for newcomers. In the Quarkus ecosystem, Panache is a partial remedy for these problems when using Hibernate. Still, I find myself juggling Panache’s helper methods, preconfigured enums, and raw strings when composing anything but the simplest of queries. You could claim I am just inexperienced and impatient or, instead, acknowledge that the perfect API is frictionless for everyone. Thus, the experience of writing JPA queries can be further improved in that direction.

Introduction

One of the remaining shortcomings is that raw strings are inherently not type-safe, meaning my IDE denies me the helping hand of code completion and, at best, wishes me good luck. On the upside, Quarkus relaunches applications in a split second to issue quick verdicts on my code. And nothing beats the heartfelt joy and genuine surprise when I have composed a working query on the fifth, rather than the tenth, attempt... With this in mind, we built the open-source library JPAstreamer to make the process of writing Hibernate queries more intuitive and less time-consuming while leaving your existing codebase intact. It achieves this goal by allowing queries to be expressed as standard Java Streams. Upon execution, JPAstreamer translates the stream pipeline to an HQL query for efficient execution and avoids materializing anything but the relevant results.
Let me take an example: in some database exists a table called person, represented in a Hibernate application by the following standard entity:

Java

@Entity
@Table(name = "person")
public class Person {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "person_id", nullable = false, updatable = false)
    private Integer actorId;

    @Column(name = "first_name", nullable = false, columnDefinition = "varchar(45)")
    private String firstName;

    @Column(name = "last_name", nullable = false, columnDefinition = "varchar(45)")
    private String lastName;

    @Column(name = "created_at", nullable = false, updatable = false)
    private LocalDateTime createdAt;

    // Getters for all fields will follow from here
}

To fetch the Person with an id of 1 using JPAstreamer, all you need is the following:

Java

@ApplicationScoped
public class PersonRepository {

    private final JPAStreamer jpaStreamer;

    public PersonRepository(EntityManagerFactory entityManagerFactory) {
        jpaStreamer = JPAStreamer.of(entityManagerFactory); <1>
    }

    public Optional<Person> getPersonById(int id) {
        return this.jpaStreamer.from(Person.class) <2>
            .filter(Person$.personId.equal(id)) <3>
            .findAny();
    }
}

<1> Initialize JPAstreamer in one line; the underlying JPA provider handles the DB configuration.
<2> The stream source is set to be the Person table.
<3> The filter operation is treated as an SQL WHERE clause, and the condition is expressed type-safely with JPAstreamer predicates (more on this to follow).

Despite it looking as if JPAstreamer operates on all Person objects, the pipeline is optimized to a single query, in this case:

Plain Text

select
    person0_.person_id as person_id1_0_,
    person0_.first_name as first_na2_0_,
    person0_.last_name as last_nam3_0_,
    person0_.created_at as created_4_0_
from
    person person0_
where
    person0_.person_id=1

Thus, only the Person matching the search criteria is ever materialized.
Next, we can look at a more complex example in which I am searching for Persons with a first name ending with an “A” and a last name that starts with “B.” The matches are sorted primarily by first name and secondarily by last name. I further decide to apply an offset of 5, excluding the first five results, and to limit the total number of results to 10. Here is the stream pipeline that achieves this task:

Java

List<Person> list = jpaStreamer.stream(Person.class)
    .filter(Person$.firstName.endsWith("A").and(Person$.lastName.startsWith("B"))) <1>
    .sorted(Person$.firstName.comparator().thenComparing(Person$.lastName.comparator())) <2>
    .skip(5) <3>
    .limit(10) <4>
    .collect(Collectors.toList());

<1> Filters can be combined with the and/or operators.
<2> Easily sort on one or more properties.
<3> Skip the first 5 Persons.
<4> Return at most 10 Persons.

In the context of queries, the stream operators filter, sorted, limit, and skip all have a natural mapping that makes the resulting query expressive and intuitive to read while remaining compact. This query is translated by JPAstreamer to the following HQL statement:

Plain Text

select
    person0_.person_id as person_id1_0_,
    person0_.first_name as first_na2_0_,
    person0_.last_name as last_nam3_0_,
    person0_.created_at as created_4_0_
from
    person person0_
where
    (person0_.first_name like ?)
    and (person0_.last_name like ?)
order by
    person0_.first_name asc,
    person0_.last_name asc
limit ?, ?

How JPAstreamer Works

Okay, it looks simple. But how does it work? JPAstreamer uses an annotation processor to form a meta-model at compile time. It inspects any classes marked with the standard JPA annotation @Entity, and for every entity Foo.class, a corresponding Foo$.class is created. The generated classes represent entity attributes as fields used to form predicates, such as Person$.firstName.startsWith("A"), that can be interpreted by JPAstreamer’s query optimizer.
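Because the pipeline consists of standard Stream operators, their semantics can be illustrated in plain memory with java.util.stream. This is a hedged, self-contained sketch with made-up sample data; it involves no JPAstreamer or database, only the operator-to-SQL correspondence described above:

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamSemanticsDemo {
    static final class Person {
        final String firstName;
        final String lastName;
        Person(String firstName, String lastName) {
            this.firstName = firstName;
            this.lastName = lastName;
        }
    }

    public static void main(String[] args) {
        Comparator<Person> byFirstName = Comparator.comparing(p -> p.firstName);
        Comparator<Person> byLastName = Comparator.comparing(p -> p.lastName);

        List<Person> matches = Stream.of(
                new Person("ANNA", "BERG"),
                new Person("SARA", "BLOOM"),
                new Person("ANNA", "ADAMS"))
            .filter(p -> p.firstName.endsWith("A") && p.lastName.startsWith("B")) // WHERE
            .sorted(byFirstName.thenComparing(byLastName))                        // ORDER BY
            .skip(0)                                                              // OFFSET
            .limit(10)                                                            // LIMIT
            .collect(Collectors.toList());

        // ANNA BERG and SARA BLOOM match; ANNA ADAMS is filtered out
        System.out.println(matches.size());
    }
}
```

The difference with JPAstreamer is that the filtering, sorting, and slicing happen in the database rather than in memory.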
It is worth repeating that JPAstreamer does not alter or disturb the existing codebase but merely extends the API to handle Java Stream queries.

Installing the JPAstreamer Extension

JPAstreamer is installed like any other Quarkus extension, using a Maven dependency:

XML

<dependency>
    <groupId>io.quarkiverse.jpastreamer</groupId>
    <artifactId>quarkus-jpastreamer</artifactId>
    <version>1.0.0</version>
</dependency>

After the dependency is added, rebuild your Quarkus application to trigger JPAstreamer’s annotation processor. The installation is complete once the generated fields reside in /target/generated-sources; you’ll recognize them by the trailing $ in the class names, e.g., Person$.class. Note: JPAstreamer requires an underlying JPA provider, such as Hibernate. For this reason, JPAstreamer needs no additional configuration, as the database integration is taken care of by the JPA provider.

JPAstreamer and Panache

Any Panache fan will note that JPAstreamer shares some of its objectives with Panache in simplifying many common queries. Still, JPAstreamer distinguishes itself by instilling more confidence in the queries with its type-safe stream interface. However, no one is forced to pick one, as Panache and JPAstreamer work seamlessly alongside each other. Note: Here is an example Quarkus application that uses both JPAstreamer and Panache. At the time of writing, JPAstreamer does not support Panache’s active record pattern, as it relies on standard JPA entities to generate its meta-model. This will likely change in the near future.

Summary

JPA in general, and Hibernate in particular, have greatly simplified application database access, but the API sometimes forces unnecessary complexity. With JPAstreamer, you can utilize JPA while keeping your codebase clean and maintainable.

By Julia Gustafsson
Type Variance in Java and Kotlin

“There are three kinds of variance: invariance, covariance, and contravariance…” It looks pretty scary already, doesn’t it? If we search Wikipedia, we will find covariance and contravariance in category theory and linear algebra. Some of you who learned these subjects in university might be having dreadful flashbacks because it can be complex stuff. Because these terms look so scary, people avoid learning this topic in the context of programming languages. From my experience, many middle-level and sometimes senior-level Java and Kotlin developers fail to understand type variance. This leads to poorly designed internal APIs: to create convenient APIs using generics, you need to understand type variance; otherwise, you either don’t use generics at all or use them incorrectly. It is all about creating better APIs. If we compare a program to a building, then its internal API is the foundation of the building. If your internal API is convenient, then your code is more robust and maintainable. So let’s fill this gap in our knowledge. The best way to explain this topic is with a historical and evolutionary perspective. I will start by considering examples from ancient and primitive concepts such as arrays, which appeared in early Java versions, move through the Java Collections API, and finally reach Kotlin, which has advanced type variance support. Going from simple to more complex examples, you’ll see how language features have evolved and what problems were solved by introducing them. After reading this article, no mysteries will remain about Java’s “? extends” and “? super” or Kotlin’s “in” and “out.” For illustration purposes, I’ll be using the same example of a type hierarchy everywhere: we have a base class called Person, a subclass called Employee, and another subclass called Manager. Each Employee is a Person, each Manager is a Person, and each Manager is an Employee, but not necessarily vice versa: some Persons are not Employees.
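The hierarchy just described can be written down as three plain Java classes; a minimal sketch of the running example used throughout this article:

```java
// The running example: Manager -> Employee -> Person.
class Person {}
class Employee extends Person {}
class Manager extends Employee {}

public class HierarchyDemo {
    public static void main(String[] args) {
        Employee e = new Manager(); // every Manager is an Employee: compiles
        Person p = e;               // every Employee is a Person: compiles
        // Employee e2 = new Person(); // would NOT compile without a cast
        System.out.println(p instanceof Manager); // the object really is a Manager
    }
}
```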
In Java and Kotlin, this means you can assign an expression of type Manager to a variable of type Employee and so on, but not vice versa. We will also consider a lot of code examples, and for all of them, we’re interested in only four kinds of possible outcomes: (1) the code won’t compile; (2) the code will compile and run, but there will be a runtime exception; (3) the code will compile and run normally; (4) heap pollution will occur. Heap pollution is a situation where a variable of a certain type contains an object of the wrong type. For example, a variable declared as a String refers to an instance of a Manager or Employee. Yes, it is what it looks like: a flaw in the language’s type system. In general, this should not happen, but it sometimes happens both in Java and in Kotlin, and I’ll show you an example of heap pollution as well.

Covariance of Reified Java Arrays

Arrays have been present in Java for more than twenty-five years, starting from Java 1.0, and in a way, we can consider arrays a prototype for generics. For example, when we have a Manager type, we can build an array Manager[], and by getting elements of this array, we are getting values of the Manager type. The types of the values we read from an array are trivial, but what about assigning values to the array’s elements? Can we assign a Manager as an element of Employee[]? And what about a Person? All of the possible combinations are represented in the table below. Have a look and try to figure out what is going on: The result of assigning a value to an element of a Java array. The rightmost column is green because in Java null can be assigned (and returned) everywhere. In the lower-left corner, we have cases that won’t compile, which also makes sense: you cannot assign a Person to an Employee or Manager without an explicit type cast, and thus, you cannot set a Person as an element of an array of employees or managers.
That’s the main idea of type checking! Everything was understandable so far, but what about the rest of the combinations? We would expect that assigning an Employee to an element of Employee[], Person[], or Object[] will cause no problems, just like assigning it to a variable of type Employee, Person, or Object. What do these exclamation marks mean? A runtime exception? Why? What is this exception, and what can go wrong? I will explain this soon. Meanwhile, let’s consider another question: Can we assign a Java array of a given type to an array of another type? That is, can we assign Employee[] to Person[]? And vice versa? All the possible combinations are given in the following table: Can we assign a Java array of a given type to an array of another type? We could remove the square brackets, and this would give us a table of possible assignments of simple objects: Employee is assignable to Person, but Person is not assignable to Employee. Since each Manager is an Employee, an array of managers is an array of employees, right? At this point, we can already say that arrays in Java are covariant with respect to the types of their elements, but we will go back to strict terms soon. The following UML diagram is valid: Covariance. Now have a look at the code below to see how it behaves:

Java

Manager[] managers = new Manager[10];
Person[] persons = managers; // this should compile and run
persons[0] = new Person();   // line 1: what happens here?
Manager m = managers[0];     // line 2: and here?!

Nothing special happens in the beginning. Since a Manager is a Person, the assignment is possible. But since arrays, just like any objects, are reference types in Java, both the managers and persons variables keep a reference to the same object. On line 1, we are trying to insert a Person into this array. Note: compiler type-checking cannot prevent us from doing this.
But if this line were allowed to execute, then, on line 2, we should expect a catastrophic error: an array of Managers would contain someone who is not a Manager; in other words, heap pollution. But Java won’t let you do it here. Experienced Java developers might know that an ArrayStoreException will occur on line 1. To prevent heap pollution, an array object “knows” the type of its elements at runtime, and each time we assign a value, a runtime check is performed. This explains the exclamation marks in one of the previous tables: writing a non-null value to any Java array may, generally speaking, lead to an ArrayStoreException if the actual type of the array is a subtype of the variable's declared array type. The ability of a container to “know” the type of its elements is called “reification.” So now we know that arrays in Java are covariant and reified. To sum up, we may say that: The need for array reification and runtime checks (and possible runtime exceptions) comes from the covariance of arrays (the fact that a Manager[] array can be assigned to Person[]). Covariance is safe when we read values but can lead to problems when we write values. Note: the problem is so big that Java even abandoned the main static-language objective here, that is, to have all type checking at compile time, and behaves more like a dynamically-typed language (e.g., Python) in this scenario. You might ask: “Was covariance the right choice for Java arrays? What if we just prohibit the assignment of arrays of different types?” In that case, it would have been impossible to assign Manager[] to Person[], we would have known the array’s element type at compile time, and there would have been no need to resort to runtime checking. The ability of a type to be assignable only to variables of strictly the same type is called invariance, and we will encounter it in Java and Kotlin generics very soon. But imagine the problems that the invariance of arrays would have led to in Java.
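The failure mode just described fits in a few lines. A minimal, self-contained sketch of the article's example, with the exception caught so the behavior is visible:

```java
// Array covariance compiles; the reified runtime check rejects the bad write.
public class ArrayCovarianceDemo {
    static class Person {}
    static class Employee extends Person {}
    static class Manager extends Employee {}

    public static void main(String[] args) {
        Manager[] managers = new Manager[10];
        Person[] persons = managers; // covariant assignment: compiles fine
        try {
            persons[0] = new Person(); // line 1 of the example above
        } catch (ArrayStoreException e) {
            // the runtime check fired, so line 2 never sees a non-Manager
            System.out.println("ArrayStoreException caught");
        }
    }
}
```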
Imagine we have a method that accepts a Person[] as its argument and calculates, for example, the average age of the given people:

Java

Double calculateAverageAge(Person[] people)

Now we have a variable of type Manager[]. Managers are people, but can we pass this variable as an argument to calculateAverageAge? In Java we can, because of the covariance of arrays. If arrays were invariant, we would have to create a new array of type Person[], copy all the values from Manager[] to this array, and only then call the method. The memory and CPU overhead would have been enormous. This is why invariance is impractical in APIs, and this is the real reason why Java arrays are covariant (although covariance implies difficulties with value assignments). The example of Java arrays shows the full range of problems associated with type variance. Java and Kotlin generics tried to address these problems.

Invariance of Java and Kotlin Mutable Lists

I believe you are familiar with the concept of generics. In Java and Kotlin, given that list is not empty, we will have the following return types of list.get(0):

type of list    | type of list.get(0)
List<Person>    | Person
List<?>         | Object
List<*>         | Any?

The difference between Java and Kotlin is in the last two lines. Both Java and Kotlin have a notion of an “unknown” type parameter: both List<?> in Java and List<*> in Kotlin denote “a List of elements of some type, and we don’t know/don’t care what the type is.” In Java, everything is nullable, thus the Object returned by list.get(...) can be null. In Kotlin, we have to care about nullability, thus the get method for List<*> returns Any?. Now, let’s build the same tables we previously built for Java arrays. First, let’s consider the assignment of elements. Here we will find a huge difference between the Java and Kotlin Collections APIs (and as we will discover very soon, this difference is tightly related to the difference between type variance in Java and Kotlin).
In Java, every List has methods for its modification (add, remove, and so on). The difference between mutable and immutable collections in Java is visible only at runtime: we may get an UnsupportedOperationException if we try to change an immutable list. In Kotlin, mutability is visible at compile time. The List interface itself does not have any modification methods, and if we want mutability, we need to use MutableList. In other respects, List<..> in Java and MutableList<..> in Kotlin are nearly the same. Here are the results of the list.add(…) method in Java and Kotlin: What is the result of the list.add(…) method in Java and Kotlin? Why we cannot add a null to MutableList<*> is understandable: “star” may mean any type, both nullable and non-nullable. Since we don’t know anything about the actual type and its nullability, we cannot allow adding nullable values to MutableList<*>. Note: we don’t have anything similar to ArrayStoreException, although the table looks similar to the one we built for arrays. Now, let’s try to figure out when we can assign Java and Kotlin lists to each other. All the possible combinations are presented here: Can we assign these lists to each other? The rightmost green column means that List<?>/MutableList<*> are universally assignable: since we “don’t care” about the actual type parameter, we can assign anything. In the rest of the diagram, we see the green diagonal, which means that a MutableList<...> can be assigned only to a MutableList parameterized with the same type. In other words, List<T> in Java and MutableList<T> in Kotlin are invariant with respect to their type parameters.
This cuts off the possibility of inserting elements of the wrong type already at compile time:

Java

List<Manager> managers = new ArrayList<>();
List<Person> persons = managers; // won't compile
persons.add(new Person());       // no runtime check is possible

Two concerns may arise at this point: As we know from the Java arrays example, invariance is bad for building APIs. What if we need a method that processes List<Person> but can be called with List<Manager> without having to copy the whole list element by element? And why not implement everything the same way as for arrays? The answer to the first concern is the declaration-site and use-site variance that we are going to consider soon. The answer to the second question is that, unlike arrays, which are reified, generics in Java and Kotlin are type-erased, which means they have no information about their type parameters at run time, and runtime type checking is impossible. Let’s dive deeper into type erasure now.

Type Erasure, Generics/Arrays Incompatibility, and Heap Pollution

One of the reasons why the Java platform implements generics via type erasure is purely historical. Generics appeared in Java version 5, when the Java platform was already quite mature. Java keeps backward compatibility at the source code and bytecode level, which means that very old source code can be compiled in modern Java versions, and very old compiled libraries can be used in modern applications by placing them on the classpath. To facilitate the upgrade to Java 5, the decision was made to implement generics as a language feature, not a platform feature. This means that at run time the JVM doesn’t know anything about generics and their type parameters.
For example, a simple Pair<T> class is compiled to bytecode in the following way (type parameter T is “erased” and replaced with Object):

Generic type (source):

Java

class Pair<T> {
    private T first;
    private T second;
    Pair(T first, T second) { this.first = first; this.second = second; }
    T getFirst() { return first; }
    T getSecond() { return second; }
    void setFirst(T newValue) { first = newValue; }
    void setSecond(T newValue) { second = newValue; }
}

Raw type (compiled):

Java

class Pair {
    private Object first;
    private Object second;
    Pair(Object first, Object second) { this.first = first; this.second = second; }
    Object getFirst() { return first; }
    Object getSecond() { return second; }
    void setFirst(Object newValue) { first = newValue; }
    void setSecond(Object newValue) { second = newValue; }
}

Or, if we use bounded types in the generic type definition, the type parameter is replaced with the boundary type:

Generic type (source):

Java

class Pair<T extends Employee> {
    private T first;
    private T second;
    Pair(T first, T second) { this.first = first; this.second = second; }
    T getFirst() { return first; }
    T getSecond() { return second; }
    void setFirst(T newValue) { first = newValue; }
    void setSecond(T newValue) { second = newValue; }
}

Raw type (compiled):

Java

class Pair {
    private Employee first;
    private Employee second;
    Pair(Employee first, Employee second) { this.first = first; this.second = second; }
    Employee getFirst() { return first; }
    Employee getSecond() { return second; }
    void setFirst(Employee newValue) { first = newValue; }
    void setSecond(Employee newValue) { second = newValue; }
}

This implies many strict and sometimes counterintuitive limitations on how we can use generics in Java and Kotlin. If you want to know more details (e.g., about bounded types or what “bridge methods” are), you can refer to my lecture on Java generics titled Mainor 2022: Java Generics.
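Erasure can be observed directly: a short sketch showing that two differently parameterized lists share one and the same runtime class, because the type parameters no longer exist at run time:

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();
        // Both variables point at plain ArrayList at runtime;
        // the <String> and <Integer> parameterizations are erased.
        System.out.println(strings.getClass() == integers.getClass()); // true
    }
}
```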
But the most important restriction is the following: neither in Java nor in Kotlin can we determine the type parameter at runtime. These code snippets won’t compile:

Java

if (a instanceof Pair<String>) ...

Kotlin

if (a is Pair<String>) ...

But these will compile and run successfully, although we would probably like to know more about a:

Java

if (a instanceof Pair<?>) ...

Kotlin

if (a is Pair<*>) ...

An important implication of this is the incompatibility of Java arrays and generics. For example, the following line won’t compile in Java, with the error “generic array creation:”

Java

List<String>[] a = new ArrayList<String>[10];

As we know, Java arrays need to keep the full type information at runtime, while all the information that would be available in this case is that it is an array of ArrayLists of something unknown (the “String” type parameter will be erased). Interestingly, we can overcome this protection and create an array of generics in Java (either via a type cast or a varargs (variable arguments) parameter), and then easily cause heap pollution with it. But let’s consider another example. It doesn’t involve Java arrays, and thus it is possible both in Java and Kotlin:

Java

Pair<Integer> intPair = new Pair<>(42, 0);
Pair<?> pair = intPair;
Pair<String> stringPair = (Pair<String>) pair;
stringPair.b = "foo";
System.out.println(intPair.a * intPair.b);

Kotlin

var intPair = Pair<Int>(42, 0)
var pair: Pair<*> = intPair
var stringPair: Pair<String> = pair as Pair<String>
stringPair.b = "foo"
println(intPair.a * intPair.b)

An example of heap pollution: a chimera appears! First, we create a pair of integers. Then we “forget” its type at compile time, and through an explicit typecast we cast it to a pair of Strings. Note: we cannot cast intPair to stringPair straightforwardly: Integer cannot be cast to String, and the compiler won’t allow it.
But we can do this via Pair<?> / Pair<*>: although there will be a warning about an unchecked typecast, the compiler won’t prohibit the cast in this scenario (we can imagine a Pair<String> cast to Pair<?> and then explicitly cast back to Pair<String>). Then something weird happens: we assign a String to the second component of our object, and this code compiles and runs. It compiles because the compiler “thinks” that b has the type String. It runs because at runtime there are no checks, and the type of b is Object. After the execution of this line, we have a “chimera” object: its first variable is an Integer, its second variable is a String, and it’s neither a Pair<String> nor a Pair<Integer>. We’ve broken the type safety of Java and Kotlin and caused heap pollution. To sum up: Because of type erasure, it’s impossible to perform runtime type checking of objects passed to generics. It’s unsafe to store type-erased generics in Java’s native reified arrays. Both Java and Kotlin permit heap pollution: a situation where a variable of some type refers to an object that is not of that type.

Use Site Covariance

Imagine we are facing the following practical task: we are implementing a class MyList<E>, and we want it to have the ability to add elements from other lists via an addAllFrom method and the ability to add its elements to another list via addAllTo. Since we have the usual Manager – Employee – Person inheritance chain, these must be the valid and invalid options:

Java

MyList<Manager> managers = ...
MyList<Employee> employees = ...

//Valid options, we want these to be compilable!
employees.addAllFrom(managers);
managers.addAllTo(employees);

//Invalid options, we don't want these to be compilable!
managers.addAllFrom(employees);
employees.addAllTo(managers);

A naive approach (one that, unfortunately, I’ve seen many times in real-life projects) is to use the type parameter straightforwardly:

Java

class MyList<E> implements Iterable<E> {
    void add(E item) { ... }
    //Don't do this :-(
    void addAllFrom(MyList<E> list) { for (E item : list) this.add(item); }
    void addAllTo(MyList<E> list) { for (E item : this) list.add(item); }
    ...
}

Now, when we try to write the following code, it will not compile:

Java

MyList<Manager> managers = ...;
MyList<Employee> employees = ...;
employees.addAllFrom(managers);
managers.addAllTo(employees);

I often see people struggling with this: they try to introduce generic classes in their code, but these classes turn out to be unusable. Now we know why this happens: it is due to the invariance of MyList. We have figured out that, due to the lack of runtime type checking, type invariance is the best that can be done for the type safety of Java’s List / Kotlin’s MutableList. Both Java and Kotlin offer a solution to this problem: to create convenient APIs, we need to use wildcard types in Java or type projections in Kotlin. Let’s look at Java first:

Java

class MyList<E> implements Iterable<E> {
    void addAllFrom(MyList<? extends E> list) { for (E item : list) add(item); }
}

MyList<Manager> managers = ...;
MyList<Employee> employees = ...;
employees.addAllFrom(managers);

MyList<? extends E> means: “a list of any type will do, as long as this type is a subtype of E.” When we iterate over this list, the items can be safely cast to “E.” And since our list is a list of “E,” we can safely add these elements to our list. The program will compile and run. In Kotlin, this looks very similar, but instead of “? extends E,” we are using “out E:”

Kotlin

class MyList<E> : Iterable<E> {
    fun addAllFrom(list: MyList<out E>) { for (item in list) add(item) }
}

val managers: MyList<Manager> = ...
val employees: MyList<Employee> = ...
employees.addAllFrom(managers)

By declaring <? extends E> or <out E>, we are making the type of the argument covariant. But to avoid heap pollution, this implies certain limitations on what can be done with a variable declared with wildcard types/type projections. One of my favourite questions for a Java technical interview is: given a variable declared as List<? extends E> list in Java, what can be done with this variable? Of course, we can use list.get(...), and the return type will be E. On the other hand, if we have a variable E element, we cannot use list.add(element): such code won’t compile. Why? We know that the list is a list of elements of some type which is a subtype of E. But we don’t know which subtype. For example, if E is Person, then ? extends E might be Employee or Manager. We cannot blindly append a Person to such a list, then. An interesting exception: list.add(null) will compile and run. This happens because null in Java is assignable to a variable of any type, and thus it is safe to add it to any list. We can also use an “unbounded wildcard” in Java, which is just a question mark in angle brackets, as in Foo<?>. The rules for it are as follows:

If Foo<T extends Bound>, then Foo<?> is the same as Foo<? extends Bound>. We can read elements, but only as Bound (or Object, if no Bound is given).
If we’re using intersection types Foo<T extends Bound1 & Bound2>, any of the bound types will do.
We can put only null values.

What about covariant types in Kotlin? Unlike Java, nullability now plays a role. If we have a function parameter with the type MyList<out E?>:

We can read values typed E?.
We cannot add anything. Even null won’t do because, although we have the nullable E?, out means any subtype. In Kotlin, a non-nullable type is a subtype of a nullable type. So the actual type of the list element might be non-nullable, and this is why Kotlin won’t let you add even null to such a list.

Use Site Contravariance

We’ve been talking about covariance so far.
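The covariance rules we have just discussed can be checked with a minimal runnable sketch (the Person/Employee classes below are illustrative stand-ins for the inheritance chain above):

```java
import java.util.ArrayList;
import java.util.List;

public class CovarianceDemo {
    static class Person { final String name; Person(String name) { this.name = name; } }
    static class Employee extends Person { Employee(String name) { super(name); } }

    public static void main(String[] args) {
        List<Employee> employees = new ArrayList<>();
        employees.add(new Employee("Alice"));

        // A covariant view of the same list: reading is safe,
        // because every element is at least a Person.
        List<? extends Person> people = employees;
        Person first = people.get(0);
        System.out.println(first.name);

        // people.add(new Person("Bob")); // won't compile: the actual list
        //                                // may only hold Employees
        people.add(null); // the single exception: null is assignable to any type
        System.out.println(people.size());
    }
}
```

Uncommenting the add(new Person(...)) line reproduces the compile error discussed above, while the null insertion illustrates the one exception.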
Covariant types are good for reading values and bad for writing. What about contravariance? Before figuring out where it might be needed, let’s have a look at the following diagram: Unlike covariant types, subtyping works the other way around in contravariant ones, and this makes them good for writing values but bad for reading. The classic example of a use case for contravariance is Predicate<E>, a functional type that takes E as an argument and returns a boolean value. The wider the type of E in a predicate, the more “powerful” it is. For example, Predicate<Person> can substitute for Predicate<Employee> (because an Employee is a Person), and thus it can be considered its subtype. Of course, everything is invariant in Java and Kotlin by default, and this is why we need to use another kind of wildcard type and type projection. The addAllTo method of our MyList class can be implemented the following way:

Java

class MyList<E> implements Iterable<E> {
    void addAllTo(MyList<? super E> list) { for (E item : this) list.add(item); }
}

MyList<Employee> employees = ...;
MyList<Person> people = ...;
employees.addAllTo(people);

MyList<? super E> means “a list of any type will do, as long as this type is E or a supertype of E, up to Object.” When we iterate over our list, our items, which have type E, can be safely cast to this unknown type and can be safely added to another list. The program will compile and run. In Kotlin, it looks the same, but we use MyList<in E> instead of MyList<? super E>:

Kotlin

class MyList<E> : Iterable<E> {
    fun addAllTo(list: MyList<in E>) { for (item in this) list.add(item) }
}

val employees: MyList<Employee> = ...
val people: MyList<Person> = ...
employees.addAllTo(people)

What Can Be Done With an Object Typed List<? super E> in Java?

When we have an element of type E, we can successfully add it to this list. The same works for null: null can be added everywhere in Java. We can call the get(..)
method for such a list, but we can read its values only as Objects. Indeed, <? super E> means that the actual type parameter is unknown and can be anything up to Object, so Object is the only safe assumption about the type of list.get(..). And what about Kotlin? Again, nullability plays a role. If we have a parameter list: MyList<in E>, then:

We can add elements of type E to the list.
We cannot add nulls (but we can if the variable is declared like MyList<in E?>).
The type of its elements (e.g., the type of list.first()) is Any? – mind the question mark. In Kotlin, “Any?” is the universal supertype, while “Any” is a subtype of “Any?”. If a type is contravariant, it can always potentially hold nulls.

PECS: The Mnemonic Rule for Java

Now we know that covariance is for reading (and writing is generally prohibited for a covariantly-typed object), and contravariance is for writing (and although we can read values from contravariant-typed objects, all the type information is lost). Joshua Bloch, in his famous “Effective Java” book, proposes the following mnemonic rule for Java programmers:

PECS: Producer — Extends, Consumer — Super

This rule makes it simple to reason about the correct wildcard types in your API. If, for example, an argument for our method is a Function, we should always (no exceptions here) declare it this way:

Java

void myMethod(Function<? super T, ? extends R> arg)

The T parameter in Function is the type of the input, i.e., something that is being consumed, and thus we use ? super for it. The R parameter is the result, something that is produced, and thus we use ? extends. This trick allows us to use any compatible Function as an argument: any Function that can process T or its supertype will do, as well as any Function that yields R or any of its subtypes. In the standard Java library API, we can see a lot of examples of wildcard types, all of them following the PECS rule.
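As a runnable sketch of how PECS plays out in practice (the copy and applyOnce helpers below are illustrative, not library methods):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class PecsDemo {
    // src is a producer of T (extends), dst is a consumer of T (super)
    static <T> void copy(List<? extends T> src, List<? super T> dst) {
        for (T item : src) {
            dst.add(item);
        }
    }

    // The Function's input is consumed (? super), its result is produced (? extends)
    static Number applyOnce(Function<? super Integer, ? extends Number> fn, Integer x) {
        return fn.apply(x);
    }

    public static void main(String[] args) {
        List<Integer> ints = Arrays.asList(1, 2, 3);
        List<Number> nums = new ArrayList<>();
        copy(ints, nums); // List<Integer> produces, List<Number> consumes
        System.out.println(nums);

        // A Function<Number, Long> is compatible: it consumes a supertype
        // of Integer and produces a subtype of Number.
        Function<Number, Long> fn = n -> n.longValue() + 1;
        System.out.println(applyOnce(fn, 41));
    }
}
```

Without the wildcards, copy(ints, nums) and applyOnce(fn, 41) would both fail to compile, since List<Integer> is not a List<Number> and Function<Number, Long> is not a Function<Integer, Number>.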
For example, a method that finds a maximum number in a Collection given a Comparator is defined like this:

Java

public static <T> T max(Collection<? extends T> coll, Comparator<? super T> comp)

This allows us to conveniently use the following parameters: Collections.max(List<Integer>, Comparator<Number>) (if we can compare any Numbers, then we can compare Integers), and Collections.max(List<String>, Comparator<Object>) (if we can compare Objects, then we can compare Strings). In Kotlin, it is easy to memorize that producers always use the “out” keyword and consumers use “in.” Although Kotlin syntax is more concise and the “in/out” keywords make it clearer which type is used for the producer and which for the consumer, it is still very useful to understand that “out” actually means a subtype, while “in” means a supertype.

Declaration Site Variance in Kotlin

Now we’re going to consider a feature that Kotlin has and Java doesn’t: declaration site variance. Let’s have a look at Kotlin’s immutable List. When we check the assignability of Kotlin’s List, we find that it looks similar to Java arrays. In other words, Kotlin’s List is itself covariant: Can we assign these immutable lists to each other? Covariance for Kotlin’s List doesn’t imply any of the problems related to Java’s covariant arrays, since you cannot add or modify anything. When just reading the values, we can safely cast a Manager to an Employee. That’s why a Kotlin function that requires List<Person> as its parameter will happily accept, say, List<Manager>, even if that parameter does not use type projections. There is no similar functionality in Java. When we compare the declaration of the List interface in Java and Kotlin, we’ll see the difference:

Java

public interface List<E> extends Collection<E> {...}

Kotlin

public interface List<out E> : Collection<E> {...}

The keyword “out” in the type declaration makes the List interface in Kotlin a covariant type everywhere.
Of course, you cannot make just any type covariant in Kotlin: only those that do not use the type parameter as an argument of a public method (using E as a return type is OK). So it’s a good idea to declare all your immutable classes as covariant in Kotlin. In our ‘MyList’ example, we might also want to introduce an immutable pair like this:

Kotlin

class MyImmutablePair<out E>(val a: E, val b: E)

In this class, we can only declare methods that return something of type E, but not public methods that have E-typed arguments. Note: constructor parameters and private methods with E-typed arguments are OK. Now, if we want to add a method that takes values from MyImmutablePair, we don’t need to bother about use-site variance.

Kotlin

class MyList<E> : Iterable<E> {
    //Don't bother about use-site type variance!
    fun addAllFrom(pair: MyImmutablePair<E>) { add(pair.a); add(pair.b) }
    ...
}

val twoManagers: MyImmutablePair<Manager> = ...
employees.addAllFrom(twoManagers)

The same applies to contravariance, of course. We might want to define a contravariant class MyConsumer in this way:

Kotlin

class MyConsumer<in E> {
    fun consume(p: E) { ... }
}

As soon as we define a type as contravariant, the following limitations emerge: we can define methods that have E-typed arguments, but we cannot expose anything of type E. We can have private class variables of type E, and even private methods that return E. The addAllTo method, which dumps all the values to the given consumer, now doesn’t need to use type projections. The following code will compile and run:

Kotlin

class MyList<E> : Iterable<E> {
    //Don't bother about use-site type variance!
    fun addAllTo(consumer: MyConsumer<E>) { for (item in this) consumer.consume(item) }
    ...
}

val employees: MyList<Employee> = ...
val personConsumer: MyConsumer<Person> = ...
employees.addAllTo(personConsumer)

The one thing worth mentioning is how declaration-site variance influences the star projection Foo<*>.
If we have an object typed Foo<*>, does it matter whether the Foo class is defined as invariant, covariant, or contravariant when we want to do something with this object?

If the original type declaration is Foo<T : TUpper> (invariant), then, of course, you can read values as TUpper, and you cannot write anything (not even null), because we don’t know the exact type.
If Foo<out T : TUpper> is covariant, you can still read values as TUpper, and you cannot write anything, simply because there are no public methods for writing in such a class.
If Foo<in T : TUpper> is contravariant, then you cannot read anything (because there are no such public methods), and you still cannot write anything (because you “forgot” the exact type).

So a contravariant Foo<*> variable is the most useless thing in Kotlin.

Kotlin Is Better for the Creation of Fluent APIs

When we consider switching between languages, the most important question is: what can the new language provide that cannot be achieved with the old one? More concise syntax is good, but if everything a new language offers is just syntactic sugar, then maybe it is not worth switching from familiar tools and ecosystems. With regard to type variance in Kotlin vs. Java, the question is: does declaration-site variance provide options that are impossible in Java with wildcard types? In my opinion, the answer is definitely yes, as declaration-site variance is not just about getting rid of “? extends” and “? super” everywhere. Here’s a real-life example of the problems that arise when we design APIs for data stream processing frameworks (in particular, this example relates to the Apache Kafka Streams API). The key classes of such frameworks are abstractions of data streams, like KStream<K>, which are semantically covariant: a stream of Employee can be safely considered a stream of Person if all that we are interested in are Person’s properties.
Now imagine that in the library code we have a class that accepts a function capable of transforming a stream:

Java

class Processor<E> {
    void withFunction(Function<? super KStream<E>, ? extends KStream<E>> chain) {...}
}

In the user’s code, these functions may look like this:

Java

KStream<Employee> transformA(KStream<Employee> s) {...}
KStream<Manager> transformB(KStream<Person> s) {...}

As you can see, both of these functions can work as a transformer from KStream<Employee> to KStream<Employee>. But if we try to use them as method references passed to the withFunction method, only the first one will do:

Java

Processor<Employee> processor = ...
//Compiles
processor.withFunction(this::transformA);
//Won't compile with "KStream<Employee> is not convertible to KStream<Person>"
processor.withFunction(this::transformB);

The problem cannot be fixed by just adding more “? extends.” If we define the class in this way:

Java

class Processor<E> {
    //A mind-blowing number of question marks
    void withFunction(Function<? super KStream<? super E>, ? extends KStream<? extends E>> chain) {...}
}

then both lines

Java

processor.withFunction(this::transformA);
processor.withFunction(this::transformB);

will fail to compile with something like “KStream<capture of ? super Employee> is not convertible to KStream<Employee>.” Java’s type inference is not “wise” enough to support such complex recursive definitions. Meanwhile, in Kotlin, if we declare the class KStream<out E> as covariant, this is easily possible:

Kotlin

/* LIBRARY CODE */
class KStream<out E>

class Processor<E> {
    fun withFunction(chain: (KStream<E>) -> KStream<E>) {}
}

/* USER'S CODE */
fun transformA(s: KStream<Employee>): KStream<Employee> { ... }
fun transformB(s: KStream<Person>): KStream<Manager> { ... }

val processor: Processor<Employee> = Processor()
processor.withFunction(::transformA)
processor.withFunction(::transformB)

All the lines will compile and run as intended (besides the fact that we have more concise syntax).
Kotlin has a clear win in this scenario.

Conclusion

To sum up, here are some properties of the different kinds of type variance.

Covariance is:

? extends in Java
out in Kotlin
safe reading; unsafe or impossible writing
described by the following diagram: when A is a supertype of B, the matrix of possible assignments fills the lower left corner:

Contravariance is:

? super in Java
in in Kotlin
safe writing; reading with type information lost, or impossible
described by the following diagram: when A is a supertype of B, the matrix of possible assignments fills the upper right corner:

Invariance is:

assumed in Java and Kotlin by default
safe writing and reading
described by the following diagram: when A is a supertype of B, the matrix of possible assignments fills only the diagonal:

To create good APIs, understanding type variance is necessary. Kotlin offers great enhancements over Java Generics, making the usage of ready-made generic types even more straightforward. But to create your own generic types in Kotlin, it’s even more important to understand the principles of type variance. I hope that it’s now clear how type variance works and how it can be used in your APIs. Thanks for reading.

By Ivan Ponomarev
7 Awesome Libraries for Java Unit and Integration Testing

Looking to improve your unit and integration tests? I made a short video giving you an overview of 7 libraries that I regularly use when writing any sort of tests in Java, namely:

AssertJ
Awaitility
Mockito
Wiser
Memoryfilesystem
WireMock
Testcontainers

What’s in the Video?

The video gives a short overview of how to use the tools mentioned above and how they work. In order of appearance:

AssertJ

JUnit comes with its own set of assertions (e.g., assertEquals) that work for simple use cases but are quite cumbersome to work with in more realistic scenarios. AssertJ is a small library giving you a great set of fluent assertions that you can use as a direct replacement for the default assertions. Not only do they work on core Java classes, but you can also use them to write assertions against XML or JSON files, as well as database tables!

// basic assertions
assertThat(frodo.getName()).isEqualTo("Frodo");
assertThat(frodo).isNotEqualTo(sauron);

// chaining string-specific assertions
assertThat(frodo.getName()).startsWith("Fro")
                           .endsWith("do")
                           .isEqualToIgnoringCase("frodo");

(Note: Source Code from AssertJ)

Awaitility

Testing asynchronous workflows is always a pain. As soon as you want to make sure that, for example, a message broker received or sent a specific message, you'll run into race condition problems because your local test code executes faster than any asynchronous code ever would. Awaitility to the rescue: it is a small library that lets you write polling assertions in a synchronous manner!

@Test
public void updatesCustomerStatus() {
    // Publish an asynchronous message to a broker (e.g. RabbitMQ):
    messageBroker.publishMessage(updateCustomerStatusMessage);
    // Awaitility lets you wait until the asynchronous operation completes:
    await().atMost(5, SECONDS).until(customerStatusIsUpdated());
    ...
}

(Note: Source Code from Awaitility)

Mockito

There comes a time in unit testing when you want to make sure to replace parts of your functionality with mocks.
Mockito is a battle-tested library to do just that. You can create mocks, configure them, and write a variety of assertions against those mocks. To top it off, Mockito also integrates nicely with a huge array of third-party libraries, from JUnit to Spring Boot.

// mock creation
List mockedList = mock(List.class);
// or even simpler with Mockito 4.10.0+
// List mockedList = mock();

// using mock object - it does not throw any "unexpected interaction" exception
mockedList.add("one");
mockedList.clear();

// selective, explicit, highly readable verification
verify(mockedList).add("one");
verify(mockedList).clear();

(Note: Source Code from Mockito)

Wiser

Keeping your code as close to production as possible, and not just using mocks for everything, is a viable strategy. When you want to send emails, for example, you neither need to completely mock out your email code nor actually send the emails out via Gmail or Amazon SES. Instead, you can boot up a small, embedded Java SMTP server called Wiser.

Wiser wiser = new Wiser();
wiser.setPort(2500); // Default is 25
wiser.start();

Now you can use Java's SMTP API to send emails to Wiser and also ask Wiser to show you what messages it received.

for (WiserMessage message : wiser.getMessages()) {
    String envelopeSender = message.getEnvelopeSender();
    String envelopeReceiver = message.getEnvelopeReceiver();
    MimeMessage mess = message.getMimeMessage();
    // now do something fun!
}

(Note: Source Code from Wiser on GitHub)

Memoryfilesystem

If you write a system that heavily relies on files, the question has always been: "How do you test that?" File system access is somewhat slow, and also brittle, especially if you have developers working on different operating systems. Memoryfilesystem to the rescue! It lets you write tests against a file system that lives completely in memory, but can still simulate OS-specific semantics, from Windows to macOS and Linux.
try (FileSystem fileSystem = MemoryFileSystemBuilder.newEmpty().build()) {
    Path p = fileSystem.getPath("p");
    System.out.println(Files.exists(p));
}

(Note: Source Code from Memoryfilesystem on GitHub)

WireMock

How do you handle flaky 3rd-party REST services or APIs in your tests? Easy! Use WireMock. It lets you create full-blown mocks of any 3rd-party API out there, with a very simple DSL. You can not only specify the responses your mocked API will return, but even go so far as to inject random delays and other unspecified behavior into your server, or do some chaos monkey engineering.

// The static DSL will be automatically configured for you
stubFor(get("/static-dsl").willReturn(ok()));

// Instance DSL can be obtained from the runtime info parameter
WireMock wireMock = wmRuntimeInfo.getWireMock();
wireMock.register(get("/instance-dsl").willReturn(ok()));

// Info such as port numbers is also available
int port = wmRuntimeInfo.getHttpPort();

(Note: Source Code from WireMock)

Testcontainers

Using mocks or embedded replacements for databases, mail servers, or message queues is all nice and dandy, but nothing beats using the real thing. In comes Testcontainers: a small library that allows you to boot up and shut down any Docker container (and thus software) that you need for your tests. This means your test environment can be as close as possible to your production environment.

@Testcontainers
class MixedLifecycleTests {

    // will be shared between test methods
    @Container
    private static final MySQLContainer MY_SQL_CONTAINER = new MySQLContainer();

    // will be started before and stopped after each test method
    @Container
    private PostgreSQLContainer postgresqlContainer = new PostgreSQLContainer()
            .withDatabaseName("foo")
            .withUsername("foo")
            .withPassword("secret");
}

(Note: Source Code from Testcontainers)

Enjoy the video!

By Marco Behler
How to Use MQTT in Java

MQTT is an OASIS standard messaging protocol for the Internet of Things (IoT). It is designed as an extremely lightweight publish/subscribe messaging transport that is ideal for connecting remote devices with a small code footprint and minimal network bandwidth. MQTT today is used in a wide variety of industries, such as automotive, manufacturing, telecommunications, oil and gas, etc. This article introduces how to use MQTT in a Java project to implement connecting, subscribing, unsubscribing, publishing, and receiving messages between the client and the broker.

Add Dependency

The development environment for this article is:

Build tool: Maven
IDE: IntelliJ IDEA
Java: JDK 1.8.0

We will use Eclipse Paho Java Client as the client, which is the most widely used MQTT client library in the Java language. Add the following dependencies to the pom.xml file.

<dependencies>
    <dependency>
        <groupId>org.eclipse.paho</groupId>
        <artifactId>org.eclipse.paho.client.mqttv3</artifactId>
        <version>1.2.5</version>
    </dependency>
</dependencies>

Create an MQTT Connection

MQTT Broker

This article will use the public MQTT broker created based on EMQX Cloud. The server access information is as follows:

Broker: broker.emqx.io
TCP Port: 1883
SSL/TLS Port: 8883

Connect

Set the basic connection parameters of MQTT. Username and password are optional.

String broker = "tcp://broker.emqx.io:1883";
// TLS/SSL
// String broker = "ssl://broker.emqx.io:8883";
String username = "emqx";
String password = "public";
String clientid = "publish_client";

Then create an MQTT client and connect to the broker.
MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
MqttConnectOptions options = new MqttConnectOptions();
options.setUserName(username);
options.setPassword(password.toCharArray());
client.connect(options);

Instructions:

MqttClient: MqttClient provides a set of methods that block and return control to the application program once the MQTT action has been completed.
MqttClientPersistence: Represents a persistent data store used to store outbound and inbound messages while they are in flight, enabling delivery to the QoS specified.
MqttConnectOptions: Holds the set of options that control how the client connects to a server. Here are some common methods:
setUserName: Sets the user name to use for the connection.
setPassword: Sets the password to use for the connection.
setCleanSession: Sets whether the client and server should remember the state across restarts and reconnects.
setKeepAliveInterval: Sets the "keep alive" interval.
setConnectionTimeout: Sets the connection timeout value.
setAutomaticReconnect: Sets whether the client will automatically attempt to reconnect to the server if the connection is lost.

Connecting With TLS/SSL

If you want to use a self-signed certificate for TLS/SSL connections, add bcpkix-jdk15on to the pom.xml file.

<!-- https://mvnrepository.com/artifact/org.bouncycastle/bcpkix-jdk15on -->
<dependency>
    <groupId>org.bouncycastle</groupId>
    <artifactId>bcpkix-jdk15on</artifactId>
    <version>1.70</version>
</dependency>

Then create the SSLUtils.java file with the following code.
package io.emqx.mqtt;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.openssl.PEMKeyPair;
import org.bouncycastle.openssl.PEMParser;
import org.bouncycastle.openssl.jcajce.JcaPEMKeyConverter;

import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManagerFactory;
import java.io.BufferedInputStream;
import java.io.FileInputStream;
import java.io.FileReader;
import java.security.KeyPair;
import java.security.KeyStore;
import java.security.Security;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class SSLUtils {
    public static SSLSocketFactory getSocketFactory(final String caCrtFile,
            final String crtFile, final String keyFile, final String password)
            throws Exception {
        Security.addProvider(new BouncyCastleProvider());

        // load CA certificate
        X509Certificate caCert = null;
        FileInputStream fis = new FileInputStream(caCrtFile);
        BufferedInputStream bis = new BufferedInputStream(fis);
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        while (bis.available() > 0) {
            caCert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client certificate
        bis = new BufferedInputStream(new FileInputStream(crtFile));
        X509Certificate cert = null;
        while (bis.available() > 0) {
            cert = (X509Certificate) cf.generateCertificate(bis);
        }

        // load client private key
        PEMParser pemParser = new PEMParser(new FileReader(keyFile));
        Object object = pemParser.readObject();
        JcaPEMKeyConverter converter = new JcaPEMKeyConverter().setProvider("BC");
        KeyPair key = converter.getKeyPair((PEMKeyPair) object);
        pemParser.close();

        // CA certificate is used to authenticate server
        KeyStore caKs = KeyStore.getInstance(KeyStore.getDefaultType());
        caKs.load(null, null);
        caKs.setCertificateEntry("ca-certificate", caCert);
        TrustManagerFactory tmf = TrustManagerFactory.getInstance("X509");
        tmf.init(caKs);

        // client key and certificates are sent to server so it can authenticate
        KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
        ks.load(null, null);
        ks.setCertificateEntry("certificate", cert);
        ks.setKeyEntry("private-key", key.getPrivate(), password.toCharArray(),
                new java.security.cert.Certificate[]{cert});
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(
                KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(ks, password.toCharArray());

        // finally, create SSL socket factory
        SSLContext context = SSLContext.getInstance("TLSv1.2");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);

        return context.getSocketFactory();
    }
}

Set options as follows.

String broker = "ssl://broker.emqx.io:8883";

// Set socket factory
String caFilePath = "/cacert.pem";
String clientCrtFilePath = "/client.pem";
String clientKeyFilePath = "/client.key";
SSLSocketFactory socketFactory = getSocketFactory(caFilePath, clientCrtFilePath, clientKeyFilePath, "");
options.setSocketFactory(socketFactory);

Publish MQTT Messages

Create a class PublishSample that will publish a Hello MQTT message to the topic mqtt/test.
package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class PublishSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "publish_client";
        String content = "Hello MQTT";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // connect
            client.connect(options);
            // create message and setup QoS
            MqttMessage message = new MqttMessage(content.getBytes());
            message.setQos(qos);
            // publish message
            client.publish(topic, message);
            System.out.println("Message published");
            System.out.println("topic: " + topic);
            System.out.println("message content: " + content);
            // disconnect
            client.disconnect();
            // close client
            client.close();
        } catch (MqttException e) {
            throw new RuntimeException(e);
        }
    }
}

Subscribe

Create a class SubscribeSample that will subscribe to the topic mqtt/test.
package io.emqx.mqtt;

import org.eclipse.paho.client.mqttv3.*;
import org.eclipse.paho.client.mqttv3.persist.MemoryPersistence;

public class SubscribeSample {
    public static void main(String[] args) {
        String broker = "tcp://broker.emqx.io:1883";
        String topic = "mqtt/test";
        String username = "emqx";
        String password = "public";
        String clientid = "subscribe_client";
        int qos = 0;

        try {
            MqttClient client = new MqttClient(broker, clientid, new MemoryPersistence());
            // connect options
            MqttConnectOptions options = new MqttConnectOptions();
            options.setUserName(username);
            options.setPassword(password.toCharArray());
            options.setConnectionTimeout(60);
            options.setKeepAliveInterval(60);
            // setup callback
            client.setCallback(new MqttCallback() {
                public void connectionLost(Throwable cause) {
                    System.out.println("connectionLost: " + cause.getMessage());
                }

                public void messageArrived(String topic, MqttMessage message) {
                    System.out.println("topic: " + topic);
                    System.out.println("Qos: " + message.getQos());
                    System.out.println("message content: " + new String(message.getPayload()));
                }

                public void deliveryComplete(IMqttDeliveryToken token) {
                    System.out.println("deliveryComplete---------" + token.isComplete());
                }
            });
            client.connect(options);
            client.subscribe(topic, qos);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

MqttCallback:

connectionLost(Throwable cause): This method is called when the connection to the server is lost.
messageArrived(String topic, MqttMessage message): This method is called when a message arrives from the server.
deliveryComplete(IMqttDeliveryToken token): Called when delivery for a message has been completed and all acknowledgments have been received.

Test

Next, run SubscribeSample to subscribe to the mqtt/test topic. Then run PublishSample to publish the message on the mqtt/test topic. We will see that the publisher successfully publishes the message, and the subscriber receives it.
Summary

We are now done using the Paho Java client as an MQTT client: we connected to the public MQTT server and implemented message publishing and subscription. The full code is available on GitHub.

By Zhiwei Yu
Kotlin Is More Fun Than Java And This Is a Big Deal

I first dabbled in Kotlin soon after its 1.0 release in 2016. For lack of paying gigs in which to use it, I started my own open-source project and released the first alpha over the Christmas holidays. I'm now firmly in love with the language. But I'm not here to promote my pet project. I want to talk about the emotional value of the tools we use, the joys and annoyances beyond mere utility.

Some will tell you that there's nothing you can do in Kotlin that you can't do just as well with Java. There's no compelling reason to switch. Kotlin is just a different tool doing the same thing. Software is a product of the mind, not of your keyboard. You're not baking an artisanal loaf of bread, where ingredients and the right oven matter as much as your craftsmanship. Tools only support your creativity. They don't create anything.

I agree that we mustn't get hung up on our tools, but they are important. Both the hardware and software we use to create our code matter a great deal. I'll argue that we pick these tools not only for their usefulness but also for the joy of working with them. And don't forget snob appeal. Kotlin can be evaluated against all three of these motivations.

Let's take a detour outside the world of software to illustrate. I'm an amateur photographer who spends way too much on gear, knowing full well that it doesn't improve my work. I'm the butt of Ken Rockwell's amusing rant: "Your camera doesn't matter." Only amateurs believe that it does. Get a decent starter kit and then go out to interesting places and take lots of pictures, is his advice. Better yet, take classes to get professional feedback on your work. Spend your budget on that instead of on fancy gear. In two words: Leica shmeica.

He's right. Cameras and gear facilitate your creativity at best, but a backpack full of them can weigh you down. A photographer creates with their eyes and imagination. You only need the camera to register the result of that process.
Your iPhone Pro has superior resolution and sharpness over Henri Cartier-Bresson's single-lens compact camera (a Leica, by the way) that he used in the 1940s for The Decisive Moment. But your pics won't come anywhere near his greatness.

I didn't take Ken's advice to heart, of course. The outlay on photo gear still exceeds what I spent on training over the years by a factor of five. Who cares, it's my hobby budget. I don't have to need something in order to desire it.

I'm drawing the parallel with photography because the high-tech component makes it an attractive analogy with programming, but it's hardly the same thing. Photography is the art of recognizing a great subject and composition when you see it, not mastering your kit. Anyone can click a button, and all good cameras are interchangeable. Do you care which one Steve McCurry used for his famous photo of the Afghan girl? I don't.

On the other hand, the programmer's relationship to their tools, especially the language, is a much more intimate one, like the musician has with their instrument. Both take thousands of hours of hard practice. An accomplished violinist can't just pick up a cello. Similarly, you don't migrate from Haskell to C# like you switch from Nikon to Canon. The latter is closer to swapping a Windows laptop for a Mac: far less of a deal.

If, like musicians, we interact with our tools eight hours a day, they must be great, not just good. Quality hardware for the programmer should be a no-brainer. It infuriates me how many companies still don't get this. Everybody should be given the best setup that money can buy when it costs less than a senior dev's monthly rate. There's a joy that comes from working with a superior tool. Working with junk is as annoying as working with premium tools is delightful. The mechanical DAS keyboard I'm writing this on isn't faster, but it is still the best 150 euros I ever spent on office equipment.

Thirdly, there is snob appeal and pride of ownership.
If utility and quality were all that mattered, nobody would be buying luxury brands. Fashion would not exist. A limousine doesn't get you any faster from A to B – except perhaps on a social ladder. Spending 7000 on a Leica compact as an amateur is extravagant, but you can flaunt its prestigious red dot and imagine yourself a pro. If I were filthy rich, I'd get one. I would also buy a Steinway grand and love it more than it's appropriate to love an inanimate object.

Let's look at the parallels with the most important tool the programmer has in their belt: the language. Programming is creating a new virtual machine inside the Matryoshka doll of other virtual machines that make up a running application. As for plain utility, each modern language is Turing complete and can do the job, but nobody can reasonably argue that that makes them equally useful for every job.

To not overcomplicate the argument, I'll stay within the JVM ecosystem. There is no coding job that you could implement in any of the JVM languages (Java, Kotlin, Scala, Groovy, Ceylon, Frege, etc.) which would be impossible to emulate in any of the other ones, but they differ greatly in their syntax and idioms. That, and their snob appeal. Yes, programmers turn up their noses at competitive tools, maybe more secretly than openly, but they do.

I spent two years on a Scala project and attended the Scala world conference. Scala's advanced syntactical constructs (higher-kinded types, multiple argument lists) have been known to give rise to much my-language-is-better-than-yours snootiness. Don't get me wrong: it's impressively powerful but has a steep learning curve. It may be free to download, but when time is money, it's expensive to master. It's the Leica of programming languages and for some, that's exactly the appeal: learning something hard and then boasting it's not hard at all. It's a long-standing Linux tradition. Kotlin has no such snob appeal.
It was conceived to be a more developer-friendly Java, to radically upgrade the old syntax in ways that would never be possible in Java itself, due to the non-negotiable requirement for every new compiler to support source code written in 1999. If mere utility and snob appeal don't apply, then the argument left to favor Kotlin over Java must be the positive experience of working with it.

While coding my Kotlin project I was also studying for the OCP-17 Java exam. This proved a revealing exercise in comparative language analysis. Some features simply delight. Kotlin's built-in null safety is wonderful, a killer feature. Don't tell me you don't need it because you're such a clean coder. That betrays a naive denial of other people's sloppiness. Most stuff you write interacts with their code. Do you plan on educating them too?

Other (Java) features simply keep annoying me. Evolution forbids breaking change because each new baby must be viable and produce offspring for incremental change to occur. Likewise, the more you work with Kotlin, the more certain architectural decisions in the Java syntax stick out like ugly quirks that nature (i.e., Gosling and Goetz) can't correct without breaking the legacy. Many things in Java feel awkward and ugly for that very reason. Nobody would design a language with fixed-length arrays syntactically distinct from other collection types. Arrays can take primitive value types (numbers and booleans), which you need to (un)box in an object for lists, sets, and maps. You wouldn't make those arrays mutable and covariant. I give you a bag of apples, you call it a bag of fruit, insert a banana, and give it back to me. Mayhem!

The delight of working with a language that doesn't have these design quirks is as strong as the annoyance over a language that does. I make no excuses for the fact that my reaction is more emotional than rational. To conclude, I don't want to denigrate what Java designers have achieved over the years.
They're smarter than me, and they didn't have a crystal ball. In twenty years, the Kotlin team may well find out that they painted themselves into a corner over some design decision they took in 2023. Who knows. I'll be retired by then and expect to be coding only for pleasure, not to impress anyone, or even to be useful.

By Jasper Sprengers CORE
Exploring Hazelcast With Spring Boot

For the use cases I am going to describe here, I have two services: courses-service and reviews-service. Courses-service provides CRUD operations for dealing with courses and instructors. Reviews-service is another CRUD provider, dealing with reviews for courses, and is completely agnostic of the courses in courses-service. Both apps are written in Kotlin using Spring Boot and other libraries. Having these two services, we are going to discuss distributed caching with Hazelcast and Spring Boot and see how we can use user code deployment to invoke some code execution via Hazelcast on a service. Spoiler alert: The examples/use cases presented here are designed purely for the sake of demonstrating integration with some of Hazelcast's capabilities. The problems discussed here can be solved in various ways, and maybe even in better ways, so don't spend too much time wondering "why?" So, without further ado, let's dive into the code. Note: here is the source code in case you want to follow along. Simple Distributed Caching We'll focus on courses-service for now. Having this entity: Kotlin @Entity @Table(name = "courses") class Course( var name: String, @Column(name = "programming_language") var programmingLanguage: String, @Column(name = "programming_language_description", length = 3000, nullable = true) var programmingLanguageDescription: String? = null, @Enumerated(EnumType.STRING) var category: Category, @ManyToOne(fetch = FetchType.LAZY) @JoinColumn(name = "instructor_id") var instructor: Instructor? = null ) : AbstractEntity() { override fun toString(): String { return "Course(id=$id, name='$name', category=$category)" } } And this method in CourseServiceImpl: Kotlin @Transactional override fun save(course: Course): Course { return courseRepository.save(course) } I want to enhance every course that is saved with a programming language description for the programming language that has been sent by the user.
For this, I created a Wikipedia API client that will make the following request every time a new course is added: Plain Text GET https://en.wikipedia.org/api/rest_v1/page/summary/java_(programming_language) So, my method looks like this now: Kotlin @Transactional override fun save(course: Course): Course { enhanceWithProgrammingLanguageDescription(course) return courseRepository.save(course) } private fun enhanceWithProgrammingLanguageDescription(course: Course) { wikipediaApiClient.fetchSummaryFor("${course.programmingLanguage}_(programming_language)")?.let { course.programmingLanguageDescription = it.summary } } Now here comes our use case: we want to cache the Wikipedia response so we don't call the API every single time. Our courses will mostly be oriented toward a set of popular programming languages like Java, Kotlin, and C#. We also don't want to degrade our save()'s performance by querying every time for mostly the same languages. Also, the cache can act as a guard in case the API server is down. Time to introduce Hazelcast! Hazelcast is a distributed computation and storage platform for consistently low-latency querying, aggregation, and stateful computation against event streams and traditional data sources. It allows you to quickly build resource-efficient, real-time applications. You can deploy it at any scale, from small edge devices to a large cluster of cloud instances. You can read about lots of places where Hazelcast is the appropriate solution and all the advantages on their homepage. When it comes to integrating a Spring Boot app with Hazelcast (embedded), it is straightforward. There are a few ways of configuring Hazelcast: via XML, YAML, or programmatically. Also, there is a nice integration with Spring Boot's cache support via the @EnableCaching and @Cacheable annotations. I picked the programmatic way of configuring Hazelcast and the manual way of using it—a bit more control and less magic.
Here are the dependencies: Kotlin implementation("com.hazelcast:hazelcast:5.2.1") implementation("com.hazelcast:hazelcast-spring:5.2.1") And here is the configuration that we are going to add to courses-service: Kotlin @Configuration class HazelcastConfiguration { companion object { const val WIKIPEDIA_SUMMARIES = "WIKIPEDIA_SUMMARIES" } @Bean fun managedContext(): SpringManagedContext { return SpringManagedContext() } @Bean fun hazelcastConfig(managedContext: SpringManagedContext): Config { val config = Config() config.managedContext = managedContext config.networkConfig.isPortAutoIncrement = true config.networkConfig.join.multicastConfig.isEnabled = true config.networkConfig.join.multicastConfig.multicastPort = 5777 config.userCodeDeploymentConfig.isEnabled = true config.configureWikipediaSummaries() return config } private fun Config.configureWikipediaSummaries() { val wikipediaSummaries = MapConfig() wikipediaSummaries.name = WIKIPEDIA_SUMMARIES wikipediaSummaries.isStatisticsEnabled = true wikipediaSummaries.backupCount = 1 wikipediaSummaries.evictionConfig.evictionPolicy = EvictionPolicy.LRU wikipediaSummaries.evictionConfig.size = 10000 wikipediaSummaries.evictionConfig.maxSizePolicy = MaxSizePolicy.PER_NODE addMapConfig(wikipediaSummaries) } } So we declare a managedContext() bean, which is a container-managed context initialized with a Spring context implementation that is going to work along with the @SpringAware annotation to allow us to initialize/inject fields in deserialized instances. We’ll take a look at why we need this later when we discuss user code deployment. Then, we declare a hazelcastConfig() bean, which represents the brains of the whole integration. 
We set the managedContext, enable user code deployment, and set the networkConfig’s join option to “multicast.” Basically, the NetworkConfig is responsible for defining how a member will interact with other members or clients—there are multiple parameters available for configuration like port, isPortAutoIncrement, sslConfig, restApiConfig, joinConfig, and others. We configured the isPortAutoIncrement to true to allow hazelcastInstance to auto-increment the port if the picked one is already in use until it runs out of free ports. Also, we configured the JoinConfig, which contains multiple member/client join configurations like Eureka, Kubernetes, and others. We enable MulticastConfig, which allows Hazelcast members to find each other without the need to know concrete addresses via multicasting to everyone listening. Also, I encountered some issues with the port used by Multicast, so I set it to a hard-coded one to avoid the address already in use. Then we configure a Map config that is going to act as our distributed cache. MapConfig contains the configuration of an IMap—concurrent, distributed, observable, and queryable map: name: the name of IMap, WIKIPEDIA_SUMMARIES. isStatisticsEnabled: this is to enable IMap’s statistics like the total number of hits and others. backupCount: number of synchronous backups, where “0” means no backup. evictionConfig.evictionPolicy: can be LRU (Least Recently Used), LFU (Least Frequently Used), NONE, and RANDOM. evictionConfig.size: the size used by MaxSizePolicy. evictionConfig.maxSizePolicy: PER_NODE (policy based on the maximum number of entries per the Hazelcast instance). 
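The eviction settings above cap the map at 10,000 entries per node and evict the least recently used entry first. The LRU idea itself can be sketched in a few lines with an access-ordered LinkedHashMap; this is an illustration of the policy only (Hazelcast's actual eviction is its own implementation), with a hypothetical cap of 2 so eviction is visible:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class LruDemo {
    static List<String> demo() {
        int maxEntries = 2; // tiny cap so eviction is visible
        // accessOrder = true makes iteration order follow recency of access
        Map<String, String> cache = new LinkedHashMap<String, String>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
                return size() > maxEntries; // evict the least recently used entry
            }
        };
        cache.put("Java", "summary");
        cache.put("Kotlin", "summary");
        cache.get("Java");            // touch Java; Kotlin is now least recently used
        cache.put("C++", "summary");  // exceeds the cap, so Kotlin is evicted
        return new ArrayList<>(cache.keySet());
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints [Java, C++]
    }
}
```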
Having this configuration, all that is left is to adjust our enhanceWithProgrammingLanguageDescription method to use the above configured IMap to cache the fetched Wikipedia summaries: Kotlin private fun enhanceWithProgrammingLanguageDescription(course: Course) { val summaries = hazelcastInstance.getMap<String, WikipediaApiClientImpl.WikipediaSummary>(WIKIPEDIA_SUMMARIES) log.debug("Fetched hazelcast cache [$WIKIPEDIA_SUMMARIES] = [${summaries}(${summaries.size})] ") summaries.getOrElse(course.programmingLanguage) { wikipediaApiClient.fetchSummaryFor("${course.programmingLanguage}_(programming_language)")?.let { log.debug("No cache value found, using wikipedia's response $it to update $course programming language description") summaries[course.programmingLanguage] = it it } }?.let { course.programmingLanguageDescription = it.summary } } Basically, we are using the autowired Hazelcast instance to retrieve our configured IMap. Each instance is a member and/or client in a Hazelcast cluster. When you want to use Hazelcast’s distributed data structures, you must first create an instance. In our case, we simply autowire it as it will be created by the previously defined config. After getting a hold of the distributed Map, it is a matter of some simple checks. If we have a summary for the programming language key in our map, then we use that one. If not, we fetch it from Wikipedia API, add it to the map, and use it. 
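Stripped of the Hazelcast and Wikipedia specifics, the get-or-fetch logic above is the classic cache-aside pattern. Here is a minimal sketch with a plain ConcurrentHashMap; fetchSummary is a hypothetical stand-in for the real wikipediaApiClient call:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class CacheAsideDemo {
    static final Map<String, String> summaries = new ConcurrentHashMap<>();
    static final AtomicInteger fetches = new AtomicInteger();

    // Stand-in for wikipediaApiClient.fetchSummaryFor(...)
    static String fetchSummary(String language) {
        fetches.incrementAndGet();
        return "Summary of " + language;
    }

    static String summaryFor(String language) {
        // computeIfAbsent only invokes the fetcher on a cache miss
        return summaries.computeIfAbsent(language, CacheAsideDemo::fetchSummary);
    }

    public static void main(String[] args) {
        summaryFor("Java");
        summaryFor("Java");   // served from the cache, no second fetch
        summaryFor("Kotlin");
        System.out.println("fetches=" + fetches.get()); // prints fetches=2
    }
}
```

With an IMap the map itself is shared across the cluster, but the miss-then-populate flow is the same.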
Now, if we are to start our app, we'll first see the huge Hazelcast banner and the following lines, meaning that Hazelcast has started: Plain Text INFO 30844 --- [ main] com.hazelcast.core.LifecycleService : [ip]:5701 [dev] [5.2.1] [ip]:5701 is STARTING INFO 30844 --- [ main] c.h.internal.cluster.ClusterService : [ip]:5701 [dev] [5.2.1] Members {size:1, ver:1} [ Member [ip]:5701 - e2a90d3e-b112-4e78-aa42-58a959d9273d this ] INFO 30844 --- [ main] com.hazelcast.core.LifecycleService : [ip]:5701 [dev] [5.2.1] [ip]:5701 is STARTED If we execute the following HTTP request: Plain Text POST http://localhost:8080/api/v1/courses Content-Type: application/json { "name": "C++ Development", "category": "TUTORIAL", "programmingLanguage" : "C++", "instructor": { "name": "Bjarne Stroustrup" } } We'll see in the logs: Plain Text DEBUG 30844 --- [nio-8080-exec-1] i.e.c.s.i.CourseServiceImpl$Companion : Fetched hazelcast cache [WIKIPEDIA_SUMMARIES] = [IMap{name='WIKIPEDIA_SUMMARIES'}(0)] INFO 30844 --- [nio-8080-exec-1] e.c.s.i.WikipediaApiClientImpl$Companion : Request GET:https://en.wikipedia.org/api/rest_v1/page/summary/C++_(programming_language) INFO 30844 --- [nio-8080-exec-1] e.c.s.i.WikipediaApiClientImpl$Companion : Received response from Wikipedia... DEBUG 30844 --- [nio-8080-exec-1] i.e.c.s.i.CourseServiceImpl$Companion : No cache value found, using wikipedia... These logs show that we retrieved the previously configured IMap for Wikipedia summaries, but its size is 0; therefore, the course was updated using Wikipedia's API. Now, if we execute the same request again, we'll notice a different behavior: Plain Text DEBUG 30844 --- [nio-8080-exec-3] i.e.c.s.i.CourseServiceImpl$Companion : Fetched hazelcast cache [WIKIPEDIA_SUMMARIES] = [IMap{name='WIKIPEDIA_SUMMARIES'}(1)] Now the IMap has a size of 1, since it was populated by our previous request, so no request to Wikipedia's API can be observed.
The beauty and simplicity of Hazelcast’s integration comes when we start another instance of our app on a different port using -Dserver.port=8081, and we witness the distributed cache in action. Plain Text INFO 8172 --- [ main] com.hazelcast.core.LifecycleService : [ip]:5702 [dev] [5.2.1] [ip]:5702 is STARTING INFO 8172 --- [ main] c.h.i.cluster.impl.MulticastJoiner : [ip]:5702 [dev] [5.2.1] Trying to join to discovered node: [ip]:5701 INFO 8172 --- [.IO.thread-in-0] c.h.i.server.tcp.TcpServerConnection : [ip]:5702 [dev] [5.2.1] Initialized new cluster connection between /ip:55309 and /ip:5701 INFO 8172 --- [ration.thread-0] c.h.internal.cluster.ClusterService : [ip]:5702 [dev] [5.2.1] Members {size:2, ver:2} [ Member [ip]:5701 - 69d6721d-179b-4dc8-8163-e3cb00e703eb Member [ip]:5702 - 9b689155-d4c3-4169-ae53-ff5d687f7ad2 this ] INFO 8172 --- [ main] com.hazelcast.core.LifecycleService : [ip]:5702 [dev] [5.2.1] [ip]:5702 is STARTED We see that MulticastJoiner discovered an already running Hazelcast node on port 5701, running together with our first courses-service instance on port 8080. A new cluster connection is made, and we see in the “Members” list both Hazelcast nodes on ports 5701 and 5702. Now, if we are to make a new HTTP request to create a course on the 8081 instance, we’ll see the following: Plain Text DEBUG 8172 --- [nio-8081-exec-4] i.e.c.s.i.CourseServiceImpl$Companion : Fetched hazelcast cache [WIKIPEDIA_SUMMARIES] = [IMap{name='WIKIPEDIA_SUMMARIES'}(1)] Distributed Locks Another useful feature that comes with Hazelcast is the API for distributed locks. Suppose our enhanceWithProgrammingLanguageDescription method is a slow intensive operation dealing with cache and other resources and we wouldn’t want other threads on the same instance or even other requests on a different instance to interfere or alter something until the operation is complete. So, here comes FencedLock into play—a linearizable, distributed, and reentrant implementation of the Lock. 
It is consistent and partition tolerant in the sense that if a network partition occurs, it will stay available on, at most, one side of the partition. Mostly, it offers the same API as the Lock interface. So, with this in mind, let's try and guard our so-called "critical section." Kotlin private fun enhanceWithProgrammingLanguageDescription(course: Course) { val lock = hazelcastInstance.cpSubsystem.getLock(SUMMARIES_LOCK) if (!lock.tryLock()) throw LockAcquisitionException(SUMMARIES_LOCK, "enhanceWithProgrammingLanguageDescription") Thread.sleep(2000) val summaries = hazelcastInstance.getMap<String, WikipediaApiClientImpl.WikipediaSummary>(WIKIPEDIA_SUMMARIES) log.debug("Fetched hazelcast cache [$WIKIPEDIA_SUMMARIES] = [${summaries}(${summaries.size})] ") summaries.getOrElse(course.programmingLanguage) { wikipediaApiClient.fetchSummaryFor("${course.programmingLanguage}_(programming_language)")?.let { log.debug("No cache value found, using wikipedia's response $it to update $course programming language description") summaries[course.programmingLanguage] = it it } }?.let { course.programmingLanguageDescription = it.summary } lock.unlock() } As you can see, the implementation is quite simple. We obtain the lock via the cpSubsystem's getLock() method, and then we try to acquire the lock with tryLock(). Locking will succeed and immediately return true if the lock is available or already held by the current thread. Otherwise, it will return false, and a LockAcquisitionException will be thrown. Next, we simulate some intensive work by sleeping for two seconds with Thread.sleep(2000), and in the end, we release the acquired lock with unlock().
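Since FencedLock mostly offers the Lock API, the try-then-fail-fast pattern above can be tried out locally with a plain java.util.concurrent ReentrantLock. One detail worth noting in this sketch: releasing the lock in a finally block keeps it from leaking if the guarded work throws. This is a hedged local illustration, not the article's distributed code:

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class TryLockDemo {
    static final Lock lock = new ReentrantLock();

    static String guardedWork() {
        if (!lock.tryLock()) {
            // analogous to throwing LockAcquisitionException in the service
            return "could not acquire lock";
        }
        try {
            return "work done under lock"; // the "critical section"
        } finally {
            lock.unlock(); // always released, even if the work throws
        }
    }

    public static void main(String[] args) throws InterruptedException {
        lock.lock(); // hold the lock so another thread's tryLock fails
        Thread other = new Thread(() -> System.out.println(guardedWork()));
        other.start();
        other.join();
        lock.unlock();
        System.out.println(guardedWork()); // now succeeds
    }
}
```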
If we run a single instance of our app on port 8080 and try two subsequent requests, one will pass, and the other one will fail with: Plain Text ERROR 28956 --- [nio-8081-exec-6] e.c.w.r.e.RESTExceptionHandler$Companion : Exception while handling request [summaries-lock] could not be acquired for [enhanceWithProgrammingLanguageDescription] operation. Please try again. inc.evil.coursecatalog.common.exceptions.LockAcquisitionException: [summaries-lock] could not be acquired for [enhanceWithProgrammingLanguageDescription] operation. Please try again The same goes if we are to make one request to an 8080 instance of our app and the next one in the two seconds timeframe to the 8081 instance; the first request will succeed while the second one will fail. User Code Deployment Now, let’s switch our attention to reviews-service and remember—this service is totally unaware of courses; it is just a way to add reviews for some course_id. With this in mind, we have this entity: Kotlin @Table("reviews") data class Review( @Id var id: Int? = null, var text: String, var author: String, @Column("created_at") @CreatedDate var createdAt: LocalDateTime? = null, @LastModifiedDate @Column("last_modified_at") var lastModifiedAt: LocalDateTime? = null, @Column("course_id") var courseId: Int? = null ) And we have this method in ReviewServiceImpl : Kotlin override suspend fun save(review: Review): Review { return reviewRepository.save(review).awaitFirst() } So, our new silly feature request would be to somehow check for the existence of the course that the review has been written for. How can we do that? The most obvious choice would be to invoke a REST endpoint on courses-service to check if we have a course for the review’s course_id, but that is not what this article is about. We have Hazelcast, right? We are going to deploy some user code from reviews-service that courses-service is aware of and can execute it via Hazelcast’s user code deployment. 
To do that, we need to create some kind of API or gateway module that we are going to publish as an artifact, so courses-service can implement it, and reviews-service can depend on and use it to deploy the code. First things first, let’s design the new module as a courses-api module: Kotlin plugins { id("org.springframework.boot") version "2.7.3" id("io.spring.dependency-management") version "1.0.13.RELEASE" kotlin("jvm") version "1.6.21" kotlin("plugin.spring") version "1.6.21" kotlin("plugin.jpa") version "1.3.72" `maven-publish` } group = "inc.evil" version = "0.0.1-SNAPSHOT" repositories { mavenCentral() } publishing { publications { create<MavenPublication>("maven") { groupId = "inc.evil" artifactId = "courses-api" version = "1.1" from(components["java"]) } } } dependencies { implementation("org.springframework.boot:spring-boot-starter-actuator") implementation("org.springframework.boot:spring-boot-starter-web") implementation("com.fasterxml.jackson.module:jackson-module-kotlin") implementation("org.jetbrains.kotlin:kotlin-reflect") implementation("org.jetbrains.kotlin:kotlin-stdlib-jdk8") implementation("org.jetbrains.kotlinx:kotlinx-coroutines-rx2:1.6.4") implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.4") implementation("org.apache.commons:commons-lang3:3.12.0") implementation("com.hazelcast:hazelcast:5.2.1") implementation("com.hazelcast:hazelcast-spring:5.2.1") testImplementation("org.junit.jupiter:junit-jupiter-api:5.8.1") testRuntimeOnly("org.junit.jupiter:junit-jupiter-engine:5.8.1") } tasks.getByName<Test>("test") { useJUnitPlatform() } Nothing fancy here, except the maven-publish plugin that we’ll use to publish the artifact to the local maven repository. Here is the interface that courses-service will implement, and reviews-service will use: Kotlin interface CourseApiFacade { fun findById(id: Int): CourseApiResponse } data class InstructorApiResponse( val id: Int?, val name: String?, val summary: String?, val description: String? 
) data class CourseApiResponse( val id: Int?, val name: String, val category: String, val programmingLanguage: String, val programmingLanguageDescription: String?, val createdAt: String, val updatedAt: String, val instructor: InstructorApiResponse ) Having this module properly configured, we can add it as a dependency in courses-service: Kotlin implementation(project(":courses-api")) And implement the exposed interface like this: Kotlin @Component class CourseApiFacadeImpl(val courseService: CourseService) : CourseApiFacade { override fun findById(id: Int): CourseApiResponse = courseService.findById(id).let { CourseApiResponse( id = it.id, name = it.name, category = it.category.toString(), programmingLanguage = it.programmingLanguage, programmingLanguageDescription = it.programmingLanguageDescription, createdAt = it.createdAt.toString(), updatedAt = it.updatedAt.toString(), instructor = InstructorApiResponse(it.instructor?.id, it.instructor?.name, it.instructor?.summary, it.instructor?.description) ) } } Now back to reviews-service where all the magic will happen. First of all, we want to add the required dependencies for Hazelcast: Kotlin implementation("com.hazelcast:hazelcast:5.2.1") implementation("com.hazelcast:hazelcast-spring:5.2.1") Previously, I mentioned that we are going to use that interface from courses-api. 
We can run the publishMavenPublicationToMavenLocal gradle task on courses-api to get our artifact published, and then we can add the following dependency to reviews-service: Kotlin implementation("inc.evil:courses-api:1.1") Now it is time to set up a Callable implementation on reviews-service that will be responsible for the code execution on courses-service, so here it is: Kotlin @SpringAware class GetCourseByIdCallable(val id: Int) : Callable<CourseApiResponse?>, Serializable { @Autowired @Transient private lateinit var courseApiFacade: CourseApiFacade override fun call(): CourseApiResponse = courseApiFacade.findById(id) } Here we used the @SpringAware annotation to mark this class as a bean in the Spring-Hazelcast way via SpringManagedContext, since this class is going to be deployed in the cluster. Other than that, we have our courseApiFacade autowired and used in the overridden method call() from the Callable interface. I decided to write another class to ease the Callable submission to Hazelcast's IExecutorService and act as some kind of facade: Kotlin @Component class HazelcastGateway(private val hazelcastInstance: HazelcastInstance) { companion object { private const val EXECUTOR_SERVICE_NAME = "EXECUTOR_SERVICE" } fun <R> execute(executionRequest: Callable<R>): R { val ex = hazelcastInstance.getExecutorService(EXECUTOR_SERVICE_NAME) return ex.submit(executionRequest).get(15000L, TimeUnit.MILLISECONDS) } } We have a single method, execute(), that is responsible for submitting the passed-in Callable to the IExecutorService retrieved via the autowired hazelcastInstance. Now, this IExecutorService is a distributed implementation of ExecutorService that enables running Runnables/Callables on the Hazelcast cluster, with some additional methods that allow running the code on a particular member or on multiple members. For example, we've used submit, but there are also submitToMember and submitToAllMembers.
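IExecutorService extends the standard java.util.concurrent.ExecutorService, so the submit-and-wait-with-timeout shape of HazelcastGateway.execute can be sketched against a plain local executor. The Callable below is a hypothetical stand-in for GetCourseByIdCallable; the real call would run on a cluster member:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class SubmitDemo {
    // Same shape as HazelcastGateway.execute: submit, then block with a timeout
    static <R> R execute(ExecutorService executor, Callable<R> task) throws Exception {
        return executor.submit(task).get(15_000L, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws Exception {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            // Stand-in for GetCourseByIdCallable(2) running in the cluster
            String result = execute(executor, () -> "course 2 found");
            System.out.println(result);
        } catch (TimeoutException e) {
            // Future.get throws if the task does not finish within the timeout
            System.out.println("cluster call timed out");
        } finally {
            executor.shutdown();
        }
    }
}
```

The timeout matters: a blocking get() without one would hang the caller if the remote member never responds.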
For Runnable, there are equivalents that start with execute*. Now, let's use our newly defined HazelcastGateway in the save method from ReviewServiceImpl. Kotlin override suspend fun save(review: Review): Review { runCatching { hazelcastGateway.execute(GetCourseByIdCallable(review.courseId!!)).also { log.info("Call to hazelcast ended with $it") } }.getOrNull() ?: throw NotFoundException(CourseApiResponse::class, "course_id", review.courseId.toString()) return reviewRepository.save(review).awaitFirst() } The logic is as follows: before saving, we try to find the course by the course_id from the review by running GetCourseByIdCallable in our Hazelcast cluster. If we get an exception (CourseApiFacadeImpl will throw a NotFoundException if the requested course was not found), we swallow it and throw a reviews-service NotFoundException stating that the course could not be retrieved. If a course was returned by our Callable, we proceed to save the review—that's it. All that is left is to configure Hazelcast, and I left this for last since we needed some of these classes to configure it. Kotlin @Configuration class HazelcastConfiguration { @Bean fun managedContext(): SpringManagedContext { return SpringManagedContext() } @Bean fun hazelcastClientConfig(managedContext: SpringManagedContext): ClientConfig { val config = ClientConfig() config.managedContext = managedContext config.connectionStrategyConfig.isAsyncStart = true config.userCodeDeploymentConfig.isEnabled = true config.classLoader = HazelcastConfiguration::class.java.classLoader config.userCodeDeploymentConfig.addClass(GetCourseByIdCallable::class.java) return config } } Regarding the managedContext bean, there's nothing different from our previous member's config. The hazelcastClientConfig bean, however, is different. First of all, it is a ClientConfig now, meaning that reviews-service's embedded Hazelcast will be a client to the courses-service Hazelcast member.
We set the managedContext, configure the connectionStrategyConfig to be async (the client won't wait for a connection to the cluster; it will throw exceptions until it is connected and ready), and enable user code deployment. Then we add GetCourseByIdCallable to the classes to be sent to the cluster. Now, with courses-service running, if we start reviews-service, we'll see the following log: Plain Text INFO 34472 --- [nt_1.internal-1] com.hazelcast.core.LifecycleService : hz.client_1 [dev] [5.2.1] HazelcastClient 5.2.1 (20221114 - 531032a) is CLIENT_CONNECTED INFO 34472 --- [nt_1.internal-1] c.h.c.i.c.ClientConnectionManager : hz.client_1 [dev] [5.2.1] Authenticated with server [ip]:5701:69d6721d-179b-4dc8-8163-e3cb00e703eb, server version: 5.2.1, local address: /127.0.0.1:57050 The client is authenticated with courses-service's Hazelcast member, where "69d6721d-179b-4dc8-8163-e3cb00e703eb" is the ID of the server we connected to. Now, let's try a request to add a review for an existing course (reviews-service uses GraphQL). Plain Text GRAPHQL http://localhost:8082/graphql Content-Type: application/graphql mutation { createReview(request: {text: "Amazing, loved it!" courseId: 2 author: "Mike Scott"}) { id text author courseId createdAt lastModifiedAt } } In the logs, we'll notice: Plain Text INFO 34472 --- [actor-tcp-nio-1] i.e.r.s.i.ReviewServiceImpl$Companion : Call to hazelcast ended with CourseApiResponse(id=2, name=C++ Development, category=TUTORIAL, programmingLanguage=C++ ...) And in the courses-service logs, we'll notice the code execution: Plain Text DEBUG 12608 --- [ached.thread-24] i.e.c.c.aop.LoggingAspect$Companion : before :: execution(public inc.evil.coursecatalog.model.Course inc.evil.coursecatalog.service.impl.CourseServiceImpl.findById(int)) This means the request executed successfully.
If we try the same request for a non-existent course, say for ID 99, we'll observe the NotFoundException in reviews-service: Plain Text WARN 34472 --- [actor-tcp-nio-2] .w.g.e.GraphQLExceptionHandler$Companion : Exception while handling request: CourseApiResponse with course_id equal to [99] could not be found! inc.evil.reviews.common.exceptions.NotFoundException: CourseApiResponse with course_id equal to [99] could not be found! Conclusion Alright, folks, this is basically it. I hope you got a good feel for what Hazelcast is like. We took a look at how to design a simple distributed cache using Hazelcast and Spring Boot, made use of distributed locks to protect critical sections of code, and, in the end, we saw how Hazelcast's user code deployment can be used to run code in the cluster. In case you missed it, all the source code can be found here. Happy coding!

By Ion Pascari
How To Add Three Photo Filters to Your Applications in Java

A unique image aesthetic makes a big difference in representing any personal or professional brand online. Career and hobby photographers, marketing executives, and casual social media patrons alike are in constant pursuit of easily distinguishable visual content, and this basic need to stand out from a crowd has, in turn, driven the democratization of photo editing and filtering services in the last decade or so. Nearly every social media platform you can think of (not to mention many e-commerce websites and various other casual sites where images are frequently uploaded) now incorporates some means for programmatically altering vanilla image files. These built-in services can vary greatly in complexity, ranging from simple brightness controls to Gaussian blurs. With this newfound ease of access to photo filtering, classic image filtering techniques have experienced a widespread resurgence in popularity. For example, the timeless look associated with black and white images can now be instantly applied to any image upload on the fly. Through simple manipulations of brightness and contrast, the illusion of embossment can be created, allowing us to effortlessly emulate a vaunted, centuries-old printing technique. Even posterization – a classic, bold aesthetic once humbly associated with the natural color limitations of early printing machines – can be instantly generated within any grid of pixels. Given the desirability of simplified image filtering (especially services with common-sense customization features), building these types of features into any application – especially those handling a large volume of image uploads – is an excellent idea for developers to consider. Of course, once we elect to go in that direction, an important question arises: how can we efficiently include these services in our applications, given the myriad lines of code associated with building even the simplest photo-filtering functionality?
Thankfully, that question is answered yet again by supply and demand forces brought about naturally in the ongoing digital industrial revolution. Developers can rely on readily available Image Filtering API services to circumvent large-scale programming operations, thereby implementing robust image customization features in only a few lines of clean, simple code. API Descriptions The purpose of this article is to demonstrate three free-to-use image filtering API solutions that can be implemented in your applications using complementary, ready-to-run Java code snippets. These snippets are supplied below in this article, directly following brief instructions to help you install the SDK. Before we reach that point, I’ll first highlight each solution, providing a more detailed look at its respective uses and API request parameters. Please note that each API will require a free-tier Cloudmersive API key to complete your call (provides a limit of 800 API calls per month with no commitments). Grayscale (Black and White) Filter API Early photography was limited to a grayscale color spectrum due to the natural constraints of primitive photo technology. The genesis of color photography opened new doors, certainly, but it never completely replaced the black-and-white photo aesthetic. Even now, in the digital age, grayscale photos continue to offer a certain degree of depth and expression which many feel the broader color spectrum can’t bring out. The process of converting a color image to grayscale is straightforward. Color information is stored in the thousands (or millions) of pixels making up any digital image; grayscale conversion forces each pixel to ignore its color information and present varying degrees of brightness instead. Beyond its well-documented aesthetic effects, grayscale conversion offers practical benefits, too, by reducing the size of the image in question.
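That per-pixel conversion can be sketched with the common ITU-R BT.601 luma weights. This is a minimal illustration of the general technique, not the API's internal implementation:

```java
public class Grayscale {
    // Collapse an RGB pixel to a single brightness value using the
    // ITU-R BT.601 luma weights (one common grayscale formula)
    static int toGray(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    public static void main(String[] args) {
        System.out.println(toGray(255, 0, 0));  // pure red -> 76
        System.out.println(toGray(30, 30, 30)); // already gray -> 30
    }
}
```

Applying toGray to every pixel (and writing the result into all three channels, or into a single-channel image) is what produces the black-and-white look and the smaller file size.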
Grayscale images are much easier to store, edit, and subsequently process (especially in downstream operations such as Optical Character Recognition). The grayscale filter API below performs a simple black-and-white conversion, requiring only an image’s file path (formats like PNG and JPG are accepted) in its request parameters. Embossment Filter API Embossment is a physical printing process with roots dating as far back as the 15th century, and it’s still used to this day in that same context. While true embossment entails the inclusion of physically raised shapes on an otherwise flat surface (offering an enhanced visual and tactile experience), digital embossment merely emulates this effect by manipulating brightness and contrast in key areas around the subject of a photo. An embossment photo filter can be used to quickly add depth to any image. The embossment filter API below performs a customizable digital embossment operation, requiring the following input request information: Radius: The radius in pixels of the embossment operation (larger values will produce a greater effect) Sigma: The variance of the embossment operation (higher values produce higher variance) Image file: The file path for the subject of the operation (supports common formats like PNG and JPG) Posterization API Given the ubiquity of high-quality smartphone cameras, it’s easy to take the prevalence of high-definition color photos for granted. The color detail we’re accustomed to seeing in everyday photos comes down to advancements in high-quality pixel storage. Slight variations in reds, blues, greens, and other elements on the color spectrum are mostly accounted for in a vast matrix of pixel coordinates. In comparison, during the bygone era of physical printing presses, the palette of colors used to form an image was typically far more limited, and digital posterization filters aim to emulate this old-school effect.
It does so by reducing the number of unique colors in an image, narrowing a distinct spectrum of hex values into a more homogeneous group. The aesthetic effect is unmistakable, invoking a look one might have associated with political campaigns and movie theater posters in decades past. The posterization API provided below requires the following request information: Levels: The number of unique colors which should be retained in the output image Image File: The image file to perform the operation on (supports common formats like PNG and JPG) API Demonstration To structure your API call to any of the three services outlined above, your first step is to install the Java SDK. To do so with Maven, first, add a reference to the repository in pom.xml: <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> Then add the dependency in pom.xml: <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> Alternatively, to install with Gradle, add it to your root build.gradle (at the end of repositories): allprojects { repositories { ... 
maven { url 'https://jitpack.io' } } } Then add the dependency in build.gradle: dependencies { implementation 'com.github.Cloudmersive:Cloudmersive.APIClient.Java:v4.25' } To use the Grayscale Filter API, use the following code to structure your API call: // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterBlackAndWhite(imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterBlackAndWhite"); e.printStackTrace(); } To use the Embossment Filter API, use the below code instead (remembering to configure your radius and sigma in their respective parameters): // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. 
"Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); Integer radius = 56; // Integer | Radius in pixels of the emboss operation; a larger radius will produce a greater effect Integer sigma = 56; // Integer | Sigma, or variance, of the emboss operation File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterEmboss(radius, sigma, imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterEmboss"); e.printStackTrace(); } Finally, to use the Posterization Filter API, use the below code (remember to define your posterization level with an integer as previously described): // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); Integer levels = 56; // Integer | Number of unique colors to retain in the output image File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. 
try { byte[] result = apiInstance.filterPosterize(levels, imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterPosterize"); e.printStackTrace(); } When using the Embossment and Posterization APIs, I recommend experimenting with different radius, sigma, and level values to find the right balance.
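To build intuition for what the levels parameter controls, here is a hypothetical sketch of the quantization idea behind posterization. This is a generic illustration of the technique, not Cloudmersive's actual algorithm:

```java
public class Posterize {
    // Snap a 0-255 channel value to one of `levels` evenly spaced values
    static int quantize(int value, int levels) {
        int step = 255 / (levels - 1);
        return Math.round((float) value / step) * step;
    }

    public static void main(String[] args) {
        // With 4 levels per channel, only 0, 85, 170, and 255 survive
        for (int v : new int[] {10, 100, 200, 250}) {
            System.out.println(v + " -> " + quantize(v, 4));
        }
    }
}
```

Running quantize over every channel of every pixel collapses the image's color spectrum into a small, bold set of values, which is exactly the posterized look described above.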

By Brian O'Neill
Java Development Trends 2023

GitHub language statistics indicate that Java occupies second place among programming languages, while in the TIOBE Index 2022, Java shifted to fourth position. The difference lies in the methodological approaches. Notwithstanding the ranking, Java is the language enterprises have relied on heavily since its inception, and it still holds that position. As a programming language, it outperforms many of its competitors and continues to be the choice of most companies and organizations for software applications. However, Java doesn't stay the same; it goes through changes and modernization. In many ways, the development and innovation of this language and the surrounding ecosystem are propelled by new business demands. This article presents an overview of seven expected trends in Java based on the most significant events and achievements of 2022. Cloud architecture continues evolving, but costs are rising. According to the Flexera Report, public cloud spending exceeded budgets by 13% in 2022. Companies expect their cloud spending to increase by 29% over the next twelve months. What's worse, organizations waste 32% of their cloud spend. So the need for cloud cost optimization is clear. It will be one of the industry's driving forces in 2023, and we can hope to see more technological innovation and management solutions directed toward better efficiency and lower costs. PaaS, a cloud computing model sitting between IaaS and SaaS, has recently gained renewed popularity. PaaS delivers third-party provider hardware and software tools to users. This approach allows greater flexibility for developers, and it's easier to handle finances because it's a pay-as-you-go payment model. PaaS enables developers to create or run new applications without spending extra time and resources on in-house hardware or software installations. Together with the still-rising popularity of cloud infrastructure, PaaS is predicted to evolve, too.
We expect to see more support for Java-based PaaS applications, with Java adapted to cloud environments. The Spring Framework 6.0 GA and Spring Boot 3.0 releases this year marked the beginning of a new framework generation, embracing current and upcoming innovations in OpenJDK and the Java ecosystem. In addition, Spring 6.0 introduced ahead-of-time (AOT) transformations, focused on native image support for Spring applications and promising better application performance in the future. Spring Native updates in 2023 are definitely on the Java community's watch list. CVEs in frameworks and libraries written in Java continue their unfortunate rise. The CVE Details source provides detailed information on how CVEs are expanding; in 2022, the count reached a sad 25,036. These vulnerabilities present an opportunity for attackers to take over sensitive resources and perform remote code execution. We cannot expect 2023 to be an exception to this trend of a growing number of discovered CVEs, and there will be a push for higher levels of security across the entire Java ecosystem. Some CVEs, like the infamous Log4j flaw (Log4Shell), are zero-day vulnerabilities: flaws that have been disclosed but not yet patched. Ensuring security requires keeping your dependencies updated on schedule. Projects like CycloneDX are entirely focused on this agenda and can offer great recommendations and practices to ensure your Java application stays in the secure zone. 2023 is expected to become a year of more extensive adoption of Lambdas for Java. In 2022, AWS presented a new feature for their AWS Lambda project, Lambda SnapStart. SnapStart helps to improve startup latency significantly and is specifically relevant for software applications using synchronous APIs, interactive microservices, or data processing.
SnapStart has already been adopted by Quarkus and Micronaut, and there is no doubt that more acceptance of Lambda in Java will follow in 2023. Virtual Threads (second preview) in JDK 20, due in March, is another event to watch out for in 2023. Virtual threads support thread-local variables, synchronization blocks, thread interruptions, etc. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. The March preview is focused on better scalability, adoption of virtual threads in the thread API with minimal changes, and easier troubleshooting, debugging, and profiling of virtual threads. As announced by Oracle in 2022, parts of the GraalVM Community Edition Java code will move to OpenJDK. This initiative will align the development of GraalVM and Java technologies, benefiting all contributors and users. In addition, the community editions of the GraalVM JIT and Ahead-of-Time (AOT) compilers will move to OpenJDK in 2023. This change will bring a security improvement and synchronization in release schedules, features, and development processes. These trends and events to expect in 2023 demonstrate how the industry is moving forward and reflect how continuous Java success comes about within the Java ecosystem community and via business demands for better cloud Java operation. The only negative side for all Java developers is still the security question. However, downturns also drive progress forward, and we should see new and more effective solutions to ensure better security and reverse this trend in 2023. With a great number of initiatives presented in 2022, Java in 2023 should become more flexible for the cloud environment. Java is the most popular language for enterprise applications, and many of them were built before the cloud age. In the cloud, Java can be costlier than other programming languages and needs adaptation.
Making Java cloud-native is among the highest priorities for the industry, and many of the most anticipated events of 2023 relate to improving Java operations in the cloud. Java application modernization is not that simple, and there is no single button to press to convert your Java application to cloud-native. Making Java effective, less expensive, and high performing requires integrating a set of components that allow the language to be adapted to a cloud-native form. 2023 promises more of these elements, enabling more sustainable cloud-based applications. In 2023, we can also expect further expansion of the PaaS computing model, as it is more convenient for developers building products in the cloud. Negative trends of overall tech debt and rising security concerns have attracted the attention of software development companies. As a result, new development practices in 2023 will bring tighter security and more careful investment in IT innovation. However, downturns also drive progress forward, and we should see new and more effective solutions to reverse these trends in 2023.

By Alexander Belokrylov
What Java Version Are You Running? Let’s Take a Look Under the Hood of the JDK!

From time to time, you need to check which Java version is installed on your computer or server, for instance, when starting a new project or configuring an application to run on a server. But did you know there are multiple ways to do this, and that you can quickly get much more information than you might expect? Let's find out... Reading the Java Version in the Terminal Probably the easiest way to find the installed version is by using the java -version terminal command: $ java -version openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Checking Version Files in the Installation Directory The above output results from info read by the java executable from a file inside its installation directory. Let's explore what we can find there. On my machine, as I use SDKMAN to switch between different Java versions, all my versions are stored here: $ ls -l /Users/frankdelporte/.sdkman/candidates/java/ total 0 drwxr-xr-x 15 frankdelporte staff 480 Apr 17 2022 11.0.15-zulu drwxr-xr-x 16 frankdelporte staff 512 Apr 17 2022 17.0.3.fx-zulu drwxr-xr-x 15 frankdelporte staff 480 Mar 29 2022 18.0.1-zulu drwxr-xr-x 15 frankdelporte staff 480 Sep 7 18:36 19-zulu drwxr-xr-x 18 frankdelporte staff 576 Apr 18 2022 8.0.332-zulu lrwxr-xr-x 1 frankdelporte staff 7 Nov 21 21:09 current -> 19-zulu And in each of these directories, a release file can be found, which also shows the version, along with some extra information. $ cat /Users/frankdelporte/.sdkman/candidates/java/19-zulu/release IMPLEMENTOR="Azul Systems, Inc." IMPLEMENTOR_VERSION="Zulu19.28+81-CA" JAVA_VERSION="19" JAVA_VERSION_DATE="2022-09-20" LIBC="default" MODULES="java.base java.compiler ... 
jdk.unsupported jdk.unsupported.desktop jdk.xml.dom" OS_ARCH="aarch64" OS_NAME="Darwin" SOURCE=".:git:3d665268e905" $ cat /Users/frankdelporte/.sdkman/candidates/java/8.0.332-zulu//release JAVA_VERSION="1.8.0_332" OS_NAME="Darwin" OS_VERSION="11.2" OS_ARCH="aarch64" SOURCE="git:f4b2b4c5882e" Getting More Information With ShowSettings In 2010, an experimental flag (indicated with the X) was added to OpenJDK to provide more configuration information: -XshowSettings. This flag can be called with different arguments, each producing a different set of information. The cleanest way to call this flag is by adding -version; otherwise, you will get the long Java usage output, since no application code is given to execute. Reading the System Properties By using the -XshowSettings:properties flag, a long list of various properties is shown. $ java -XshowSettings:properties -version Property settings: file.encoding = UTF-8 file.separator = / ftp.nonProxyHosts = local|*.local|169.254/16|*.169.254/16 http.nonProxyHosts = local|*.local|169.254/16|*.169.254/16 java.class.path = java.class.version = 63.0 java.home = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home java.io.tmpdir = /var/folders/np/6j1kls013kn2gpg_k6tz2lkr0000gn/T/ java.library.path = /Users/frankdelporte/Library/Java/Extensions /Library/Java/Extensions /Network/Library/Java/Extensions /System/Library/Java/Extensions /usr/lib/java . java.runtime.name = OpenJDK Runtime Environment java.runtime.version = 19+36 java.specification.name = Java Platform API Specification java.specification.vendor = Oracle Corporation java.specification.version = 19 java.vendor = Azul Systems, Inc. 
java.vendor.url = http://www.azul.com/ java.vendor.url.bug = http://www.azul.com/support/ java.vendor.version = Zulu19.28+81-CA java.version = 19 java.version.date = 2022-09-20 java.vm.compressedOopsMode = Zero based java.vm.info = mixed mode, sharing java.vm.name = OpenJDK 64-Bit Server VM java.vm.specification.name = Java Virtual Machine Specification java.vm.specification.vendor = Oracle Corporation java.vm.specification.version = 19 java.vm.vendor = Azul Systems, Inc. java.vm.version = 19+36 jdk.debug = release line.separator = \n native.encoding = UTF-8 os.arch = aarch64 os.name = Mac OS X os.version = 13.0.1 path.separator = : socksNonProxyHosts = local|*.local|169.254/16|*.169.254/16 stderr.encoding = UTF-8 stdout.encoding = UTF-8 sun.arch.data.model = 64 sun.boot.library.path = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home/lib sun.cpu.endian = little sun.io.unicode.encoding = UnicodeBig sun.java.launcher = SUN_STANDARD sun.jnu.encoding = UTF-8 sun.management.compiler = HotSpot 64-Bit Tiered Compilers user.country = BE user.dir = /Users/frankdelporte user.home = /Users/frankdelporte user.language = en user.name = frankdelporte openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) If you ever faced the problem of an "unsupported class file version 59" error (or similar), you'll now also understand where this value is defined; it's right here in this list as java.class.version. It's an internal number used by Java to identify the class-file format version. 
Java release: 8 9 10 11 12 13 14 15 16 17 18 19 Class version: 52 53 54 55 56 57 58 59 60 61 62 63 Reading the Locale Information In case you didn't know yet, I live in Belgium and use English as my computer language, as you can see when using the -XshowSettings:locale flag: $ java -XshowSettings:locale -version Locale settings: default locale = English (Belgium) default display locale = English (Belgium) default format locale = English (Belgium) available locales = , af, af_NA, af_ZA, af_ZA_#Latn, agq, agq_CM, agq_CM_#Latn, ak, ak_GH, ak_GH_#Latn, am, am_ET, am_ET_#Ethi, ar, ar_001, ar_AE, ar_BH, ar_DJ, ar_DZ, ar_EG, ar_EG_#Arab, ar_EH, ar_ER, ... zh_MO_#Hant, zh_SG, zh_SG_#Hans, zh_TW, zh_TW_#Hant, zh__#Hans, zh__#Hant, zu, zu_ZA, zu_ZA_#Latn openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Reading the VM Settings With the -XshowSettings:vm flag, some info is shown about the Java Virtual Machine. As you can see in the second example, the maximum heap size can be set with the -Xmx flag. $ java -XshowSettings:vm -version VM settings: Max. Heap Size (Estimated): 8.00G Using VM: OpenJDK 64-Bit Server VM openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) $ java -XshowSettings:vm -Xmx512M -version VM settings: Max. Heap Size: 512.00M Using VM: OpenJDK 64-Bit Server VM openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Reading All at Once If you want all of the information above with one call, use the -XshowSettings:all flag. Conclusion Next to java -version, we can also use java -XshowSettings:all -version to get more info about our Java environment.
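The mapping in the table above is simply the Java feature release plus 44 (Java 8 -> 52, Java 19 -> 63). Here's a small sketch that verifies this at runtime; it assumes a JDK 10 or newer, where Runtime.version().feature() is available:

```java
public class ClassVersion {
    public static void main(String[] args) {
        // Class-file major version = Java feature release + 44
        int feature = Runtime.version().feature();
        int major = (int) Double.parseDouble(
                System.getProperty("java.class.version"));
        // On JDK 19 this prints: Java 19 -> class version 63
        System.out.println("Java " + feature + " -> class version " + major);
    }
}
```

This is also why an "unsupported class file version" error tells you exactly which newer Java release the class was compiled for.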

By Frank Delporte

Top Java Experts

expert thumbnail

Nicolas Fränkel

Developer Advocate,
Api7

Developer Advocate with 15+ years experience consulting for many different customers, in a wide range of contexts (such as telecoms, banking, insurances, large retail and public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, Testing, CI/CD and DevOps. Currently working for Hazelcast. Also double as a trainer and triples as a book author.
expert thumbnail

Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur,
Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.
expert thumbnail

Marco Behler

Hi, I'm Marco. Say hello, I'd like to get in touch! twitter: @MarcoBehler
expert thumbnail

Ram Lakshmanan

Architect,
yCrash

In pursuit of answering the beautiful question 'Why Crash?' before this life ends.

The Latest Java Topics

article thumbnail
Hidden Classes in Java 15
The why and how of hidden classes, with a helping hand from SourceBuddy.
February 5, 2023
by Peter Verhas CORE
· 638 Views · 1 Like
article thumbnail
Apache Kafka Introduction, Installation, and Implementation Using .NET Core 6
A step-by-step introduction to Apache Kafka, its install, and its implementation using .NET Core 6 with background information, code blocks, and guide pictures.
February 4, 2023
by Jaydeep Patil
· 1,893 Views · 2 Likes
article thumbnail
3 Ways That You Can Operate Record Beyond DTO [Video]
The record feature has arrived in the latest LTS version, Java 17! But how can we use it? This post explores design capabilities with a record exceeding DTO.
February 3, 2023
by Otavio Santana CORE
· 2,731 Views · 1 Like
article thumbnail
How To Avoid “Schema Drift”
This article will explain the existing solutions and strategies to mitigate the challenge and avoid schema drift, including data versioning using lakeFS.
February 3, 2023
by Yaniv Ben Hemo
· 4,131 Views · 1 Like
article thumbnail
Part I: Creating Custom API Endpoints in Salesforce With Apex
In Part I of this two-part series, we'll take a look at how to create custom API endpoints in Salesforce with APEX.
February 2, 2023
by Michael Bogan CORE
· 2,246 Views · 1 Like
article thumbnail
Remote Debugging Dangers and Pitfalls
Debugging over the network using a protocol like JDWP isn’t hard. However, there are risks that aren’t immediately intuitive and some subtle solutions.
February 2, 2023
by Shai Almog CORE
· 4,962 Views · 3 Likes
article thumbnail
How and Why You Should Start Automating DevOps
Integration of DevOps and automation is what leads to a more efficient software development lifecycle. Understand what it is about automating DevOps and how.
February 1, 2023
by Susmitha Vakkalanka
· 4,574 Views · 1 Like
article thumbnail
Asynchronous HTTP Requests With RxJava
In this article, readers/developers will follow guide code to call two API’s and learn how to send long blocking requests asynchronously with RxJava and Vertx.
February 1, 2023
by Andrius Kaliacius
· 3,544 Views · 2 Likes
Architectural Miscalculation and Hibernate Problem "Type UUID but Expression Is of Type Bytea"
A story about how I tried to solve an architectural error with minimal code changes and what problems sometimes occur in popular libraries. So, let's go!
February 1, 2023
by Artem Artemev
· 2,031 Views · 2 Likes
Top 5 Java REST API Frameworks
The top five frameworks for building a REST API with Java and how to choose the right framework for your project.
February 1, 2023
by Preet Kaur
· 2,689 Views · 2 Likes
Real-Time Stream Processing With Hazelcast and StreamNative
In this article, readers will quickly learn the essentials of real-time stream processing with Hazelcast and StreamNative, along with demonstrations and code.
January 31, 2023
by Fawaz Ghali
· 9,292 Views · 5 Likes
Express Hibernate Queries as Type-Safe Java Streams
In this article, you will learn how the JPAstreamer Quarkus extension facilitates type-safe Hibernate queries without unnecessary wordiness and complexity.
January 30, 2023
by Julia Gustafsson
· 4,211 Views · 5 Likes
How To Create and Edit Excel XLSX Documents in Java
Programmatically create and edit Excel documents using API solutions (with Java code examples) that work together to provide an Excel automation service.
January 30, 2023
by Brian O'Neill CORE
· 4,960 Views · 5 Likes
The Quest for REST
This post focuses on listing some of the lurking issues in the "Glory of REST" and provides hints at ways to solve them.
January 26, 2023
by Nicolas Fränkel CORE
· 6,669 Views · 5 Likes
Fraud Detection With Apache Kafka, KSQL, and Apache Flink
Exploring fraud detection case studies and architectures with Apache Kafka, KSQL, and Apache Flink, including examples, guide images, and informative details.
January 26, 2023
by Kai Wähner CORE
· 8,280 Views · 6 Likes
Upgrade Guide To Spring Data Elasticsearch 5.0
Learn about the latest Spring Data Elasticsearch 5.0.1 with Elasticsearch 8.5.3, starting with the proper configuration of the Elasticsearch Docker image.
January 26, 2023
by Arnošt Havelka CORE
· 5,058 Views · 2 Likes
Commonly Occurring Errors in Microsoft Graph Integrations and How to Troubleshoot Them (Part 3)
This third article explains common integration errors that may be seen in the transition from EWS to Microsoft Graph for the To Do Tasks resource type.
January 25, 2023
by Constantin Kwiatkowski
· 3,303 Views · 1 Like
A Brief Overview of the Spring Cloud Framework
Readers will get an overview of the Spring Cloud framework, a list of its main packages, and their relation to microservice architectural patterns.
January 25, 2023
by Mario Casari
· 9,229 Views · 6 Likes
Spring Cloud: How To Deal With Microservice Configuration (Part 1)
In this article, we cover how to use the Spring Cloud Configuration module to implement a minimal microservice scenario based on remote configuration.
January 24, 2023
by Mario Casari
· 6,301 Views · 2 Likes
Microservices Discovery With Eureka
In this article, let's explore how to integrate service discovery into a microservices project.
January 22, 2023
by Jennifer Reif CORE
· 6,506 Views · 7 Likes
