Java

Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.

Latest Refcards and Trend Reports

  • Trend Report: Low Code and No Code
  • Trend Report: Modern Web Development
  • Refcard #024: Core Java

DZone's Featured Java Resources

Type Variance in Java and Kotlin

By Ivan Ponomarev CORE
“There are three kinds of variance: invariance, covariance, and contravariance…” It looks pretty scary already, doesn’t it? If we search Wikipedia, we will find covariance and contravariance in category theory and linear algebra, and some of you who studied these subjects at university might be having dreadful flashbacks, because it can be complex stuff. Because the terms look so scary, people avoid learning about this topic in the context of programming languages.

From my experience, many middle-level and sometimes even senior-level Java and Kotlin developers fail to understand type variance. This leads to poorly designed internal APIs: to create convenient APIs using generics, you need to understand type variance; otherwise, you either don’t use generics at all or use them incorrectly. It is all about creating better APIs. If we compare a program to a building, then its internal API is the foundation of the building. If your internal API is convenient, your code is more robust and maintainable. So let’s fill this gap in our knowledge.

The best way to explain this topic is from a historical and evolutionary perspective. I will start with ancient and primitive concepts such as arrays, which appeared in the earliest Java versions, move on through the Java Collections API, and finish with Kotlin, which has advanced type variance support. Going from simple to more complex examples, you’ll see how language features have evolved and what problems each of them was introduced to solve. After reading this article, no mysteries will remain about Java’s “? extends” and “? super” or Kotlin’s “in” and “out.”

For illustration purposes, I’ll be using the same type hierarchy everywhere: a base class called Person, a subclass called Employee, and another subclass called Manager. Each Employee is a Person, each Manager is a Person, and each Manager is an Employee, but not necessarily vice versa: some Persons are not Employees. In Java and Kotlin, this means you can assign an expression of type Manager to a variable of type Employee and so on, but not vice versa.

We will also consider a lot of code examples, and for all of them, we’re interested in only four kinds of possible outcomes:

  • The code won’t compile.
  • The code will compile and run, but there will be a runtime exception.
  • The code will compile and run normally.
  • Heap pollution will occur.

Heap pollution is a situation where a variable of a certain type contains an object of the wrong type. For example, a variable declared as a String refers to an instance of a Manager or Employee. Yes, it is what it looks like: a flaw in the language’s type system. In general, this should not happen, but it sometimes happens both in Java and in Kotlin, and I’ll show you an example of heap pollution as well.
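As a point of reference, here is the running example expressed as code. This is a minimal sketch: the class bodies are assumed to be empty, since only the subtyping relationships matter here.

```java
class Person {}
class Employee extends Person {}
class Manager extends Employee {}

class Assignments {
    void demo() {
        Person p = new Manager();     // compiles: every Manager is a Person
        Employee e = new Manager();   // compiles: every Manager is an Employee
        // Manager m = new Person(); // won't compile: not every Person is a Manager
    }
}
```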
Covariance of Reified Java Arrays

Arrays have been present in Java for more than twenty-five years, since Java 1.0, and in a way we can consider them a prototype for generics. When we have a Manager type, we can build an array type Manager[], and by getting elements of this array we get values of the Manager type. The type of the values we read from such an array is obvious, but what about assigning values to the array’s elements? Can we assign a Manager to an element of Employee[]? And what about a Person? All of the possible combinations are represented in the table below. Have a look and try to figure out what is going on.

[Table: The result of assigning a value to an element of a Java array]

The rightmost column is green because in Java, null can be assigned (and returned) everywhere. In the lower-left corner, we have cases that won’t compile, which also makes sense: you cannot assign a Person to an Employee or Manager without an explicit type cast, and thus you cannot set a Person as an element of an array of employees or managers. That’s the main idea of type checking!

Everything is understandable so far, but what about the rest of the combinations? We would expect that assigning an Employee to an element of Employee[], Person[], or Object[] would cause no problems, just like assigning it to a variable of type Employee, Person, or Object. So what do the exclamation marks mean? A runtime exception? Why? What is this exception, and what can go wrong? I will explain soon. Meanwhile, let’s consider another question: can we assign a Java array of a given type to an array of another type? That is, can we assign Employee[] to Person[]? And vice versa? All the possible combinations are given in the following table.

[Table: Can we assign a Java array of a given type to an array of another type?]

If we removed the square brackets, this would give us the table of possible assignments between plain objects: an Employee is assignable to a Person, but a Person is not assignable to an Employee. Since each Manager is an Employee, an array of managers is an array of employees, right? At this point, we can already say that arrays in Java are covariant with respect to the types of their elements (we will get back to strict terms soon), and the following UML diagram is valid.

[Diagram: Covariance]

Now have a look at the code below to see how it behaves:

```java
Manager[] managers = new Manager[10];
Person[] persons = managers;   // this compiles and runs
persons[0] = new Person();     // line 1 ??
Manager m = managers[0];       // line 2 ?!
```

Nothing special happens in the beginning: since a Manager is a Person, the assignment is possible. But because arrays, like all objects, are reference types in Java, both the managers and persons variables now refer to the same array object. On line 1, we are trying to insert a Person into this array. Note that the compiler’s type checking cannot prevent us from doing this. But if this line were allowed to execute, then on line 2 we should expect a catastrophic error: an array of Managers would contain someone who is not a Manager; in other words, heap pollution.

But Java won’t let you do it. Experienced Java developers know that an ArrayStoreException occurs on line 1. To prevent heap pollution, an array object “knows” the type of its elements at runtime, and each time we assign a value, a runtime check is performed. This explains the exclamation marks in the earlier table: writing a non-null value to any Java array may, generally speaking, lead to an ArrayStoreException if the actual type of the array is a subtype of the declared type of the array variable. The ability of a container to “know” the type of its elements is called reification. So now we know that arrays in Java are covariant and reified.

To sum up:

  • The need for array reification and runtime checks (and the possible runtime exceptions) comes from the covariance of arrays, that is, from the fact that a Manager[] array can be assigned to a Person[] variable.
  • Covariance is safe when we read values but can lead to problems when we write them.
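To see the runtime check in action, here is a minimal runnable sketch (assuming the Person/Employee/Manager classes from above):

```java
public class ArrayStoreDemo {
    public static void main(String[] args) {
        Person[] people = new Manager[1]; // covariant assignment: compiles fine
        try {
            people[0] = new Employee();   // checked against the actual array type
        } catch (ArrayStoreException e) {
            // Thrown because the array is really a Manager[], and an Employee is not a Manager
            System.out.println("Rejected at runtime: " + e);
        }
    }
}
```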
Note: the problem is so serious that Java even abandons the main objective of a statically typed language here, namely performing all type checks at compile time, and behaves more like a dynamically typed language (e.g., Python) in this scenario.

You might ask: “Was covariance the right choice for Java arrays? What if we just prohibited the assignment of arrays of different types?” In that case, it would have been impossible to assign Manager[] to Person[]; we would have known the array’s element type at compile time, and there would have been no need to resort to runtime checking. The property of a type being assignable only to variables of exactly the same type is called invariance, and we will encounter it in Java and Kotlin generics very soon.

But imagine the problems that invariant arrays would have caused in Java. Say we have a method that accepts a Person[] as its argument and calculates, for example, the average age of the given people:

```java
Double calculateAverageAge(Person[] people)
```

Now we have a variable of type Manager[]. Managers are people, but can we pass this variable as an argument to calculateAverageAge? In Java we can, thanks to the covariance of arrays. If arrays were invariant, we would have to create a new array of type Person[], copy all the values from the Manager[] into it, and only then call the method. The memory and CPU overhead would be enormous. This is why invariance is impractical in APIs, and this is the real reason why Java arrays are covariant (although, as we saw, covariance brings difficulties with value assignment).
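Here is a sketch of how that method benefits from covariance in practice. The getAge accessor is an assumption for illustration, not part of the original example:

```java
class AverageAge {
    static Double calculateAverageAge(Person[] people) {
        if (people.length == 0) return null;
        double sum = 0;
        for (Person person : people) {
            sum += person.getAge(); // getAge() is assumed to exist on Person
        }
        return sum / people.length;
    }

    static void caller(Manager[] managers) {
        // Thanks to array covariance, the Manager[] is passed directly; no copying needed
        Double averageAge = calculateAverageAge(managers);
    }
}
```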
The example of Java arrays shows the full range of problems associated with type variance. Java and Kotlin generics try to address these problems.

Invariance of Java and Kotlin Mutable Lists

I believe you are familiar with the concept of generics. In Java and Kotlin, given that list is not empty, list.get(0) has the following return types:

    type of list      type of list.get(0)
    List<Person>      Person
    List<?>           Object  (Java)
    List<*>           Any?    (Kotlin)

The difference between Java and Kotlin is in the last two lines. Both languages have a notion of an “unknown” type parameter: List<?> in Java and List<*> in Kotlin both denote “a List of elements of some type, and we don’t know or don’t care what that type is.” In Java, everything is nullable, so the Object returned by list.get(...) can be null. In Kotlin, we have to care about nullability, so the get method of a List<*> returns Any?.

Now let’s build the same tables we previously built for Java arrays. First, let’s consider the assignment of elements. Here we find a big difference between the Java and Kotlin Collections APIs (and, as we will discover very soon, this difference is tightly related to the difference in type variance between the two languages). In Java, every List has modification methods (add, remove, and so on); the difference between mutable and immutable collections is visible only at runtime, where we may get an UnsupportedOperationException if we try to change an immutable list. In Kotlin, mutability is visible at compile time: the List interface itself has no modification methods, and if we want mutability, we need to use MutableList. In other respects, List<...> in Java and MutableList<...> in Kotlin are nearly the same. Here are the results of the list.add(...) method in Java and Kotlin:

[Table: What is the result of the list.add(…) method in Java and Kotlin?]

Why we cannot add null to a MutableList<*> is understandable: the “star” may stand for any type, nullable or non-nullable. Since we don’t know anything about the actual type and its nullability, we cannot allow adding nullable values to a MutableList<*>. Note that there is nothing similar to ArrayStoreException here, although the table looks similar to the one we built for arrays.

Now let’s figure out when we can assign Java and Kotlin lists to each other. All the possible combinations are presented here:

[Table: Can we assign these lists to each other?]

The rightmost green column means that List<?>/MutableList<*> are universally assignable: since we “don’t care” about the actual type parameter, we can assign anything to them. In the rest of the diagram, we see only the green diagonal, which means that a MutableList<...> can be assigned only to a MutableList parameterized with the same type. In other words, List<T> in Java and MutableList<T> in Kotlin are invariant with respect to their type parameters. This cuts off the possibility of inserting elements of the wrong type already at compile time:

```java
List<Manager> managers = new ArrayList<>();
List<Person> persons = managers;   // won't compile
persons.add(new Person());         // no runtime check is possible
```

Two concerns may arise at this point:

  1. As we know from the Java arrays example, invariance is bad for building APIs. What if we need a method that processes a List<Person> but can be called with a List<Manager>, without copying the whole list element by element?
  2. Why not implement everything the same way as for arrays?

The answer to the first concern is declaration-site and use-site variance, which we are going to consider soon. The answer to the second is that, unlike arrays, which are reified, generics in Java and Kotlin are type-erased: they carry no information about their type parameters at runtime, so a runtime type check is impossible. Let’s dive deeper into type erasure now.

Type Erasure, Generics/Arrays Incompatibility, and Heap Pollution

One of the reasons why the Java platform implements generics via type erasure is purely historical. Generics appeared in Java 5, when the platform was already quite mature. Java keeps backward compatibility at the source code and bytecode levels, which means that very old source code can be compiled by modern Java versions, and very old compiled libraries can be used in modern applications simply by putting them on the classpath. To facilitate the upgrade to Java 5, the decision was made to implement generics as a language feature, not a platform feature. This means that at runtime, the JVM doesn’t know anything about generics and their type parameters.
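A quick, runnable way to observe erasure for yourself (a minimal sketch):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();
        // Both variables share the same runtime class: the type parameters are erased
        System.out.println(strings.getClass() == integers.getClass()); // prints: true
    }
}
```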
For example, a simple Pair<T> class is compiled to bytecode in the following way (the type parameter T is “erased” and replaced with Object).

Generic type (source):

```java
class Pair<T> {
    private T first;
    private T second;

    Pair(T first, T second) {
        this.first = first;
        this.second = second;
    }

    T getFirst() { return first; }
    T getSecond() { return second; }
    void setFirst(T newValue) { first = newValue; }
    void setSecond(T newValue) { second = newValue; }
}
```

Raw type (compiled):

```java
class Pair {
    private Object first;
    private Object second;

    Pair(Object first, Object second) {
        this.first = first;
        this.second = second;
    }

    Object getFirst() { return first; }
    Object getSecond() { return second; }
    void setFirst(Object newValue) { first = newValue; }
    void setSecond(Object newValue) { second = newValue; }
}
```

Or, if we use bounded types in the generic type definition, the type parameter is replaced with the boundary type.

Generic type (source):

```java
class Pair<T extends Employee> {
    private T first;
    private T second;

    Pair(T first, T second) {
        this.first = first;
        this.second = second;
    }

    T getFirst() { return first; }
    T getSecond() { return second; }
    void setFirst(T newValue) { first = newValue; }
    void setSecond(T newValue) { second = newValue; }
}
```

Raw type (compiled):

```java
class Pair {
    private Employee first;
    private Employee second;

    Pair(Employee first, Employee second) {
        this.first = first;
        this.second = second;
    }

    Employee getFirst() { return first; }
    Employee getSecond() { return second; }
    void setFirst(Employee newValue) { first = newValue; }
    void setSecond(Employee newValue) { second = newValue; }
}
```

This implies many strict and sometimes counterintuitive limitations on how we can use generics in Java and Kotlin. If you want more details (e.g., about bounded types or what “bridge methods” are), you can refer to my lecture on Java generics, Mainor 2022: Java Generics. But the most important restriction is the following: neither in Java nor in Kotlin can we determine the type parameter at runtime. These code snippets won’t compile:

```java
if (a instanceof Pair<String>) ...
```

```kotlin
if (a is Pair<String>) ...
```

But these will compile and run successfully, although we would probably like to know more about a:

```java
if (a instanceof Pair<?>) ...
```

```kotlin
if (a is Pair<*>) ...
```

An important implication of this is the incompatibility of Java arrays and generics. For example, the following line won’t compile in Java, with the error “generic array creation”:

```java
List<String>[] a = new ArrayList<String>[10];
```

As we know, Java arrays need to keep full type information at runtime, while all the information that would be available in this case is that it is an array of ArrayLists of something unknown (the String type parameter is erased). Interestingly, we can overcome this protection and create an array of generics in Java (either via a type cast or via a varargs parameter), and then easily produce heap pollution with it.
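As an illustration of the varargs route, here is the classic sketch of heap pollution through a generic varargs parameter. This mirrors the well-known textbook pitfall, not code from the article:

```java
import java.util.List;

public class VarargsPollution {
    // A generic varargs parameter is backed by an erased List[] at runtime
    @SafeVarargs // suppresses the warning; it does NOT make this safe
    static void dirty(List<String>... lists) {
        Object[] raw = lists;       // allowed: arrays are covariant
        raw[0] = List.of(42);       // no ArrayStoreException: the runtime type is List[]
        String s = lists[0].get(0); // ClassCastException: the "String" is really an Integer
    }

    public static void main(String[] args) {
        dirty(List.of("hello"));
    }
}
```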
But let’s consider another example. It doesn’t involve Java arrays, and thus it is possible both in Java and in Kotlin (assume this Pair exposes its two components as public fields a and b):

```java
Pair<Integer> intPair = new Pair<>(42, 0);
Pair<?> pair = intPair;
Pair<String> stringPair = (Pair<String>) pair;
stringPair.b = "foo";
System.out.println(intPair.a * intPair.b);
```

```kotlin
var intPair = Pair<Int>(42, 0)
var pair: Pair<*> = intPair
var stringPair: Pair<String> = pair as Pair<String>
stringPair.b = "foo"
println(intPair.a * intPair.b)
```

An example of heap pollution: a chimera appears!

First, we create a pair of integers. Then we “forget” its type at compile time and, through an explicit type cast, cast it to a pair of Strings. Note that we cannot cast intPair to stringPair directly: Integer cannot be cast to String, and the compiler would stop us. But we can do it via Pair<?> / Pair<*>: although there will be a warning about an unchecked cast, the compiler won’t prohibit the cast in this scenario (we might, after all, be dealing with a Pair<String> that was cast to Pair<?> and is now being explicitly cast back to Pair<String>).

Then something weird happens: we assign a String to the second component of our object, and this code compiles and runs. It compiles because the compiler “thinks” that b has type String. It runs because at runtime there are no checks: the erased type of b is Object. After this line executes, we have a “chimera” object: its first component is an Integer, its second component is a String, and it is neither a Pair<String> nor a Pair<Integer>. We’ve broken the type safety of Java and Kotlin and produced heap pollution.

To sum up:

  • Because of type erasure, it’s impossible to type-check objects passed to generics at runtime.
  • It’s unsafe to store type-erased generics in Java’s reified arrays.
  • Both Java and Kotlin permit heap pollution: a situation where a variable of some type refers to an object that is not of that type.

Use Site Covariance

Imagine we are facing the following practical task: we are implementing a class MyList<E>, and we want it to be able to add elements from other lists via an addAllFrom method, and to add its elements to another list via addAllTo. Given the usual Manager – Employee – Person inheritance chain, these are the valid and invalid options:

```java
MyList<Manager> managers = ...
MyList<Employee> employees = ...

// Valid options, we want these to compile!
employees.addAllFrom(managers);
managers.addAllTo(employees);

// Invalid options, we don't want these to compile!
managers.addAllFrom(employees);
employees.addAllTo(managers);
```

A naive approach (one that, unfortunately, I’ve seen many times in real-life projects) is to use the type parameter directly:

```java
class MyList<E> implements Iterable<E> {
    void add(E item) { ... }

    // Don't do this :-(
    void addAllFrom(MyList<E> list) {
        for (E item : list) this.add(item);
    }

    void addAllTo(MyList<E> list) {
        for (E item : this) list.add(item);
    }
    ...
}
```

Now, when we try to write the following code, it will not compile:

```java
MyList<Manager> managers = ...;
MyList<Employee> employees = ...;
employees.addAllFrom(managers);
managers.addAllTo(employees);
```

I often see people struggling with this: they try to introduce generic classes in their code, but those classes turn out to be unusable. Now we know why this happens: it is the invariance of MyList. We have figured out that, due to the lack of runtime type checking, invariance is the best that can be done for the type safety of Java’s List / Kotlin’s MutableList. Both Java and Kotlin offer a solution: to create convenient APIs, we need to use wildcard types in Java or type projections in Kotlin. Let’s look at Java first:

```java
class MyList<E> implements Iterable<E> {
    void addAllFrom(MyList<? extends E> list) {
        for (E item : list) add(item);
    }
}

MyList<Manager> managers = ...;
MyList<Employee> employees = ...;
employees.addAllFrom(managers);
```
MyList<? extends E> means: “a list of any type will do, as long as that type is a subtype of E.” When we iterate over such a list, its items can safely be treated as E, and since our own list is a list of E, we can safely add those elements to it. The program will compile and run. In Kotlin, this looks very similar, but instead of “? extends E,” we use “out E”:

```kotlin
class MyList<E> : Iterable<E> {
    fun addAllFrom(list: MyList<out E>) {
        for (item in list) add(item)
    }
}

val managers: MyList<Manager> = ...
val employees: MyList<Employee> = ...
employees.addAllFrom(managers)
```

By declaring the parameter as <? extends E> or <out E>, we make its type covariant. But to avoid heap pollution, this implies certain limitations on what can be done with a variable declared with a wildcard type or type projection. One of my favourite questions in a Java technical interview is: given a variable declared as List<? extends E> list in Java, what can be done with it? Of course, we can call list.get(...), and the return type will be E. On the other hand, if we have a variable E element, we cannot call list.add(element): such code won’t compile. Why? We know that list is a list of elements of some type that is a subtype of E, but we don’t know which subtype. For example, if E is Person, then ? extends E might be Employee or Manager; we cannot blindly append a Person to such a list. An interesting exception: list.add(null) will compile and run. This works because null in Java is assignable to a variable of any type, and thus it is safe to add it to any list.

We can also use an “unbounded wildcard” in Java, which is just a question mark in angle brackets, as in Foo<?>. The rules for it are as follows:

  • If Foo is declared as Foo<T extends Bound>, then Foo<?> is the same as Foo<? extends Bound>. We can read elements, but only as Bound (or as Object, if no bound is given).
  • If we’re using intersection types, Foo<T extends Bound1 & Bound2>, any of the bound types will do for reading.
  • We can put only null values.

What about covariant types in Kotlin? Unlike in Java, nullability now plays a role. If we have a function parameter of type MyList<out E?>:

  • We can read values typed E?.
  • We cannot add anything. Even null won’t do: although we have a nullable E?, out means any subtype, and in Kotlin, a non-nullable type is a subtype of its nullable counterpart. The actual element type of the list might be non-nullable, and this is why Kotlin won’t let you add even null to such a list.

Use Site Contravariance

We’ve been talking about covariance so far. Covariant types are good for reading values and bad for writing. What about contravariance? Before figuring out where it might be needed, let’s have a look at the following diagram:

[Diagram: Contravariance]

Unlike covariant types, contravariant ones reverse the direction of subtyping, and this makes them good for writing values but bad for reading. The classical use case for contravariance is Predicate<E>, a functional type that takes an E as an argument and returns a boolean. The wider the type of E in a predicate, the more “powerful” it is. For example, a Predicate<Person> can substitute for a Predicate<Employee> (because every Employee is a Person), and thus it can be considered its subtype. Of course, everything is invariant in Java and Kotlin by default, and this is why we need the other kind of wildcard type and type projection. The addAllTo method of our MyList class can be implemented in the following way:

```java
class MyList<E> implements Iterable<E> {
    void addAllTo(MyList<? super E> list) {
        for (E item : this) list.add(item);
    }
}

MyList<Employee> employees = ...;
MyList<Person> people = ...;
employees.addAllTo(people);
```
MyList<? super E> means: “a list of any type will do, as long as that type is E or a supertype of E, up to Object.” When we iterate over our own list, our items, which have type E, can safely be treated as that unknown supertype and can safely be added to the other list. The program will compile and run. In Kotlin, it looks the same, but we use MyList<in E> instead of MyList<? super E>:

```kotlin
class MyList<E> : Iterable<E> {
    fun addAllTo(list: MyList<in E>) {
        for (item in this) list.add(item)
    }
}

val employees: MyList<Employee> = ...
val people: MyList<Person> = ...
employees.addAllTo(people)
```

What can be done with an object typed List<? super E> in Java? When we have an element of type E, we can successfully add it to such a list. The same goes for null: null can be added anywhere in Java. We can call the get(...) method on such a list, but we can read its values only as Objects. Indeed, <? super E> means that the actual type parameter is unknown and can be anything up to Object, so Object is the only safe assumption about the type of list.get(...).

And what about Kotlin? Again, nullability plays a role. If we have a parameter list: MyList<in E>, then:

  • We can add elements of type E to the list.
  • We cannot add nulls (but we can if the parameter is declared as MyList<in E?>).
  • The type of its elements (e.g., the type of list.first()) is Any? (mind the question mark). In Kotlin, Any? is the universal supertype, while Any is a subtype of Any?. If a type is contravariant, it can always potentially hold nulls.

PECS: The Mnemonic Rule for Java

Now we know that covariance is for reading (writing to a covariantly typed object is generally prohibited), and contravariance is for writing (and although we can read values from a contravariantly typed object, all the type information is lost). Joshua Bloch, in his famous book Effective Java, proposes the following mnemonic rule for Java programmers:

PECS: Producer — Extends, Consumer — Super

This rule makes it simple to reason about the correct wildcard types in your API. If, for example, an argument of our method is a Function, we should always (no exceptions here) declare it this way:

```java
void myMethod(Function<? super T, ? extends R> arg)
```

The T parameter of the Function is the type of its input, i.e., something being consumed, so we use ? super for it. The R parameter is the result, something being produced, so we use ? extends. This trick allows us to use any compatible Function as an argument: any Function that can process T or one of its supertypes will do, as well as any Function that yields R or any of its subtypes. In the standard Java library, we can see many examples of wildcard types, all of them following the PECS rule. For example, the method that finds the maximum element of a Collection given a Comparator is defined like this:

```java
public static <T> T max(Collection<? extends T> coll, Comparator<? super T> comp)
```

This allows us to conveniently use the following combinations of parameters: Collections.max(List<Integer>, Comparator<Number>) (if we can compare any Numbers, then we can compare Integers), and Collections.max(List<String>, Comparator<Object>) (if we can compare Objects, then we can compare Strings).
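Here is a small runnable sketch of the first of those combinations, showing PECS paying off at the call site:

```java
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class PecsDemo {
    public static void main(String[] args) {
        List<Integer> numbers = List.of(3, 1, 2);
        // A Comparator<Number> fits the Comparator<? super Integer> parameter:
        // if we can compare any Numbers, we can compare Integers
        Comparator<Number> byValue = Comparator.comparingDouble(Number::doubleValue);
        System.out.println(Collections.max(numbers, byValue)); // prints: 3
    }
}
```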
In Kotlin, it is easy to remember that producers always use the “out” keyword and consumers use “in.” Although Kotlin’s syntax is more concise, and the in/out keywords make it clearer which type parameter belongs to a producer and which to a consumer, it is still very useful to understand that “out” actually means a subtype, while “in” means a supertype.

Declaration Site Variance in Kotlin

Now we’re going to consider a feature that Kotlin has and Java doesn’t: declaration-site variance. Let’s have a look at Kotlin’s immutable List. When we check its assignability, we find that it looks similar to Java arrays; in other words, Kotlin’s List is covariant by itself:

[Table: Can we assign these immutable lists to each other?]

Covariance of Kotlin’s List doesn’t entail any of the problems of Java’s covariant arrays, since you cannot add or modify anything. When we are only reading values, we can safely treat a Manager as an Employee. That’s why a Kotlin function that requires a List<Person> as its parameter will happily accept, say, a List<Manager>, even though the parameter does not use type projections. There is no similar functionality in Java. When we compare the declaration of the List interface in Java and Kotlin, we see the difference:

```java
public interface List<E> extends Collection<E> { ... }
```

```kotlin
public interface List<out E> : Collection<E> { ... }
```

The keyword “out” in the type declaration makes Kotlin’s List interface covariant everywhere. Of course, you cannot make just any type covariant in Kotlin: only types that never use the type parameter as an argument of a public method qualify (using E as a return type is fine). So it’s a good idea to declare all your immutable classes as covariant in Kotlin. In our MyList example, we might also want to introduce an immutable pair like this:

```kotlin
class MyImmutablePair<out E>(val a: E, val b: E)
```

In this class, we can declare methods that return something of type E, but no public methods that take E-typed arguments. Note that constructor parameters and private methods with E-typed arguments are fine. Now, if we want to add a method that takes values from a MyImmutablePair, we don’t need to bother with use-site variance:

```kotlin
class MyList<E> : Iterable<E> {
    // No use-site type variance needed!
    fun addAllFrom(pair: MyImmutablePair<E>) {
        add(pair.a)
        add(pair.b)
    }
    ...
}

val twoManagers: MyImmutablePair<Manager> = ...
employees.addAllFrom(twoManagers)
```

The same applies to contravariance, of course. We might define a contravariant class MyConsumer this way:

```kotlin
class MyConsumer<in E> {
    fun consume(p: E) { ... }
}
```

As soon as we define a type as contravariant, the following limitations emerge: we can define methods that take E-typed arguments, but we cannot expose anything of type E. (We can still have private class variables of type E, and even private methods that return E.) The addAllTo method, which dumps all the values into a given consumer, now doesn’t need type projections either. The following code will compile and run:

```kotlin
class MyList<E> : Iterable<E> {
    // No use-site type variance needed!
    fun addAllTo(consumer: MyConsumer<E>) {
        for (item in this) consumer.consume(item)
    }
    ...
}

val employees: MyList<Employee> = ...
val personConsumer: MyConsumer<Person> = ...
employees.addAllTo(personConsumer)
```
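Here is a compact, runnable sketch that puts both declarations together, reusing the running Person/Employee hierarchy (the printing is purely illustrative):

```kotlin
open class Person
class Employee : Person()

// Covariant: only produces E, so MyImmutablePair<Employee> is a MyImmutablePair<Person>
class MyImmutablePair<out E>(val a: E, val b: E)

// Contravariant: only consumes E, so MyConsumer<Person> is a MyConsumer<Employee>
class MyConsumer<in E> {
    fun consume(p: E) = println("consumed: $p")
}

fun main() {
    val employees = MyImmutablePair(Employee(), Employee())
    val personPair: MyImmutablePair<Person> = employees // compiles: out = covariant

    val personConsumer = MyConsumer<Person>()
    val employeeConsumer: MyConsumer<Employee> = personConsumer // compiles: in = contravariant
    employeeConsumer.consume(Employee())
}
```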
One more thing worth mentioning is how declaration-site variance influences the star projection Foo<*>. If we have an object typed Foo<*>, does it matter whether the Foo class is declared invariant, covariant, or contravariant when we want to do something with this object?

  • If the original declaration is Foo<T : TUpper> (invariant), then you can read values as TUpper, and you cannot write anything (not even null), because we don’t know the exact type.
  • If Foo<out T : TUpper> is covariant, you can still read values as TUpper, and you cannot write anything, simply because the class has no public methods for writing.
  • If Foo<in T : TUpper> is contravariant, then you cannot read anything (there are no such public methods), and you still cannot write anything (because you have “forgotten” the exact type).

So a contravariant Foo<*> variable is the most useless thing in Kotlin.

Kotlin Is Better for the Creation of Fluent APIs

When we consider switching languages, the most important question is: what can the new language provide that cannot be achieved with the old one? More concise syntax is nice, but if everything the new language offers is just syntactic sugar, then maybe it is not worth leaving familiar tools and ecosystems. Regarding type variance in Kotlin vs. Java, the question is: does declaration-site variance enable things that are impossible in Java with wildcard types? In my opinion, the answer is definitely yes; declaration-site variance is not just about getting rid of “? extends” and “? super” everywhere.

Here’s a real-life example of the problems that arise when designing APIs for data stream processing frameworks (in particular, this example relates to the Apache Kafka Streams API). The key classes of such frameworks are abstractions over streams of data, like KStream<K>, which are semantically covariant: a stream of Employees can safely be considered a stream of Persons if all we are interested in are Person’s properties. Now imagine that the library code has a class that accepts a function capable of transforming one stream into another:

```java
class Processor<E> {
    void withFunction(Function<? super KStream<E>, ? extends KStream<E>> chain) { ... }
}
```

In the user’s code, these functions may look like this:

```java
KStream<Employee> transformA(KStream<Employee> s) { ... }
KStream<Manager> transformB(KStream<Person> s) { ... }
```

As you can see, both of these functions can work as a transformer from KStream<Employee> to KStream<Employee>. But if we try to pass them as method references to withFunction, only the first one will do:

```java
Processor<Employee> processor = ...

// Compiles
processor.withFunction(this::transformA);

// Won't compile with "KStream<Employee> is not convertible to KStream<Person>"
processor.withFunction(this::transformB);
```

The problem cannot be fixed by just adding more “? extends.” If we define the class this way:

```java
class Processor<E> {
    // A mind-blowing number of question marks
    void withFunction(Function<? super KStream<? super E>, ? extends KStream<? extends E>> chain) { ... }
}
```

then both lines

```java
processor.withFunction(this::transformA);
processor.withFunction(this::transformB);
```

will fail to compile with something like “KStream<capture of ? super Employee> is not convertible to KStream<Employee>.” Java’s type inference is not “wise” enough to support such complex recursive definitions.
Meanwhile, in Kotlin, if we declare the class KStream<out E> as covariant, this is easily possible:

```kotlin
/* LIBRARY CODE */
class KStream<out E>

class Processor<E> {
    fun withFunction(chain: (KStream<E>) -> KStream<E>) {}
}

/* USER'S CODE */
fun transformA(s: KStream<Employee>): KStream<Employee> { ... }
fun transformB(s: KStream<Person>): KStream<Manager> { ... }

val processor: Processor<Employee> = Processor()
processor.withFunction(this::transformA)
processor.withFunction(this::transformB)
```

All the lines compile and run as intended (and the syntax is more concise, too). Kotlin has a clear win in this scenario.

Conclusion

To sum up, here are the properties of the different kinds of type variance.

Covariance:

  • ? extends in Java, out in Kotlin
  • safe reading; unsafe or impossible writing
  • when A is a supertype of B, the matrix of possible assignments fills the lower-left corner

Contravariance:

  • ? super in Java, in in Kotlin
  • safe writing; reading with type information lost, or impossible
  • when A is a supertype of B, the matrix of possible assignments fills the upper-right corner

Invariance:

  • the default in Java and Kotlin
  • safe writing and reading
  • when A is a supertype of B, the matrix of possible assignments fills only the diagonal

To create good APIs, understanding type variance is necessary. Kotlin offers great enhancements over Java generics, making the use of ready-made generic types even more straightforward. But to create your own generic types in Kotlin, it is even more important to understand the principles of type variance. I hope it is now clear how type variance works and how it can be used in your APIs. Thanks for reading.
7 Awesome Libraries for Java Unit and Integration Testing

By Marco Behler CORE
Looking to improve your unit and integration tests? I made a short video giving you an overview of 7 libraries that I regularly use when writing any sort of tests in Java, namely: AssertJ, Awaitility, Mockito, Wiser, Memoryfilesystem, WireMock, and Testcontainers.

What’s in the Video?

The video gives a short overview of how to use the tools mentioned above and how they work. In order of appearance:

AssertJ

JUnit comes with its own set of assertions (i.e., assertEquals) that work for simple use cases but are quite cumbersome to work with in more realistic scenarios. AssertJ is a small library giving you a great set of fluent assertions that you can use as a direct replacement for the default assertions. Not only do they work on core Java classes, but you can also use them to write assertions against XML or JSON files, as well as database tables!

```java
// basic assertions
assertThat(frodo.getName()).isEqualTo("Frodo");
assertThat(frodo).isNotEqualTo(sauron);

// chaining string specific assertions
assertThat(frodo.getName()).startsWith("Fro")
                           .endsWith("do")
                           .isEqualToIgnoringCase("frodo");
```

(Note: Source Code from AssertJ)

Awaitility

Testing asynchronous workflows is always a pain. As soon as you want to make sure that, for example, a message broker received or sent a specific message, you'll run into race condition problems because your local test code executes faster than any asynchronous code ever would. Awaitility to the rescue: it is a small library that lets you write polling assertions, in a synchronous manner!

```java
@Test
public void updatesCustomerStatus() {
    // Publish an asynchronous message to a broker (e.g. RabbitMQ):
    messageBroker.publishMessage(updateCustomerStatusMessage);
    // Awaitility lets you wait until the asynchronous operation completes:
    await().atMost(5, SECONDS).until(customerStatusIsUpdated());
    ...
}
```

(Note: Source Code from Awaitility)

Mockito

There comes a time in unit testing when you want to replace parts of your functionality with mocks. Mockito is a battle-tested library to do just that. You can create mocks, configure them, and write a variety of assertions against those mocks. To top it off, Mockito also integrates nicely with a huge array of third-party libraries, from JUnit to Spring Boot.

```java
// mock creation
List mockedList = mock(List.class);
// or even simpler with Mockito 4.10.0+
// List mockedList = mock();

// using mock object - it does not throw any "unexpected interaction" exception
mockedList.add("one");
mockedList.clear();

// selective, explicit, highly readable verification
verify(mockedList).add("one");
verify(mockedList).clear();
```

(Note: Source Code from Mockito)
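To give a feel for how these combine, here is a minimal, hypothetical JUnit 5 test that stubs a collaborator with Mockito and verifies the result with AssertJ (the test class and scenario are invented for illustration):

```java
import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;

class NameServiceTest {

    @Test
    @SuppressWarnings("unchecked")
    void stubsWithMockitoAndAssertsWithAssertJ() {
        // Mockito supplies the canned collaborator...
        List<String> names = mock(List.class);
        when(names.get(0)).thenReturn("Frodo");

        // ...and AssertJ supplies the fluent verification
        assertThat(names.get(0)).startsWith("Fro").endsWith("do");
    }
}
```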
Wiser

Keeping your code as close to production as possible, and not just using mocks for everything, is a viable strategy. When you want to send emails, for example, you neither need to completely mock out your email code nor actually send them out via Gmail or Amazon SES. Instead, you can boot up a small, embedded Java SMTP server called Wiser.

```java
Wiser wiser = new Wiser();
wiser.setPort(2500); // Default is 25
wiser.start();
```

Now you can use Java's SMTP API to send emails to Wiser and also ask Wiser to show you what messages it received.

```java
for (WiserMessage message : wiser.getMessages()) {
    String envelopeSender = message.getEnvelopeSender();
    String envelopeReceiver = message.getEnvelopeReceiver();
    MimeMessage mess = message.getMimeMessage();
    // now do something fun!
}
```

(Note: Source Code from Wiser on GitHub)

Memoryfilesystem

If you write a system that heavily relies on files, the question has always been: "How do you test that?" File system access is somewhat slow, and also brittle, especially if you have your developers working on different operating systems. Memoryfilesystem to the rescue! It lets you write tests against a file system that lives completely in memory, but can still simulate OS-specific semantics, from Windows to macOS and Linux.

```java
try (FileSystem fileSystem = MemoryFileSystemBuilder.newEmpty().build()) {
    Path p = fileSystem.getPath("p");
    System.out.println(Files.exists(p));
}
```

(Note: Source Code from Memoryfilesystem on GitHub)

WireMock

How to handle flaky 3rd-party REST services or APIs in your tests? Easy! Use WireMock. It lets you create full-blown mocks of any 3rd-party API out there, with a very simple DSL. You can not only specify the responses your mocked API will return, but even go so far as to inject random delays and other unspecified behavior into your server, or do some chaos monkey engineering.

```java
// The static DSL will be automatically configured for you
stubFor(get("/static-dsl").willReturn(ok()));

// Instance DSL can be obtained from the runtime info parameter
WireMock wireMock = wmRuntimeInfo.getWireMock();
wireMock.register(get("/instance-dsl").willReturn(ok()));

// Info such as port numbers is also available
int port = wmRuntimeInfo.getHttpPort();
```

(Note: Source Code from WireMock)

Testcontainers

Using mocks or embedded replacements for databases, mail servers, or message queues is all nice and dandy, but nothing beats using the real thing. In comes Testcontainers: a small library that allows you to boot up and shut down any Docker container (and thus software) that you need for your tests. This means your test environment can be as close as possible to your production environment.

```java
@Testcontainers
class MixedLifecycleTests {

    // will be shared between test methods
    @Container
    private static final MySQLContainer MY_SQL_CONTAINER = new MySQLContainer();

    // will be started before and stopped after each test method
    @Container
    private PostgreSQLContainer postgresqlContainer = new PostgreSQLContainer()
            .withDatabaseName("foo")
            .withUsername("foo")
            .withPassword("secret");
}
```

(Note: Source Code from Testcontainers)

Enjoy the video!
How to Use MQTT in Java
By Zhiwei Yu

Kotlin Is More Fun Than Java And This Is a Big Deal
By Jasper Sprengers CORE

Exploring Hazelcast With Spring Boot
By Ion Pascari
How To Add Three Photo Filters to Your Applications in Java

By Brian O'Neill CORE

A unique image aesthetic makes a big difference in representing any personal or professional brand online. Career and hobby photographers, marketing executives, and casual social media users alike are in constant pursuit of easily distinguishable visual content, and this basic need to stand out from the crowd has, in turn, driven the democratization of photo editing and filtering services over the last decade or so. Nearly every social media platform you can think of (not to mention many e-commerce websites and various other sites where images are frequently uploaded) now incorporates some means of programmatically altering vanilla image files. These built-in services vary greatly in complexity, ranging from simple brightness controls to Gaussian blurs.

With this newfound ease of access to photo filtering, classic image filtering techniques have experienced a widespread resurgence in popularity. For example, the timeless look associated with black-and-white images can now be applied to any image upload on the fly. Through simple manipulations of brightness and contrast, the illusion of embossment can be created, allowing us to effortlessly emulate a vaunted, centuries-old printing technique. Even posterization, a classic, bold aesthetic once humbly associated with the natural color limitations of early printing machines, can be instantly generated within any grid of pixels.

Given the desirability of simplified image filtering (especially with common-sense customization features), building these capabilities into any application, particularly one handling a large volume of image uploads, is an excellent idea for developers to consider. Of course, once we elect to go in that direction, an important question arises: how can we efficiently include these services in our applications, given the myriad lines of code associated with building even the simplest photo-filtering functionality? Thankfully, that question is answered by readily available Image Filtering API services, which let developers circumvent large-scale programming efforts and implement robust image customization features in only a few lines of clean, simple code.

API Descriptions

The purpose of this article is to demonstrate three free-to-use image filtering API solutions that can be implemented in your applications using complementary, ready-to-run Java code snippets. These snippets are supplied below, directly following brief instructions to help you install the SDK. Before we reach that point, I’ll first highlight each solution, providing a more detailed look at its respective uses and API request parameters. Please note that each API requires a free-tier Cloudmersive API key to complete your call (this provides a limit of 800 API calls per month with no commitments).

Grayscale (Black and White) Filter API

Early photography was initially limited to a grayscale color spectrum due to the natural constraints of primitive photo technology. The genesis of color photography opened new doors, certainly, but it never completely replaced the black-and-white aesthetic. Even now, in the digital age, grayscale photos continue to offer a degree of depth and expression that many feel the broader color spectrum can’t bring out. The process of converting a color image to grayscale is straightforward. Color information is stored in the thousands (or millions) of pixels making up a digital image; grayscale conversion forces each pixel to ignore its color information and present a varying degree of brightness instead. Beyond its well-documented aesthetic effects, grayscale conversion offers practical benefits, too, by reducing the size of the image in question. Grayscale images are much easier to store, edit, and subsequently process (especially in downstream operations such as Optical Character Recognition, for example).
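For intuition about what happens under the hood, here is a minimal per-pixel sketch of a standard luminance-weighted conversion. The API below does all of this for you; this sketch only makes the idea concrete:

```java
import java.awt.image.BufferedImage;

class GrayscaleSketch {
    static BufferedImage toGrayscale(BufferedImage source) {
        BufferedImage out = new BufferedImage(
                source.getWidth(), source.getHeight(), BufferedImage.TYPE_INT_RGB);
        for (int y = 0; y < source.getHeight(); y++) {
            for (int x = 0; x < source.getWidth(); x++) {
                int rgb = source.getRGB(x, y);
                int r = (rgb >> 16) & 0xFF, g = (rgb >> 8) & 0xFF, b = rgb & 0xFF;
                // Rec. 601 luma weights: the eye is most sensitive to green
                int lum = (int) (0.299 * r + 0.587 * g + 0.114 * b);
                out.setRGB(x, y, (lum << 16) | (lum << 8) | lum);
            }
        }
        return out;
    }
}
```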
The grayscale filter API below performs a simple black-and-white conversion, requiring only an image’s file path (formats like PNG and JPG are accepted) in its request parameters.

Embossment Filter API

Embossment is a physical printing process with roots dating as far back as the 15th century, and it’s still used to this day in that same context. While true embossment entails physically raised shapes on an otherwise flat surface (offering an enhanced visual and tactile experience), digital embossment merely emulates this effect by manipulating brightness and contrast in key areas around the subject of a photo. An embossment photo filter can be used to quickly add depth to any image. The embossment filter API below performs a customizable digital embossment operation, requiring the following request information:

  • Radius: The radius, in pixels, of the embossment operation (larger values produce a greater effect)
  • Sigma: The variance of the embossment operation (higher values produce higher variance)
  • Image file: The file path for the subject of the operation (supports common formats like PNG and JPG)

Posterization API

Given the ubiquity of high-quality smartphone cameras, it’s easy to take the prevalence of high-definition color photos for granted. The color detail we’re accustomed to seeing in everyday photos comes down to advancements in high-quality pixel storage: slight variations in reds, blues, greens, and other elements of the color spectrum are accounted for in a vast matrix of pixel coordinates. By comparison, in the bygone era of physical printing presses, the variety of colors used to form an image was typically far smaller, and digital posterization filters aim to emulate this old-school effect. They do so by reducing the number of unique colors in an image, narrowing a distinct spectrum of hex values into a more homogenous group. The aesthetic effect is unmistakable, invoking a look one might associate with political campaigns and movie theater posters of decades past. The posterization API provided below requires the following request information:

  • Levels: The number of unique colors to retain in the output image
  • Image file: The image file to perform the operation on (supports common formats like PNG and JPG)

API Demonstration

To structure your API call to any of the three services outlined above, your first step is to install the Java SDK. To do so with Maven, first add a reference to the repository in pom.xml:

```xml
<repositories>
    <repository>
        <id>jitpack.io</id>
        <url>https://jitpack.io</url>
    </repository>
</repositories>
```

Then add a reference to the dependency in pom.xml:

```xml
<dependencies>
    <dependency>
        <groupId>com.github.Cloudmersive</groupId>
        <artifactId>Cloudmersive.APIClient.Java</artifactId>
        <version>v4.25</version>
    </dependency>
</dependencies>
```

Alternatively, to install with Gradle, add JitPack to your root build.gradle (at the end of repositories):

```groovy
allprojects {
    repositories {
        ...
        maven { url 'https://jitpack.io' }
    }
}
```
Then add the dependency in build.gradle:

```groovy
dependencies {
    implementation 'com.github.Cloudmersive:Cloudmersive.APIClient.Java:v4.25'
}
```

To use the Grayscale Filter API, use the following code to structure your API call:

```java
// Import classes:
//import com.cloudmersive.client.invoker.ApiClient;
//import com.cloudmersive.client.invoker.ApiException;
//import com.cloudmersive.client.invoker.Configuration;
//import com.cloudmersive.client.invoker.auth.*;
//import com.cloudmersive.client.FilterApi;

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

FilterApi apiInstance = new FilterApi();
// Image file to perform the operation on; common formats such as PNG and JPEG are supported
File imageFile = new File("/path/to/inputfile");
try {
    byte[] result = apiInstance.filterBlackAndWhite(imageFile);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling FilterApi#filterBlackAndWhite");
    e.printStackTrace();
}
```

To use the Embossment Filter API, use the code below instead (remembering to configure your radius and sigma in their respective parameters):

```java
// Import classes (same as above)

ApiClient defaultClient = Configuration.getDefaultApiClient();

// Configure API key authorization: Apikey
ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey");
Apikey.setApiKey("YOUR API KEY");
// Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
//Apikey.setApiKeyPrefix("Token");

FilterApi apiInstance = new FilterApi();
Integer radius = 56; // Radius in pixels of the emboss operation; a larger radius produces a greater effect
Integer sigma = 56;  // Sigma, or variance, of the emboss operation
// Image file to perform the operation on; common formats such as PNG and JPEG are supported
File imageFile = new File("/path/to/inputfile");
try {
    byte[] result = apiInstance.filterEmboss(radius, sigma, imageFile);
    System.out.println(result);
} catch (ApiException e) {
    System.err.println("Exception when calling FilterApi#filterEmboss");
    e.printStackTrace();
}
```

Finally, to use the Posterization Filter API, use the code below (remember to define your posterization level with an integer, as previously described):
"Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); Integer levels = 56; // Integer | Number of unique colors to retain in the output image File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterPosterize(levels, imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterPosterize"); e.printStackTrace(); } When using the Embossment and Posterization APIs, I recommend experimenting with different radius, sigma, and level values to find the right balance.

Java Development Trends 2023

By Alexander Belokrylov

GitHub language statistics indicate that Java occupies second place among programming languages, while the TIOBE Index 2022 puts Java in fourth position; the difference lies in the methodological approaches. Whatever the ranking, Java is the language enterprises have relied on heavily since its inception, and it still holds that position. As a programming language, it outperforms many of its competitors and continues to be the choice of most companies and organizations for software applications. However, Java doesn't stand still: it goes through changes and modernization. In many ways, the development and innovation of the language and the surrounding ecosystem are propelled by new business demands. This article presents an overview of seven expected trends in Java, based on the most significant events and achievements of 2022.

1. Cloud architecture continues evolving, but costs are rising. According to the Flexera report, public cloud spending exceeded budgets by 13% in 2022, companies expect their cloud spending to increase by 29% over the next twelve months, and organizations waste 32% of their cloud spend. So the need for cloud cost optimization is real. It will be one of the industry's driving forces in 2023, and we can expect more technological innovation and management solutions directed toward better efficiency and lower costs.

2. PaaS, a cloud computing model between IaaS and SaaS, has recently gained popularity. PaaS delivers third-party provider hardware and software tools to users. This approach allows greater flexibility for developers, and it's easier to handle finances because of the pay-as-you-go payment model. PaaS enables developers to create or run new applications without spending extra time and resources on in-house hardware or software installations. Together with the still-rising popularity of cloud infrastructure, PaaS is predicted to keep evolving, too. We expect to see more support for Java-based PaaS applications as Java adapts to cloud environments.

3. The Spring Framework 6.0 GA and Spring Boot 3.0 releases this year marked the beginning of a new framework generation, embracing current and upcoming innovations in OpenJDK and the Java ecosystem. Spring 6.0 also brought ahead-of-time (AOT) transformations to life, focused on native-image support for Spring applications and promising better application performance in the future. Spring's native-image progress in 2023 is definitely on the Java community's radar.

4. CVEs in frameworks and libraries written in Java continue their unfortunate rise. The CVE Details source provides detailed information on how CVEs are expanding; in 2022, they reached a sad total of 25,036. These vulnerabilities present an opportunity for attackers to take over sensitive resources and perform remote code execution. We cannot expect 2023 to be an exception to this growth, so there will be a push for higher levels of security across the entire Java ecosystem. Some of these issues are zero-day vulnerabilities: flaws that have been disclosed but not yet patched, with the Log4j vulnerability (Log4Shell) as the most notorious recent example. Staying secure requires keeping your dependencies on schedule for the required updates. Initiatives like OWASP CycloneDX are entirely focused on this agenda and offer great recommendations and practices to keep your Java application in the secure zone.

5. 2023 is expected to become a year of more extensive adoption of Lambdas for Java. In 2022, AWS presented a new feature for AWS Lambda: Lambda SnapStart. SnapStart significantly improves startup latency and is particularly relevant for software applications using synchronous APIs, interactive microservices, or data processing. SnapStart has already been adopted by Quarkus and Micronaut, and there is no doubt that more acceptance of Lambda in the Java world will follow in 2023.

6. Virtual Threads (second preview) in JDK 20, due in March, is another event to watch in 2023. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications, and they support thread-local variables, synchronization blocks, thread interruption, and so on. The March preview focuses on the ability to scale better, adoption of virtual threads by the thread API with minimal change, and easier troubleshooting, debugging, and profiling of virtual threads.
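As a taste of the API shape, here is a minimal virtual-thread sketch. This uses the preview API, so it needs --enable-preview on JDK 19/20 (the API is finalized in JDK 21):

```java
public class VirtualThreadDemo {
    public static void main(String[] args) throws InterruptedException {
        // Each task gets its own cheap virtual thread instead of a pooled platform thread
        Thread vt = Thread.ofVirtual().start(() ->
                System.out.println("Running on " + Thread.currentThread()));
        vt.join();
    }
}
```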
2023 is expected to become a year of more extensive adoption of Lambdas for Java. In 2022, AWS presented a new feature for their AWS Lambda project, Lambda SnapStart. SnapStart significantly improves startup latency and is specifically relevant for software applications using synchronous APIs, interactive microservices, or data processing. SnapStart has already been adopted by Quarkus and Micronaut, and there is no doubt that more acceptance of Lambda in Java will follow in 2023.

Virtual Threads (second preview) in JDK 20, due in March, is another event to watch out for in 2023. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. They support thread-local variables, synchronization blocks, thread interruptions, and so on. The March preview is focused on: the ability to scale better; adoption of virtual threads by the thread API with minimal change; and easier troubleshooting, debugging, and profiling of virtual threads.

As announced by Oracle in 2022, portions of the GraalVM Community Edition Java code will move to OpenJDK. This initiative will align the development of GraalVM and Java technologies, benefiting all contributors and users. In addition, the community editions of the GraalVM JIT and Ahead-of-Time (AOT) compilers will move to OpenJDK in 2023. This change will bring a security improvement and synchronization in release schedules, features, and development processes.

These trends and events demonstrate how the industry is moving forward and reflect how continuous Java success comes about within the Java ecosystem community and via business demands for better cloud Java operation. With a great number of initiatives presented in 2022, Java in 2023 should become more flexible for the cloud environment. Java is the most popular language for enterprise applications, and many of them were built before the cloud age. In the cloud, Java can be costlier than other programming languages and needs adaptation. Making Java cloud-native is among the highest priorities for the industry, and many of the most anticipated events of 2023 relate to improving Java operations in the cloud. Java application modernization is not that simple; there is no single button to press to convert your Java application to cloud-native. Making Java effective, less expensive, and high-performing requires integrating a set of components that adapt the language to a cloud-native form. 2023 promises more of these elements, enabling more sustainable cloud-based applications. In 2023, we can also expect further expansion of the PaaS computing model as the more convenient option for developers building products in the cloud.

The one remaining downside for Java developers is security. Negative trends of growing tech debt and rising security concerns have attracted the attention of software development companies. As a result, new development practices in 2023 will bring tighter security and more deliberate investment in IT innovation. However, downturns also drive progress forward, and we should see new and more effective solutions to revert these trends in 2023.
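To make the virtual threads trend concrete, here is a minimal sketch of what the API looks like; it assumes JDK 19+ with preview features enabled (compile and run with --enable-preview) and uses only standard java.util.concurrent classes:

Java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsDemo {
    public static void main(String[] args) {
        // Each submitted task runs on its own cheap virtual thread;
        // spawning 10,000 platform threads this way would be prohibitive.
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        Thread.sleep(Duration.ofSeconds(1)); // blocking is cheap on a virtual thread
                        return i;
                    }));
        } // the executor is AutoCloseable: close() waits for all tasks to finish
    }
}

The appeal for high-throughput servers is that plain, blocking code scales again, without rewriting it into an asynchronous style.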

By Alexander Belokrylov
What Java Version Are You Running? Let’s Take a Look Under the Hood of the JDK!

From time to time, you need to check which Java version is installed on your computer or server, for instance, when starting a new project or configuring an application to run on a server. But did you know there are multiple ways to do this, and that you can quickly get much more information than you might expect? Let's find out...

Reading the Java Version in the Terminal

Probably the easiest way to find the installed version is by using the java -version terminal command:

$ java -version
openjdk version "19" 2022-09-20
OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36)
OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing)

Checking Version Files in the Installation Directory

The above output results from info read by the java executable from a file inside its installation directory. Let's explore what we can find there. On my machine, as I use SDKMAN to switch between different Java versions, all my versions are stored here:

$ ls -l /Users/frankdelporte/.sdkman/candidates/java/
total 0
drwxr-xr-x 15 frankdelporte staff 480 Apr 17 2022 11.0.15-zulu
drwxr-xr-x 16 frankdelporte staff 512 Apr 17 2022 17.0.3.fx-zulu
drwxr-xr-x 15 frankdelporte staff 480 Mar 29 2022 18.0.1-zulu
drwxr-xr-x 15 frankdelporte staff 480 Sep 7 18:36 19-zulu
drwxr-xr-x 18 frankdelporte staff 576 Apr 18 2022 8.0.332-zulu
lrwxr-xr-x 1 frankdelporte staff 7 Nov 21 21:09 current -> 19-zulu

And in each of these directories, a release file can be found, which also shows us the version information, along with some extras:

$ cat /Users/frankdelporte/.sdkman/candidates/java/19-zulu/release
IMPLEMENTOR="Azul Systems, Inc."
IMPLEMENTOR_VERSION="Zulu19.28+81-CA"
JAVA_VERSION="19"
JAVA_VERSION_DATE="2022-09-20"
LIBC="default"
MODULES="java.base java.compiler ... jdk.unsupported jdk.unsupported.desktop jdk.xml.dom"
OS_ARCH="aarch64"
OS_NAME="Darwin"
SOURCE=".:git:3d665268e905"

$ cat /Users/frankdelporte/.sdkman/candidates/java/8.0.332-zulu//release
JAVA_VERSION="1.8.0_332"
OS_NAME="Darwin"
OS_VERSION="11.2"
OS_ARCH="aarch64"
SOURCE="git:f4b2b4c5882e"

Getting More Information With ShowSettings

In 2010, an experimental flag (indicated with the X) was added to OpenJDK to provide more configuration information: -XshowSettings. This flag can be called with different arguments, each producing different information output. The cleanest way to call this flag is by adding -version; otherwise, you will get the long Java manual output, as no application code was found to be executed.

Reading the System Properties

By using the -XshowSettings:properties flag, a long list of various properties is shown.

$ java -XshowSettings:properties -version
Property settings:
file.encoding = UTF-8
file.separator = /
ftp.nonProxyHosts = local|*.local|169.254/16|*.169.254/16
http.nonProxyHosts = local|*.local|169.254/16|*.169.254/16
java.class.path =
java.class.version = 63.0
java.home = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home
java.io.tmpdir = /var/folders/np/6j1kls013kn2gpg_k6tz2lkr0000gn/T/
java.library.path = /Users/frankdelporte/Library/Java/Extensions /Library/Java/Extensions /Network/Library/Java/Extensions /System/Library/Java/Extensions /usr/lib/java .
java.runtime.name = OpenJDK Runtime Environment
java.runtime.version = 19+36
java.specification.name = Java Platform API Specification
java.specification.vendor = Oracle Corporation
java.specification.version = 19
java.vendor = Azul Systems, Inc.
java.vendor.url = http://www.azul.com/
java.vendor.url.bug = http://www.azul.com/support/
java.vendor.version = Zulu19.28+81-CA
java.version = 19
java.version.date = 2022-09-20
java.vm.compressedOopsMode = Zero based
java.vm.info = mixed mode, sharing
java.vm.name = OpenJDK 64-Bit Server VM
java.vm.specification.name = Java Virtual Machine Specification
java.vm.specification.vendor = Oracle Corporation
java.vm.specification.version = 19
java.vm.vendor = Azul Systems, Inc.
java.vm.version = 19+36
jdk.debug = release
line.separator = \n
native.encoding = UTF-8
os.arch = aarch64
os.name = Mac OS X
os.version = 13.0.1
path.separator = :
socksNonProxyHosts = local|*.local|169.254/16|*.169.254/16
stderr.encoding = UTF-8
stdout.encoding = UTF-8
sun.arch.data.model = 64
sun.boot.library.path = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home/lib
sun.cpu.endian = little
sun.io.unicode.encoding = UnicodeBig
sun.java.launcher = SUN_STANDARD
sun.jnu.encoding = UTF-8
sun.management.compiler = HotSpot 64-Bit Tiered Compilers
user.country = BE
user.dir = /Users/frankdelporte
user.home = /Users/frankdelporte
user.language = en
user.name = frankdelporte
openjdk version "19" 2022-09-20
OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36)
OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing)

If you have ever faced the problem of an unsupported Java class version 59 (or similar), you'll now also understand where this value is defined; it's right here in this list as java.class.version. It's an internal number used by Java to identify the class file format version:

Java release:  8  9  10 11 12 13 14 15 16 17 18 19
Class version: 52 53 54 55 56 57 58 59 60 61 62 63

Reading the Locale Information

In case you didn't know yet, I live in Belgium and use English as my computer language, as you can see when using the -XshowSettings:locale flag:

$ java -XshowSettings:locale -version
Locale settings:
default locale = English (Belgium)
default display locale = English (Belgium)
default format locale = English (Belgium)
available locales = , af, af_NA, af_ZA, af_ZA_#Latn, agq, agq_CM, agq_CM_#Latn, ak, ak_GH, ak_GH_#Latn, am, am_ET, am_ET_#Ethi, ar, ar_001, ar_AE, ar_BH, ar_DJ, ar_DZ, ar_EG, ar_EG_#Arab, ar_EH, ar_ER, ... zh_MO_#Hant, zh_SG, zh_SG_#Hans, zh_TW, zh_TW_#Hant, zh__#Hans, zh__#Hant, zu, zu_ZA, zu_ZA_#Latn
openjdk version "19" 2022-09-20
OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36)
OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing)

Reading the VM Settings

With the -XshowSettings:vm flag, some info is shown about the Java Virtual Machine. As you can see in the second example, the maximum heap size can be set with the -Xmx flag.

$ java -XshowSettings:vm -version
VM settings:
Max. Heap Size (Estimated): 8.00G
Using VM: OpenJDK 64-Bit Server VM
openjdk version "19" 2022-09-20
OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36)
OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing)

$ java -XshowSettings:vm -Xmx512M -version
VM settings:
Max. Heap Size: 512.00M
Using VM: OpenJDK 64-Bit Server VM
openjdk version "19" 2022-09-20
OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36)
OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing)

Reading All at Once

If you want all of the information above with one call, use the -XshowSettings:all flag.

Conclusion

Next to java -version, we can also use java -XshowSettings:all -version to get more info about our Java environment.
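If you need the same details from inside a running application rather than a terminal, the standard system properties can be read programmatically. A small sketch using only standard java.lang APIs:

Java
public class JavaVersionInfo {
    public static void main(String[] args) {
        // The same values that -XshowSettings:properties prints
        System.out.println("java.version       = " + System.getProperty("java.version"));
        System.out.println("java.vendor        = " + System.getProperty("java.vendor"));
        System.out.println("java.vm.name       = " + System.getProperty("java.vm.name"));
        System.out.println("java.class.version = " + System.getProperty("java.class.version"));

        // Since Java 9, Runtime.version() offers structured access to the version
        Runtime.Version version = Runtime.version();
        System.out.println("feature release    = " + version.feature());
    }
}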

By Frank Delporte
New MacBook Air Beats M1 Max for Java Development

This is a shocker… I just switched laptops, and I thought I was downgrading from the “top of the line” M1 Max with 64 GB (14.1-inch version) to a “tiny” MacBook Air M2 with “only” 24 GB of RAM. Turns out I was wrong. The new M2 seems to be noticeably faster for my use cases as a Java developer. I was at first shocked, but in retrospect, I guess this makes sense.

I recently left my job at Lightrun. I usually buy my own laptops, as I don’t enjoy constantly switching devices when I’m at work or working on personal things. But since I worked at Lightrun for so long, I accepted their offer for a laptop. One year after I got the new laptop, I found myself leaving the company. So arguably, this should have been a big mistake. Turns out it wasn’t.

I wanted to buy the same machine. I was very pleased with the M1 Max. It’s powerful, fast, light, and has a battery that lasts forever. It runs cool and looks cool. I placed an order with a local vendor who had the worst possible service. I ended up canceling that. Then I started looking around. I initially dismissed the MacBook Airs. I had a couple of MacBook Airs in the past, and they were good for some things. But I do a lot of video editing nowadays, and I also can’t stand their sharp edges. They are uncomfortable to hold in some situations.

The new MacBook Airs finally increased the RAM. Not to 32 GB as I’d have wanted, but 24 GB is already a vast improvement over the minuscule 16 GB of older devices. They also come in black and cost much less than the equivalent Pro. MacBook Airs are lighter and thinner. I’m very mobile, both because I travel and because I work everywhere. For me, a thin and light device is a tremendous advantage. You can check out the comparison tool on Apple's website. They do have two big disadvantages:

No HDMI port - that sucks a bit. It was nice plugging in at conferences with no dongle. But it’s not something I do often, and AirPlay works for most other use cases.
Only one external monitor - I don’t use external monitors when I work, so this wasn’t a problem for me. If you’re the type that needs two monitors for work, then this isn’t the laptop for you.

Since both aren’t an issue for me and the other benefits are great, I saved some money and bought the Air. I expected to take a performance hit. Turns out I got a major performance boost!

Migration

I used Time Machine to back up my older Mac and restore it to the new Mac. In terms of installed software and settings, both devices should be identical. The stuff that was running on the old machine should be on the new machine as well, including all the legacy that might be slowing it down. However, I wouldn’t consider my findings scientific, as this isn’t a clean environment. Everything is based on my use cases. Professional sites have better benchmarks for common use cases. I suggest referring to them for a more complete picture. However, for me, the machine is MUCH better.

Probably most glaring is the IDE startup. I use IntelliJ IDEA Ultimate for most of my day-to-day work. I just started writing a new book after completing my latest book (which you can preorder now), and for that purpose, I installed a fresh version of IntelliJ Community Edition. It doesn’t include all the plugins and setup of my typical install. It’s the perfect environment to check IDE startup time. Notice that I measured this with a stopwatch app - not ideal. I started the stopwatch with the icon click and stopped when the IDE fully loaded the Codename One project.
MBP M1 Max 64 GB - 6.30
MBA M2 24 GB - 4.54

This is a sizeable gap in performance, and it’s consistent for these types of IO-bound operations. Running mvn package on the Codename One project on both machines showed slightly smaller but still consistent improvements. I ran this multiple times to make sure:

MBP M1 Max 64 GB - 20.211
MBA M2 24 GB - 18.346

These are not medians or averages, just the output of one specific execution. But I ran the build repeatedly, and the numbers were consistent, with a rough 2-second advantage to the M2. Notice I used the ARM build of the JDK and not the x86 versions.

As part of my work, I also create media and presentations. I work a lot in Keynote and export content from it. The next obvious test was to export a small presentation I made to both PDF and a movie. For the PDF export, I ran both exports with all the stages of the build included.

MBP M1 Max 64 GB - 2.8
MBA M2 24 GB - 2.13

Again, this shows a healthy advantage for the M2 device. But here’s the twist: when exporting to a movie, the benchmark flipped completely, and the MBP wiped the floor with the MBA.

MBP M1 Max 64 GB - 26.8
MBA M2 24 GB - 39.59

Explanation

In retrospect, these numbers shouldn’t have surprised me. The M2 would be faster for these sorts of loads; IO would be faster. The only point where the M1 would gain an advantage would be if the 24 GB of the Air were depleted. This isn’t likely for the current test, so the Air wins. Where the Air loses is in GPU-bound work. I’m assuming the movie export code does all the encoding on the GPU, which is huge and powerful on the M1 Max. I hope this won't be a problem for my video editing work, but I guess I’ll manage with what I have.

Even though the device is smaller by only one inch, the size difference is hard to get used to at this point. I worked on a MacBook Air in the past, so I’m sure this will pass as I get used to it. It’s a process. I’m thrilled with my decision, and the black device is such a refreshing feeling after all of those silver and gray Macs. The power brick is also much smaller, which is one of those details that matter so much to frequent travelers.

Why Am I Using a Mac?

This might be the obvious question. I don’t use an iPhone, so I might as well get a Linux laptop like a good hacker. I still develop things on Codename One; here, I occasionally need a Mac for iOS-related work. It’s not as often, but it happens. The second reason is that I’m pretty used to it by now. The desktop on Linux still doesn't feel as productive to me. There is one reason I considered going back to Linux, and that’s Docker. I love the M1/2 chips. They are fantastic. Unfortunately, many Docker images are Intel-only, and that’s pretty hard to work with when setting up anything sophisticated. The problem is solving itself as ARM machines gain traction. But we aren’t there yet.

Finally

Yes, I know. This article is shocking: a newer machine is faster than an older machine. But keep in mind that the M1 was top of the line in all regards, and the Air has half the performance cores. It's much thinner, fanless, and around 30% lighter. That's amazing for a single-generation update. Amazingly, I think the M2 is powerful enough in a MacBook Air for most people. I think I would pick it even if the M1 Max were at the same price point. It’s better looking. It’s lighter. Most of the things that matter to me perform better on the Air. It’s small but not too small, and the screen is pretty great. I can live with all of those. It doesn’t have that weird MBA sharp edge older versions had.
It’s a great machine. Hopefully, I’ll feel the same way when the honeymoon period is over, so if you’re reading this in 2023, feel free to comment/ping me; I might have additional insights. The one point I’m conflicted about is stickers. The black finish is so pretty, but I want stickers. I had such a hard time removing them from the M1 machine. It’s too soon…

By Shai Almog CORE
The Generic Way To Convert Between Java and PostgreSQL Enums

An enumerated type (enum) is a handy data type that allows us to specify a list of constants to which an object field or database column can be set. The beauty of enums is that we can enforce data integrity by providing the enum constants in a human-readable format. As a result, it’s unsurprising that this data type is natively supported in both Java and PostgreSQL.

However, the conversion between Java and PostgreSQL enums doesn’t work out of the box. The JDBC API doesn’t recognize enums as a distinct data type, leaving it up to the JDBC drivers to decide how to deal with the conversion. And, usually, the drivers do nothing about it: a chicken-and-egg problem. Many solutions help you map between Java and PostgreSQL enums, but most are ORM- or JDBC-specific. This means that what is suggested for Spring Data will not work for Quarkus and vice versa. In this article, I will review a generic way of handling the Java and PostgreSQL enums conversion. This approach works for the plain JDBC API and popular ORM frameworks such as Spring Data, Hibernate, Quarkus, and Micronaut. Moreover, it’s supported by databases built on PostgreSQL, including Amazon Aurora, Google AlloyDB, and YugabyteDB.

Creating Java Entity Object and Enum

Assume that we have a Java entity object for a pizza order:

Java
public class PizzaOrder {
    private Integer id;
    private OrderStatus status;
    private Timestamp orderTime;
    // getters and setters are omitted
}

The status field of the object is of an enumerated type defined as follows:

Java
public enum OrderStatus {
    Ordered, Baking, Delivering, YummyInMyTummy
}

The application sets the status to Ordered once we order a pizza online. The status changes to Baking as soon as the chef gets to our order. Once the pizza is freshly baked, it is picked up by someone and delivered to our door - the status is then updated to Delivering. In the end, the status is set to YummyInMyTummy, meaning that we enjoyed the pizza (hopefully!).

Creating Database Table and Enum

To persist the pizza orders in PostgreSQL, let’s create the following table that is mapped to our PizzaOrder entity class:

SQL
CREATE TABLE pizza_order (
    id int PRIMARY KEY,
    status order_status NOT NULL,
    order_time timestamp NOT NULL DEFAULT now()
);

The table comes with a custom type named order_status. The type is an enum that is defined as follows:

SQL
CREATE TYPE order_status AS ENUM(
    'Ordered', 'Baking', 'Delivering', 'YummyInMyTummy');

The type defines constants (statuses) similar to its Java counterpart.

Hitting the Conversion Issue

If we connect to PostgreSQL using psql (or another SQL tool) and execute the following INSERT statement, it will complete successfully:

SQL
insert into pizza_order (id, status, order_time) values (1, 'Ordered', now());

The statement nicely accepts the order status (the enum data type) in a text representation - Ordered. After seeing that, we may be tempted to send a Java enum value to PostgreSQL in the String format.
If we use the JDBC API directly, the PreparedStatement can look as follows:

Java
PreparedStatement statement = conn
    .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)");
statement.setInt(1, 1);
statement.setString(2, OrderStatus.Ordered.toString());
statement.setTimestamp(3, Timestamp.from(Instant.now()));
statement.executeUpdate();

However, the statement will fail with the following exception:

org.postgresql.util.PSQLException: ERROR: column "status" is of type order_status but expression is of type character varying
  Hint: You will need to rewrite or cast the expression.
  Position: 60
    at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675)
    at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365)
    at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355)
    at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490)
    at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408)

Even though PostgreSQL accepts the enum text representation when an INSERT/UPDATE statement is executed directly via a psql session, it doesn’t support the conversion between the varchar (passed by Java) and our enum type. One way to fix this for the plain JDBC API is by persisting the Java enum as an object of the java.sql.Types.OTHER type:

Java
PreparedStatement statement = conn
    .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)");
statement.setInt(1, 1);
statement.setObject(2, OrderStatus.Ordered, java.sql.Types.OTHER);
statement.setTimestamp(3, Timestamp.from(Instant.now()));
statement.executeUpdate();

But, as I said earlier, this approach is not generic. While it works for the plain JDBC API, you need to look for another solution if you are on Spring Data, Quarkus, or another ORM.

Casting Types at the Database Level

The database provides a generic solution. PostgreSQL supports cast operators that can perform a conversion between two data types automatically. So, in our case, all we need to do is create the following operator:

SQL
CREATE CAST (varchar AS order_status) WITH INOUT AS IMPLICIT;

The created operator will map between the varchar type (passed by the JDBC driver) and our database-level order_status enum type. The WITH INOUT AS IMPLICIT clause ensures that the cast will happen transparently and automatically for all the statements using the order_status type.

Testing With Plain JDBC API

After we create that cast operator in PostgreSQL, the earlier JDBC code snippet inserts an order with no issues:

Java
PreparedStatement statement = conn
    .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)");
statement.setInt(1, 1);
statement.setString(2, OrderStatus.Ordered.toString());
statement.setTimestamp(3, Timestamp.from(Instant.now()));
statement.executeUpdate();

All we need is to pass the Java enum value as a String; the driver will send it to PostgreSQL in the varchar representation, and PostgreSQL will automatically convert the varchar value to the order_status type.
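As a quick sanity check, you can also verify the new cast directly in psql before touching any Java code; the id value below is an arbitrary example:

SQL
-- A typed varchar value is now implicitly converted to order_status
insert into pizza_order (id, status) values (2, 'Baking'::varchar);
select id, status from pizza_order where status = 'Baking'::varchar;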
If you read the order back from the database, you can easily reconstruct the Java-level enum from its String value:

Java
PreparedStatement statement = conn.prepareStatement("SELECT id, status, order_time " +
    "FROM pizza_order WHERE id = ?");
statement.setInt(1, 1);
ResultSet resultSet = statement.executeQuery();
resultSet.next();

PizzaOrder order = new PizzaOrder();
order.setId(resultSet.getInt(1));
order.setStatus(OrderStatus.valueOf(resultSet.getString(2)));
order.setOrderTime(resultSet.getTimestamp(3));

Testing With Spring Data

Next, let’s validate the cast operator-based approach with Spring Data. Nowadays, you’re more likely to use an ORM than the JDBC API directly. First, we need to label our PizzaOrder entity class with a few JPA and Hibernate annotations:

Java
@Entity
public class PizzaOrder {
    @Id
    private Integer id;

    @Enumerated(EnumType.STRING)
    private OrderStatus status;

    @CreationTimestamp
    private Timestamp orderTime;
    // getters and setters are omitted
}

The @Enumerated(EnumType.STRING) annotation instructs a JPA implementation (usually Hibernate) to pass the enum value as a String to the driver. Second, we create PizzaOrderRepository and save an entity object using the Spring Data API:

Java
// The repository interface
public interface PizzaOrderRepository extends JpaRepository<PizzaOrder, Integer> {
}

// The service class
@Service
public class PizzaOrderService {
    @Autowired
    PizzaOrderRepository repo;

    @Transactional
    public void addNewOrder(Integer id) {
        PizzaOrder order = new PizzaOrder();
        order.setId(id);
        order.setStatus(OrderStatus.Ordered);
        repo.save(order);
    }
    ...
}

// Somewhere in the source code
pizzaService.addNewOrder(1);

When the pizzaService.addNewOrder(1) method is called somewhere in our source code, the order will be created and persisted successfully to the database. The conversion between the Java and PostgreSQL enums will occur with no issues. Lastly, if we need to read the order back from the database, we can use the JpaRepository.findById(ID id) method, which recreates the Java enum from its String representation:

Java
PizzaOrder order = repo.findById(orderId).get();
System.out.println("Order status: " + order.getStatus());

Testing With Quarkus

How about Quarkus, which might be your framework of choice? There is no significant difference from Spring Data, since Quarkus also favours Hibernate as a JPA implementation. First, we annotate our PizzaOrder entity class with JPA and Hibernate annotations:

Java
@Entity(name = "pizza_order")
public class PizzaOrder {
    @Id
    private Integer id;

    @Enumerated(EnumType.STRING)
    private OrderStatus status;

    @CreationTimestamp
    @Column(name = "order_time")
    private Timestamp orderTime;
    // getters and setters are omitted
}

Second, we introduce a PizzaOrderService that uses an EntityManager instance for database requests:

Java
@ApplicationScoped
public class PizzaOrderService {
    @Inject
    EntityManager entityManager;

    @Transactional
    public void addNewOrder(Integer id) {
        PizzaOrder order = new PizzaOrder();
        order.setId(id);
        order.setStatus(OrderStatus.Ordered);
        entityManager.persist(order);
    }
    ...
}

// Somewhere in the source code
pizzaService.addNewOrder(1);

When we call pizzaService.addNewOrder(1) somewhere in our application logic, Quarkus will persist the order successfully, and PostgreSQL will take care of the Java and PostgreSQL enums conversion.
Finally, to read the order back from the database, we can use the following EntityManager method, which maps the data from the result set to the PizzaOrder entity class (including the enum field):

Java
PizzaOrder order = entityManager.find(PizzaOrder.class, 1);
System.out.println("Order status: " + order.getStatus());

Testing With Micronaut

Alright, alright, how about Micronaut? I love this framework, and you might favour it as well. The database-side cast operator is a perfect solution for Micronaut, too. To make things a little different, we won’t use Hibernate for Micronaut. Instead, we’ll rely on Micronaut’s own capabilities by using the micronaut-data-jdbc module:

XML
<dependency>
    <groupId>io.micronaut.data</groupId>
    <artifactId>micronaut-data-jdbc</artifactId>
</dependency>
<!-- other dependencies -->

First, let’s annotate the PizzaOrder entity:

Java
@MappedEntity
public class PizzaOrder {
    @Id
    private Integer id;

    @Enumerated(EnumType.STRING)
    private OrderStatus status;

    private Timestamp orderTime;
    // getters and setters are omitted
}

Next, define the PizzaRepository:

Java
@JdbcRepository(dialect = Dialect.POSTGRES)
public interface PizzaRepository extends CrudRepository<PizzaOrder, Integer> {
}

Then, store a pizza order in the database by invoking the following code snippet somewhere in the application logic:

Java
PizzaOrder order = new PizzaOrder();
order.setId(1);
order.setStatus(OrderStatus.Ordered);
order.setOrderTime(Timestamp.from(Instant.now()));
repository.save(order);

As with Spring Data and Quarkus, Micronaut persists the object to PostgreSQL with no issues, letting the database handle the conversion between the Java and PostgreSQL enum types. Finally, whenever we need to read the order back from the database, we can use the repository API:

Java
PizzaOrder order = repository.findById(id).get();
System.out.println("Order status: " + order.getStatus());

The findById(ID id) method retrieves the record from the database and recreates the PizzaOrder entity, including the PizzaOrder.status field of the enum type.

Wrapping Up

Nowadays, it’s highly likely that you will use Java enums in your application logic and, as a result, will need to persist them to a PostgreSQL database. You can use an ORM-specific solution for the conversion between Java and PostgreSQL enums, or you can take advantage of the generic approach based on the cast operator of PostgreSQL. The cast operator-based approach works for all ORMs, including Spring Data, Hibernate, Quarkus, and Micronaut, as well as popular PostgreSQL-compliant databases like Amazon Aurora, Google AlloyDB, and YugabyteDB.

By Denis Magda CORE
Kubernetes Remote Development in Java Using Kubernetes Maven Plugin

Introduction

In this article, we’re going to look at some pain points of developing Java applications on top of Kubernetes. We’re going to look at newly added functionality in Eclipse JKube’s Kubernetes Maven Plugin that allows your application running on your local machine to be exposed in a Kubernetes cluster. If you haven’t heard about Eclipse JKube or the Kubernetes Maven Plugin, I’d suggest you read the following DZone articles first:

DZone: Deploy Maven Apps to Kubernetes With JKube Kubernetes Maven Plugin
DZone: Containerize Gradle Apps and Deploy to Kubernetes With JKube Kubernetes Gradle Plugin

Target Audience: This blog post targets Java developers who are working with Kubernetes and are familiar with containerized application development. We’re assuming that the reader has experience with Docker and Kubernetes. Eclipse JKube’s Kubernetes remote development functionality is suitable for Java developers working on applications that communicate with several microservices in Kubernetes, a setup that is difficult to replicate on a local machine.

Current Solutions

Docker Compose: You can provide your own YAML file to configure your application services and start all services from that YAML configuration. This is limited to Docker environments. Sometimes these services can be impossible to start due to resource constraints. Also, we may not be allowed to duplicate sensitive data locally.
Dev Services: Some popular frameworks also support the automatic provisioning of dependent services in development/testing environments. Developers only need to worry about enabling this feature, and the framework takes care of starting the service and wiring it with your application. This is also limited to Docker environments.
Build and Deploy Tooling: Use Kubernetes-related tooling to deploy all dependent services and then deploy your application to Kubernetes. This is not as smooth as the previous alternatives, which are limited to Docker, and building and deploying the application on every small change leads to slower development iterations.

What Is Eclipse JKube Kubernetes Remote Development?

Our team at Eclipse Cloud Tooling is focused on creating tools that ease developer activity and development workflow across distributed services. While working on and testing the Kubernetes Maven Plugin, we noticed that repeatedly building and deploying applications to Kubernetes while developing locally isn’t the most effective way of working. In v1.10.1 of the Kubernetes Maven Plugin, we added a new goal: k8s:remote-dev. This goal tries to ease the Java developer workflow across distributed services via:

Consuming remote services that are running inside the Kubernetes cluster
Live application coding while interacting with other services running in the Kubernetes cluster
Exposing applications running locally or by connecting to remote services

Why Kubernetes Remote Development?

Let’s consider a scenario where we’re writing a joke microservice that tries to fetch joke strings from other microservices. Here is a diagram for you to get a better understanding:

Figure 1: Simple Joke application using two existing services

Custom Joke Service is our main application, which has one endpoint, /random-joke. It depends on two other microservices, ChuckNorris and Jokes, via their /chuck-norris and /joke endpoints, respectively. The user requests a joke using the /random-joke endpoint, and our application fetches a joke string from one of the two microservices randomly.
In order to develop and test our application, we need access to the dependent ChuckNorris and Jokes services. Let’s see what the developer’s workflow would look like:

While developing and testing the Custom Joke microservice locally, the developer has to set up the dependent microservices locally again and again.
In order to verify the application is working properly in Kubernetes, the developer has to build, package, and deploy the Custom Joke application to Kubernetes in every development iteration.
The dependent services (ChuckNorris and Jokes) might be quite heavyweight and might have some dependent services of their own. It might not be straightforward to set them up locally in the developer’s environment.

Exposing Remote Kubernetes Services Locally

Let’s assume you have two applications already running in the Kubernetes cluster on which your current application is dependent:

$ kubectl get svc
NAME       TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
service1   NodePort   10.101.224.227   <none>        8080:31878/TCP   113s
service2   NodePort   10.101.224.227   <none>        8080:31879/TCP   113s

Let us expose these remote services running in the Kubernetes cluster to our local machine. Here is a diagram for you to better understand:

Figure 2: JKube's remote development simplifying remote development

In order to do that, we need to provide XML configuration to our plugin for exposing these services:

XML
<plugin>
    <groupId>org.eclipse.jkube</groupId>
    <artifactId>kubernetes-maven-plugin</artifactId>
    <version>${jkube.version}</version>
    <configuration>
        <remoteDevelopment>
            <remoteServices>
                <remoteService>
                    <hostname>service1</hostname> <!-- Name of Service -->
                    <port>8080</port> <!-- Service port -->
                    <localPort>8081</localPort> <!-- Local port where to expose -->
                </remoteService>
                <remoteService>
                    <hostname>service2</hostname> <!-- Name of Service -->
                    <port>8080</port> <!-- Service port -->
                    <localPort>8082</localPort> <!-- Local port where to expose -->
                </remoteService>
            </remoteServices>
        </remoteDevelopment>
    </configuration>
</plugin>

The above configuration does two things:

Expose the Kubernetes service named service1 on port 8081 on your local machine
Expose the Kubernetes service named service2 on port 8082 on your local machine

Run the Kubernetes remote development goal:

Shell
$ mvn k8s:remote-dev
[INFO] Scanning for projects...
[INFO]
[INFO] -----------< org.eclipse.jkube.demos:random-jokes-generator >-----------
[INFO] Building random-jokes-generator 1.0.0-SNAPSHOT
[INFO] --------------------------------[ jar ]---------------------------------
[INFO]
[INFO] --- kubernetes-maven-plugin:1.10.1:remote-dev (default-cli) @ random-jokes-generator ---
[INFO] k8s: Waiting for JKube remote development Pod [jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e] to be ready...
[INFO] k8s: JKube remote development Pod [jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e] is ready
[INFO] k8s: Opening remote development connection to Kubernetes: jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e:54252
[INFO] k8s: Kubernetes Service service1:8080 is now available at local port 8081
[INFO] k8s: Kubernetes Service service2:8080 is now available at local port 8082

Try accessing the services now available locally on those ports:

Shell
$ curl localhost:8081/
Chuck Norris's OSI network model has only one layer - Physical.
$ curl localhost:8082/
Why do Java programmers have to wear glasses? Because they don't C#.

As you can see, you are able to access the Kubernetes services service1 and service2 locally on ports 8081 and 8082, respectively.
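The goal also advertises the reverse direction: exposing an application running on your local machine as a service inside the cluster. Below is a sketch of what such a configuration could look like, mirroring the remoteServices style above; the element names and the my-local-app service name are assumptions on my part, so check the JKube documentation for the exact schema:

XML
<configuration>
    <remoteDevelopment>
        <localServices>
            <localService>
                <serviceName>my-local-app</serviceName> <!-- assumed name of the Service to create in the cluster -->
                <port>8080</port> <!-- local port your application listens on -->
            </localService>
        </localServices>
    </remoteDevelopment>
</configuration>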
Conclusion

In this article, you learned about Eclipse JKube’s Kubernetes Maven Plugin’s remote development goal and how you can expose your local applications to the Kubernetes cluster and vice versa. In case you’re interested in knowing more about Eclipse JKube, you can check these links:

Documentation
GitHub
Issue Tracker
StackOverflow
YouTube Channel
Twitter
Gitter Chat

By Rohan Kumar
How To Build a Command-Line Text Editor With Java (Part 3)

Let's continue building our Java-based, command-line text editor that we started in Part 1 and Part 2. Here in Part 3, we will cover the following:

How to implement page-up and page-down functionality
How to make the end key work properly, including cursor snapping
How to make our text editor work on all operating systems, including macOS and Windows - not just Linux

You're in for a ride!

What’s in the Video

In the previous episode, we got vertical scrolling with the arrow keys working. Unfortunately, when you press page up or page down, nothing happens - and we need to change that! We'll use a tiny trick to simulate the page up/down functionality, mapping it to pressing the arrow up/down key for the number of rows our screen has. It serves as a good initial implementation, though there are a couple of edge cases we need to iron out.

Once we have page up and down working, it's time to take care of horizontal scrolling. At the moment, our text viewer renders lines overflowing the screen, leading to heavy flickering whenever we move our cursor vertically. Ideally, we only want to render as much text as we have columns on the screen - and then we want to move the screen's contents horizontally whenever we press the left or right keys at the beginning or end of the screen. To implement horizontal scrolling, we can take most of the code for vertical scrolling, copy and paste it, and just replace a couple of key variables - done!

After horizontal scrolling, let's take care of a couple of minor editing issues: first of all, the end key. It currently makes the cursor jump to the end of the screen. Ideally, we'd like the end key to only jump to the end of the current line. With a couple of small changes to our moveCursor() function, we can implement that behavior. This opens up another problem: when we are at the end of a line and then move vertically upwards or downwards, we also want to automatically snap to the end of the new line, not just end up somewhere in the middle. So, we'll need to fix our cursor-snapping implementation; a sketch of that logic follows below. In between, I'll leave a couple of notes for you regarding cursor line wrapping. We don't have enough time to implement it in this episode, but it would serve as a great exercise for you, the watcher, to implement.

Last but not least, we'll need to fix a couple of issues for our macOS and Windows platform support. The issue with macOS is that while it uses the same OS APIs as Linux, it uses different values for the OS calls. Hence, we'll need to invent an abstraction/delegation layer that detects whether the current OS is macOS or Linux and then uses the corresponding OS-specific classes. Windows uses a completely different API to put terminals into raw mode or get the current terminal size, and we'll have to dig deep into Microsoft's API documentation to find out which Windows methods we'll need to implement on our JNA side.

That's it for today! See you in the next episode, where we'll implement searching across your text file.
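To make the end-key and snapping behavior concrete, here is a minimal, self-contained sketch of the clamping logic described above; the field and method names (lines, cursorX, cursorY, moveCursor) are stand-ins, not the episode's actual code:

Java
import java.util.List;

class CursorSnappingSketch {
    static final int ARROW_UP = 1000, ARROW_DOWN = 1001, END = 1002;

    List<String> lines = List.of("short", "a much longer line of text", "mid-sized");
    int cursorX = 0; // column
    int cursorY = 0; // row

    void moveCursor(int key) {
        switch (key) {
            case ARROW_UP -> { if (cursorY > 0) cursorY--; }
            case ARROW_DOWN -> { if (cursorY < lines.size() - 1) cursorY++; }
            case END -> cursorX = lines.get(cursorY).length(); // end of line, not end of screen
        }
        // Cursor snapping: after a vertical move, the new line may be shorter
        // than the old column position, so clamp the column to the line length.
        cursorX = Math.min(cursorX, lines.get(cursorY).length());
    }
}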

By Marco Behler CORE
Run Java Microservices Across Multiple Cloud Regions With Spring Cloud

If you want to run your Java microservices on public cloud infrastructure, you should take advantage of multiple cloud regions. There are several reasons why this is a good idea.

First, cloud availability zones and regions fail regularly due to hardware issues, bugs introduced after a cloud service upgrade, or banal human errors. One of the most well-known S3 outages happened when an AWS employee messed with an operational command! If a cloud region fails, so do your microservices from that region. But if you run microservice instances across multiple cloud regions, you remain up and running even if an entire US East region is melting.

Second, you may choose to deploy microservices in the US East, but the application gets traction across the Atlantic in Europe. The roundtrip latency for users from Europe to your application instances in the US East will be around 100 ms. Compare this to the 5 ms roundtrip latency for user traffic originating from the US East (near the data centers running microservices), and don't be surprised when European users say your app is slow. You shouldn't hear this negative feedback if microservice instances are deployed in both the US East and Europe West regions.

Finally, suppose a Java microservice serves a user request from Europe but requests data from a database instance in the USA. In that case, you might fall foul of data residency requirements (if the requested data is classified as personal by GDPR). However, if the microservice instance runs in Europe and gets the personal data from a database instance in one of the European cloud regions, you won't have the same problems with regulators.

This was a lengthy introduction to the article's main topic, but I wanted you to see a few benefits of running Java microservices in multiple distant cloud locations. Now, let's move on to the main topic and see how to develop and deploy multi-region microservices with Spring Cloud.

High-Level Concept

Let’s take a geo-distributed Java messenger as an example to form a high-level understanding of how microservices and Spring Cloud function in a multi-region environment. The application (comprised of multiple microservices) runs across multiple distant regions: US West, US Central, US East, Europe West, and Asia South. All application instances are stateless. Spring Cloud components operate in the same regions where the application instances are located. The application uses Spring Config Server for configuration settings distribution and Spring Discovery Server for smooth and fault-tolerant inter-service communication. YugabyteDB is selected as a distributed database that can easily function across distant locations. Plus, since it’s built on the PostgreSQL source code, it naturally integrates with Spring Data and other components of the Spring ecosystem. I’m not going to review YugabyteDB multi-region deployment options in this article; check out this article if you’re curious about those options and how to select the best one for this geo-distributed Java messenger.

The user traffic gets to the microservice instances via a Global External Cloud Load Balancer. In short, the load balancer comes with a single IP address that can be accessed from any point on the planet. That IP address (or a DNS name that translates to the address) is given to your web or mobile front end, which uses the IP to connect to the application backend. The load balancer forwards user requests to the nearest application instance automatically.
I’ll demonstrate this cloud component in greater detail below.

Target Architecture

The target architecture of the multi-region Java messenger looks like this:

The whole solution runs on the Google Cloud Platform. You might prefer another cloud provider, so feel free to go with it. I usually default to Google for its developer experience, abundant and reasonably priced infrastructure, fast and stable network, and other goodies I’ll be referring to throughout the article. The microservice instances can be deployed in as many cloud regions as necessary. In the picture above, there are two sample regions: Region A and Region B. Microservice instances can run in several availability zones of a region (Zones A and B of Region A) or within a single zone (Zone A of Region B). It would also be reasonable to have a single instance of the Spring Discovery and Config servers per region, but I purposefully run an instance of each server per availability zone to bring the latency to a minimum.

Who decides which microservice instance will serve a user request? Well, the Global External Load Balancer is the decision-maker! Suppose a user pulls up her phone, opens the Java messenger, and sends a message. The request with the message will go to the load balancer, and it might forward it this way: Region A is the closest to the user, and it’s healthy at the time of the request (no outages). The load balancer selects this region based on those conditions. In that region, microservice instances are available in both Zones A and B, so the load balancer can pick either zone if both are live and healthy. Let’s suppose that the request went to Zone B.

I’ll explain what each microservice is responsible for in the next section. As of now, all you should know is that the Messenger microservice stores all application data (messages, channels, user profiles, etc.) in a multi-region YugabyteDB deployment, and the Attachments microservice uses globally distributed Google Cloud Storage for user pictures.

Microservices and Spring Cloud

Let’s talk more about the microservices and how they utilize Spring Cloud. The Messenger microservice implements the key functionality that every messenger app must possess: the ability to send messages across channels and workspaces. The Attachments microservice uploads pictures and other files. You can check their source code in the geo-messenger’s repository.

Spring Cloud Config Server

Both microservices are built on Spring Boot. When they start, they retrieve configuration settings from the Spring Cloud Config Server, which is an excellent option if you need to externalize the config files in a distributed environment. The config server can host and pull your configuration from various backends, including a Git repository, Vault, and a JDBC-compliant database. In the case of the Java geo-messenger, the Git option is used, and the following line from the application.properties file of both microservices requests Spring Boot to load the settings from the Config Server:

Properties
spring.config.import=configserver:http://${CONFIG_SERVER_HOST}:${CONFIG_SERVER_PORT}

Spring Cloud Discovery Server

Once the Messenger and Attachments microservices are booted, they register with their zone-local instance of the Spring Cloud Discovery Server (which belongs to the Spring Cloud Netflix component).
The location of a Discovery Server instance is defined in the following configuration setting, which is delivered by the Config Server:

Properties
eureka.client.serviceUrl.defaultZone=http://${DISCOVERY_SERVER_HOST}:${DISCOVERY_SERVER_PORT}/eureka

You can also open the HTTP address in the browser to confirm the services have successfully registered with the Discovery Server:

The microservices register with the server using the name you pass via the spring.application.name setting of the application.properties file. As the above picture shows, I’ve chosen the following names:

spring.application.name=messenger for the Messenger microservice
spring.application.name=attachments for the Attachments service

The microservice instances use those names to locate and send requests to each other via the Discovery Server. For example, when a user wants to upload a picture in a discussion channel, the request goes to the Messenger service first. But then, the Messenger delegates this task to the Attachments microservice with the help of the Discovery Server. First, the Messenger service gets an instance of its Attachments counterpart:

Java
List<ServiceInstance> serviceInstances = discoveryClient.getInstances("ATTACHMENTS");
ServiceInstance instance = null; // initialized so the code compiles even if no instance is found

if (!serviceInstances.isEmpty()) {
    instance = serviceInstances
        .get(ThreadLocalRandom.current().nextInt(0, serviceInstances.size()));
}

System.out.printf("Connected to service %s with URI %s\n", instance.getInstanceId(), instance.getUri());

Next, the Messenger microservice creates an HTTP client using the Attachments instance URI and sends a picture via an InputStream:

Java
HttpClient httpClient = HttpClient.newBuilder().build();

HttpRequest request = HttpRequest.newBuilder()
    .uri(URI.create(instance.getUri() + "/upload?fileName=" + fileName))
    .header("Content-Type", mimeType)
    .POST(HttpRequest.BodyPublishers.ofInputStream(new Supplier<InputStream>() {
        @Override
        public InputStream get() {
            return inputStream;
        }
    })).build();

The Attachments service receives the request via a REST endpoint and eventually stores the picture in Google Cloud Storage, returning a picture URL to the Messenger microservice:

Java
public Optional<String> storeFile(String filePath, String fileName, String contentType) {
    if (client == null) {
        initClient();
    }

    String objectName = generateUniqueObjectName(fileName);
    BlobId blobId = BlobId.of(bucketName, objectName);
    BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build();

    try {
        client.create(blobInfo, Files.readAllBytes(Paths.get(filePath)));
    } catch (IOException e) {
        System.err.println("Failed to load the file:" + fileName);
        e.printStackTrace();
        return Optional.empty();
    }

    System.out.printf("File %s uploaded to bucket %s as %s %n", filePath, bucketName, objectName);

    String objectFullAddress = "http://storage.googleapis.com/" + bucketName + "/" + objectName;
    System.out.println("Picture public address: " + objectFullAddress);

    return Optional.of(objectFullAddress);
}

If you’d like to explore the complete implementation of the microservices and how they communicate via the Discovery Server, visit the GitHub repo linked earlier in this article.

Deploying on Google Cloud Platform

Now, let’s deploy the Java geo-messenger on GCP across three geographies and five cloud regions: North America (us-west2, us-central1, us-east4), Europe (europe-west3), and Asia (asia-east1). Follow these deployment steps:

1. Create a Google project.
2. Create a custom premium network.
3. Configure Google Cloud Storage.
4. Create instance templates for VMs.
5. Start VMs with application instances.
6. Configure the Global External Load Balancer.

I’ll skip the detailed instructions for the steps above; you can find them here. Instead, let me use the illustration below to clarify why the premium Google network was selected in step #2:

Suppose an application instance is deployed in the USA on GCP, and the user connects to the application from India. There are slow and fast routes to the app from the user’s location. The slow route is taken if you select the Standard Network for your deployment. In this case, the user request travels over the public Internet, entering and exiting the networks of many providers before getting to the USA. Eventually, in the USA, the request gets to Google’s PoP (Point of Presence) near the application instance, enters the Google network, and gets to the application. The fast route is selected if your deployment uses the Premium Network. In this case, the user request enters the Google network at the PoP closest to the user and never leaves it. That PoP is in India, and the request will speed to the application instance in the USA via a fast and stable connection. Plus, the Cloud External Load Balancer requires the premium tier; otherwise, you won’t be able to intercept user requests at the nearest PoP and forward them to the nearby application instances.

Testing Fault Tolerance

Once the microservices are deployed across continents, you can witness how the Cloud Load Balancer functions at normal times and during outages. Open an IP address used by the load balancer in your browser and send a few messages with photos in one of the discussion channels:

Which instances of the Messenger and Attachments microservices served your last requests? Well, it depends on where you are in the world. In my case, the instances from the US East (ig-us-east) serve my traffic:

What would happen to the application if the US East region became unavailable, bringing down all microservices in that location? Not a problem for my multi-region deployment. The load balancer will detect issues in the US East and forward my traffic to the next closest location. In this case, the traffic is forwarded to Europe, since I live on the US East Coast near the Atlantic Ocean:

To emulate the US East region outage, I connected to the VM in that region and shut down all of the microservices. The load balancer detected that the microservices no longer responded in that region and started forwarding my traffic to a European data center. Enjoy the fault tolerance out of the box!

Testing Performance

Apart from fault tolerance, if you deploy Java microservices across multiple cloud regions, your application can serve user requests at low latency regardless of the users' location. To make this happen, first, you need to deploy the microservice instances in the cloud locations where most of your users live and configure the Global External Load Balancer that does the routing for you. This is what I discussed in "Automating Java Application Deployment Across Multiple Cloud Regions." Second, you need to arrange your data properly in those locations. Your database needs to function across multiple regions, the same as the microservice instances. Otherwise, the latency between the microservices and the database will be high, and overall performance will be poor. In the discussed architecture, I used YugabyteDB, as it is a distributed SQL database that can be deployed across multiple cloud regions.
The article, "Geo-Distributed Microservices and Their Database: Fighting the High Latency" shows how latency and performance improve if YugabyteDB stores data close to your microservice instances. Think of that article as the continuation of this story, but with a focus on database deployment. As a spoiler, I improved latency from 450ms to 5ms for users who used the Java messenger from South Asia. Wrapping Up If you develop Java applications for public cloud environments, you should utilize the global cloud infrastructure by deploying application instances across multiple regions. This will make your solution more resilient, performant, and compliant with the data regulatory requirements. It‘s important to remember that it’s not that difficult to create microservices that function and coordinate across distant cloud locations. The Spring ecosystem provides you with the Spring Cloud framework, and public cloud providers like Google offer the infrastructure and services needed to make things simple

By Denis Magda CORE
A Maven Archetype for Jakarta EE 10 Applications

Jakarta EE 10 is probably the most important event of this year in the Java world. Since this fall, software vendors providing Jakarta EE-compliant platforms have been working hard to validate their respective implementations against the TCK (Technology Compatibility Kit) supplied by the Eclipse Foundation. Payara, as much as bigger companies like Oracle, Red Hat, or IBM, isn't being left behind and, as of last September, announced the availability of the Payara 6 Platform, available in three versions: Server, Micro, and Cloud. As an implementation of Jakarta EE 10 Web, Core, and Micro Profile, Payara Server 6 is itself offered in two editions: Community and Enterprise.

But how does this impact Java developers? What does it mean in terms of application development and portability? Is it easier or more difficult to write and deploy code compliant with the new specifications than it was with release 9 or 8 of the Jakarta EE drafts? Well, it depends. While any new Jakarta EE release aims at simplifying the whole API (Application Programming Interface) set and at facilitating developers' work, the fact that, out of a total of 20 specifications, 16 have been updated and a new one has been added shows how dynamic the communities and working groups involved in this process are. And that isn't without some difficulties when trying to transition to the newest releases with minimal impact. This is especially true when it comes to combining, for example, JAX-RS 4.0 and its implementation by Jersey 3.1.0 with JSON-B 3.0 and its Yasson provider by Eclipse, or when experiencing a NoSuchMethodException due to an inconvenient combination of versions, or when noticing that a transitive Maven dependency, pulled in by a not-yet-updated library like RESTassured, still uses the old javax namespace.

In order to avoid all these troubles, as marginal as they may be, one of the most practical solutions is to use Maven archetypes. A Maven archetype is a set of templates used to generate a Java project skeleton. The templates use Velocity placeholders which, at generation time, are replaced by actual values that make sense in the current context. Hence, software vendors, different OSS communities and working groups, or even individuals provide such Maven archetypes. The Apache community, for example, provides several hundred such Maven archetypes, and one may find one for almost any type of Java project. The advantage of using them is that developers can generate a basic and clean skeleton of their Java project, on which they can build while avoiding some minor but painful annoyances. Jakarta EE 10 is so recent that most of its implementations are still in beta testing, and consequently, Maven archetypes dedicated to Java projects using this release aren't yet available. In this blog post, I'm demonstrating such an archetype, which generates a Jakarta EE 10 web application skeleton and its associated artifacts to be deployed on a Payara 6 server. The code might be found here.
[Figure: the structure of our Maven archetype]

A Maven archetype is a Maven project like any other and, as such, it is driven by a pom.xml file, the most essential part of which is reproduced below:

XML
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>fr.simplex-software.archetypes</groupId>
  <artifactId>jakartaee10-basic-archetype</artifactId>
  <version>1.0-SNAPSHOT</version>
  <name>Basic Java EE 10 project archetype</name>
  ...
  <packaging>maven-archetype</packaging>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
  </properties>
  <build>
    <extensions>
      <extension>
        <groupId>org.apache.maven.archetype</groupId>
        <artifactId>archetype-packaging</artifactId>
        <version>3.1.1</version>
      </extension>
    </extensions>
  </build>
</project>

The only notable thing our pom.xml file needs to contain is the declaration of the archetype-packaging build extension, which takes care of packaging the project as a Maven archetype. To build and install the archetype, proceed as follows:

Shell
$ git clone https://github.com/nicolasduminil/jakartaee10-basic-archetype.git
$ cd jakartaee10-basic-archetype
$ mvn clean install
[INFO] Scanning for projects...
[INFO]
[INFO] -----< fr.simplex-software.archetypes:jakartaee10-basic-archetype >-----
[INFO] Building Basic Java EE 10 project archetype 1.0-SNAPSHOT
[INFO] --------------------------[ maven-archetype ]---------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ jakartaee10-basic-archetype ---
[INFO]
[INFO] --- maven-resources-plugin:3.3.0:resources (default-resources) @ jakartaee10-basic-archetype ---
[INFO] Copying 11 resources
[INFO]
[INFO] --- maven-resources-plugin:3.3.0:testResources (default-testResources) @ jakartaee10-basic-archetype ---
[INFO] skip non existing resourceDirectory /home/nicolas/jakartaee10-basic-archetype/src/test/resources
[INFO]
[INFO] --- maven-archetype-plugin:3.2.1:jar (default-jar) @ jakartaee10-basic-archetype ---
[INFO] Building archetype jar: /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar
[INFO] Building jar: /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-archetype-plugin:3.2.1:integration-test (default-integration-test) @ jakartaee10-basic-archetype ---
[WARNING] No Archetype IT projects: root 'projects' directory not found.
[INFO]
[INFO] --- maven-install-plugin:3.1.0:install (default-install) @ jakartaee10-basic-archetype ---
[INFO] Installing /home/nicolas/jakartaee10-basic-archetype/pom.xml to /home/nicolas/.m2/repository/fr/simplex-software/archetypes/jakartaee10-basic-archetype/1.0-SNAPSHOT/jakartaee10-basic-archetype-1.0-SNAPSHOT.pom
[INFO] Installing /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar to /home/nicolas/.m2/repository/fr/simplex-software/archetypes/jakartaee10-basic-archetype/1.0-SNAPSHOT/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar
[INFO]
[INFO] --- maven-archetype-plugin:3.2.1:update-local-catalog (default-update-local-catalog) @ jakartaee10-basic-archetype ---
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.151 s
[INFO] Finished at: 2022-12-02T13:32:35+01:00
[INFO] ------------------------------------------------------------------------

Here we first clone the Git repository containing our archetype and then install it in our local Maven repository, so that we can use it to generate projects. As explained earlier, our archetype is a set of template files located in the directory src/main/resources/archetype-resources. These files use the Velocity notation to express placeholders that are processed and replaced during the generation. For example, look at the file MyResource.java, which exposes a simple REST API:

Java
package $package;

import jakarta.ws.rs.*;
import jakarta.ws.rs.core.*;
import jakarta.inject.*;

import org.eclipse.microprofile.config.inject.*;

@Path("myresource")
public class MyResource {
    @Inject
    @ConfigProperty(name = "message")
    private String message;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String getIt() {
        return message;
    }
}

Here, the placeholder $package will be replaced by the actual Java package name of the generated class. The full set of resources included in the generated project is described by the file archetype-metadata.xml, located in src/main/resources/META-INF/maven.
XML
<archetype-descriptor
    xsi:schemaLocation="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0 http://maven.apache.org/xsd/archetype-descriptor-1.0.0.xsd"
    xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    name="jakarta-ee-10-webapp">
  <fileSets>
    <fileSet filtered="true" packaged="false" encoding="UTF-8">
      <directory>src/main/java</directory>
      <includes>
        <include>**/*.java</include>
      </includes>
    </fileSet>
    <fileSet filtered="true" packaged="false" encoding="UTF-8">
      <directory>src/test/java</directory>
      <includes>
        <include>**/*.java</include>
      </includes>
    </fileSet>
    <fileSet filtered="true" packaged="false" encoding="UTF-8">
      <directory>src/main/resources</directory>
      <includes>
        <include>**/*.xml</include>
        <include>**/*.properties</include>
      </includes>
    </fileSet>
    <fileSet filtered="false" packaged="false" encoding="UTF-8">
      <directory>src/main/webapp</directory>
      <includes>
        <include>index.jsp</include>
      </includes>
    </fileSet>
    <fileSet filtered="false" packaged="false" encoding="UTF-8">
      <directory></directory>
      <includes>
        <include>.gitignore</include>
      </includes>
    </fileSet>
    <fileSet filtered="true" packaged="false" encoding="UTF-8">
      <directory></directory>
      <includes>
        <include>README.md</include>
        <include>Dockerfile</include>
        <include>build.sh</include>
      </includes>
    </fileSet>
  </fileSets>
</archetype-descriptor>

The syntax above is self-descriptive and probably already familiar to all Maven users. Once we have installed our Maven archetype in the local Maven repository, we can proceed with the generation process:

Shell
$ cd example
$ ../jakartaee10-basic-archetype/generate.sh
[INFO] Scanning for projects...
[INFO]
[INFO] ------------------< org.apache.maven:standalone-pom >-------------------
[INFO] Building Maven Stub Project (No POM) 1
[INFO] --------------------------------[ pom ]---------------------------------
[INFO]
[INFO] >>> maven-archetype-plugin:3.2.1:generate (default-cli) > generate-sources @ standalone-pom >>>
[INFO]
[INFO] <<< maven-archetype-plugin:3.2.1:generate (default-cli) < generate-sources @ standalone-pom <<<
[INFO]
[INFO]
[INFO] --- maven-archetype-plugin:3.2.1:generate (default-cli) @ standalone-pom ---
[INFO] Generating project in Batch mode
[INFO] Archetype repository not defined.
Using the one from [fr.simplex-software.archetypes:jakartaee10-basic-archetype:1.0-SNAPSHOT] found in catalog local
[INFO] ----------------------------------------------------------------------------
[INFO] Using following parameters for creating project from Archetype: jakartaee10-basic-archetype:1.0-SNAPSHOT
[INFO] ----------------------------------------------------------------------------
[INFO] Parameter: groupId, Value: com.exemple
[INFO] Parameter: artifactId, Value: test
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Parameter: package, Value: com.exemple
[INFO] Parameter: packageInPathFormat, Value: com/exemple
[INFO] Parameter: package, Value: com.exemple
[INFO] Parameter: groupId, Value: com.exemple
[INFO] Parameter: artifactId, Value: test
[INFO] Parameter: version, Value: 1.0-SNAPSHOT
[INFO] Project created from Archetype in dir: /home/nicolas/toto/test
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 1.898 s
[INFO] Finished at: 2022-12-02T13:53:32+01:00
[INFO] ------------------------------------------------------------------------
$ cd test
$ ./build.sh
[INFO] Scanning for projects...
[INFO]
[INFO] --------------------------< com.exemple:test >--------------------------
[INFO] Building test 1.0-SNAPSHOT
[INFO] --------------------------------[ war ]---------------------------------
[INFO]
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ test ---
[INFO]
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ test ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 1 resource
[INFO]
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ test ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to /home/nicolas/toto/test/target/classes
[INFO]
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ test ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] skip non existing resourceDirectory /home/nicolas/toto/test/src/test/resources
[INFO]
[INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ test ---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 1 source file to /home/nicolas/toto/test/target/test-classes
[INFO]
[INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ test ---
[INFO]
[INFO] --- maven-war-plugin:3.3.1:war (default-war) @ test ---
[INFO] Packaging webapp
[INFO] Assembling webapp [test] in [/home/nicolas/toto/test/target/test]
[INFO] Processing war project
[INFO] Copying webapp resources [/home/nicolas/toto/test/src/main/webapp]
[INFO] Building war: /home/nicolas/toto/test/target/test.war
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.620 s
[INFO] Finished at: 2022-12-02T13:54:33+01:00
[INFO] ------------------------------------------------------------------------
Sending build context to Docker daemon 13.66MB
Step 1/2 : FROM payara/server-full:6.2022.1
 ---> ada23f507bd2
Step 2/2 : COPY ./target/test.war $DEPLOY_DIR
 ---> 96650dc307b0
Successfully built 96650dc307b0
Successfully tagged com.exemple/test:latest
Error: No such container: test
39934e82c8b164c4e6cd91036df7e2b0731254cdb869d7f2321ad1f2aaf37350

The generate.sh script that we ran above only wraps the maven archetype:generate goal, as shown below:

Shell
#!/bin/sh
mvn -B archetype:generate \
  -DarchetypeGroupId=fr.simplex-software.archetypes \
  -DarchetypeArtifactId=jakartaee10-basic-archetype \
  -DarchetypeVersion=1.0-SNAPSHOT \
  -DgroupId=com.exemple \
  -DartifactId=test

Here we're using our Maven archetype to generate a new artifact whose GAV (GroupId, ArtifactId, Version) is com.exemple:test:1.0-SNAPSHOT. Once generated, the new project may be imported into your preferred IDE. As you can see, it consists of a simple REST API exposing an endpoint that returns some text. For this purpose, we leverage JAX-RS 3.1, its Jersey implementation, and the Eclipse MicroProfile Config API. Please take some time to look at the generated project, including the pom.xml file and the dependencies used, with their associated versions. All these dependencies are mandatory in order to get a valid artifact.

We just generated our new project; let's build it now. In the listing above, we did that by running the script build.sh:

Shell
#!/bin/sh
mvn clean package && docker build -t ${groupId}/${artifactId} .
docker rm -f ${artifactId} || true && docker run -d -p 8080:8080 -p 4848:4848 --name ${artifactId} ${groupId}/${artifactId}

This script first packages the newly generated Java project as a WAR and then builds a new Docker image based on the Dockerfile below:

Dockerfile
FROM payara/server-full:6.2022.1
COPY ./target/${artifactId}.war $DEPLOY_DIR

As you can see, this Dockerfile just extends the standard Payara Server Docker image provided by the company and copies the previously packaged WAR into the server's auto-deployment directory (/opt/payara/deployments); copying the WAR into that directory automatically deploys the packaged application. Once this new Docker image is built, we run it under the same name as our Maven artifactId, mapping ports 8080 and 4848. Notice how the Velocity placeholders are used again here. The "Error: No such container: test" message in the listing above is harmless: it comes from the docker rm -f command in build.sh, which removes any previous container of the same name before starting a new one. Once the Maven build process is successfully finished, a Docker container named test should be running. Of course, you need to have a running Docker daemon.
You can test that everything is okay using the following curl request:

Shell
$ curl http://localhost:8080/test/api/myresource

or by executing the script myresource.sh. An integration test is generated as well. It leverages Testcontainers to run an instance of Payara Server 6 in a Docker container in which the application has been deployed, and then uses the JAX-RS client API, as implemented by Jersey Client 3.1, to perform HTTP requests against the exposed endpoint. You can try it by running the following Maven command:

Shell
$ mvn verify

Please notice that this command can only be run after having previously executed the build.sh script, or after having manually run:

Shell
$ mvn -DskipTests clean package

This is because the integration test uses Testcontainers to deploy the WAR and, consequently, the WAR has to exist first; hence, the package goal, which creates the WAR, must already have been executed. And we need to skip the tests at that point in order to avoid trying to execute them before packaging. A minimal sketch of what such a test can look like is shown below. Enjoy!
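To give an idea of what the generated integration test may look like, here is a minimal sketch using Testcontainers and the JAX-RS client API. The image tag, context root, wait strategy, and assertion are assumptions for illustration; the archetype's actual test may differ:

Java
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import jakarta.ws.rs.core.MediaType;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.MountableFile;

import static org.junit.jupiter.api.Assertions.assertFalse;

@Testcontainers
class MyResourceIT {

    // Run Payara 6 and copy the WAR into its auto-deployment directory.
    @Container
    private static final GenericContainer<?> payara =
        new GenericContainer<>("payara/server-full:6.2022.1")
            .withExposedPorts(8080)
            .withCopyFileToContainer(
                MountableFile.forHostPath("target/test.war"),
                "/opt/payara/deployments/test.war")
            // Wait until the deployed application answers before running the test.
            .waitingFor(Wait.forHttp("/test/api/myresource").forStatusCode(200));

    @Test
    void getIt_returnsTheConfiguredMessage() {
        String url = "http://" + payara.getHost() + ":" + payara.getMappedPort(8080)
            + "/test/api/myresource";
        Client client = ClientBuilder.newClient(); // resolved to Jersey Client at runtime
        try {
            String message = client.target(url).request(MediaType.TEXT_PLAIN).get(String.class);
            assertFalse(message.isEmpty());
        } finally {
            client.close();
        }
    }
}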

By Nicolas Duminil CORE
How To Modify HTTP Request Headers in Java Using Selenium Webdriver

One of the most common test automation challenges is how to modify the request headers in Selenium WebDriver. As an automation tester, you will come across this challenge in any programming language, including Java. Before coming to a solution, we need to understand the problem statement better and look at the different possibilities for modifying the request headers in Java while working with Selenium WebDriver. In this Selenium Java tutorial, we will learn how to modify HTTP request headers in Java using Selenium WebDriver, covering the different available options.

Starting your journey with Selenium WebDriver? Check out this step-by-step guide to perform automation testing using Selenium WebDriver.

So let's get started!

What Are HTTP Headers?

HTTP headers are an important part of the HTTP protocol. They define an HTTP message (request or response) and allow the client and server to exchange optional metadata with the message. They are composed of a case-insensitive header field name, followed by a colon, then a header field value. Header fields can be extended over multiple lines by preceding each extra line with at least one space or horizontal tab.

Headers can be grouped according to their contexts:

Request headers: HTTP request headers supply additional information about the resource being fetched and about the client making the request.
Response headers: HTTP response headers provide information about the response. For example, the Location header specifies the location of a resource, and the Server header presents information about the server providing the resource.
Representation headers: HTTP representation headers are an important part of any HTTP response. They provide information about protocol elements like MIME types, character encodings, and more, which makes them a vital part of processing resources over the internet.
Payload headers: HTTP payload headers contain data about the payload of an HTTP message (such as its length and encoding) but are representation-independent.

Deep Dive Into HTTP Request Headers

The HTTP request header is a communication mechanism that enables browsers or clients to request specific web pages or data from a web server. When used in web communications or internet browsing, it enables browsers and clients to communicate with the appropriate web server by sending requests.

The HTTP request headers describe the request sent by the web browser to load a page; this is also referred to as the client-to-server protocol. The headers include details of the client's request, such as the type of browser and operating system used, along with other parameters required for the proper display of the requested content on the screen. Here is the major information included within HTTP request headers:

IP address (source) and port number.
URL of the requested web page.
The web server or destination website (host).
The data types the browser will accept (text, HTML, XML, etc.).
The browser type (Mozilla, Chrome, IE) so the server can send compatible data.

In response, an HTTP response header containing the requested data is sent back by the server.

The Need to Change the HTTP Request Headers

Can you guess why we would even need to change the request headers once they are already set in the scripts? Here are some of the scenarios where you might need to change the HTTP request headers:

Testing the control and/or testing different variants by establishing appropriate HTTP headers.
Testing cases where different aspects of the web application, or even the server logic, have to be thoroughly tested. Since HTTP request headers are used to enable some specific parts of a web application's logic that would generally be disabled in normal mode, modifying them may be required from time to time, depending on the test scenario.
Testing the guest mode on a web application under test is the ideal case where you might need to modify the HTTP request headers.

However, the ability to modify the HTTP request headers, which Selenium RC once supported, is no longer handled by Selenium WebDriver. This is why the question arises of how to change the request headers when the test automation project is written using the Selenium framework and Java.

How To Modify Header Requests in a Selenium Java Project

In this part of the Selenium Java tutorial, we look at the different ways to modify header requests in Java. Broadly, there are a few possibilities:

Using a driver/library like REST Assured instead of Selenium.
Using a reverse proxy, such as BrowserMob Proxy, or some other proxy mechanism.
Using a Firefox browser extension, which helps modify the headers of the request.

Let us explore each possibility one by one.

Modify HTTP Request Headers Using the REST Assured Library

Along with Selenium, we can make use of REST Assured, which is a wonderful tool for working with REST services in a simple way. The prerequisites to configure REST Assured with your project in any IDE (e.g., Eclipse) are fairly simple. After setting up Java, Eclipse, and TestNG, you need to download the required REST Assured JAR files. Once the JAR files are downloaded, you have to create a project in Eclipse and add the downloaded JAR files as external JARs in the Properties section, similar to the way we add Selenium JAR files to a project. Once you have successfully set up the Java project with the REST Assured library, you are good to go.

We intend to create a mechanism so that the request header is customizable. To achieve this, we first need to know the conventional way to create a request header. Let's consider the following scenario:

We have one Java class named RequestHeaderChangeDemo, where we maintain the basic configuration.
We have a test step file named TestSteps, where we call the methods from the RequestHeaderChangeDemo Java class and execute our test.

Observe the Java class named RequestHeaderChangeDemo below.
The BASE_URL is the Amazon website, on which the following four methods are applied: authenticateUser, getProducts, addProduct, and removeProduct.

Java
public class RequestHeaderChangeDemo {

    private static final String BASE_URL = "https://amazon.com";

    public static IRestResponse<Token> authenticateUser(AuthorizationRequest authRequest) {
        RestAssured.baseURI = BASE_URL;
        RequestSpecification request = RestAssured.given();
        request.header("Content-Type", "application/json");
        Response response = request.body(authRequest).post(Route.generateToken());
        return new RestResponse(Token.class, response);
    }

    public static IRestResponse<Products> getProducts() {
        RestAssured.baseURI = BASE_URL;
        RequestSpecification request = RestAssured.given();
        request.header("Content-Type", "application/json");
        Response response = request.get(Route.products());
        return new RestResponse(Products.class, response);
    }

    public static IRestResponse<UserAccount> addProduct(AddProductsRequest addProductsRequest, String token) {
        RestAssured.baseURI = BASE_URL;
        RequestSpecification request = RestAssured.given();
        request.header("Authorization", "Bearer " + token)
               .header("Content-Type", "application/json");
        Response response = request.body(addProductsRequest).post(Route.products());
        return new RestResponse(UserAccount.class, response);
    }

    public static Response removeProduct(RemoveProductRequest removeProductRequest, String token) {
        RestAssured.baseURI = BASE_URL;
        RequestSpecification request = RestAssured.given();
        request.header("Authorization", "Bearer " + token)
               .header("Content-Type", "application/json");
        return request.body(removeProductRequest).delete(Route.product());
    }
}

In the above Java class file, we repeatedly set the BASE_URL and the headers in every consecutive method. An example is shown below:

Java
RestAssured.baseURI = BASE_URL;
RequestSpecification request = RestAssured.given();
request.header("Content-Type", "application/json");
Response response = request.body(authRequest).post(Route.generateToken());

The request.header method sets a request header; here, the Content-Type header with the JSON media type. There is a significant amount of code duplication, which reduces the maintainability of the code. This can be avoided if we initialize the RequestSpecification object in the constructor and make these methods non-static (i.e., instance methods). Since an instance method in Java belongs to an object of the class and not to the class itself, it can be called only after creating an object of the class. Along with this, we rework the methods accordingly.

Converting the methods to instance methods results in the following advantages:

Authentication is done only once, on one RequestSpecification object. There is no further need to do the same for the other requests.
Flexibility to modify the request header in the project.

Therefore, let us see how both the Java class RequestHeaderChangeDemo and the test step file TestSteps look when we use instance methods.

Java class RequestHeaderChangeDemo with instance methods:

Java
public class RequestHeaderChangeDemo {

    private final RequestSpecification request;

    public RequestHeaderChangeDemo(String baseUrl) {
        RestAssured.baseURI = baseUrl;
        request = RestAssured.given();
        request.header("Content-Type", "application/json");
    }

    public void authenticateUser(AuthorizationRequest authRequest) {
        Response response = request.body(authRequest).post(Route.generateToken());
        if (response.statusCode() != HttpStatus.SC_OK)
            throw new RuntimeException("Authentication Failed. Content of failed Response: "
                    + response.toString() + " , Status Code : " + response.statusCode());
        Token tokenResponse = response.body().jsonPath().getObject("$", Token.class);
        request.header("Authorization", "Bearer " + tokenResponse.token);
    }

    public IRestResponse<Products> getProducts() {
        Response response = request.get(Route.products());
        return new RestResponse(Products.class, response);
    }

    public IRestResponse<UserAccount> addProduct(AddProductsRequest addProductsRequest) {
        Response response = request.body(addProductsRequest).post(Route.products());
        return new RestResponse(UserAccount.class, response);
    }

    public Response removeProducts(RemoveProductRequest removeProductRequest) {
        return request.body(removeProductRequest).delete(Route.product());
    }
}

Code Walkthrough

We created a constructor to initialize the RequestSpecification object containing the BaseURL and the request headers.
Earlier, we had to pass the token in every request header. Now, we put the token response into the same request instance as soon as we receive it in the authenticateUser() method. This enables the test step execution to move forward without adding the token to every request, as was done earlier, and makes the header available for the subsequent calls to the server.
This RequestHeaderChangeDemo Java class will now be initialized in the TestSteps file.

We change the TestSteps file in line with the changes in the RequestHeaderChangeDemo Java class:

Java
public class TestSteps {

    private final String USER_ID = "(Enter the user id from your test case)";
    private Response response;
    private IRestResponse<UserAccount> userAccountResponse;
    private Product product;
    private final String BaseUrl = "https://amazon.com";
    private RequestHeaderChangeDemo endPoints;

    @Given("^User is authorized$")
    public void authorizedUser() {
        endPoints = new RequestHeaderChangeDemo(BaseUrl);
        AuthorizationRequest authRequest = new AuthorizationRequest("(Username)", "(Password)");
        endPoints.authenticateUser(authRequest);
    }

    @Given("^Available Product List$")
    public void availableProductLists() {
        IRestResponse<Products> productsResponse = endPoints.getProducts();
        product = productsResponse.getBody().products.get(0);
    }

    @When("^Adding the Product in Wishlist$")
    public void addProductInWishList() {
        ADDPROD code = new ADDPROD(product.code);
        AddProductsRequest addProductsRequest = new AddProductsRequest(USER_ID, code);
        userAccountResponse = endPoints.addProduct(addProductsRequest);
    }

    @Then("^The product is added$")
    public void productIsAdded() {
        Assert.assertTrue(userAccountResponse.isSuccessful());
        Assert.assertEquals(201, userAccountResponse.getStatusCode());
        Assert.assertEquals(USER_ID, userAccountResponse.getBody().userID);
        Assert.assertEquals(product.code, userAccountResponse.getBody().products.get(0).code);
    }

    @When("^Product to be removed from the list$")
    public void removeProductFromList() {
        RemoveProductRequest removeProductRequest = new RemoveProductRequest(USER_ID, product.code);
        response = endPoints.removeProducts(removeProductRequest);
    }

    @Then("^Product is removed$")
    public void productIsRemoved() {
        Assert.assertEquals(204, response.getStatusCode());
        // getUserAccount is assumed to be available in RequestHeaderChangeDemo
        userAccountResponse = endPoints.getUserAccount(USER_ID);
        Assert.assertEquals(200, userAccountResponse.getStatusCode());
        Assert.assertEquals(0, userAccountResponse.getBody().products.size());
    }
}

Code Walkthrough

Here's what we have done in the modified implementation:

We initialized the RequestHeaderChangeDemo class object as endPoints.
The BaseUrl was passed to its constructor in the first method (i.e., authorizedUser), within which we also invoked the authenticateUser method of the RequestHeaderChangeDemo class.
Hence, the same endPoints object is used by the subsequent step definitions.
Modify HTTP Request Headers Using a Reverse Proxy Like BrowserMob Proxy

As the name suggests, we can opt for using proxies when dealing with request header changes in a Java-Selenium automation test suite. Since Selenium forbids injecting information between the browser and the server, proxies can come to the rescue. This approach is not preferred if the testing is performed behind a corporate firewall.

Being a web infrastructure component, a proxy makes web traffic pass through it by positioning itself between the client and the server. In the corporate world, proxies work similarly: they make the traffic pass through, allowing the requests that are safe and blocking potential threats. Proxies can modify both requests and responses, either partially or completely.

The core idea is to send the authorization headers directly, bypassing the phase that includes the credentials dialog (also known as the basic authentication dialog). However, this turns out to be a tiring process, especially if the test cases demand frequent reconfiguration. This is where the BrowserMob Proxy library comes into the picture: when you make the proxy configuration part of the Selenium automation testing suite, the configuration stays valid each time you execute the test suite.

Let us see how we can use BrowserMob Proxy with a sample website secured with basic authentication. To tackle this, we might narrow down two possible ways:

Add authorization headers to all requests, with no condition or exception.
Add headers only to the requests that meet certain conditions.

Though we will not address general header management problems, we will demonstrate how to address authorization issues with the help of the BrowserMob Proxy authorization toolset. In this part of the Selenium Java tutorial, we focus only on the first methodology (i.e., adding authorization headers to all requests).

First, we add the BrowserMob Proxy dependency (browsermob-core) to pom.xml:

XML
.......................
.......................
<dependencies>
  <dependency>
    <groupId>net.lightbody.bmp</groupId>
    <artifactId>browsermob-core</artifactId>
    <version>2.1.5</version>
    <scope>test</scope>
  </dependency>
</dependencies>
.......................
.......................

Next, we write the test class that routes the Firefox session through the proxy:
Java
// Required imports (JUnit 5, Selenium, BrowserMob Proxy):
import java.io.UnsupportedEncodingException;
import java.time.Duration;
import java.util.Base64;

import net.lightbody.bmp.BrowserMobProxy;
import net.lightbody.bmp.BrowserMobProxyServer;
import net.lightbody.bmp.client.ClientUtil;

import org.junit.jupiter.api.*;
import org.openqa.selenium.By;
import org.openqa.selenium.NoSuchElementException;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import org.openqa.selenium.firefox.FirefoxOptions;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.FluentWait;
import org.openqa.selenium.support.ui.Wait;

public class CaseFirstTest {

    WebDriver driver;
    BrowserMobProxy proxy;

    @BeforeAll
    public static void globalSetup() {
        System.setProperty("webdriver.gecko.driver", "(path of the driver)");
    }

    @BeforeEach
    public void setUp() {
        setUpProxy();
        FirefoxOptions options = new FirefoxOptions();
        options.setProxy(ClientUtil.createSeleniumProxy(proxy));
        driver = new FirefoxDriver(options);
    }

    @Test
    public void testBasicAuth() {
        driver.get("https://webelement.click/stand/basic?lang=en");
        Wait<WebDriver> waiter = new FluentWait<>(driver)
                .withTimeout(Duration.ofSeconds(50))
                .ignoring(NoSuchElementException.class);
        String greetings = waiter
                .until(ExpectedConditions.visibilityOfElementLocated(By.xpath("(Mention the xpath)")))
                .getText();
        Assertions.assertEquals("(expected message)", greetings);
    }

    @AfterEach
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
        if (proxy != null) {
            proxy.stop();
        }
    }

    // The proxy initialization is delegated to the forAllProxy method shown below.
    private void setUpProxy() {
        forAllProxy();
    }
}

If you want this approach applied to all the header requests passing through the proxy, the forAllProxy method should be invoked, as shown below:

Java
public void forAllProxy() {
    proxy = new BrowserMobProxyServer();
    try {
        String authHeader = "Basic "
                + Base64.getEncoder().encodeToString("webelement:click".getBytes("utf-8"));
        proxy.addHeader("checkauth", authHeader);
    } catch (UnsupportedEncodingException e) {
        System.err.println("The Authorization header could not be set");
        e.printStackTrace();
    }
    proxy.start(0);
}

In the above code, the line that starts with String authHeader builds the header value, and proxy.addHeader("checkauth", authHeader) makes the proxy add this header to every request passing through it. Eventually, we start the proxy with proxy.start(0); passing 0 lets the proxy start on an arbitrary free port.

Modify HTTP Request Headers Using a Firefox Extension

In this part of the Selenium Java tutorial, we look at how to modify the request headers using an appropriate Firefox browser extension. The major drawback of this option is that it works only with Firefox (and not with other browsers like Chrome, Edge, etc.). Perform the following steps to modify HTTP request headers using a Firefox extension:

Download the Firefox browser extension.
Load the extension.
Set up the extension preferences.
Set the desired capabilities.
Prepare the test automation script.

Let us go through each step one by one.

1. Download the Firefox Browser Extension

Search for a Firefox extension packaged as an .xpi file and set it up in the project.

2. Load the Firefox Extension

Add the Firefox profile, referring to the code below:

Java
FirefoxProfile profile = new FirefoxProfile();
File modifyHeaders = new File(System.getProperty("user.dir") + "/resources/modify_headers.xpi");
profile.setEnableNativeEvents(false);
try {
    profile.addExtension(modifyHeaders);
} catch (IOException e) {
    e.printStackTrace();
}

3. Set the Extension Preferences

Once we load the Firefox extension into the project, we set the preferences (i.e., the various inputs that need to be set before the extension is triggered). This is done using the profile.setPreference method, which sets a preference for the given profile through a key-value parameter mechanism.
The first parameter is the preference key, and the second is the corresponding value (a string, an integer, or a boolean). Here is the reference implementation:

Java
profile.setPreference("modifyheaders.headers.count", 1);
profile.setPreference("modifyheaders.headers.action0", "Add");
profile.setPreference("modifyheaders.headers.name0", "Value");
profile.setPreference("modifyheaders.headers.value0", "numeric value");
profile.setPreference("modifyheaders.headers.enabled0", true);
profile.setPreference("modifyheaders.config.active", true);
profile.setPreference("modifyheaders.config.alwaysOn", true);

In the above code, we first declare how many header instances we want to set:

profile.setPreference("modifyheaders.headers.count", 1);

Next, we specify the action; the header name and header value contain the dynamically received values from the API calls:

profile.setPreference("modifyheaders.headers.action0", "Add");

The remaining .setPreference calls enable everything, so that the extension is loaded when WebDriver instantiates the Firefox browser, and set the extension to active mode for HTTP headers.

4. Set Up the Desired Capabilities

The Desired Capabilities in Selenium are used to set the browser, browser version, and platform type on which the automation test needs to be performed. Here is how we can set the desired capabilities:

Java
DesiredCapabilities capabilities = new DesiredCapabilities();
capabilities.setBrowserName("firefox");
capabilities.setPlatform(org.openqa.selenium.Platform.ANY);
capabilities.setCapability(FirefoxDriver.PROFILE, profile);
WebDriver driver = new FirefoxDriver(capabilities);
driver.get("url");

What if you want to modify HTTP request headers with a Firefox version that is not installed on your local (or test) machine? This is where LambdaTest, the largest cloud-based automation testing platform that offers fast cross browser testing infrastructure, comes to the rescue. With LambdaTest, you have the flexibility to modify HTTP request headers for different browser and platform combinations. If you want to modify HTTP request headers using the Firefox extension, you can use LambdaTest to do the same on different versions of the Firefox browser.
5. Draft the Entire Test Automation Script

Once you have been through all the above steps, we can put together the entire test automation script:

Java
public void startWebsite() {
    FirefoxProfile profile = new FirefoxProfile();
    File modifyHeaders = new File(System.getProperty("user.dir") + "/resources/modify_headers.xpi");
    profile.setEnableNativeEvents(false);
    try {
        profile.addExtension(modifyHeaders);
    } catch (IOException e) {
        e.printStackTrace();
    }

    profile.setPreference("modifyheaders.headers.count", 1);
    profile.setPreference("modifyheaders.headers.action0", "Add");
    profile.setPreference("modifyheaders.headers.name0", "Value");
    profile.setPreference("modifyheaders.headers.value0", "Numeric Value");
    profile.setPreference("modifyheaders.headers.enabled0", true);
    profile.setPreference("modifyheaders.config.active", true);
    profile.setPreference("modifyheaders.config.alwaysOn", true);

    DesiredCapabilities capabilities = new DesiredCapabilities();
    capabilities.setBrowserName("firefox");
    capabilities.setPlatform(org.openqa.selenium.Platform.ANY);
    capabilities.setCapability(FirefoxDriver.PROFILE, profile);
    WebDriver driver = new FirefoxDriver(capabilities);
    driver.get("url");
}

Conclusion

In this Selenium Java tutorial, we explored three different ways to handle modifications to the HTTP request headers. Selenium in itself is a great tool and has consistently worked well in web automation testing. Nevertheless, the tool cannot change the request headers on its own. After exploring all three alternatives to modify the request headers in a Java-Selenium project, we can vouch for the first option, using REST Assured. However, you may want to try out the other options and share your observations and perceptions in the comments section.
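As a compact, self-contained recap of the recommended REST Assured approach, the sketch below shows a reusable request specification that carries custom headers across requests. The base URI, endpoint, and header values are placeholders, not taken from the tutorial project:

Java
import io.restassured.RestAssured;
import io.restassured.response.Response;
import io.restassured.specification.RequestSpecification;

public class HeaderRecap {

    public static void main(String[] args) {
        // Build one specification carrying the headers; reuse it for every call.
        RequestSpecification spec = RestAssured.given()
                .baseUri("https://example.com")             // placeholder base URI
                .header("Content-Type", "application/json")
                .header("X-Custom-Header", "custom-value"); // hypothetical header

        // Every request sent through this specification includes the headers above.
        Response response = spec.get("/api/resource");      // placeholder endpoint
        System.out.println(response.statusCode());
    }
}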

By Harshit Paul

Top Java Experts


Nicolas Fränkel

Developer Advocate, Api7

Developer Advocate with 15+ years of experience consulting for many different customers in a wide range of contexts (such as telecoms, banking, insurance, large retail, and the public sector). Usually working on Java/Java EE and Spring technologies, but with focused interests like Rich Internet Applications, testing, CI/CD, and DevOps. Currently working for Hazelcast. He also doubles as a trainer and triples as a book author.

Shai Almog

OSS Hacker, Developer Advocate and Entrepreneur, Codename One

Software developer with ~30 years of professional experience in a multitude of platforms/languages. JavaOne rockstar/highly rated speaker, author, blogger and open source hacker. Shai has extensive experience in the full stack of backend, desktop and mobile. This includes going all the way into the internals of VM implementation, debuggers etc. Shai started working with Java in 96 (the first public beta) and later on moved to VM porting/authoring/internals and development tools. Shai is the co-founder of Codename One, an Open Source project allowing Java developers to build native applications for all mobile platforms in Java. He's the coauthor of the open source LWUIT project from Sun Microsystems and has developed/worked on countless other projects both open source and closed source. Shai is also a developer advocate at Lightrun.

Marco Behler

Hi, I'm Marco. Say hello, I'd like to get in touch! twitter: @MarcoBehler

Ram Lakshmanan

GCeasy.io & fastThread.io


The Latest Java Topics

Real-Time Stream Processing With Hazelcast and StreamNative
In this article, readers will learn about real-time stream processing with Hazelcast and StreamNative in a shorter time, along with demonstrations and code.
January 27, 2023
by Timothy Spann
· 1,880 Views · 2 Likes
The Quest for REST
This post focuses on listing some of the lurking issues in the "Glory of REST" and provides hints at ways to solve them.
January 26, 2023
by Nicolas Fränkel CORE
· 2,159 Views · 3 Likes
Fraud Detection With Apache Kafka, KSQL, and Apache Flink
Exploring fraud detection case studies and architectures with Apache Kafka, KSQL, and Apache Flink with examples, guide images, and informative details.
January 26, 2023
by Kai Wähner CORE
· 2,465 Views · 1 Like
Upgrade Guide To Spring Data Elasticsearch 5.0
Learn about the latest Spring Data Elasticsearch 5.0.1 with Elasticsearch 8.5.3, starting with the proper configuration of the Elasticsearch Docker image.
January 26, 2023
by Arnošt Havelka CORE
· 2,152 Views · 1 Like
Commonly Occurring Errors in Microsoft Graph Integrations and How to Troubleshoot Them (Part 3)
This third article explains common integration errors that may be seen in the transition from EWS to Microsoft Graph as to the resource type To Do Tasks.
January 25, 2023
by Constantin Kwiatkowski
· 2,127 Views · 1 Like
A Brief Overview of the Spring Cloud Framework
Readers will get an overview of the Spring Cloud framework, a list of its main packages, and their relation with the Microservice Architectural patterns.
January 25, 2023
by Mario Casari
· 4,875 Views · 1 Like
Spring Cloud: How To Deal With Microservice Configuration (Part 1)
In this article, we cover how to use a Spring Cloud Configuration module to implement a minimal microservice scenario based on a remote configuration.
January 24, 2023
by Mario Casari
· 3,876 Views · 2 Likes
Microservices Discovery With Eureka
In this article, let's explore how to integrate services discovery into a microservices project.
January 22, 2023
by Jennifer Reif CORE
· 4,525 Views · 6 Likes
How to Create a Real-Time Scalable Streaming App Using Apache NiFi, Apache Pulsar, and Apache Flink SQL
In this article, we'll cover how and when to use Pulsar with NiFi and Flink as you build your streaming application.
January 22, 2023
by Tim Spann CORE
· 3,357 Views · 5 Likes
Handling Virtual Threads
Learn different strategies for implementing virtual threads, a feature that makes it dramatically easier to program in a thread-per-task model and simplifies writing and maintaining high-throughput concurrent applications.
January 20, 2023
by Victoria Barriola
· 2,349 Views · 3 Likes
How To Generate Code Coverage Report Using JaCoCo-Maven Plugin
In this article, readers will use a series of tutorials to learn how to use the JaCoCo-Maven plugin to generate code coverage reports for Java projects.
January 19, 2023
by Harshit Paul
· 5,094 Views · 1 Like
Deploying Java Serverless Functions as AWS Lambda
Learn about SAM (superset of CloudFormation) including some special commands and shortcuts to ease Java serverless code development, testing, and deployment.
January 18, 2023
by Nicolas Duminil CORE
· 4,101 Views · 2 Likes
How To Take A Screenshot Using Python and Selenium
This tutorial will guide you through using Selenium and Python to capture screenshots and check how your website renders across different browsers.
January 17, 2023
by Nishant Choudhary
· 2,502 Views · 1 Like
How To Create a Stub in 5 Minutes
Readers will learn how to create stubs in five minutes, how stubs are used in regression and load testing and debugging, and how to configure them flexibly.
January 17, 2023
by Andrei Rogalenko
· 2,986 Views · 3 Likes
Implementing Infinite Scroll in jOOQ
Infinite scroll is a classical usage of keyset pagination and is gaining popularity these days.
January 17, 2023
by Anghel Leonard CORE
· 3,171 Views · 2 Likes
Foojay.social: A Mastodon Server for the Java Community
Mastodon is a safe alternative for Twitter, and Foojay has set up a service for the Java, JVM, and OpenJDK communities. Read more!
January 13, 2023
by Frank Delporte
· 2,886 Views · 1 Like
Watch Area and Renderers
Stop digging through variables in the watch to find nuggets of gold, or rerunning the expression evaluation. Use entity renderers instead.
January 12, 2023
by Shai Almog CORE
· 2,520 Views · 2 Likes
Build CRUD RESTful API Using Spring Boot 3, Spring Data JPA, Hibernate, and MySQL Database
In this tutorial, we will learn how to build CRUD RESTful API using Spring Boot 3, Spring Data JPA (Hibernate), and MySQL database.
January 12, 2023
by Ramesh Fadatare
· 3,026 Views · 3 Likes
Should You Create Your Own E-Signature API?
E-signatures have many benefits, such as higher security. An e-signature API can make the signing experience much easier, but what's the best way to implement it: buying an API or building one?
January 12, 2023
by Zac Amos
· 3,549 Views · 1 Like
Project Loom And Kotlin: Some Experiments
This article will dive into the performance of virtual threading functionality with Project Loom and Kotlin using guide charts and code for readers to follow.
January 11, 2023
by Severn Everett
· 2,649 Views · 1 Like
