Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
A unique image aesthetic makes a big difference in representing any personal or professional brand online. Career and hobby photographers, marketing executives, and casual social media patrons alike are in constant pursuit of easily distinguishable visual content, and this basic need to stand out from a crowd has, in turn, driven the democratization of photo editing and filtering services in the last decade or so. Nearly every social media platform you can think of (not to mention many e-commerce websites and various other casual sites where images are frequently uploaded) now incorporates some means for programmatically altering vanilla image files. These built-in services can vary greatly in complexity, ranging from simple brightness controls to Gaussian blurs. With this newfound ease of access to photo filtering, classic image filtering techniques have experienced a widespread resurgence in popularity. For example, the timeless look associated with black and white images can now be applied to any image upload on the fly. Through simple manipulations of brightness and contrast, the illusion of embossment can be created, allowing us to effortlessly emulate a vaunted, centuries-old printing technique. Even posterization – a classic, bold aesthetic once humbly associated with the natural color limitations of early printing machines – can be instantly generated within any grid of pixels. Given the desirability of simplified image filtering (especially services with common-sense customization features), building these types of features into any application – especially those handling a large volume of image uploads – is an excellent idea for developers to consider. Of course, once we elect to go in that direction, an important question arises: how can we efficiently include these services in our applications, given the myriad lines of code associated with building even the simplest photo-filtering functionality? Thankfully, that question is answered yet again by the supply and demand forces of the ongoing digital industrial revolution. Developers can rely on readily available Image Filtering API services to circumvent large-scale programming operations, thereby implementing robust image customization features in only a few lines of clean, simple code. API Descriptions The purpose of this article is to demonstrate three free-to-use image filtering API solutions which can be implemented in your applications using complementary, ready-to-run Java code snippets. These snippets are supplied below in this article, directly following brief instructions to help you install the SDK. Before we reach that point, I’ll first highlight each solution, providing a more detailed look at its respective uses and API request parameters. Please note that each API will require a free-tier Cloudmersive API key to complete your call (this provides a limit of 800 API calls per month with no commitments). Grayscale (Black and White) Filter API Early photography was limited to a grayscale color spectrum due to the natural constraints of primitive photo technology. The genesis of color photography opened new doors, certainly, but it never completely replaced the black-and-white photo aesthetic. Even now, in the digital age, grayscale photos continue to offer a certain degree of depth and expression which many feel the broader color spectrum can’t bring out. The process of converting a color image to grayscale is straightforward. 
Color information is stored in the thousands (or millions) of pixels making up any digital image; grayscale conversion forces each pixel to ignore its color information and present varying degrees of brightness instead. Beyond its well-documented aesthetic effects, grayscale conversion offers practical benefits, too, by reducing the size of the image in question. Grayscale images are much easier to store, edit, and subsequently process (especially in downstream operations such as Optical Character Recognition, for example). The grayscale filter API below performs a simple black-and-white conversion, requiring only an image’s file path (formats like PNG and JPG are accepted) in its request parameters. Embossment Filter API Embossment is a physical printing process with roots dating as far back as the 15th century, and it’s still used to this day in that same context. While true embossment entails the inclusion of physically raised shapes on an otherwise flat surface (offering an enhanced visual and tactile experience), digital embossment merely emulates this effect by manipulating brightness and contrast in key areas around the subject of a photo. An embossment photo filter can be used to quickly add depth to any image. The embossment filter API below performs a customizable digital embossment operation, requiring the following input request information: Radius: The radius, in pixels, of the embossment operation (larger values will produce a greater effect) Sigma: The variance of the embossment operation (higher values produce higher variance) Image file: The file path for the subject of the operation (supports common formats like PNG and JPG) Posterization API Given the ubiquity of high-quality smartphone cameras, it’s easy to take the prevalence of high-definition color photos for granted. The color detail we’re accustomed to seeing in everyday photos comes down to advancements in high-quality pixel storage. Slight variations in reds, blues, greens, and other elements on the color spectrum are mostly accounted for in a vast matrix of pixel coordinates. In comparison, during the bygone era of physical printing presses, the range of colors used to form an image was typically far narrower, and digital posterization filters aim to emulate this old-school effect. They do so by reducing the volume of unique colors in an image, narrowing a distinct spectrum of hex values into a more homogeneous group. The aesthetic effect is unmistakable, evoking a look one might have associated with political campaigns and movie theater posters in decades past. The posterization API provided below requires a user to provide the following request information: Levels: The number of unique colors which should be retained in the output image Image File: The image file to perform the operation on (supports common formats like PNG and JPG) API Demonstration To structure your API call to any of the three services outlined above, your first step is to install the Java SDK. To do so with Maven, first, add a reference to the repository in pom.xml: <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> Then add the dependency in pom.xml: <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> Alternatively, to install with Gradle, add it to your root build.gradle (at the end of repositories): allprojects { repositories { ... 
maven { url 'https://jitpack.io' } } } Then add the dependency in build.gradle: dependencies { implementation 'com.github.Cloudmersive:Cloudmersive.APIClient.Java:v4.25' } To use the Grayscale Filter API, use the following code to structure your API call: // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterBlackAndWhite(imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterBlackAndWhite"); e.printStackTrace(); } To use the Embossment Filter API, use the below code instead (remembering to configure your radius and sigma in their respective parameters): // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); Integer radius = 56; // Integer | Radius in pixels of the emboss operation; a larger radius will produce a greater effect Integer sigma = 56; // Integer | Sigma, or variance, of the emboss operation File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterEmboss(radius, sigma, imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterEmboss"); e.printStackTrace(); } Finally, to use the Posterization Filter API, use the below code (remember to define your posterization level with an integer as previously described): // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.FilterApi; ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. 
"Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); FilterApi apiInstance = new FilterApi(); Integer levels = 56; // Integer | Number of unique colors to retain in the output image File imageFile = new File("/path/to/inputfile"); // File | Image file to perform the operation on. Common file formats such as PNG, JPEG are supported. try { byte[] result = apiInstance.filterPosterize(levels, imageFile); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling FilterApi#filterPosterize"); e.printStackTrace(); } When using the Embossment and Posterization APIs, I recommend experimenting with different radius, sigma, and level values to find the right balance.
GitHub language statistics indicate that Java occupies second place among programming languages, while in the TIOBE Index 2022, Java shifted to fourth position. The difference lies in the methodological approaches. Notwithstanding the ranking, Java is the language enterprises have used heavily since its inception, and it still holds that position. As a programming language, it outperforms many of its competitors and continues to be the choice of most companies and organizations for software applications. However, Java doesn't stay the same; it goes through changes and modernization. In many ways, the development and innovation of this language and the surrounding ecosystem are propelled by new business demands. This article presents an overview of seven expected trends in Java based on the most significant events and achievements of 2022. Cloud architecture continues evolving, but costs are rising. According to the Flexera Report, public cloud spending exceeded budgets by 13% in 2022. Companies expect their cloud spending to increase by 29% over the next twelve months. What's worse, organizations waste 32% of their cloud spend. So the need for cloud cost optimization is real. It will be one of the industry's driving forces in 2023, and we can hope to see more technological innovation and management solutions directed toward better efficiency and lower costs. PaaS is a cloud computing model that sits between IaaS and SaaS and has recently gained popularity. PaaS delivers third-party provider hardware and software tools to users. This approach allows greater flexibility for developers, and its pay-as-you-go model makes finances easier to handle. PaaS enables developers to create or run new applications without spending extra time and resources on in-house hardware or software installations. Together with the still-rising popularity of cloud infrastructure, PaaS is predicted to evolve, too. We expect to see more support for Java-based PaaS applications as Java is adapted to cloud environments. The Spring Framework 6.0 GA and Spring Boot 3.0 releases this year marked the beginning of a new framework generation, embracing current and upcoming innovations in OpenJDK and the Java ecosystem. In addition, Spring Framework 6.0 introduced ahead-of-time (AOT) transformations, focused on native image support for Spring applications and promising better application performance in the future. Further Spring AOT and native image updates in 2023 are definitely on the Java community's radar. CVEs in frameworks and libraries written in Java continue their unfortunate rise. The CVE Details resource tracks how the number of CVEs keeps growing; in 2022, it reached 25,036. These vulnerabilities present an opportunity for attackers to take over sensitive resources and perform remote code execution. We cannot expect 2023 to be an exception to this trend of a growing number of discovered CVEs, and we should see a push for higher levels of security across the entire Java ecosystem. Some CVEs, such as the infamous Log4j vulnerability (Log4Shell), begin as zero-day vulnerabilities: flaws that have been disclosed but not yet patched. Ensuring security requires keeping your dependencies up to date on a regular schedule. Projects like OWASP CycloneDX are focused on this agenda and offer recommendations and practices to ensure your Java application stays in the secure zone. 
2023 is expected to be a year of more extensive Java adoption on AWS Lambda. In 2022, AWS introduced a new feature for AWS Lambda: Lambda SnapStart. SnapStart helps improve startup latency significantly and is especially relevant for applications using synchronous APIs, interactive microservices, or data processing. SnapStart is already supported by Quarkus and Micronaut, and there is no doubt that broader acceptance of Lambda in the Java world will follow in 2023. Virtual Threads (second preview) in JDK 20, due in March 2023, is another event to watch out for. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications; they support thread-local variables, synchronization blocks, thread interruption, and more. The second preview focuses on better scaling, adoption of virtual threads through the existing thread API with minimal change, and easier troubleshooting, debugging, and profiling of virtual threads (a minimal code sketch follows at the end of this section). As announced by Oracle in 2022, portions of the GraalVM Community Edition Java code will move to OpenJDK. This initiative will align the development of GraalVM and Java technologies, benefiting all contributors and users. In addition, the community editions of the GraalVM JIT and Ahead-of-Time (AOT) compilers will move to OpenJDK in 2023. This change will bring a security improvement and synchronize release schedules, features, and development processes. These trends and events demonstrate how the industry is moving forward and show that Java's continued success is driven both by the Java ecosystem community and by business demand for better Java operation in the cloud. The only negative side for all Java developers is still the security question. However, downturns are also driving progress forward, and we should see new and more effective security solutions that reverse this trend in 2023. With a great number of initiatives presented in 2022, Java in 2023 should become more flexible for the cloud environment. Java is the most popular language for enterprise applications, and many of them were built before the cloud age. In the cloud, Java can be costlier than other programming languages and needs adaptation. Making Java cloud-native is among the highest priorities for the industry, and many of the most anticipated events of 2023 relate to improving Java operations in the cloud. Java application modernization is not that simple, and there is no single button to press to convert your Java application to cloud-native. Making Java effective, less expensive, and high-performing requires integrating a set of components that adapt the language to a cloud-native environment. 2023 promises more of these building blocks, enabling more sustainable cloud-based applications. In 2023, we can also expect further expansion of the PaaS computing model, which is more convenient for developers building products in the cloud. Negative trends of growing tech debt and rising security concerns have attracted the attention of software development companies. As a result, new development practices in 2023 will push for tighter security and a more careful investment in IT innovation. Downturns also drive progress, and we should see new, more effective solutions that reverse these negative trends in 2023.
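To make the virtual threads item above a bit more tangible, here is a minimal sketch of the API as previewed in JDK 19/20. It must be compiled and run with --enable-preview on those releases (the feature is still a preview there), and the sleep-based workload is purely illustrative.

Java
import java.time.Duration;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadsSketch {
    public static void main(String[] args) {
        // Each submitted task runs on its own lightweight virtual thread.
        try (ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, 10_000).forEach(i ->
                    executor.submit(() -> {
                        // Blocking calls park the virtual thread cheaply instead of
                        // tying up an OS thread, which is what enables high throughput.
                        Thread.sleep(Duration.ofMillis(100));
                        return i;
                    }));
        } // close() implicitly waits for the submitted tasks to finish
    }
}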
From time to time, you need to check which Java version is installed on your computer or server, for instance, when starting a new project or configuring an application to run on a server. But did you know there are multiple ways you can do this and even get much more information than you might think very quickly? Let's find out... Reading the Java Version in the Terminal Probably the easiest way to find the installed version is by using the java -version terminal command: $ java -version openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Checking Version Files in the Installation Directory The above output results from info read by the java executable from a file inside its installation directory. Let's explore what we can find there. On my machine, as I use SDKMAN to switch between different Java versions, all my versions are stored here: $ ls -l /Users/frankdelporte/.sdkman/candidates/java/ total 0 drwxr-xr-x 15 frankdelporte staff 480 Apr 17 2022 11.0.15-zulu drwxr-xr-x 16 frankdelporte staff 512 Apr 17 2022 17.0.3.fx-zulu drwxr-xr-x 15 frankdelporte staff 480 Mar 29 2022 18.0.1-zulu drwxr-xr-x 15 frankdelporte staff 480 Sep 7 18:36 19-zulu drwxr-xr-x 18 frankdelporte staff 576 Apr 18 2022 8.0.332-zulu lrwxr-xr-x 1 frankdelporte staff 7 Nov 21 21:09 current -> 19-zulu And in each of these directories, a release file can be found, which also shows us the version information, including some extra information. $ cat /Users/frankdelporte/.sdkman/candidates/java/19-zulu/release IMPLEMENTOR="Azul Systems, Inc." IMPLEMENTOR_VERSION="Zulu19.28+81-CA" JAVA_VERSION="19" JAVA_VERSION_DATE="2022-09-20" LIBC="default" MODULES="java.base java.compiler ... jdk.unsupported jdk.unsupported.desktop jdk.xml.dom" OS_ARCH="aarch64" OS_NAME="Darwin" SOURCE=".:git:3d665268e905" $ cat /Users/frankdelporte/.sdkman/candidates/java/8.0.332-zulu//release JAVA_VERSION="1.8.0_332" OS_NAME="Darwin" OS_VERSION="11.2" OS_ARCH="aarch64" SOURCE="git:f4b2b4c5882e" Getting More Information With ShowSettings In 2010, an experimental flag (indicated with the X) was added to OpenJDK to provide more configuration information: -XshowSettings. This flag can be called with different arguments, each producing another information output. The cleanest way to call this flag is by adding -version; otherwise, you will get the long Java manual output as no application code was found to be executed. Reading the System Properties By using the -XshowSettings:properties flag, a long list of various properties is shown. $ java -XshowSettings:properties -version Property settings: file.encoding = UTF-8 file.separator = / ftp.nonProxyHosts = local|*.local|169.254/16|*.169.254/16 http.nonProxyHosts = local|*.local|169.254/16|*.169.254/16 java.class.path = java.class.version = 63.0 java.home = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home java.io.tmpdir = /var/folders/np/6j1kls013kn2gpg_k6tz2lkr0000gn/T/ java.library.path = /Users/frankdelporte/Library/Java/Extensions /Library/Java/Extensions /Network/Library/Java/Extensions /System/Library/Java/Extensions /usr/lib/java . java.runtime.name = OpenJDK Runtime Environment java.runtime.version = 19+36 java.specification.name = Java Platform API Specification java.specification.vendor = Oracle Corporation java.specification.version = 19 java.vendor = Azul Systems, Inc. 
java.vendor.url = http://www.azul.com/ java.vendor.url.bug = http://www.azul.com/support/ java.vendor.version = Zulu19.28+81-CA java.version = 19 java.version.date = 2022-09-20 java.vm.compressedOopsMode = Zero based java.vm.info = mixed mode, sharing java.vm.name = OpenJDK 64-Bit Server VM java.vm.specification.name = Java Virtual Machine Specification java.vm.specification.vendor = Oracle Corporation java.vm.specification.version = 19 java.vm.vendor = Azul Systems, Inc. java.vm.version = 19+36 jdk.debug = release line.separator = \n native.encoding = UTF-8 os.arch = aarch64 os.name = Mac OS X os.version = 13.0.1 path.separator = : socksNonProxyHosts = local|*.local|169.254/16|*.169.254/16 stderr.encoding = UTF-8 stdout.encoding = UTF-8 sun.arch.data.model = 64 sun.boot.library.path = /Users/frankdelporte/.sdkman/candidates/java/19-zulu/zulu-19.jdk/Contents/Home/lib sun.cpu.endian = little sun.io.unicode.encoding = UnicodeBig sun.java.launcher = SUN_STANDARD sun.jnu.encoding = UTF-8 sun.management.compiler = HotSpot 64-Bit Tiered Compilers user.country = BE user.dir = /Users/frankdelporte user.home = /Users/frankdelporte user.language = en user.name = frankdelporte openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) If you ever faced the problem of an unsupported Java version 59 (are similar), you'll now also understand where this value is defined; it's right here in this list as java.class.version. It's an internal number used by Java to define the version. Java release 8 9 10 11 12 13 14 15 16 17 18 19 Class version 52 53 54 55 56 57 58 59 60 61 62 63 Reading the Locale Information In case you didn't know yet, I live in Belgium and use English as my computer language, as you can see when using the -XshowSettings:locale flag: $ java -XshowSettings:locale -version Locale settings: default locale = English (Belgium) default display locale = English (Belgium) default format locale = English (Belgium) available locales = , af, af_NA, af_ZA, af_ZA_#Latn, agq, agq_CM, agq_CM_#Latn, ak, ak_GH, ak_GH_#Latn, am, am_ET, am_ET_#Ethi, ar, ar_001, ar_AE, ar_BH, ar_DJ, ar_DZ, ar_EG, ar_EG_#Arab, ar_EH, ar_ER, ... zh_MO_#Hant, zh_SG, zh_SG_#Hans, zh_TW, zh_TW_#Hant, zh__#Hans, zh__#Hant, zu, zu_ZA, zu_ZA_#Latn openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Reading the VM Settings With the -XshowSettings:vm flag, some info is shown about the Java Virtual Machine. As you can see in the second example, the amount of maximum heap memory size can be defined with the -Xmx flag. $ java -XshowSettings:vm -version VM settings: Max. Heap Size (Estimated): 8.00G Using VM: OpenJDK 64-Bit Server VM openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) $ java -XshowSettings:vm -Xmx512M -version VM settings: Max. Heap Size: 512.00M Using VM: OpenJDK 64-Bit Server VM openjdk version "19" 2022-09-20 OpenJDK Runtime Environment Zulu19.28+81-CA (build 19+36) OpenJDK 64-Bit Server VM Zulu19.28+81-CA (build 19+36, mixed mode, sharing) Reading All at Once If you want all of the information above with one call, use the -XshowSettings:all flag. Conclusion Next to the java -version, we can also use java -XshowSettings:all -version to get more info about our Java environment.
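The same details are also available from inside a running application, which is handy when you need to log or assert the Java version programmatically rather than on the command line. Here is a small sketch using only standard APIs; the properties it prints correspond to the -XshowSettings:properties output shown above.

Java
public class JavaVersionInfo {
    public static void main(String[] args) {
        // The values printed by -XshowSettings:properties are exposed as system properties.
        System.out.println("java.version       = " + System.getProperty("java.version"));
        System.out.println("java.vendor        = " + System.getProperty("java.vendor"));
        System.out.println("java.vm.name       = " + System.getProperty("java.vm.name"));
        System.out.println("java.class.version = " + System.getProperty("java.class.version"));
        System.out.println("java.home          = " + System.getProperty("java.home"));

        // Since Java 9, Runtime.version() offers structured access to the version number.
        Runtime.Version version = Runtime.version();
        System.out.println("Feature release    = " + version.feature());
    }
}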
This is a shocker…I just switched laptops, and I thought I was downgrading from the “top of the line” M1 Max with 64 GB (14.1-inch version) to a “tiny” MacBook Air M2 with “only” 24gb of RAM. Turns out I was wrong. The new M2 seems to be noticeably faster for my use cases as a Java developer. I was at first shocked, but in retrospect, I guess this makes sense. I recently left my job at Lightrun. I usually buy my own laptops, as I don’t enjoy constantly switching devices when I’m at work or working on personal things. But since I worked at Lightrun for so long, I accepted their offer for a laptop. One year after I got the new laptop, I found myself leaving the company. So arguably, this should have been a big mistake. Turns out it wasn’t. I wanted to buy the same machine. I was very pleased with the M1 Max. It’s powerful, fast, light, and has a battery that lasts forever. It runs cool and looks cool. I placed an order with a local vendor who had the worst possible service. I ended up canceling that. Then I started looking around. I initially dismissed the MacBook Airs. I had a couple of MacBook Airs in the past, and they were good for some things. But I do a lot of video editing nowadays, and I also can’t stand their sharp edges. They are uncomfortable to hold in some situations. The new MacBook Airs finally increased the RAM. Not to 32 GB as I’d have wanted, but 24 GB is already a vast improvement over the minuscule 16 GB of older devices. They also come in black and cost much less than the equivalent pro. MacBook Airs are lighter and thinner. I’m very mobile, both because I travel and because I work everywhere. For me, the thin and light device is a tremendous advantage. You can check out the comparison tool on Apple's website. They do have two big disadvantages: No HDMI port— that sucks a bit. It was nice plugging in at conferences with no dongle. But it’s not something I do often, and AirPlay works for most other use cases. Only one external monitor — I don’t use external monitors when I work, so this wasn’t a problem for me. If you’re the type that needs two monitors for work, then this isn’t the laptop for you. Since both aren’t an issue for me and the other benefits are great. I saved some money and bought the Air. I expected to take a performance hit. Turns out I got a major performance boost! Migration I used Time Machine to back up my older Mac and restore it to the new Mac. In terms of installed software and settings, both devices should be identical. The stuff that is running on the old machine should be on the new machine as well. Including all the legacy that might be slowing it down. However, I wouldn’t consider my findings as scientific as this isn’t a clean environment. Everything is based on my use cases. Professional sites have better benchmarks for common use cases. I suggest referring to them for a more complete picture. However, for me, the machine is MUCH better. Probably most glaring is the IDE startup. I use IntelliJ/IDEA Ultimate for most of my day-to-day work. I just started writing a new book after completing my latest book (which you can preorder now), and for that purpose, I installed a fresh version of IntelliJ Community Edition. It doesn’t include all the plugins and setup in my typical install. It’s the perfect environment to check IDE startup time. Notice that I measured this with a stopwatch app, not ideal. I started the stopwatch with the icon click and stopped when the IDE fully loaded the Codename One project. 
MBP M1 Max 64 GB - 6.30 MBA M2 24 GB - 4.54 This is a sizeable gap in performance, and it’s consistent with these types of IO-bound operations. Running mvn package on the Codename One project for both showed slightly lower but still consistent improvements. I ran this multiple times to make sure: MBP M1 Max 64 GB - 20.211 MBA M2 24 GB - 18.346 These are not medians or averages, just the output of one specific execution. But I ran the build repeatedly, and the numbers were consistent with a rough 2-second advantage to the M2. Notice I used the ARM build of the JDK and not the x86 versions. As part of my work, I also create media and presentations. I work a lot in keynote and export content from it. The next obvious test is to export a small presentation I made to both PDF and a movie. In the PDF export, I made both exports for all the stages of the build. MBP M1 Max 64 GB - 2.8 MBA M2 24 GB - 2.13 Again this shows a healthy advantage to the M2 device. But here’s the twist, when exporting to a movie, the benchmark flipped completely, and the MBP wiped the floor with the MBA. MBP M1 Max 64 GB - 26.8 MBA M2 24 GB - 39.59 Explanation In retrospect, these numbers shouldn’t have surprised me. The M2 would be faster for these sorts of loads. IO would be faster. The only point where the M1 would gain an advantage would be if the 24 GB of the Air would be depleted. This isn’t likely for the current test, so the air wins. Where the Air loss is in GPU-bound work. I’m assuming the movie export code does all the encoding on the GPU, which is huge and powerful on the M1 Max. I hope this won't be a problem for my video editing work, but I guess I’ll manage with what I have. Even though the device is smaller by one inch only, the size difference is hard to get used to at this point. I worked on a MacBook Air in the past, so I’m sure this will pass as I get used to it. It’s a process. I’m thrilled with my decision, and the black device is such a refreshing feeling after all of those silver and gray Macs. The power brick is also much smaller, which is one of those details that matter so much to frequent travelers. Why Am I Using a Mac? This might be the obvious question. I don’t use an iPhone, so I might as well get a Linux laptop like a good hacker. I still develop things on Codename One; here, I occasionally need a Mac for iOS-related work. It’s not as often, but it happens. The second reason is that I’m pretty used to it by now. The desktop on Linux still feels not as productive to me. There is one reason I considered going back to Linux, and that’s docker. I love the M1/2 chips. They are fantastic. Unfortunately, many docker images are Intel only, and that’s pretty hard to work with when setting up anything sophisticated. The problem is solving itself as ARM machines gain traction. But we aren’t there yet. Finally Yes, I know. This article is shocking: a newer machine is faster than an older machine. But keep in mind that the M1 was top of the line in all regards, and the Air has half the performance cores. It's much thinner, fanless, and around 30% lighter. That's amazing over a single-generation update. Amazingly I think the M2 is powerful enough in a MacBook Air for most people. I think I would pick it even if the M1 Max was at the same price point. It’s better looking. It’s lighter. Most of the things that matter to me perform better on the Air. It’s small but not too small, and the screen is pretty great. I can live with all of those. It doesn’t have that weird MBA sharp edge older versions have. 
It’s a great machine. Hopefully, I’ll feel the same way when the honeymoon period is over, so if you’re reading this in 2023, feel free to comment/ping me; I might have additional insights. The one point I’m conflicted about is stickers. The black finish is so pretty. But I want stickers. I had such a hard time removing them from the M1 machine. It’s too soon…
An enumerated type (enum) is a handy data type that allows us to specify a list of constants to which an object field or database column can be set. The beauty of the enums is that we can enforce data integrity by providing the enum constants in a human-readable format. As a result, it’s unsurprising that this data type is natively supported in Java and PostgreSQL. However, the conversion between Java and PostgreSQL enums doesn’t work out of the box. The JDBC API doesn’t recognize enums as a distinct data type, leaving it up to the JDBC drivers to decide how to deal with the conversion. And, usually, the drivers do nothing about it — the chicken-and-egg problem. Many solutions help you map between Java and PostgreSQL enums, but most are ORM or JDBC-specific. This means that what is suggested for Spring Data will not work for Quarkus and vice versa. In this article, I will review a generic way of handling the Java and PostgreSQL enums conversion. This approach works for plain JDBC APIs and popular ORM frameworks such as Spring Data, Hibernate, Quarkus, and Micronaut. Moreover, it’s supported by databases built on PostgreSQL, including Amazon Aurora, Google AlloyDB, and YugabyteDB. Creating Java Entity Object and Enum Assume that we have a Java entity object for a pizza order: Java public class PizzaOrder { private Integer id; private OrderStatus status; private Timestamp orderTime; // getters and setters are omitted } The status field of the object is of an enumerated type defined as follows: Java public enum OrderStatus { Ordered, Baking, Delivering, YummyInMyTummy } The application sets the status to Ordered once we order a pizza online. The status changes to Baking as soon as the chef gets to our order. Once the pizza is freshly baked, it is picked up by someone and delivered to our door - the status is then updated to Delivering. In the end, the status is set to YummyInMyTummy meaning that we enjoyed the pizza (hopefully!) Creating Database Table and Enum To persist the pizza orders in PostgreSQL, let’s create the following table that is mapped to our PizzaOrder entity class: SQL CREATE TABLE pizza_order ( id int PRIMARY KEY, status order_status NOT NULL, order_time timestamp NOT NULL DEFAULT now() ); The table comes with a custom type named order_status. The type is an enum that is defined as follows: SQL CREATE TYPE order_status AS ENUM( 'Ordered', 'Baking', 'Delivering', 'YummyInMyTummy'); The type defines constants (statuses) similar to the Java counterpart. Hitting the Conversion Issue If we connect to PostgreSQL using psql (or another SQL tool) and execute the following INSERT statement, it will complete successfully: SQL insert into pizza_order (id, status, order_time) values (1, 'Ordered', now()); The statement nicely accepts the order status (the enum data type) in a text representation - Ordered. After seeing that, we may be tempted to send a Java enum value to PostgreSQL in the String format. 
If we use the JDBC API directly, the PreparedStatement can look as follows: Java PreparedStatement statement = conn .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)"); statement.setInt(1, 1); statement.setString(2, OrderStatus.Ordered.toString()); statement.setTimestamp(3, Timestamp.from(Instant.now())); statement.executeUpdate(); However, the statement will fail with the following exception: Java org.postgresql.util.PSQLException: ERROR: column "status" is of type order_status but expression is of type character varying Hint: You will need to rewrite or cast the expression. Position: 60 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2675) at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2365) at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:355) at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:490) at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:408) Even though PostgreSQL accepts the enum text representation when an INSERT/UPDATE statement is executed directly via a psql session, it doesn’t support the conversion between the varchar (passed by Java) and our enum type. One way to fix this for the plain JDBC API is by persisting the Java enum as an object of the java.sql.Types.OTHER type: Java PreparedStatement statement = conn .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)"); statement.setInt(1, 1); statement.setObject(2, OrderStatus.Ordered, java.sql.Types.OTHER); statement.setTimestamp(3, Timestamp.from(Instant.now())); statement.executeUpdate(); But, as I said earlier, this approach is not generic. While it works for the plain JDBC API, you need to look for another solution if you are on Spring Data, Quarkus, or another ORM. Casting Types at the Database Level The database provides a generic solution. PostgreSQL supports the cast operator that can perform a conversion between two data types automatically. So, in our case, all we need to do is to create the following operator: SQL CREATE CAST (varchar AS order_status) WITH INOUT AS IMPLICIT; The created operator will map between the varchar type (passed by the JDBC driver) and our database-level order_status enum type. The WITH INOUT AS IMPLICIT clause ensures that the cast will happen transparently and automatically for all the statements using the order_status type. Testing With Plain JDBC API After we create that cast operator in PostgreSQL, the earlier JDBC code snippet inserts an order with no issues: Java PreparedStatement statement = conn .prepareStatement("INSERT INTO pizza_order (id, status, order_time) VALUES(?,?,?)"); statement.setInt(1, 1); statement.setString(2, OrderStatus.Ordered.toString()); statement.setTimestamp(3, Timestamp.from(Instant.now())); statement.executeUpdate(); All we need is to pass the Java enum value as a String, and the driver will send it to PostgreSQL in the varchar representation that will automatically convert the varchar value to the order_status type. 
If you read the order back from the database, then you can easily reconstruct the Java-level enum from a String value: Java PreparedStatement statement = conn.prepareStatement("SELECT id, status, order_time " + "FROM pizza_order WHERE id = ?"); statement.setInt(1, 1); ResultSet resultSet = statement.executeQuery(); resultSet.next(); PizzaOrder order = new PizzaOrder(); order.setId(resultSet.getInt(1)); order.setStatus(OrderStatus.valueOf(resultSet.getString(2))); order.setOrderTime(resultSet.getTimestamp(3)); Testing With Spring Data Next, let’s validate the cast operator-based approach with Spring Data. Nowadays, you’re likely to use an ORM rather than the JDBC API directly. First, we need to label our PizzaOrder entity class with a few JPA and Hibernate annotations: Java @Entity public class PizzaOrder { @Id private Integer id; @Enumerated(EnumType.STRING) private OrderStatus status; @CreationTimestamp private Timestamp orderTime; // getters and setters are omitted } The @Enumerated(EnumType.STRING) instructs a JPA implementation (usually Hibernate) to pass the enum value as a String to the driver. Second, we create PizzaOrderRepository and save an entity object using the Spring Data API: Java // The repository interface public interface PizzaOrderRepository extends JpaRepository<PizzaOrder, Integer> { } // The service class @Service public class PizzaOrderService { @Autowired PizzaOrderRepository repo; @Transactional public void addNewOrder(Integer id) { PizzaOrder order = new PizzaOrder(); order.setId(id); order.setStatus(OrderStatus.Ordered); repo.save(order); } ... // Somewhere in the source code pizzaService.addNewOrder(1); } When the pizzaService.addNewOrder(1) method is called somewhere in our source code, the order will be created and persisted successfully to the database. The conversion between the Java and PostgreSQL enums will occur with no issues. Lastly, if we need to read the order back from the database, we can use the JpaRepository.findById(ID id) method, which recreates the Java enum from its String representation: Java PizzaOrder order = repo.findById(orderId).get(); System.out.println("Order status: " + order.getStatus()); Testing With Quarkus How about Quarkus, which might be your #1 ORM? There is no significant difference from Spring Data as long as Quarkus favours Hibernate as a JPA implementation. First, we annotate our PizzaOrder entity class with JPA and Hibernate annotations: Java @Entity(name = "pizza_order") public class PizzaOrder { @Id private Integer id; @Enumerated(EnumType.STRING) private OrderStatus status; @CreationTimestamp @Column(name = "order_time") private Timestamp orderTime; // getters and setters are omitted } Second, we introduce PizzaOrderService that uses the EntityManager instance for database requests: Java @ApplicationScoped public class PizzaOrderService { @Inject EntityManager entityManager; @Transactional public void addNewOrder(Integer id) { PizzaOrder order = new PizzaOrder(); order.setId(id); order.setStatus(OrderStatus.Ordered); entityManager.persist(order); } ... // Somewhere in the source code pizzaService.addNewOrder(1); When we call the pizzaService.addNewOrder(1) somewhere in our application logic, Quarkus will persist the order successfully, and PostgreSQL will take care of the Java and PostgreSQL enums conversion. 
Finally, to read the order back from the database, we can use the following method of the EntityManager that maps the data from the result set to the PizzaOrder entity class (including the enum field): Java PizzaOrder order = entityManager.find(PizzaOrder.class, 1); System.out.println("Order status: " + order.getStatus()); Testing With Micronaut Alright, alright, how about Micronaut? I love this framework, and you might favour it as well. The database-side cast operator is a perfect solution for Micronaut as well. To make things a little different, we won’t use Hibernate for Micronaut. Instead, we’ll rely on Micronaut’s own capabilities by using the micronaut-data-jdbc module: XML <dependency> <groupId>io.micronaut.data</groupId> <artifactId>micronaut-data-jdbc</artifactId> </dependency> // other dependencies First, let’s annotate the PizzaOrder entity: Java @MappedEntity public class PizzaOrder { @Id private Integer id; @Enumerated(EnumType.STRING) private OrderStatus status; private Timestamp orderTime; // getters and setters are omitted } Next, define PizzaRepository: Java @JdbcRepository(dialect = Dialect.POSTGRES) public interface PizzaRepository extends CrudRepository<PizzaOrder, Integer> { } And, then store a pizza order in the database by invoking the following code snippet somewhere in the application logic: Java PizzaOrder order = new PizzaOrder(); order.setId(1); order.setStatus(OrderStatus.Ordered); order.setOrderTime(Timestamp.from(Instant.now())); repository.save(order); As with Spring Data and Quarkus, Micronaut persists the object to PostgreSQL with no issues letting the database handle the conversion between the Java and PostgreSQL enum types. Finally, whenever we need to read the order back from the database, we can use the following JPA API: Java PizzaOrder order = repository.findById(id).get(); System.out.println("Order status: " + order.getStatus()); The findById(ID id) method retrieves the record from the database and recreates the PizzaOrder entity, including the PizzaOrder.status field of the enum type. Wrapping Up Nowadays, it’s highly likely that you will use Java enums in your application logic and as a result will need to persist them to a PostgreSQL database. You can use an ORM-specific solution for the conversion between Java and PostgreSQL enums, or you can take advantage of the generic approach based on the cast operator of PostgreSQL. The cast operator-based approach works for all ORMs, including Spring Data, Hibernate, Quarkus, and Micronaut, as well as popular PostgreSQL-compliant databases like Amazon Aurora, Google AlloyDB, and YugabyteDB.
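If you prefer to keep the database preparation close to the application code (for example, in a one-time bootstrap or migration step), the cast operator shown earlier can also be created through plain JDBC. This is just a sketch: it assumes you already have an open java.sql.Connection, that the order_status type exists, and it reuses the exact CREATE CAST statement from this article.

Java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

public class EnumCastInitializer {
    // Creates the implicit varchar -> order_status cast used throughout this article.
    // CREATE CAST fails if the cast already exists, so the exception is only logged here.
    public static void createOrderStatusCast(Connection connection) {
        try (Statement statement = connection.createStatement()) {
            statement.execute("CREATE CAST (varchar AS order_status) WITH INOUT AS IMPLICIT");
        } catch (SQLException e) {
            System.err.println("Cast not created (it may already exist): " + e.getMessage());
        }
    }
}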
Introduction In this article, we’re going to look at some pain points while developing Java applications on top of Kubernetes. We’re going to look at newly added functionality in Eclipse JKube’s Kubernetes Maven Plugin that allows your application running on your local machine to get exposed in Kubernetes Cluster. If you haven’t heard about Eclipse JKube or Kubernetes Maven Plugin, I’d suggest you read the following DZone articles first: DZone: Deploy Maven Apps to Kubernetes With JKube Kubernetes Maven Plugin DZone: Containerize Gradle Apps and Deploy to Kubernetes With JKube Kubernetes Gradle Plugin Target Audience: This blog post targets Java developers who are working with Kubernetes and are familiar with containerized application development. We’re assuming that the reader has experience with Docker and Kubernetes. Eclipse JKube’s Kubernetes Remote Development functionality is suitable for Java developers working on Java applications communicating with several micro-services in Kubernetes, which is difficult to replicate on your local machine. Current Solutions Docker Compose: You can provide your own YAML file to configure your application services and start all services by your provided YAML configuration. This is only limited to Docker environment. Sometimes these services can be impossible to start due to resource constraints. Also, we may not be allowed to duplicate sensitive data locally. Dev Services: Some popular frameworks also support the automatic provisioning of dependent services in development/testing environments. Developers only need to worry about enabling this feature, and the framework takes care of starting the service and wiring it with your application. This is also limited to Docker environment. Build and Deploy Tooling: Use Kubernetes-related tooling to deploy all dependent services and then deploy your application to Kubernetes. Not smooth as compared to previous alternatives that are limited to Docker Building and deploying applications on every small change leads to slower development iterations What Is Eclipse JKube Kubernetes Remote Development Our team at Eclipse Cloud Tooling is focused on creating tools that ease developer activity and development workflow across distributed services. While working and testing on Kubernetes Maven Plugin, we noticed that repeatedly building and deploying applications to Kubernetes while developing locally isn’t the most effective way of working. In v1.10.1 of Kubernetes Maven Plugin, we added a new goal k8s:remote-dev . This goal tries to ease java developer workflow across distributed services via: Consuming remote services that are running inside the Kubernetes cluster Live application coding while interacting with other services running in Kubernetes Cluster Exposing applications running locally or by connecting to remote services Why Kubernetes Remote Development? Let’s consider a scenario where we’re writing a joke microservice that tries to fetch joke strings from other microservices. Here is a diagram for you to get a better understanding: Figure 1: Simple Joke application using two existing services Custom Joke Service is our main application which has one endpoint /random-joke. It depends on two other microservices ChuckNorris and Jokes via /chuck-norris and /joke endpoints, respectively. The user requests a joke using /random-joke endpoint, and our application fetches a joke string from one of the two microservices randomly. 
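To make the scenario concrete, the /random-joke endpoint could look roughly like the sketch below. The class name, service addresses, and paths are hypothetical placeholders standing in for the ChuckNorris and Jokes services; they are not taken from the JKube samples.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ThreadLocalRandom;

public class RandomJokeService {
    private final HttpClient httpClient = HttpClient.newHttpClient();
    // Hypothetical in-cluster addresses of the two upstream joke services.
    private final String[] jokeEndpoints = {
            "http://chuck-norris:8080/chuck-norris",
            "http://jokes:8080/joke"
    };

    // Picks one of the two services at random and returns the joke string it serves.
    public String randomJoke() throws Exception {
        String url = jokeEndpoints[ThreadLocalRandom.current().nextInt(jokeEndpoints.length)];
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        HttpResponse<String> response = httpClient.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}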
In order to develop and test our application, we need access to these dependent ChuckNorris and Jokes services, respectively. Let’s see what the developer’s workflow would look like: While developing and testing Custom Joke Microservice locally, the developer has to set up dependent microservices locally again and again. In order to verify the application is working properly in Kubernetes. The developer has to build, package and deploy the Custom Joke application to Kubernetes in every development iteration. The dependent Services (ChuckNorris and Joke) might be quite heavyweight services and might have some dependent services of their own. It might not be straightforward to set up these locally in the developer’s environment. Exposing Remote Kubernetes Services Locally Let’s assume you have two applications already running in Kubernetes Cluster on which your current application is dependent: $ kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service1 NodePort 10.101.224.227 <none> 8080:31878/TCP 113s service2 NodePort 10.101.224.227 <none> 8080:31879/TCP 113s Let us expose these remote services running in Kubernetes Cluster to our local machine. Here is a diagram for you to better understand: Figure 2: JKube's remote development simplifying remote development In order to do that, we need to provide XML configuration to our plugin for exposing these services: XML <plugin> <groupId>org.eclipse.jkube</groupId> <artifactId>kubernetes-maven-plugin</artifactId> <version>${jkube.version}</version> <configuration> <remoteDevelopment> <remoteServices> <remoteService> <hostname>service1</hostname> <!-- Name of Service --> <port>8080</port> <!-- Service port --> <localPort>8081</localPort> <!-- Local Port where to expose --> </remoteService> <remoteService> <hostname>service2</hostname> <!-- Name of Service --> <port>8080</port> <!-- Service Port --> <localPort>8082</localPort> <!-- Local Port where to expose --> </remoteService> </remoteServices> </remoteDevelopment> </configuration> </plugin> The above configuration is doing these two things: Expose Kubernetes service named service1 on port 8081 on your local machine Expose Kubernetes service named service2 on port 8082 on your local machine Run Kubernetes Remote Development goal: Shell $ mvn k8s:remote-dev [INFO] Scanning for projects... [INFO] [INFO] -----------< org.eclipse.jkube.demos:random-jokes-generator >----------- [INFO] Building random-jokes-generator 1.0.0-SNAPSHOT [INFO] --------------------------------[ jar ]--------------------------------- [INFO] [INFO] --- kubernetes-maven-plugin:1.10.1:remote-dev (default-cli) @ random-jokes-generator --- [INFO] k8s: Waiting for JKube remote development Pod [jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e] to be ready... [INFO] k8s: JKube remote development Pod [jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e] is ready [INFO] k8s: Opening remote development connection to Kubernetes: jkube-remote-dev-9cafc1e1-054b-4fab-8f4e-b4345056478e:54252 [INFO] k8s: Kubernetes Service service1:8080 is now available at local port 8081 [INFO] k8s: Kubernetes Service service2:8080 is now available at local port 8082 Try accessing services available locally on ports: Shell $ curl localhost:8081/ Chuck Norris's OSI network model has only one layer - Physical. $ curl localhost:8082/ Why do Java programmers have to wear glasses? Because they don't C#. As you can see, You are able to access Kubernetes services service1 and service2 locally on ports 8081 and 8082, respectively. 
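With the port-forwarding in place, the application you are developing locally can call those services exactly as the curl commands above do. A minimal sketch follows, where ports 8081 and 8082 correspond to the localPort values from the plugin configuration shown earlier.

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LocalRemoteDevClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // service1 and service2 are the Kubernetes services exposed locally by k8s:remote-dev.
        for (String url : new String[]{"http://localhost:8081/", "http://localhost:8082/"}) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(url + " -> " + response.body());
        }
    }
}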
Conclusion In this article, you learned about Eclipse JKube’s Kubernetes Maven Plugin’s remote development goal and how you can expose your local applications to the Kubernetes cluster and vice versa. In case you’re interested in knowing more about Eclipse JKube, you can check these links: Documentation Github Issue Tracker StackOverflow YouTube Channel Twitter Gitter Chat
Let's continue building our Java-based, command-line text editor that we started in Part 1 and Part 2. Here in Part 3, we will cover the following: How to implement page-up and page-down functionality How to make the end key work properly, including cursor snapping How to make our text editor work on all operating systems, including macOS and Windows - not just Linux You're in for a ride! What’s in the Video In the previous episode, we got vertical scrolling with the arrow keys working. Unfortunately, when you press page up or page down, nothing happens - and we need to change that! We'll use a tiny trick to simulate the page up/down functionality, mapping it to pressing the arrow up/down for the number of rows our screen has. It serves as a good initial implementation, though there are a couple of edge cases we need to iron out. Once we have the page up and down working, it's time to care about horizontal scrolling. At the moment, our text viewer renders lines overflowing the screen, leading to heavy flickering whenever we vertically move our cursor. Ideally, we only want to render as much text as we have columns on the screen - and then we want to move the screen's contents horizontally, whenever we press the left or right keys at the beginning or end of the screen. To implement horizontal scrolling we can take most of the code for vertical scrolling, copy and paste it, and just replace a couple of key variables - done! After horizontal scrolling, let's take care of a couple of minor editing issues: first of all, the end key. It currently makes the user jump to the end of the screen. Ideally, we'd like the end key to only jump to the end of the current line. With a couple of small changes to our moveCursor() function, we can implement that behavior. This opens up another problem: when we are at the end of a line and then move vertically upwards or downwards, we also want to automatically snap to the end of the new line, not just end up somewhere in the middle. So, we'll need to fix our cursor-snapping implementation. In between, I'll leave a couple of notes for you regarding cursor line wrapping. We don't have enough time to implement it in this episode, but it would serve as a great exercise, for you, the watcher, to implement. Last but not least, we'll need to fix a couple of issues for our macOS and Windows platform support. The issue with macOS is that while it uses the same OS APIs as Linux, it uses different values for the OS calls. Hence, we'll need to invent an abstraction/delegation layer that detects if the current OS is macOS or Linux, and then use the corresponding, OS-specific classes. Windows uses a completely different API to put terminals into raw mode or get the current terminal size, and we'll have to dig deep into Microsoft's API documentation to find out which Windows methods we'll need to implement on our JNA side. That's it for today! See you in the next episode, where we'll implement searching across your text file.
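For readers following along with the code, the page-up/page-down trick described above boils down to repeating the single-row cursor movement once per visible row. The sketch below is a simplified stand-in for the editor's actual key handling; the key constants, the screenRows field, and the moveCursor() signature are assumptions for illustration, not the exact names used in the project.

Java
public class PageScrollSketch {

    // Placeholder key codes; the real editor maps raw escape sequences to values like these.
    static final int ARROW_UP = 1000;
    static final int ARROW_DOWN = 1001;
    static final int PAGE_UP = 1002;
    static final int PAGE_DOWN = 1003;

    static int screenRows = 24; // number of visible rows reported by the terminal

    // Simulates page up/down by "pressing" arrow up/down once per visible row.
    static void handlePageKey(int key) {
        int direction = (key == PAGE_UP) ? ARROW_UP : ARROW_DOWN;
        for (int i = 0; i < screenRows; i++) {
            moveCursor(direction);
        }
    }

    static void moveCursor(int key) {
        // In the real editor this adjusts the cursor position and scroll offsets,
        // clamping at the first/last line and snapping to the end of shorter lines.
    }
}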
If you want to run your Java microservices on a public cloud infrastructure, you should take advantage of multiple cloud regions. There are several reasons why this is a good idea. First, cloud availability zones and regions fail regularly due to hardware issues, bugs introduced after a cloud service upgrade, or simple human error. One of the most well-known S3 outages happened when an AWS employee messed with an operational command! If a cloud region fails, so do your microservices in that region. But if you run microservice instances across multiple cloud regions, you remain up and running even if an entire US East region is melting. Second, you may choose to deploy microservices in the US East, but the application gets traction across the Atlantic in Europe. The roundtrip latency for users from Europe to your application instances in the US East will be around 100ms. Compare this to the 5ms roundtrip latency for user traffic originating from the US East (near the data centers running microservices), and don't be surprised when European users say your app is slow. You shouldn't hear this negative feedback if microservice instances are deployed in both the US East and Europe West regions. Finally, suppose a Java microservice serves a user request from Europe but requests data from a database instance in the USA. In that case, you might fall foul of data residency requirements (if the requested data is classified as personal by GDPR). However, if the microservice instance runs in Europe and gets the personal data from a database instance in one of the European cloud regions, you won't have the same problems with regulators. This was a lengthy introduction to the article's main topic, but I wanted you to see a few benefits of running Java microservices in multiple distant cloud locations. Now, let's move on to the main topic and see how to develop and deploy multi-region microservices with Spring Cloud. High-Level Concept Let’s take a geo-distributed Java messenger as an example to form a high-level understanding of how microservices and Spring Cloud function in a multi-region environment. The application (comprised of multiple microservices) runs across multiple distant regions: US West, US Central, US East, Europe West, and Asia South. All application instances are stateless. Spring Cloud components operate in the same regions where the application instances are located. The application uses Spring Cloud Config Server for configuration settings distribution and Spring Cloud Discovery Server for smooth and fault-tolerant inter-service communication. YugabyteDB is selected as a distributed database that can easily function across distant locations. Plus, since it’s built on the PostgreSQL source code, it naturally integrates with Spring Data and other components of the Spring ecosystem. I’m not going to review YugabyteDB multi-region deployment options in this article. Check out this article if you’re curious about those options and how to select the best one for this geo-distributed Java messenger. The user traffic gets to the microservice instances via a Global External Cloud Load Balancer. In short, the load balancer comes with a single IP address that can be accessed from any point on the planet. That IP address (or a DNS name that translates to the address) is given to your web or mobile front end, which uses the IP to connect to the application backend. The load balancer forwards user requests to the nearest application instance automatically. 
I’ll demonstrate this cloud component in greater detail below. Target Architecture A target architecture of the multi-region Java messenger looks like this: The whole solution runs on the Google Cloud Platform. You might prefer another cloud provider, so feel free to go with it. I usually default to Google for its developer experience, abundant and reasonably priced infrastructure, fast and stable network, and other goodies I’ll be referring to throughout the article. The microservice instances can be deployed in as many cloud regions as necessary. In the picture above, there are two random regions: Region A and Region B. Microservice instances can run in several availability zones of a region (Zone A and B of Region A) or within a single zone (Zone A of Region B). It’s also reasonable to have a single instance of the Spring Discovery and Config servers per region, but I purposefully run an instance of each server per availability zone to bring the latency to a minimum. Who decides which microservice instance will serve a user request? Well, the Global External Load Balancer is the decision-maker! Suppose a user pulls up her phone, opens the Java messenger, and sends a message. The request with the message will go to the load balancer, and it might forward it this way: Region A is the closest to the user, and it’s healthy at the time of the request (no outages). The load balancer selects this region based on those conditions. In that region, microservice instances are available in both Zone A and B. So, the load balancer can pick any zone if both are live and healthy. Let’s suppose that the request went to Zone B. I’ll explain what each microservice is responsible for in the next section. As of now, all you should know is that the Messenger microservice stores all application data (messages, channels, user profiles, etc.) in a multi-region YugabyteDB deployment. The Attachments microservice uses a globally distributed Google Cloud Storage for user pictures. Microservices and Spring Cloud Let’s talk more about microservices and how they utilize Spring Cloud. The Messenger microservice implements the key functionality that every messenger app must possess—the ability to send messages across channels and workspaces. The Attachments microservice uploads pictures and other files. You can check their source code in the geo-messenger’s repository. Spring Cloud Config Server Both microservices are built on Spring Boot. When they start, they retrieve configuration settings from the Spring Cloud Config Server, which is an excellent option if you need to externalize the config files in a distributed environment. The config server can host and pull your configuration from various backends, including a Git repository, Vault, and a JDBC-compliant database. In the case of the Java geo-messenger, the Git option is used, and the following line from the application.properties file of both microservices requests Spring Boot to load the settings from the Config Server: Properties spring.config.import=configserver:http://${CONFIG_SERVER_HOST}:${CONFIG_SERVER_PORT} Spring Cloud Discovery Server Once the Messenger and Attachments microservices are booted, they register with their zone-local instance of the Spring Cloud Discovery Server (which belongs to the Spring Cloud Netflix component). 
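For reference, such a zone-local Discovery Server is mostly boilerplate. The following is a generic, minimal sketch of a Eureka-based Discovery Server (not code from the geo-messenger repository; class name and settings are illustrative assumptions):
Java
// Minimal Eureka Discovery Server sketch; requires spring-cloud-starter-netflix-eureka-server
// on the classpath. Names and settings are illustrative.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class DiscoveryServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(DiscoveryServerApplication.class, args);
    }
}
With typical standalone settings such as eureka.client.register-with-eureka=false and eureka.client.fetch-registry=false in its application.properties, the server only maintains the registry for the microservice instances in its own zone.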
The location of a Discovery Server instance is defined in the following configuration setting that is transferred from the Config Server instance: Properties eureka.client.serviceUrl.defaultZone=http://${DISCOVERY_SERVER_HOST}:${DISCOVERY_SERVER_PORT}/eureka You can also open the HTTP address in the browser to confirm the services have successfully registered with the Discovery Server: The microservices register with the server using the name you pass via the spring.application.name setting of the application.properties file. As the above picture shows, I’ve chosen the following names: spring.application.name=messenger for the Messenger microservice spring.application.name=attachments for the Attachments service The microservice instances use those names to locate and send requests to each other via the Discovery Server. For example, when a user wants to upload a picture in a discussion channel, the request goes to the Messenger service first. But then, the Messenger delegates this task to the Attachments microservice with the help of the Discovery Server. First, the Messenger service gets an instance of the Attachments counterpart: Java List<ServiceInstance> serviceInstances = discoveryClient.getInstances("ATTACHMENTS"); ServiceInstance instance = null; if (!serviceInstances.isEmpty()) { instance = serviceInstances .get(ThreadLocalRandom.current().nextInt(0, serviceInstances.size())); } System.out.printf("Connected to service %s with URI %s\n", instance.getInstanceId(), instance.getUri()); Next, the Messenger microservice creates an HTTP client using the Attachments’ instance URI and sends a picture via an InputStream: Java HttpClient httpClient = HttpClient.newBuilder().build(); HttpRequest request = HttpRequest.newBuilder() .uri(URI.create(instance.getUri() + "/upload?fileName=" + fileName)) .header("Content-Type", mimeType) .POST(HttpRequest.BodyPublishers.ofInputStream(new Supplier<InputStream>() { @Override public InputStream get() { return inputStream; } })).build(); The Attachments service receives the request via a REST endpoint and eventually stores the picture in Google Cloud Storage, returning a picture URL to the Messenger microservice: Java public Optional<String> storeFile(String filePath, String fileName, String contentType) { if (client == null) { initClient(); } String objectName = generateUniqueObjectName(fileName); BlobId blobId = BlobId.of(bucketName, objectName); BlobInfo blobInfo = BlobInfo.newBuilder(blobId).build(); try { client.create(blobInfo, Files.readAllBytes(Paths.get(filePath))); } catch (IOException e) { System.err.println("Failed to load the file:" + fileName); e.printStackTrace(); return Optional.empty(); } System.out.printf( "File %s uploaded to bucket %s as %s %n", filePath, bucketName, objectName); String objectFullAddress = "http://storage.googleapis.com/" + bucketName + "/" + objectName; System.out.println("Picture public address: " + objectFullAddress); return Optional.of(objectFullAddress); } If you’d like to explore a complete implementation of the microservices and how they communicate via the Discovery Server, visit the GitHub repo, linked earlier in this article. Deploying on Google Cloud Platform Now, let’s deploy the Java geo-messenger on GCP across three geographies and five cloud regions - North America ('us-west2,' 'us-central1,' 'us-east4'), Europe ('europe-west3') and Asia ('asia-east1'). Follow these deployment steps: Create a Google project. Create a custom premium network. Configure Google Cloud Storage. Create Instance Templates for VMs. 
Start VMs with application instances. Configure Global External Load Balancer. I’ll skip the detailed instructions for the steps above. You can find them here. Instead, let me use the illustration below to clarify why the premium Google network was selected in step #2: Suppose an application instance is deployed in the USA on GCP, and the user connects to the application from India. There are slow and fast routes to the app from the user’s location. The slow route is taken if you select the Standard Network for your deployment. In this case, the user request travels over the public Internet, entering and exiting the networks of many providers before getting to the USA. Eventually, in the USA, the request gets to Google’s PoP (Point of Presence) near the application instance, enters the Google network, and gets to the application. The fast route is selected if your deployment uses the Premium Network. In this case, the user request enters the Google Network at the PoP closest to the user and never leaves it. That PoP is in India, and the request will speed to the application instance in the USA via a fast and stable connection. Plus, the Cloud External Load Balancer requires the premium tier. Otherwise, you won’t be able to intercept user requests at the nearest PoP and forward them to the nearby application instances. Testing Fault Tolerance Once the microservices are deployed across continents, you can witness how the Cloud Load Balancer functions at normal times and during outages. Open the IP address used by the load balancer in your browser and send a few messages with photos in one of the discussion channels: Which instance of the Messenger and Attachments microservices served your last requests? Well, it depends on where you are in the world. In my case, the instances from the US East (ig-us-east) serve my traffic: What would happen with the application if the US East region became unavailable, bringing down all microservices in that location? Not a problem for my multi-region deployment. The load balancer will detect issues in the US East and forward my traffic to the next closest location. In this case, the traffic is forwarded to Europe since I live on the US East Coast near the Atlantic Ocean: To emulate the US East region outage, I connected to the VM in that region and shut down all of the microservices. The load balancer detected that the microservices no longer responded in that region and started forwarding my traffic to a European data center. Enjoy the fault tolerance out of the box! Testing Performance Apart from fault tolerance, if you deploy Java microservices across multiple cloud regions, your application can serve user requests at low latency regardless of their location. To make this happen, first, you need to deploy the microservice instances in the cloud locations where most of your users live and configure the Global External Load Balancer that can do routing for you. This is what I discussed in "Automating Java Application Deployment Across Multiple Cloud Regions." Second, you need to arrange your data properly in those locations. Your database needs to function across multiple regions, the same as microservice instances. Otherwise, the latency between microservices and the database will be high and overall performance will be poor. In the discussed architecture, I used YugabyteDB as it is a distributed SQL database that can be deployed across multiple cloud regions. 
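On the data side, pointing a microservice at a region-local YugabyteDB node is, in the simplest case, just a matter of standard PostgreSQL-style datasource settings, since YugabyteDB's YSQL API is PostgreSQL-compatible. A hypothetical configuration (host, database, and credentials are placeholders, not taken from the geo-messenger repository) might look like this:
Properties
# Hypothetical Spring Boot datasource settings for a region-local YugabyteDB node.
# YSQL listens on port 5433 by default and understands the PostgreSQL wire protocol.
spring.datasource.url=jdbc:postgresql://<region-local-yugabytedb-host>:5433/yugabyte
spring.datasource.username=yugabyte
spring.datasource.password=<password>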
The article, "Geo-Distributed Microservices and Their Database: Fighting the High Latency" shows how latency and performance improve if YugabyteDB stores data close to your microservice instances. Think of that article as the continuation of this story, but with a focus on database deployment. As a spoiler, I improved latency from 450ms to 5ms for users who used the Java messenger from South Asia. Wrapping Up If you develop Java applications for public cloud environments, you should utilize the global cloud infrastructure by deploying application instances across multiple regions. This will make your solution more resilient, performant, and compliant with data regulatory requirements. It’s important to remember that it’s not that difficult to create microservices that function and coordinate across distant cloud locations. The Spring ecosystem provides you with the Spring Cloud framework, and public cloud providers like Google offer the infrastructure and services needed to make things simple.
Jakarta EE 10 is probably the most important event of this year in the Java world. Since this fall, software vendors providing Jakarta EE-compliant platforms have been working hard to validate their respective implementations against the TCK (Technology Compatibility Kit) supplied by the Eclipse Foundation. Payara, no less than bigger companies like Oracle, Red Hat, or IBM, isn't being left behind and, as of last September, announced the availability of the Payara 6 Platform, available in three versions: Server, Micro, and Cloud. As an implementation of Jakarta EE 10 Web, Core, and Micro Profile, Payara Server 6 is itself offered in two editions: Community and Enterprise. But how does this impact Java developers? What does it mean in terms of application development and portability? Is it easier or more difficult to write and deploy code compliant with the new specifications than it was with Release 9 or 8 of the Jakarta EE drafts? Well, it depends. While any new Jakarta EE release aims at simplifying the whole API (Application Programming Interface) set and at facilitating developers' work, the fact that, out of a total of 20 specifications, 16 have been updated and a new one has been added shows how dynamic the communities and the working groups involved in this process are. This isn't without some difficulties when trying to transition to the newest releases with minimal impact. This is especially true when it comes to combining, for example, JAX-RS 4.0 and its implementation by Jersey 3.1.0 with JSON-B 3.0 and its Yasson provider by Eclipse, when experiencing a NoSuchMethodException due to an unfortunate combination of versions, or when noticing that a transitive Maven dependency, pulled in by a not-yet-updated library like REST Assured, still uses the old javax namespace. In order to avoid all these troubles, as marginal as they may be, one of the most practical solutions is to use Maven archetypes. A Maven archetype is a set of templates used to generate a Java project skeleton. They use Velocity placeholders which, at generation time, are replaced by actual values that make sense in the current context. Hence, software vendors, different OSS communities and work groups, or even individuals provide such Maven archetypes. The Apache community, for example, provides several hundred such Maven archetypes, and one may find one for almost any type of Java project. The advantage of using them is that developers can generate a basic and clean skeleton of their Java project, on which they can build while avoiding some minor but painful annoyances. Jakarta EE 10 is so recent that most of its implementations are still in beta testing and, consequently, the Maven archetypes dedicated to Java projects using this release aren't yet available. In this blog post, I'm demonstrating such an archetype that generates a Jakarta EE 10 web application skeleton and its associated artifacts, to be deployed on a Payara 6 server. The code can be found here. 
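For readers who haven't used archetypes before, generating a project from an existing archetype is a one-liner. As an illustration, the well-known Apache quickstart archetype can be consumed like this (the archetype coordinates are the standard ones; the groupId and artifactId of the generated project are placeholders):
Shell
$ mvn archetype:generate \
    -DarchetypeGroupId=org.apache.maven.archetypes \
    -DarchetypeArtifactId=maven-archetype-quickstart \
    -DarchetypeVersion=1.4 \
    -DgroupId=com.example \
    -DartifactId=demo \
    -DinteractiveMode=false
The archetype presented in this post is used in exactly the same way, just with different coordinates, as shown later in the article.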
The figure below shows the structure of our Maven archetype: A Maven archetype is a Maven project like any other, and, as such, it is driven by a pom.xml file, whose most essential part is reproduced below: XML <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>fr.simplex-software.archetypes</groupId> <artifactId>jakartaee10-basic-archetype</artifactId> <version>1.0-SNAPSHOT</version> <name>Basic Java EE 10 project archetype</name> ... <packaging>maven-archetype</packaging> <properties> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> </properties> <build> <extensions> <extension> <groupId>org.apache.maven.archetype</groupId> <artifactId>archetype-packaging</artifactId> <version>3.1.1</version> </extension> </extensions> </build> </project> The only notable thing that our pom.xml file should contain is the declaration of the archetype-packaging Maven extension that will be used to generate our Java project skeleton. To build the archetype and install it in the local Maven repository, proceed as follows: Shell $ git clone https://github.com/nicolasduminil/jakartaee10-basic-archetype.git $ cd jakartaee10-basic-archetype $ mvn clean install [INFO] Scanning for projects... [INFO] [INFO] -----< fr.simplex-software.archetypes:jakartaee10-basic-archetype >----- [INFO] Building Basic Java EE 10 project archetype 1.0-SNAPSHOT [INFO] --------------------------[ maven-archetype ]--------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ jakartaee10-basic-archetype --- [INFO] [INFO] --- maven-resources-plugin:3.3.0:resources (default-resources) @ jakartaee10-basic-archetype --- [INFO] Copying 11 resources [INFO] [INFO] --- maven-resources-plugin:3.3.0:testResources (default-testResources) @ jakartaee10-basic-archetype --- [INFO] skip non existing resourceDirectory /home/nicolas/jakartaee10-basic-archetype/src/test/resources [INFO] [INFO] --- maven-archetype-plugin:3.2.1:jar (default-jar) @ jakartaee10-basic-archetype --- [INFO] Building archetype jar: /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar [INFO] Building jar: /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar [INFO] [INFO] --- maven-archetype-plugin:3.2.1:integration-test (default-integration-test) @ jakartaee10-basic-archetype --- [WARNING] No Archetype IT projects: root 'projects' directory not found. 
[INFO] [INFO] --- maven-install-plugin:3.1.0:install (default-install) @ jakartaee10-basic-archetype --- [INFO] Installing /home/nicolas/jakartaee10-basic-archetype/pom.xml to /home/nicolas/.m2/repository/fr/simplex-software/archetypes/jakartaee10-basic-archetype/1.0-SNAPSHOT/jakartaee10-basic-archetype-1.0-SNAPSHOT.pom [INFO] Installing /home/nicolas/jakartaee10-basic-archetype/target/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar to /home/nicolas/.m2/repository/fr/simplex-software/archetypes/jakartaee10-basic-archetype/1.0-SNAPSHOT/jakartaee10-basic-archetype-1.0-SNAPSHOT.jar [INFO] [INFO] --- maven-archetype-plugin:3.2.1:update-local-catalog (default-update-local-catalog) @ jakartaee10-basic-archetype --- [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.151 s [INFO] Finished at: 2022-12-02T13:32:35+01:00 [INFO] ------------------------------------------------------------------------ Here we first clone the Git repository containing our archetype and then install it in our local Maven repository so that we can use it to generate projects. As explained earlier, our archetype is a set of template files located in the directory src/main/resources/archetype-resources. These files use the Velocity notation to express placeholders that will be processed and replaced during the generation process. For example, look at the file MyResource.java, which exposes a simple REST API: Java package $package; import jakarta.ws.rs.*; import jakarta.ws.rs.core.*; import jakarta.inject.*; import org.eclipse.microprofile.config.inject.*; @Path("myresource") public class MyResource { @Inject @ConfigProperty(name = "message") private String message; @GET @Produces(MediaType.TEXT_PLAIN) public String getIt() { return message; } } Here, the placeholder $package will be replaced by the actual Java package name of the generated class. The full set of resources that will be included in the generated project is described by the file archetype-metadata.xml located in src/main/resources/META-INF/maven. 
XML <archetype-descriptor xsi:schemaLocation="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0 http://maven.apache.org/xsd/archetype-descriptor-1.0.0.xsd" xmlns="http://maven.apache.org/plugins/maven-archetype-plugin/archetype-descriptor/1.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" name="jakarta-ee-10-webapp"> <fileSets> <fileSet filtered="true" packaged="false" encoding="UTF-8"> <directory>src/main/java</directory> <includes> <include>**/*.java</include> </includes> </fileSet> <fileSet filtered="true" packaged="false" encoding="UTF-8"> <directory>src/test/java</directory> <includes> <include>**/*.java</include> </includes> </fileSet> <fileSet filtered="true" packaged="false" encoding="UTF-8"> <directory>src/main/resources</directory> <includes> <include>**/*.xml</include> <include>**/*.properties</include> </includes> </fileSet> <fileSet filtered="false" packaged="false" encoding="UTF-8"> <directory>src/main/webapp</directory> <includes> <include>index.jsp</include> </includes> </fileSet> <fileSet filtered="false" packaged="false" encoding="UTF-8"> <directory></directory> <includes> <include>.gitignore</include> </includes> </fileSet> <fileSet filtered="true" packaged="false" encoding="UTF-8"> <directory></directory> <includes> <include>README.md</include> <include>Dockerfile</include> <include>build.sh</include> </includes> </fileSet> </fileSets> </archetype-descriptor> The syntax above is self-descriptive and probably already known by all the Maven users. Once we installed our Maven archetype in our local Maven repository, we can proceed with the generation process: Shell $ cd example $ ../jakartaee10-basic-archetype/generate.sh [INFO] Scanning for projects... [INFO] [INFO] ------------------< org.apache.maven:standalone-pom >------------------- [INFO] Building Maven Stub Project (No POM) 1 [INFO] --------------------------------[ pom ]--------------------------------- [INFO] [INFO] >>> maven-archetype-plugin:3.2.1:generate (default-cli) > generate-sources @ standalone-pom >>> [INFO] [INFO] <<< maven-archetype-plugin:3.2.1:generate (default-cli) < generate-sources @ standalone-pom <<< [INFO] [INFO] [INFO] --- maven-archetype-plugin:3.2.1:generate (default-cli) @ standalone-pom --- [INFO] Generating project in Batch mode [INFO] Archetype repository not defined. 
Using the one from [fr.simplex-software.archetypes:jakartaee10-basic-archetype:1.0-SNAPSHOT] found in catalog local [INFO] ---------------------------------------------------------------------------- [INFO] Using following parameters for creating project from Archetype: jakartaee10-basic-archetype:1.0-SNAPSHOT [INFO] ---------------------------------------------------------------------------- [INFO] Parameter: groupId, Value: com.exemple [INFO] Parameter: artifactId, Value: test [INFO] Parameter: version, Value: 1.0-SNAPSHOT [INFO] Parameter: package, Value: com.exemple [INFO] Parameter: packageInPathFormat, Value: com/exemple [INFO] Parameter: package, Value: com.exemple [INFO] Parameter: groupId, Value: com.exemple [INFO] Parameter: artifactId, Value: test [INFO] Parameter: version, Value: 1.0-SNAPSHOT [INFO] Project created from Archetype in dir: /home/nicolas/toto/test [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 1.898 s [INFO] Finished at: 2022-12-02T13:53:32+01:00 [INFO] ------------------------------------------------------------------------ $ cd test $ ./build.sh [INFO] Scanning for projects... [INFO] [INFO] --------------------------< com.exemple:test >-------------------------- [INFO] Building test 1.0-SNAPSHOT [INFO] --------------------------------[ war ]--------------------------------- [INFO] [INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ test --- [INFO] [INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ test --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] Copying 1 resource [INFO] [INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ test --- [INFO] Changes detected - recompiling the module! [INFO] Compiling 2 source files to /home/nicolas/toto/test/target/classes [INFO] [INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ test --- [INFO] Using 'UTF-8' encoding to copy filtered resources. [INFO] skip non existing resourceDirectory /home/nicolas/toto/test/src/test/resources [INFO] [INFO] --- maven-compiler-plugin:3.1:testCompile (default-testCompile) @ test --- [INFO] Changes detected - recompiling the module! 
[INFO] Compiling 1 source file to /home/nicolas/toto/test/target/test-classes [INFO] [INFO] --- maven-surefire-plugin:2.12.4:test (default-test) @ test --- [INFO] [INFO] --- maven-war-plugin:3.3.1:war (default-war) @ test --- [INFO] Packaging webapp [INFO] Assembling webapp [test] in [/home/nicolas/toto/test/target/test] [INFO] Processing war project [INFO] Copying webapp resources [/home/nicolas/toto/test/src/main/webapp] [INFO] Building war: /home/nicolas/toto/test/target/test.war [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 2.620 s [INFO] Finished at: 2022-12-02T13:54:33+01:00 [INFO] ------------------------------------------------------------------------ Sending build context to Docker daemon 13.66MB Step 1/2 : FROM payara/server-full:6.2022.1 ---> ada23f507bd2 Step 2/2 : COPY ./target/test.war $DEPLOY_DIR ---> 96650dc307b0 Successfully built 96650dc307b0 Successfully tagged com.exemple/test:latest Error: No such container: test 39934e82c8b164c4e6cd91036df7e2b0731254cdb869d7f2321ad1f2aaf37350 The generate.sh script that we're running above only contains the maven archetype:generate goal, as shown: Shell #!/bin/sh mvn -B archetype:generate \ -DarchetypeGroupId=fr.simplex-software.archetypes \ -DarchetypeArtifactId=jakartaee10-basic-archetype \ -DarchetypeVersion=1.0-SNAPSHOT \ -DgroupId=com.exemple \ -DartifactId=test Here we're using our Maven archetype in order to generate a new artifact whose GAV (GroupId, ArtifactId, Version) is com.exemple:test:1.0-SNAPSHOT. Once generated, the new project may be imported into your preferred IDE. As you may see, it consists of a simple REST API exposing an endpoint returning some text. For this purpose, we leverage Jakarta JAX-RS 4.0 and its Jersey 3.1 implementation with the Eclipse Microprofile Configuration 5.0. Please take some time to look at the generated project, including the pom.xml file and the dependencies used with their associated versions. All these dependencies are mandatory in order to get a valid artifact. We just generated our new project; let's build it now. In the listing above, we did that by running the script build.sh. Shell #!/bin/sh mvn clean package && docker build -t ${groupId}/${artifactId} . docker rm -f ${artifactId} || true && docker run -d -p 8080:8080 -p 4848:4848 --name ${artifactId} ${groupId}/${artifactId} This script is first packaging the newly generated Java project in a WAR, and after that, it builds a new Docker image based on the Dockerfile below: Dockerfile FROM payara/server-full:6.2022.1 COPY ./target/${artifactId}.war $DEPLOY_DIR As you may see, this Dockerfile is just extending the standard Payara server Docker image provided by the company to copy the WAR that was previously packaged into the auto-deployment server directory. Copying the WAR into the mentioned directory, which by the way is /opt/payara/deployments, automatically deploys the packaged application. Once this new Docker image is built, we run it under the same name as our Maven artifactId, mapping ports 8080 and 4848. Please notice the way that the Velocity placeholders are again used. Once the Maven build process is successfully finished, a Docker container named test should be running. Of course, you need to have a running Docker daemon. 
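If you want to double-check that the container actually started before exercising the endpoint, an optional verification with standard Docker commands could be:
Shell
$ docker ps --filter "name=test"
$ docker logs -f test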
You can test that everything is okay using the following curl request: Shell curl http://localhost:8080/test/api/myresource or by executing the script myresource.sh. An integration test is generated as well. It leverages Testcontainers to run an instance of Payara Server 6 in a Docker container in which the application has been deployed. Then it uses the JAX-RS client, as implemented by Jersey Client 3.1, to perform HTTP requests to the exposed endpoint. You can experience it by running the following Maven command: Shell $ mvn verify Please notice that this command can only be run after having previously executed the build.sh script or having manually run: Shell $ mvn -DskipTests clean package This is because the integration test uses Testcontainers to deploy the WAR and, consequently, the WAR has to already exist. Hence, the package goal, which creates the WAR, must have been executed beforehand, and we skip tests in order to avoid trying to execute them before the WAR is packaged. Enjoy!
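To give a rough idea of what such an integration test can look like (this is not the exact code generated by the archetype; the image name, deployment path, and context root below are assumptions based on the Dockerfile and curl command shown above), a Testcontainers-based sketch might be:
Java
// Hypothetical Testcontainers + JAX-RS client integration test sketch.
import jakarta.ws.rs.client.Client;
import jakarta.ws.rs.client.ClientBuilder;
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.wait.strategy.Wait;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;
import org.testcontainers.utility.MountableFile;

import static org.junit.jupiter.api.Assertions.assertFalse;

@Testcontainers
class MyResourceIT {

    @Container
    static GenericContainer<?> payara =
        new GenericContainer<>(DockerImageName.parse("payara/server-full:6.2022.1"))
            // copy the previously packaged WAR into Payara's auto-deployment directory
            .withCopyFileToContainer(MountableFile.forHostPath("target/test.war"),
                "/opt/payara/deployments/test.war")
            .withExposedPorts(8080)
            // wait until the deployed endpoint answers
            .waitingFor(Wait.forHttp("/test/api/myresource").forPort(8080).forStatusCode(200));

    @Test
    void getItReturnsConfiguredMessage() {
        Client client = ClientBuilder.newClient(); // Jersey Client 3.1 on the classpath
        String url = "http://" + payara.getHost() + ":" + payara.getMappedPort(8080)
            + "/test/api/myresource";
        String body = client.target(url).request().get(String.class);
        assertFalse(body.isEmpty()); // the endpoint returns the configured message
    }
}
A real test would typically also compare the body against the message configured via MicroProfile Config rather than only asserting that it is non-empty.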
One of the most common test automation challenges is how to modify the request headers in Selenium WebDriver. As an automation tester, you would come across this challenge for any programming language, including Java. Before coming to a solution, we need to understand the problem statement better and arrive at different possibilities to modify the header request in Java while working with Selenium WebDriver. In this Selenium Java tutorial, we will learn how to modify HTTP request headers in Java using Selenium WebDriver with different available options. Starting your journey with Selenium WebDriver? Check out this step-by-step guide to perform Automation testing using Selenium WebDriver. So let’s get started! What Are HTTP Headers HTTP headers are an important part of the HTTP protocol. They define an HTTP message (request or response) and allow the client and server to exchange optional metadata with the message. They are composed of a case-insensitive header field name followed by a colon, then a header field value. Header fields can be extended over multiple lines by preceding each extra line with at least one space or horizontal tab. Headers can be grouped according to their contexts: Request Headers: HTTP request headers are used to supply additional information about the resource being fetched and the client making the request. Response Headers: HTTP response headers provide information about the response. The Location header specifies the location of a resource, and the Server header presents information about the server providing the resource. Representation Headers: HTTP representation headers are an important part of any HTTP response. They provide information about protocol elements like mime types, character encodings, and more. This makes them a vital part of processing resources over the internet. Payload Headers: HTTP payload headers contain data about the payload of an HTTP message (such as its length and encoding) but are representation-independent. Deep Dive Into HTTP Request Headers The HTTP request header is a communication mechanism that enables browsers or clients to request specific webpages or data from a (Web) server. When used in web communications or internet browsing, the HTTP request header enables browsers and clients to communicate with the appropriate Web server by sending requests. The HTTP request headers describe the request sent by the web browser to load a page. It’s also referred to as the client-to-server protocol. The header includes details of the client’s request, such as the type of browser and operating system used by the user and other parameters required for the proper display of the requested content on the screen. Here is the major information included within the HTTP request headers: IP address (source) and port number. URL of the requested web page. Web Server or the destination website (host). Data type that the browser will accept (text, html, xml, etc.). Browser type (Mozilla, Chrome, IE) to send compatible data. In response, an HTTP response header containing the requested data is sent back by the server. The Need to Change the HTTP Request Headers Can you guess why we even need to change the request header once it is already set into the scripts? Here are some of the scenarios where you might need to change the HTTP request headers: Testing the control and/or testing the different variants by establishing appropriate HTTP headers. 
The need to test the cases when different aspects of the web application or even the server logic have to be thoroughly tested. Since HTTP request headers are used to enable some specific parts of the web application logic, which in general would be disabled in normal mode, modification of the HTTP request headers may be required from time to time per the test scenario. Testing the guest mode on a web application under test is the ideal case where you might need to modify the HTTP request headers. However, the function of modifying the HTTP request header, which Selenium RC once supported, is now not handled by Selenium WebDriver. This is why the question arises about how we change the header request when the test automation project is written using the Selenium framework and Java. How To Modify Header Requests in Selenium Java Project In this part of the Selenium Java tutorial, we look at the numerous ways to modify header requests in Java. Broadly, there are a few possibilities by which one can modify the header request in a Java-Selenium project: Using a driver/library like REST Assured instead of Selenium. Using a reverse proxy such as browser mob-proxy or some other proxy mechanism. Using a Firefox browser extension, which would help to modify the headers for the request. Let us explore each possibility one by one: Modify HTTP Request Headers Using REST Assured Library Along with Selenium, we can make use of REST Assured, which is a wonderful tool to work with REST services in a simple way. Configuring REST Assured with your project in any IDE (e.g., Eclipse) is fairly easy. After setting up Java, Eclipse, and TestNG, you would need to download the required REST Assured jar files. After the jar files are downloaded, you have to create a project in Eclipse and add the downloaded jar files as external jars to the Properties section. This is again similar to the manner in which we add Selenium jar files to the project. Once you have successfully set up the Java project with the REST Assured library, you are good to go. We intend to create a mechanism so that the request header is customizable. To achieve this with the possibility mentioned above, we first need to know the conventional way to create a request header. Let’s consider the following scenario: We have one Java class named RequestHeaderChangeDemo where we maintain the basic configurations. We have a test step file named TestSteps, where we will call the methods from the RequestHeaderChangeDemo Java class through which we will execute our test. Observe the below Java class named RequestHeaderChangeDemo. 
The BASE_URL is the Amazon website on which the following four methods are applied: authenticateUser getProducts addProduct removeProduct public class RequestHeaderChangeDemo { private static final String BASE_URL = "https://amazon.com"; public static IRestResponse<Token> authenticateUser(AuthorizationRequest authRequest) { RestAssured.baseURI = BASE_URL; RequestSpecification request = RestAssured.given(); request.header("Content-Type", "application/json"); Response response = request.body(authRequest).post(Route.generateToken()); return new RestResponse(Token.class, response); } public static IRestResponse<Products> getProducts() { RestAssured.baseURI = BASE_URL; RequestSpecification request = RestAssured.given(); request.header("Content-Type", "application/json"); Response response = request.get(Route.products()); return new RestResponse(Products.class, response); } public static IRestResponse<UserAccount> addProduct(AddProductsRequest addProductsRequest, String token) { RestAssured.baseURI = BASE_URL; RequestSpecification request = RestAssured.given(); request.header("Authorization", "Bearer " + token) .header("Content-Type", "application/json"); Response response = request.body(addProductsRequest).post(Route.products()); return new RestResponse(UserAccount.class, response); } public static Response removeProduct(RemoveProductRequest removeProductRequest, String token) { RestAssured.baseURI = BASE_URL; RequestSpecification request = RestAssured.given(); request.header("Authorization", "Bearer " + token) .header("Content-Type", "application/json"); return request.body(removeProductRequest).delete(Route.product()); } } In the above Java class file, we have repeatedly sent the BASE_URL and headers in every consecutive method. An example is shown below: RestAssured.baseURI = BASE_URL; RequestSpecification request = RestAssured.given(); request.header("Content-Type", "application/json"); Response response = request.body(authRequest).post(Route.generateToken()); The request.header method adds a header to the request; here, it sets the Content-Type header to application/json. There is a significant amount of code duplication, which reduces the maintainability of the code. This can be avoided if we initialize the RequestSpecification object in the constructor and make these methods non-static (i.e., instance methods). Since an instance method in Java belongs to an object of the class and not to the class itself, the method can only be called after creating an object of the class. Along with this, we will also rework the methods accordingly. Converting the methods to instance methods results in the following advantages: Authentication is done only once in one RequestSpecification object. There won’t be any further need to create the same for other requests. Flexibility to modify the request header in the project. Therefore, let us see how both the Java class RequestHeaderChangeDemo and the test step file TestSteps look when we use instance methods. Java class RequestHeaderChangeDemo with instance methods public class RequestHeaderChangeDemo { private final RequestSpecification request; public RequestHeaderChangeDemo(String baseUrl) { RestAssured.baseURI = baseUrl; request = RestAssured.given(); request.header("Content-Type", "application/json"); } public void authenticateUser(AuthorizationRequest authRequest) { Response response = request.body(authRequest).post(Route.generateToken()); if (response.statusCode() != HttpStatus.SC_OK) throw new RuntimeException("Authentication Failed. 
Content of failed Response: " + response.toString() + " , Status Code : " + response.statusCode()); Token tokenResponse = response.body().jsonPath().getObject("$", Token.class); request.header("Authorization", "Bearer " + tokenResponse.token); } public IRestResponse<Products> getProducts() { Response response = request.get(Route.products()); return new RestResponse(Products.class, response); } public IRestResponse<UserAccount> addProduct(AddProductsRequest addProductsRequest) { Response response = request.body(addProductsRequest).post(Route.products()); return new RestResponse(UserAccount.class, response); } public Response removeProduct(RemoveProductRequest removeProductRequest) { return request.body(removeProductRequest).delete(Route.product()); } } Code Walkthrough We have created a constructor to initialize the RequestSpecification object containing the base URL and request headers. Earlier, we had to pass the token in every request header. Now, we put the tokenResponse into the same instance of the request as soon as we receive it in the method authenticateUser(). This enables the test step execution to move forward without adding the token for every request like it was done earlier. This makes the header available for the subsequent calls to the server. This RequestHeaderChangeDemo Java class will now be initialized in the TestSteps file. We change the TestSteps file in line with the changes in the RequestHeaderChangeDemo Java class. public class TestSteps { private final String USER_ID = " (Enter the user id from your test case )"; private Response response; private IRestResponse<UserAccount> userAccountResponse; private Product product; private final String BaseUrl = "https://amazon.com"; private RequestHeaderChangeDemo endPoints; @Given("^User is authorized$") public void authorizedUser() { endPoints = new RequestHeaderChangeDemo(BaseUrl); AuthorizationRequest authRequest = new AuthorizationRequest("(Username)", "(Password)"); endPoints.authenticateUser(authRequest); } @Given("^Available Product List$") public void availableProductLists() { IRestResponse<Products> productsResponse = endPoints.getProducts(); product = productsResponse.getBody().products.get(0); } @When("^Adding the Product in Wishlist$") public void addProductInWishList() { ADDPROD code = new ADDPROD(product.code); AddProductsRequest addProductsRequest = new AddProductsRequest(USER_ID, code); userAccountResponse = endPoints.addProduct(addProductsRequest); } @Then("^The product is added$") public void productIsAdded() { Assert.assertTrue(userAccountResponse.isSuccessful()); Assert.assertEquals(201, userAccountResponse.getStatusCode()); Assert.assertEquals(USER_ID, userAccountResponse.getBody().userID); Assert.assertEquals(product.code, userAccountResponse.getBody().products.get(0).code); } @When("^Product to be removed from the list$") public void removeProductFromList() { RemoveProductRequest removeProductRequest = new RemoveProductRequest(USER_ID, product.code); response = endPoints.removeProduct(removeProductRequest); } @Then("^Product is removed$") public void productIsRemoved() { Assert.assertEquals(204, response.getStatusCode()); userAccountResponse = endPoints.getUserAccount(USER_ID); Assert.assertEquals(200, userAccountResponse.getStatusCode()); Assert.assertEquals(0, userAccountResponse.getBody().products.size()); } } Code Walkthrough Here’s what we have done in the modified implementation: Initialized the RequestHeaderChangeDemo class object as endPoints. The BaseURL was passed in the first method (i.e. 
authorizedUser). Within the method authorizedUser, we invoked the constructor of the RequestHeaderChangeDemo class and then called its authenticateUser method. Hence, the same endPoints object is used by the subsequent step definitions. Modify HTTP Request Headers Using Reverse Proxy Like Browser Mob-Proxy As the name suggests, we can opt for using proxies when dealing with the request header changes in a Java-Selenium automation test suite. As Selenium forbids injecting information amidst the browser and the server, proxies can come to the rescue. This approach is not preferred if the testing is being performed behind a corporate firewall. Being a web infrastructure component, a proxy makes web traffic move through it by positioning itself between the client and the server. In the corporate world, proxies work similarly, making the traffic pass through them, allowing requests that are safe and blocking potential threats. Proxies come with the capability to modify both the requests and the responses, either partially or completely. The core idea is to send the authorization headers, bypassing the phase that includes the credential dialogue, also known as the basic authentication dialog. However, this turns out to be a tiring process, especially if the test cases demand frequent reconfigurations. This is where the browser mob-proxy library comes into the picture. When you make the proxy configuration part of the Selenium automation testing suite, the proxy configuration will stand valid each time you execute the test suite. Let us see how we can use the browser mob-proxy with a sample website that is secured with basic authentication. To tackle this, we can narrow it down to two possible ways: Add authorization headers to all requests with no condition or exception. Add headers only to the requests which meet certain conditions. Though we will not address header management problems, we will still demonstrate how to address authorization issues with the help of the browser mob-proxy authorization toolset. In this part of the Selenium Java tutorial, we will focus only on the first methodology (i.e. adding authorization headers to all the requests). First, we add the dependencies of browsermob-proxy in pom.xml ....................... ....................... <dependencies> <dependency> <groupId>net.lightbody.bmp</groupId> <artifactId>browsermob-core</artifactId> <version>2.1.5</version> <scope>test</scope> </dependency> </dependencies> ....................... ....................... 
public class caseFirstTest { WebDriver driver; BrowserMobProxy proxy; @BeforeAll public static void globalSetup() { System.setProperty("webdriver.gecko.driver", "(path of the driver)"); } @BeforeEach public void setUp() { setUpProxy(); FirefoxOptions options = new FirefoxOptions(); options.setProxy(ClientUtil.createSeleniumProxy(proxy)); driver = new FirefoxDriver(options); } @Test public void testBasicAuth() { driver.get("https://webelement.click/stand/basic?lang=en"); Wait<WebDriver> waiter = new FluentWait<>(driver).withTimeout(Duration.ofSeconds(50)).ignoring(NoSuchElementException.class); String greetings = waiter.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("(Mention the xpath)"))).getText(); Assertions.assertEquals("(message)", greetings); } @AfterEach public void tearDown() { if(driver != null) { driver.quit(); } if(proxy != null) { proxy.stop(); } } private void setUpProxy() { forAllProxy(); } } If you want to apply this approach to all the requests going through a particular proxy, the forAllProxy method should be invoked, as shown below: public void forAllProxy() { proxy = new BrowserMobProxyServer(); try { String authHeader = "Basic " + Base64.getEncoder().encodeToString("webelement:click".getBytes("utf-8")); proxy.addHeader("checkauth", authHeader); } catch (UnsupportedEncodingException e) { System.err.println("the Authorization can not be passed"); e.printStackTrace(); } proxy.start(0); } In the above code, the line that starts with String authHeader states that we are creating the header, and this will be added to the requests. After that, these requests are passed through the proxy we created in proxy.addHeader("checkauth", authHeader). try { String authHeader = "Basic " + Base64.getEncoder().encodeToString("webelement:click".getBytes("utf-8")); proxy.addHeader("checkauth", authHeader); } catch (UnsupportedEncodingException e) { ……………………………………………………………………………… ……………………………………………………………………………… ……………………………………………………………………………... } proxy.start(0); } Eventually, we start the proxy by passing 0 as the start parameter, which makes the proxy start on an arbitrary free port. Modify HTTP Request Headers Using Firefox Extension In this part of the Selenium Java tutorial, we look at how to modify the header requests using the appropriate Firefox browser extension. The major drawback of this option is that it works only with Firefox (and not other browsers like Chrome, Edge, etc.). Perform the following steps to modify HTTP request headers using a Firefox extension: Download the Firefox browser extension. Load the extension. Set up the extension preferences. Set the Desired Capabilities. Prepare the test automation script. Let us go through each step one by one: 1. Download the Firefox browser extension Search for the Firefox extension (an .xpi file) and set it up in the project. 2. Load the Firefox extension Add the Firefox profile referring to the below code: FirefoxProfile profile = new FirefoxProfile(); File modifyHeaders = new File(System.getProperty("user.dir") + "/resources/modify_headers.xpi"); profile.setEnableNativeEvents(false); try { profile.addExtension(modifyHeaders); } catch (IOException e) { e.printStackTrace(); } 3. Set the extension preferences Once we load the Firefox extension into the project, we set the preferences (i.e. various inputs that need to be set before the extension is triggered). This is done using the profile.setPreference method. This method sets the preference for any given profile through a key-value parameter mechanism. 
Here, the first parameter is the preference key, and the second parameter is the value assigned to it (a string, integer, or boolean). Here is the reference implementation: profile.setPreference("modifyheaders.headers.count", 1); profile.setPreference("modifyheaders.headers.action0", "Add"); profile.setPreference("modifyheaders.headers.name0", "Value"); profile.setPreference("modifyheaders.headers.value0", "numeric value"); profile.setPreference("modifyheaders.headers.enabled0", true); profile.setPreference("modifyheaders.config.active", true); profile.setPreference("modifyheaders.config.alwaysOn", true); In the above code, we first specify how many headers we want to set. profile.setPreference("modifyheaders.headers.count", 1); Next, we specify the action; the header name and header value contain the dynamically received values from the API calls. profile.setPreference("modifyheaders.headers.action0", "Add"); The remaining setPreference calls enable the header entry and the extension itself, so that the extension is loaded and set to active mode for HTTP headers when WebDriver instantiates the Firefox browser. 4. Set up the Desired Capabilities The Desired Capabilities in Selenium are used to set the browser, browser version, and platform type on which the automation test needs to be performed. Here is how we can set the desired capabilities: DesiredCapabilities capabilities = new DesiredCapabilities(); capabilities.setBrowserName("firefox"); capabilities.setPlatform(org.openqa.selenium.Platform.ANY); capabilities.setCapability(FirefoxDriver.PROFILE, profile); WebDriver driver = new FirefoxDriver(capabilities); driver.get("url"); What if you want to modify HTTP request headers with a Firefox version that is not installed on your local (or test) machine? This is where LambdaTest, the largest cloud-based automation testing platform that offers faster cross browser testing infrastructure, comes to the rescue. With LambdaTest, you have the flexibility to modify HTTP request headers for different browser and platform combinations. If you are willing to modify HTTP request headers using the Firefox extension, you can use LambdaTest to realize the same on different versions of the Firefox browser. 5. 
Draft the entire test automation script Once you have been through all the above steps, we proceed with designing the entire test automation script: public void startwebsite() { FirefoxProfile profile = new FirefoxProfile(); File modifyHeaders = new File(System.getProperty("user.dir") + "/resources/modify_headers.xpi"); profile.setEnableNativeEvents(false); try { profile.addExtension(modifyHeaders); } catch (IOException e) { e.printStackTrace(); } profile.setPreference("modifyheaders.headers.count", 1); profile.setPreference("modifyheaders.headers.action0", "Add"); profile.setPreference("modifyheaders.headers.name0", "Value"); profile.setPreference("modifyheaders.headers.value0", "Numeric Value"); profile.setPreference("modifyheaders.headers.enabled0", true); profile.setPreference("modifyheaders.config.active", true); profile.setPreference("modifyheaders.config.alwaysOn", true); DesiredCapabilities capabilities = new DesiredCapabilities(); capabilities.setBrowserName("firefox"); capabilities.setPlatform(org.openqa.selenium.Platform.ANY); capabilities.setCapability(FirefoxDriver.PROFILE, profile); WebDriver driver = new FirefoxDriver(capabilities); driver.get("url"); } Conclusion In this Selenium Java tutorial, we explored three different ways to handle the modifications on the HTTP request headers. Selenium in itself is a great tool and has consistently worked well in web automation testing. Nevertheless, the tool cannot change the request headers. After exploring all three alternatives to modify the request header in a Java Selenium project, we can vouch for the first option using REST Assured. However, you may want to try out the other options and come up with your observations and perceptions in the comments section.
Nicolas Fränkel
Developer Advocate,
Api7
Shai Almog
OSS Hacker, Developer Advocate and Entrepreneur,
Codename One
Marco Behler
Ram Lakshmanan
GCeasy.io & fastThread.io