Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with knowledge of frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code leveraged in the development process to provide ready-made components. Frameworks supply architectural patterns and structures that help speed up development. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular for its versatility and is often the primary choice unless a project calls for a more specialized language. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools support the building of frameworks and are used for creating, debugging, and maintaining programs, among many other tasks. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools, and can help ensure engineers are writing clean code.
Database Systems
Every modern application and organization collects data. With that, there is a constant demand for database systems to expand, scale, and take on more responsibilities. Database architectures have become more complex, and as a result, there are more implementation choices. An effective database management system allows for quick access to data, enabling an organization to make informed decisions efficiently. So how does one effectively scale a database system without sacrificing its quality? Our Database Systems Trend Report offers answers to this question by providing industry insights into database management selection and evaluation criteria. It also explores database management patterns for microservices, relational database migration strategies, time series compression algorithms and their applications, advice for the best data governing practices, and more. The goal of this report is to set up organizations for scaling success.
Drupal 9 Essentials
The Java 9 release in 2017 saw the introduction of the Java Module System. This module system was developed directly for the Java language and is not to be confused with the module concepts of tools such as IntelliJ IDEA or Maven. The module system provides a more secure and structured approach to writing Java code by organizing components better, thus preventing malicious or out-of-date code from being used. In this article, we will look at what exactly the Java Module System is and how it can benefit developers.

Benefits of Using Java Modules

Java modules were introduced in Java 9 as a new way to organize and package Java code. They provide several benefits, including:

- Strong encapsulation: Modules allow you to encapsulate your code and hide its implementation details from other modules. This helps to reduce the risk of coupling and improve the maintainability of your code.
- Better organization: Modules help you to organize your code into logical units, making it easier to navigate and understand. You can group related classes and packages together in a module and specify dependencies between modules.
- Improved security: Modules provide a way to control access to your code and limit the exposure of sensitive APIs. You can specify which modules are allowed to access a particular module and which packages and classes within a module are exposed to the outside world.
- Faster startup time: Modules allow the Java runtime to load only the modules that are actually needed for a particular application, reducing startup time and memory usage.

How To Define a Module

A module is defined by its:

- Module name
- Module descriptor
- Set of packages
- Dependencies, type of resource, etc.

Let's walk through an example of a modular sample application in Java. Our application will have two modules: com.example.core and com.example.app. The core module will contain some utility classes that the app module will use.

Here's the module descriptor for the core module:

```java
module com.example.core {
    exports com.example.core.utils;
}
```

In this module, we declare that it exports the com.example.core.utils package, which contains some utility classes.

Here's the module descriptor for the app module:

```java
module com.example.app {
    requires com.example.core;
    exports com.example.app;
}
```

In this module, we specify that it requires the com.example.core module, so it can use the utility classes in that module. We also specify that it exports the com.example.app package, which contains the main class of our application.

Now, let's take a look at the source code for our application. In the com.example.core module, we have a utility class:

```java
package com.example.core.utils;

public class StringUtils {
    public static boolean isEmpty(String str) {
        return str == null || str.isEmpty();
    }
}
```

In the com.example.app module, we have a main class:

```java
package com.example.app;

import com.example.core.utils.StringUtils;

public class MyApp {
    public static void main(String[] args) {
        String myString = "";
        if (StringUtils.isEmpty(myString)) {
            System.out.println("The string is empty");
        } else {
            System.out.println("The string is not empty");
        }
    }
}
```

In this main class, we use the StringUtils class from the com.example.core module to check if a string is empty or not.
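For reference, here is one conventional source layout for this two-module application, consistent with the javac commands that follow; the exact placement of each module-info.java file is an assumption, since the original snippets do not show it:

```
src/
├── com.example.core/
│   ├── module-info.java
│   └── com/example/core/utils/StringUtils.java
└── com.example.app/
    ├── module-info.java
    └── com/example/app/MyApp.java
```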
To compile and run this application, we can use the following commands. Note that each javac invocation also compiles the module's module-info.java, which is required for the module to be recognized:

```shell
$ javac -d mods/com.example.core src/com.example.core/module-info.java src/com.example.core/com/example/core/utils/StringUtils.java
$ javac --module-path mods -d mods/com.example.app src/com.example.app/module-info.java src/com.example.app/com/example/app/MyApp.java
$ java --module-path mods -m com.example.app/com.example.app.MyApp
```

These commands compile the core module and the app module and then run the MyApp class in the com.example.app module.

Conclusion

Java programming allows developers to employ a modular approach, which can result in smaller, more secure code. By using this technique, the code becomes encapsulated at the package level for extra security. Although there is no requirement to use modules, they give developers an additional tool for writing higher-quality code.
Ethereum has experienced dazzling growth in recent years. According to YCharts, the programmable blockchain now has approximately 220 million unique addresses. Linked to the increase in users is an explosion in the number of dApps. Global companies and startups across finance, sales, HR, accounting, supply chain, and manufacturing are using dApps to streamline processes and onboard new customers.

Multiple frameworks exist that simplify the dApp development process for Web2 developers who want to participate in Web3. This post examines four of the most popular. But first, what is a dApp?

What Is a dApp?

A dApp, or decentralized application, is serverless software that runs on a decentralized network and uses a programmable blockchain for security, transparency, and immutability. A dApp combines smart contracts with a frontend user interface (HTML5, React, Angular). dApps can be used in a variety of industries and services, from social media to supply-chain management, payment tracking, complaint resolution, and all manner of accounting and (decentralized) financial services.

How Is a dApp Different From an App?

To the end client, a dApp shouldn't feel any different from a traditional app. The differences are beneath the hood. Unlike a conventional app that has its backend code running on centralized servers, such as AWS or Azure, a dApp runs on a decentralized peer-to-peer network (blockchain), such as Cardano, Algorand, Polkadot, Solana, or Tezos. For this article, however, we will focus on the most popular network: Ethereum. When designing your decentralized app, building for the Ethereum blockchain starts with selecting the right framework for your needs.

dApp Advantages

- Increased privacy and censorship resistance: Users don't need to provide an identity to interact with a dApp, which protects user data.
- Better security: A conventional app runs on centralized servers that are more vulnerable to tampering and data breaches.
- More interoperability: Traditional apps are mostly isolated, siloed software. dApps are interoperable across the same (and increasingly other) blockchain technologies.
- Trustlessness: Smart contracts execute in predictable, pre-programmed ways, removing the need for intermediaries.

How Do dApps Work With APIs?

dApps use APIs to interact with and access the functionality of other dApps, for example, to retrieve financial, HR, or accounts data. They can also open their own APIs to the wider ecosystem of Ethereum dApps. Additionally, dApps use APIs to send transactions and interact with smart contracts on Ethereum.

Common APIs for interacting with Ethereum:

- JSON-RPC API: A popular API used to send transactions, read data, and interact with smart contracts (a minimal request/response pair is shown after this section).
- Web3.js: A JavaScript library that provides a user-friendly API for interacting with Ethereum. Web3.js is used to send transactions, read data, and interact with smart contracts. Additional functionality includes event handling and contract abstraction.
- Infura API: Provides hosted Ethereum nodes so developers can interact with Ethereum without running their own.

The Best Frameworks for Developing Ethereum dApps

Solidity, the programming language of Ethereum, owes much to JavaScript and C++; as such, Web2 developers should experience a shallow learning curve. Still, there are numerous frameworks for developing decentralized apps that make the process more straightforward, and picking the right one will go a long way toward determining your success.
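Before moving on to the frameworks, here is what the JSON-RPC option above looks like in practice: a minimal eth_blockNumber call against an Ethereum node. The request shape is standard JSON-RPC 2.0; the result value below is illustrative, not a real block number:

```json
{"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": 1}
```

The node answers with the latest block number encoded as a hex string:

```json
{"jsonrpc": "2.0", "id": 1, "result": "0x10d4f2a"}
```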
Here are four of the best frameworks:

Truffle

Truffle is a popular development and testing framework for dApps, suited to both first-time and experienced Ethereum developers. As well as containing a Web3.js library, Truffle is simple, user-friendly, and, with over 56K GitHub users, trusted. To install Truffle, you need to have Node, npm, and Python. You can install Truffle via npm with the command npm install -g truffle.

Truffle Pros

- User-friendly interface with a comprehensive suite of developer tools, including smart contract development, testing, and debugging.
- Write tests in Solidity, JavaScript, and TypeScript, and use Drizzle frontend libraries for dApp UI.
- Layer 2 support: develop on EVM- and JSON-RPC-compatible blockchains such as Optimism, Polygon, Arbitrum, and Avalanche.

Truffle Cons

- Steep learning curve and a potentially complex testing and debugging environment for first-time dApp developers.
- Reliance on JavaScript is a limitation for experienced Ethereum developers.

Truffle Use Cases

- Building and deploying smart contracts on the Ethereum blockchain.
- Developing and testing smart contracts locally before deploying them to the blockchain.
- Automating contract testing and deployment.
- Managing and interacting with multiple development networks and testnets.
- Creating and managing digital assets, such as tokens.
- Building decentralized autonomous organizations (DAOs).

Hardhat

Hardhat allows developers to build, test, and deploy smart contracts and dApps using a variety of tools and libraries. With over 114K users on GitHub and an active Discord community, Hardhat is a hugely popular framework for dApp developers. Much of its popularity can be attributed to its rich feature list, flexibility, and the Hardhat Ethereum Virtual Machine for testing and debugging smart contracts.

Hardhat Pros

- Intuitive debugging and testing environment. Developers get stack traces, console.log, and explicit error messages when transactions fail.
- Test and deploy dApps via JavaScript, Rust, and TypeScript integration.
- Active community, trusted by some of the biggest names in Web3: Yearn, Uniswap, Decentraland, and Chainlink.

Hardhat Cons

- Steep learning curve and limited documentation compared to Truffle.
- Limited support for frontend frameworks for dApp UI design.
- Designed more for experienced Web3 dApp developers.

Hardhat Use Cases

- Developing and testing smart contracts on a local development network.
- Automating smart contract testing and deployment.
- Debugging and troubleshooting smart contract issues.
- Simulating and testing different network conditions, such as high network latency or low gas prices.
- Creating and managing multiple development networks for different stages of the development process.
- Interacting with smart contracts through a user-friendly command-line interface (CLI).

Embark

Similar to Truffle, Embark provides tools and libraries (Web3.js, IPFS, EmbarkJS, and Embark-testrpc) for developing, launching, and maintaining dApps. Additional features of Embark include automatic contract deployment and a user interface for integration with other APIs. Embark is a sound choice for first-time Ethereum dApp developers.

Embark Pros

- User-friendly interface. Comes with Cockpit: web-based tools that facilitate the development and debugging of dApps.
- Multiple libraries, storage options, and integration with IPFS, Whisper, and Swarm.
- Respected debugging and testing environment.
- Extensive plug-in customization options for full dApp development.

Embark Cons

- Steep learning curve.
- Reliance on JavaScript.
- Limited GitHub community; not as popular in Web3 as other frameworks, such as Truffle.

Embark Use Cases

- Building and deploying smart contracts on the Ethereum blockchain.
- Building frontend user interfaces for dApps using JavaScript frameworks such as AngularJS and ReactJS.
- Developing and testing smart contracts locally before deploying them to the blockchain.
- Integrating dApps with Web3 wallets and other blockchain-related tools.
- Automating deployment and management of smart contracts and dApps.

OpenZeppelin

OpenZeppelin is a popular dApp framework used by some of the biggest companies in Web3 (Decentraland, Aave, ENS, and The Sandbox). Its smart contract templates and rich reserve of libraries (Network.js, Hotloader) are tried and tested. The OpenZeppelin starter kits make the framework an ideal starting place for first-time Ethereum dApp developers.

OpenZeppelin Pros

- OpenZeppelin starter kits are a great way to build your first dApp: reuse community-vetted code, upgrade and test smart contracts, and create a UI.
- Widely used open-source framework with an active GitHub community and documentation.
- Extensive and trusted audit service; smart contracts will conform to established standards.

OpenZeppelin Cons

- Reliance on Solidity.
- Steep learning curve.
- Needs to be used in conjunction with other frameworks, such as Truffle and Embark, for the complete dApp development process.

OpenZeppelin Use Cases

- Building decentralized applications on Ethereum.
- Creating and managing digital assets such as ERC-721, ERC-1155, and ERC-20 tokens.
- Implementing smart contract security best practices.
- Building decentralized autonomous organizations.
- Creating and managing digital identities on the blockchain.

dApp Frameworks Summary

When choosing the best dApp framework as a first-time Ethereum developer, understanding the fundamentals of what a dApp is is critical. It is also vital to consider documentation, how active the community (Reddit, GitHub, Discord) is, and how well a particular framework matches the needs of your decentralized app. That said, most frameworks offer similar tools, so getting familiar with one might be more beneficial than experimenting with two or three.
Java is a popular programming language used for developing a wide range of applications, including web, mobile, and desktop applications. It provides many useful data structures for developers to use in their programs, one of which is the Map interface. The Map interface is used to store data in key-value pairs, making it an essential data structure for many applications. In this article, we will discuss the use of Map.of() and new HashMap<>() in Java, the difference between them, and the benefits of using Map.of().

What Is Map.of()?

Map.of() is a method introduced in Java 9 that allows developers to create an immutable map with up to 10 key-value pairs. It provides a convenient and concise way of creating maps, making it easier to create small maps without having to write a lot of code. Map.of() is an improvement over the previous way of creating small maps using the constructor of the HashMap class, which can be cumbersome and verbose.

What Is new HashMap<>()?

new HashMap<>() is a constructor provided by the HashMap class in Java that creates a new instance of a HashMap. It creates a mutable map, which means that the map can be modified by adding, removing, or updating key-value pairs. It is a commonly used way of creating maps in Java, especially when dealing with larger sets of data.

Benchmarking Map.of() and new HashMap<>()

To compare the performance of Map.of() and new HashMap<>() in Java, we can use benchmarking tools to measure the time taken to perform various operations on maps created using these methods. In our benchmark, we will measure the time taken to get a value from a map and the time taken to insert values into a map. It's worth noting that our benchmarks are limited to a small set of data (ten items); the results could differ for larger data sets or more complex use cases.
```java
package ca.bazlur;

import org.openjdk.jmh.annotations.*;
import org.openjdk.jmh.infra.Blackhole;

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

@State(Scope.Benchmark)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 20, time = 1)
@Fork(1)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class MapBenchmark {
    private static final int SIZE = 10;

    private Map<Integer, String> mapOf;
    private Map<Integer, String> hashMap;

    @Setup
    public void setup() {
        mapOf = Map.of(
                0, "value0", 1, "value1", 2, "value2", 3, "value3", 4, "value4",
                5, "value5", 6, "value6", 7, "value7", 8, "value8", 9, "value9"
        );

        hashMap = new HashMap<>();
        hashMap.put(0, "value0");
        hashMap.put(1, "value1");
        hashMap.put(2, "value2");
        hashMap.put(3, "value3");
        hashMap.put(4, "value4");
        hashMap.put(5, "value5");
        hashMap.put(6, "value6");
        hashMap.put(7, "value7");
        hashMap.put(8, "value8");
        hashMap.put(9, "value9");
    }

    @Benchmark
    public void testMapOf(Blackhole blackhole) {
        Map<Integer, String> map = Map.of(
                0, "value0", 1, "value1", 2, "value2", 3, "value3", 4, "value4",
                5, "value5", 6, "value6", 7, "value7", 8, "value8", 9, "value9"
        );
        blackhole.consume(map);
    }

    @Benchmark
    public void testHashMap(Blackhole blackhole) {
        Map<Integer, String> hashMap = new HashMap<>();
        hashMap.put(0, "value0");
        hashMap.put(1, "value1");
        hashMap.put(2, "value2");
        hashMap.put(3, "value3");
        hashMap.put(4, "value4");
        hashMap.put(5, "value5");
        hashMap.put(6, "value6");
        hashMap.put(7, "value7");
        hashMap.put(8, "value8");
        hashMap.put(9, "value9");
        blackhole.consume(hashMap);
    }

    @Benchmark
    public void testGetMapOf() {
        for (int i = 0; i < SIZE; i++) {
            mapOf.get(i);
        }
    }

    @Benchmark
    public void testGetHashMap() {
        for (int i = 0; i < SIZE; i++) {
            hashMap.get(i);
        }
    }
}
```

```
Benchmark                    Mode  Cnt   Score   Error  Units
MapBenchmark.testGetHashMap  avgt   20  14.999 ± 0.433  ns/op
MapBenchmark.testGetMapOf    avgt   20  16.327 ± 0.119  ns/op
MapBenchmark.testHashMap     avgt   20  84.920 ± 1.737  ns/op
MapBenchmark.testMapOf       avgt   20  83.290 ± 0.471  ns/op
```

These are the benchmark results comparing the performance of new HashMap<>() and Map.of() in Java. The benchmark was conducted with a limited and small data set (e.g., 10 entries). The results show that get operations are slightly faster on a HashMap than on an immutable map created with Map.of(). However, creating an immutable map using Map.of() is still slightly faster than creating a HashMap.

Note that the benchmark results may differ slightly depending on your JDK distribution and computer, but in most cases, they should be consistent. It's always a good idea to run your own benchmarks to ensure you make the right choice for your specific use case. Additionally, remember that micro-benchmarks should always be taken with a grain of salt and not used as the sole factor in a decision. Other factors, such as memory usage, thread safety, and readability of code, should also be considered. The source code can be found on GitHub.

In my opinion, the slight variations in performance may not matter much in most cases. It is essential to consider other aspects, such as the particular use case, conciseness, well-organized code, and preferred features (for example, mutability or immutability), when deciding between HashMap and Map.of(). For straightforward scenarios, Map.of() might still have the upper hand regarding simplicity and brevity.
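Beyond latency, the behavioral difference is worth keeping in mind: a map returned by Map.of() rejects modification at runtime. A minimal sketch (the class name and message text are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ImmutabilityDemo {
    public static void main(String[] args) {
        // A HashMap can be freely modified after creation.
        Map<Integer, String> mutable = new HashMap<>(Map.of(0, "value0"));
        mutable.put(1, "value1"); // works

        // A Map.of() map throws on any structural modification.
        Map<Integer, String> immutable = Map.of(0, "value0");
        try {
            immutable.put(1, "value1");
        } catch (UnsupportedOperationException e) {
            System.out.println("Map.of() maps cannot be modified: " + e);
        }
    }
}
```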
So let's look at the benefits of using Map.of().

Benefits of Using Map.of()

There are several benefits to using Map.of() over new HashMap<>() in Java:

- Conciseness: Map.of() provides a concise and convenient way of creating small maps in Java. This makes the code more readable and easier to maintain.
- Immutability: Map.of() creates immutable maps, which means that once a map is created, it cannot be modified. This provides a degree of safety and security for the data stored in the map.
- Type safety: Map.of() provides type safety for the keys and values of the map, which helps prevent type-related errors that can occur when using new HashMap<>().

Conclusion

Map.of() is a powerful and useful method introduced in Java 9 that provides a more concise way of creating small maps in Java, with added benefits such as immutability and type safety. Our benchmarking shows that the latencies of Map.of() and new HashMap<>() for small maps are close, with overlapping error bars, which makes it difficult to conclude definitively that one method is significantly faster than the other based on this data alone. Although the performance difference may not be significant, the conciseness, immutability, and type safety of Map.of() make it an appealing option. Developers should consider using Map.of() when creating small maps in Java to take advantage of these benefits.
In Part 1 of the series on PEG implementation, I explained the basics of Parsing Expression Grammar and how to implement it in JavaScript. This second part of the series focuses on implementation in Java using the parboiled library. We will build the same example for parsing arithmetic expressions, but with different syntax and APIs.

QuickStart

parboiled is a lightweight and easy-to-use library for parsing text input based on formal rules defined using a Parsing Expression Grammar. Unlike parsers that use an external grammar definition, parboiled provides a DSL (domain-specific language) to define grammar rules that are turned into parser rules at runtime. This approach avoids separate parsing and lexing phases and does not require additional build steps.

Installation

The parboiled library is packaged as two levels of dependencies. There is a core artifact and two implementation artifacts for Java and Scala support. Both the Java and Scala artifacts depend on the core and can be used independently in their respective environments. They are available on Maven Central with the coordinates below:

```xml
<dependency>
    <groupId>org.parboiled</groupId>
    <artifactId>parboiled-java</artifactId>
    <version>1.4.1</version>
</dependency>
```

Defining the Grammar Rules

Let's take the same example we used earlier to define rules to parse arithmetic expressions:

```
Expression ← Term ((‘+’ / ‘-’) Term)*
Term       ← Factor ((‘*’ / ‘/’) Factor)*
Factor     ← Number / ‘(’ Expression ‘)’
Number     ← [0-9]+
```

With the help of the integrated DSL, these rules can be defined as follows:

```java
public class CalculatorParser extends BaseParser<Object> {

    Rule Expression() {
        return Sequence(Term(), ZeroOrMore(AnyOf("+-"), Term()));
    }

    Rule Term() {
        return Sequence(Factor(), ZeroOrMore(AnyOf("*/"), Factor()));
    }

    Rule Factor() {
        return FirstOf(Number(), Sequence('(', Expression(), ')'));
    }

    Rule Number() {
        return OneOrMore(CharRange('0', '9'));
    }
}
```

If we take a closer look at the example, the parser class inherits all the DSL functions from its parent class, BaseParser. It provides various builder methods for creating different types of Rules; by combining and nesting these, you can build your custom grammar rules. There needs to be a starting rule that recursively expands to terminal rules, which are usually literals and character classes.

Generating the Parser

parboiled's createParser API takes the DSL input and generates a parser class by enhancing the byte code of the existing class at runtime using the ASM utils library.

```java
CalculatorParser parser = Parboiled.createParser(CalculatorParser.class);
```

Using the Parser

The generated parser is then passed to a parse runner, which lazily initializes the rule tree on the first run and reuses it for subsequent runs.

```java
String input = "1+2";
ParseRunner runner = new ReportingParseRunner(parser.Expression());
ParsingResult<?> result = runner.run(input);
```

Note that neither the generated parser nor the parse runner is thread-safe, so keep their scope minimal and avoid sharing them across multiple threads.

Understanding the Parse Result/Tree

The parse result encapsulates information about the success or failure of the run. A successful run generates a parse tree with appropriate labels and text fragments. ParseTreeUtils can be used to print the whole or a partial parse tree based on the passed filters.
```java
String parseTreePrintOut = ParseTreeUtils.printNodeTree(result);
System.out.println(parseTreePrintOut);
```

For more fine-grained control over the parse tree, you can use the visitor API and traverse the tree to collect the required information from it.

Sample Implementation

Some sample implementations ship with the library itself, including samples for calculators, Java, SPARQL, and time formats. Visit the project's GitHub repository for more.

Conclusion

As we have seen, it is quick and easy to build and use a parser with the parboiled library. However, some use cases can lead to performance and memory issues when running it on large input with a complex rule tree, so we need to be careful about complexity and ambiguity when defining the rules.
Users expect new features and websites to be seamless and user-friendly when they go live. End-to-end website testing on local infrastructure is an unspoken critical requirement for this. However, if this testing is performed late, or only after the entire website or app has been developed, the likelihood of bugs and code issues increases, and such issues can do more damage than we might expect.

According to a report by HubSpot, 88% of users are less likely to return to a website after a bad user experience. As much as $2.6 billion is lost each year to slow-loading websites and images that take longer than an average of two seconds to load. Also, up to eight out of ten users stop visiting a website if it is incompatible with their device. A mere look at these numbers is terrifying, given the cost and effort involved in fixing these problems at a later stage, in addition to the customer base lost to bad first impressions. In such situations, testing websites beforehand becomes imperative, on platforms where this cost can be kept to a minimum.

Cloud testing platforms help test such websites in various local environments by providing remote access to real browsers and operating systems. This allows you to verify the functionality and compatibility of your website across different configurations without having to set up a complex test infrastructure.

Testing a website, especially in a production environment, can be time-consuming and resource-intensive. This can slow development and make it difficult to detect bugs and issues early on, delaying developer feedback. Local website testing tests a website on a developer's machine using automated functional tests. These test scripts can be designed to integrate with the CI/CD pipeline and execute for each local deployment. This saves time and resources by identifying issues early, shortening the feedback cycle, and increasing the ROI on development and testing.

Automated local website testing enables developers to speed up and streamline the testing process. Effective test case management is crucial in this scenario, as it allows for testing on various browser configurations and OS versions to cater to the diverse systems used by end users. A well-designed test automation framework is essential for performing local testing efficiently. Because we will be discussing website testing in this article, Selenium is the natural choice. Furthermore, because the website will be hosted locally, we will require a platform that allows for local website testing without interfering with the local infrastructure or the developer's machine.

In this article, we will learn more about local page testing and its advantages in the software development and testing cycle. We will then write an automation test script for a locally hosted website and execute it in an organized manner, so as not to block local infrastructure while still getting faster feedback. So, let us get started.

What Is Local Website Testing?

Local website testing allows developers to host and test a website on their own computers or local infrastructure. Once the developer is confident, the website can be moved to a live testing server before being made live in production. This local copy behaves like the real website and provides a place to test it with the least risk. Testing includes checking cross-browser compatibility, user interactions, the different links and images on the page, and so on.
This configuration is different from a staging or pre-prod test environment, where an app or website is usually tested by the QA team later in the testing cycle, before it is made available in production. In a staging or pre-prod environment, more stable services are running, and features are tested at a later stage of development, requiring more regression testing and/or integration testing with external resources. As a result, we don't want to risk breaking such an environment with early-stage changes that are more prone to bugs. Locally hosting and testing websites becomes extremely important and useful in such cases.

There are a few different ways to set up local website testing. One common method is to use a local development server, such as XAMPP, which can be installed on a computer and configured to run a website. We will use the same approach to access the website on localhost.

Advantages of Local Website Testing

There are several advantages to local website testing:

- Accelerated developer feedback: Local website testing greatly improves the feedback cycle, as developers can quickly make changes to the code and check the results. This leads to a better user experience and a more refined final product. Overall, it improves the efficiency and effectiveness of the development process and allows a high-quality website to be delivered in a shorter time by reducing the risk of major issues after launch.
- Speed of execution: It allows developers to quickly test their changes without waiting for the code to be deployed to testing environments. This saves a lot of time and helps them iterate faster during the development cycle.
- Cost-effectiveness: Testing a website locally is highly cost-effective, as it reduces or even eliminates the time required for testing on a live server, saving hosting and associated service costs.
- Greater control and ease of debugging: A developer has better control over the environment configuration when performing local website testing. They also have access to various debugging tools on their computers, like the developer console. This allows them to replicate and debug issues more effectively, which might not be as easy on a live server due to limited access and control.
- Integration with the CI/CD pipeline: Local website testing can be used in conjunction with the Continuous Integration and Continuous Delivery (CI/CD) pipeline to ensure changes to the website are thoroughly tested before they are deployed. CI/CD is a software development practice that automatically builds, tests, and deploys changes to a server. Local testing can be integrated into the CI/CD pipeline as a separate step, allowing developers to test the website on different configurations and environments, such as different operating systems and browsers. This can help ensure the website is compatible with a wide range of users and devices.
- Great fit for agile: Local website testing can be a valuable tool in an agile development environment because it allows developers to test changes to their website and receive feedback quickly. Agile development is an iterative, collaborative approach that emphasizes flexibility, rapid iteration, and fast feedback. Local testing adds the advantage of allowing devs and QAs to work in parallel and deliver better results.

Configuring the Tunnel for Local Website Testing

Having understood the basics and advantages of local website testing, let us move on to the implementation and see how we can perform it on our local computer.
In this article, we will use the LambdaTest platform to test a locally hosted website. The LambdaTest tunnel helps you test plain HTML, CSS, PHP, Python, or similar web files saved on your local system over combinations of operating systems, browsers, and screen resolutions available on LambdaTest. The tunnel uses protocols such as WebSocket, HTTPS, and SSH (Secure Shell) to establish a secure and unique tunnel connection through corporate firewalls between your system and the LambdaTest cloud servers.

Before setting up the tunnel and seeing how it works, we must host a local website or webpage. In this article, we are referring to this CodePen project.

How To Host a Local Website

To set up the website and verify it by launching the webpage, follow these steps:

Step 1: Open the link mentioned above, export the project, and unzip the downloaded *.zip file.

Step 2: Make sure to turn on XAMPP or any other web hosting tool you use. If you are using XAMPP, start the "Apache" service under "Actions."

Step 3: Copy and paste the content from the unzipped folder in Step 1 into the XAMPP htdocs folder to access the website at this URL: "http://localhost."

With this, the setup for the local website is done. Next, we move to the tunnel configuration and see how we can use it to perform automated testing of the local website.

How To Configure the Tunnel

This article covers configuring the LambdaTest tunnel connection and testing locally hosted web pages from a macOS (Big Sur) perspective; the configuration remains the same for all previous versions. LambdaTest also supports tunnel configuration and testing of local websites on Windows and Linux machines.

Step 1: Create your account on the LambdaTest platform and log in to the dashboard.

Step 2: Click "Configure Tunnel" on the top right and select "COMMAND LINE." Then, download the binary file by clicking on "Download Link." This binary helps establish a secure tunnel connection to the LambdaTest cloud servers.

Step 3: Navigate to your downloads folder and extract the downloaded zip.

Step 4: Copy the command to execute the downloaded binary from the dashboard. The command will look like the one below. You can mention an optional tunnelName in the command to identify which tunnel should execute your test case when multiple tunnels are running:

```
LT --user {user's login email} --key {user's access key} --tunnelName {user's tunnel name}
```

Step 5: Execute the command to start the tunnel and make the connection. On a successful tunnel connection, you will see the prompt "You can start testing now." Note: here, the tunnel has been named LambdaTest.

Step 6: After this, move back to the LambdaTest dashboard to verify the tunnel before we write the automation code for local website testing using Selenium with Java.

Step 7: Navigate to "Real Time Testing," select "Browser Testing," enter the localhost URL you want to test, and select the tunnel name. You can select the test configuration of your choice from various major browsers and their versions to perform a test session. After selecting the configuration, click on the "START" button.

Step 8: At this point, you should be navigated to your localhost URL. This shows that the setup is verified, and we can write the automation code.

Demonstration: Local Website Testing Using Selenium and Java

Having completed the tunnel setup, let us implement an automated test script for the same local website using Selenium and Java.
We will execute the script on the LambdaTest Selenium cloud grid through the tunnel we have already configured.

Test scenario for the demonstration: navigate to localhost using the tunnel and click on the first toggle button.

A sample test script for local website testing using Selenium with Java looks like the one below:

```java
package test.java;

import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashMap;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeOptions;
import org.openqa.selenium.remote.RemoteWebDriver;
import org.testng.annotations.AfterTest;
import org.testng.annotations.BeforeTest;
import org.testng.annotations.Test;

public class TestLocalWebsiteUsingTunnel {
    WebDriver driver = null;

    String user_name = System.getenv("LT_USERNAME") == null ? "LT_USERNAME" : System.getenv("LT_USERNAME");
    String access_key = System.getenv("LT_ACCESS_KEY") == null ? "LT_ACCESS_KEY" : System.getenv("LT_ACCESS_KEY");

    @BeforeTest
    public void testSetUp() throws Exception {
        ChromeOptions browserOptions = new ChromeOptions();
        browserOptions.setPlatformName("Windows 10");
        browserOptions.setBrowserVersion("108.0");

        HashMap<String, Object> ltOptions = new HashMap<String, Object>();
        ltOptions.put("username", user_name);
        ltOptions.put("accessKey", access_key);
        ltOptions.put("project", "Local Website Testing using Selenium JAVA");
        ltOptions.put("build", "Local Website Testing");
        ltOptions.put("tunnel", true); // route test traffic through the LambdaTest tunnel
        ltOptions.put("selenium_version", "4.0.0");
        ltOptions.put("w3c", true);
        browserOptions.setCapability("LT:Options", ltOptions);

        try {
            driver = new RemoteWebDriver(
                    new URL("https://" + user_name + ":" + access_key + "@hub.LambdaTest.com/wd/hub"),
                    browserOptions);
        } catch (MalformedURLException exc) {
            exc.printStackTrace();
        }
    }

    @Test(description = "Demonstration of Automated Local Website Testing using LambdaTest Tunnel")
    public void testLocalWebsite() throws InterruptedException {
        driver.get("https://localhost");
        driver.findElement(By.cssSelector("[for='cb1']")).click();
    }

    @AfterTest
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
```

Code Walkthrough

Step 1: The first step is to create an instance of RemoteWebDriver, as we will be executing the code on the Selenium cloud grid.

Step 2: As already mentioned, since we are using the LambdaTest cloud grid and tunnel for local website testing, we need to add our credentials to the environment variables. If you do not have your credentials, navigate to the LambdaTest dashboard, click on the "Profile" icon in the top-right corner of your screen, then click on "Profile." You will find your username and access key here.

Step 3: Next, we add a testSetUp() function to set the initial browser capabilities that will be passed to the LambdaTest grid to define the browser and OS configurations. This method is annotated with the @BeforeTest annotation in TestNG, as we want to execute it before each test run. The most important thing to note here is the line ltOptions.put("tunnel", true) in the capabilities setup. It tells the LambdaTest grid that this automation script is part of local website testing and that the tunnel configuration is to be used for execution.

Step 4: After setting the initial capabilities and tunnel configuration, we add the test case function testLocalWebsite(), telling the driver to navigate to localhost and click on the first toggle button. For this click, we are using the CSS selector of the web element.
Step 5: After executing every test script, we must close the browser. To do this, another function, tearDown(), is added and annotated with @AfterTest to execute it after every test run.

Test Execution

So far, we have covered local website testing and the configuration needed to perform it on the LambdaTest platform using the tunnel. Now, we will execute the test script and see what the test execution looks like on the LambdaTest dashboard. Since we have used TestNG annotations, the test script can be executed as a TestNG run.

Upon execution, you will see the results on the dashboard. Navigate to Automation -> Build to see the execution results. To view the details of an execution, click on the "Session Name" on the right side. Note: the tested URL is localhost, and the Tunnel ID is the same as the tunnelName, i.e., LambdaTest, which we specified while starting the tunnel in the configuration section.

Conclusion

In this article on how to perform local website testing using Selenium and Java, we have learned what local website testing is and why it is so important in the software development world, and we implemented it on the LambdaTest platform using an automation test script. Overall, local website testing is an effective solution for developers who want to ensure their website is thoroughly tested and free of bugs and issues before it goes to production.

Happy Local Testing!
Spring Boot 3 is riding the wave in the Java world: a few months have passed since the release, and the community has already started migrating to the new version. The usage of parent pom 3.0.2 is approaching 500 on Maven Central! An exciting new feature of Spring Boot is the baked-in support for GraalVM Native Image. We have been waiting for this moment for years. The time to migrate our projects to Native Image is now! But one cannot simply transfer existing workloads to Native Image, because the technology is incompatible with some Java features. So, this article covers the intricacies associated with Spring Boot Native Image development.

A Paradigm Shift in Java Development

For many years, dynamism was one of the essential Java features. Developer tools written in Java, such as IntelliJ IDEA and Eclipse, are built upon the presumption that "everything is a plugin," so we can load as many new plugins as we like without restarting the development environment. Spring is also an excellent example of a dynamic environment, thanks to features such as AOP.

Years have passed. We discovered that dynamic code loading is not only convenient but also resource-expensive. Waiting 20 minutes for a web application to start is not fun. We had to think of ways to accelerate startup and reduce excessive memory consumption. As a result, developers began abstaining from excessive dynamism and statically precompiling all necessary resources.

Then, GraalVM Native Image appeared. The technology turns a JVM-based application into a compiled binary, which sometimes doesn't even require a JDK to run. The resulting native binary starts up incredibly fast. But Native Image works under the "closed-world assumption," i.e., all utilized classes must be known during compilation. So, the migration to Native Image is not about changing certain lines of code; it is about shifting the development approach. Your task is to make dynamic resources known to Native Image at the compilation stage.

Native Image Specifics

Finalization

Developing a Spring application is not the same as writing a bare Java app. We should keep that in mind when working with Native Image. To bring Spring and Native Image together, you must dig into some Java peculiarities. For example, let's take a simple case of class finalization. At the beginning of Java's evolution, we could write some housekeeping code in finalize(), set the System.runFinalizersOnExit(true) flag, and wait for the program to exit.

```java
public class ShutdownHookedApp {
    public static void main(String[] args) {
        System.runFinalizersOnExit(true);
    }

    protected void finalize() throws Throwable {
        System.out.println("Goodbye World!");
    }
}
```

If you expect a "Goodbye World!" output, you will be surprised: this code won't run with current Java versions due to garbage collection specifics. With Java versions 8-10, the app will do nothing, and with Java 11, it will throw an exception because the method has been removed:

```
➜ shutdown_hook_jar java -jar ./shutdown-hook.jar
Exception in thread "main" java.lang.NoSuchMethodError: void java.lang.System.runFinalizersOnExit(boolean)
        at ShutdownHookedApp.main(ShutdownHookedApp.java:9)
```

Why was this feature removed? Finalizers work in some situations and fail in others, and developers can't rely on a feature with such unpredictable behavior. The Native Image documentation states that finalizers don't work and must be substituted with weak references, reference queues, or something else, depending on the situation.
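One practical substitute, for cleanup tied to an object's lifetime, is java.lang.ref.Cleaner, available since Java 9. Below is a minimal sketch (the class and message names are illustrative); note that the cleanup action must not hold a reference to the object it cleans, or the object can never become unreachable:

```java
import java.lang.ref.Cleaner;

public class CleanerApp {
    private static final Cleaner CLEANER = Cleaner.create();

    static final class Resource implements AutoCloseable {
        private final Cleaner.Cleanable cleanable;

        Resource() {
            // The lambda deliberately captures no reference to 'this'.
            this.cleanable = CLEANER.register(this, () -> System.out.println("Goodbye World!"));
        }

        @Override
        public void close() {
            cleanable.clean(); // deterministic cleanup; the action runs at most once
        }
    }

    public static void main(String[] args) {
        try (Resource r = new Resource()) {
            System.out.println("Working...");
        }
    }
}
```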
For the Java platform, making unpredictable behavior a thing of the past is a good development trend. If you want to guarantee the String output upon exit, use Runtime.getRuntime().addShutdownHook():

```java
public class ShutdownHookedApp {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Goodbye World!");
        }));
    }
}
```

Spring developers have additional tools. You can use the @PreDestroy annotation, close the context manually with ConfigurableApplicationContext.close(), or write something similar to shutdown hook registration. You can do this instead of a finalizer:

```java
@Component
public class WorldComponent {
    @PreDestroy
    public void bye() {
        System.out.println("Goodbye World!");
    }
}
```

Or you can use this instead of a shutdown hook:

```java
@SpringBootApplication
public class PredestroyApplication {
    public static void main(String[] args) {
        ConfigurableApplicationContext ctx = SpringApplication.run(PredestroyApplication.class, args);
        int exitCode = SpringApplication.exit(ctx, new ExitCodeGenerator() {
            @Override
            public int getExitCode() {
                System.out.println("Goodbye World!");
                return 0;
            }
        });
        System.exit(exitCode);
    }
}
```

Now, let's collect all these methods in one code snippet:

```java
@SpringBootApplication
public class PredestroyApplication {
    public static void main(String[] args) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            System.out.println("Goodbye World! (shutdown-hook)");
        }));

        ConfigurableApplicationContext ctx = SpringApplication.run(PredestroyApplication.class, args);
        int exitCode = SpringApplication.exit(ctx, new ExitCodeGenerator() {
            @Override
            public int getExitCode() {
                System.out.println("Goodbye World! (context-exit)");
                return 0;
            }
        });
        System.exit(exitCode);
    }

    @PreDestroy
    public void bye() {
        System.out.println("Goodbye World! (pre-destroy)");
    }

    @Override
    protected void finalize() throws Throwable {
        System.out.println("Goodbye World! (finalizer)");
    }
}
```

Let's run the program and see the order of our "finalizers":

```
Goodbye World! (context-exit)
Goodbye World! (pre-destroy)
Goodbye World! (shutdown-hook)
```

This code will also function when the Spring app is compiled into a native image.

Initialization

Let's set finalization aside and look into initialization for a change. Spring provides several field initialization methods: you can assign a value directly with the @Value annotation or define properties with @Autowired or @PostConstruct. GraalVM adds another interesting technique, which enables you to write data at the binary compilation stage. Classes that you want to initialize this way are marked with --initialize-at-build-time=my.class when building the native image. The option works for the whole class, not just separate fields. It is convenient and sometimes even required (if you use Netty, for example).

Let's build a new Spring application with Spring Initializr. The only dependency we need to specify is GraalVM Native Support. You also need a native image build tool to generate native executables. BellSoft develops Liberica Native Image Kit (NIK), a GraalVM-based utility recommended by Spring. Download Liberica NIK for your platform; select NIK 22 (JDK 17), Full version.
Put the compiler on your $PATH:

```shell
GRAALVM_HOME=/home/user/opt/bellsoft-liberica
export PATH=$GRAALVM_HOME/bin:$PATH
```

Check that Liberica NIK is installed:

```
$ java -version
openjdk version "17.0.5" 2022-10-18 LTS
OpenJDK Runtime Environment GraalVM 22.3.0 (build 17.0.5+8-LTS)
OpenJDK 64-Bit Server VM GraalVM 22.3.0 (build 17.0.5+8-LTS, mixed mode, sharing)

$ native-image --version
GraalVM 22.3.0 Java 17 CE (Java Version 17.0.5+8-LTS)
```

Back to Spring Boot. Our application will perform the following logic: we initialize a PropsComponent bean and ask it for a key:

```java
@SpringBootApplication
public class BurningApplication {
    @Autowired
    PropsComponent props;

    public static void main(String[] args) {
        SpringApplication.run(BurningApplication.class, args);
    }

    @PostConstruct
    public void displayProperty() {
        System.out.println(props.getProps().get("key"));
    }
}
```

The component properties are loaded in a static class initializer:

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.Properties;

import org.springframework.stereotype.Component;

@Component
public class PropsComponent {
    private static final Properties props;
    public static final String CONFIG_FILE = "/tmp/my.props";

    static {
        Properties fallback = new Properties();
        fallback.put("key", "default");
        props = new Properties(fallback);
        try (InputStream is = new FileInputStream(CONFIG_FILE)) {
            props.load(is);
        } catch (IOException ex) {
            throw new UncheckedIOException("Failed to load resource", ex);
        }
    }

    public Properties getProps() {
        return props;
    }
}
```

Create a /tmp/my.props text file and populate it with data:

```
key=apple
```

If we build the standard Java app, we get different outputs by changing the contents of the my.props file. But we can change the rules of the game. Let's write the following Native Image configuration in our pom.xml:

```xml
<profiles>
    <profile>
        <id>native</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.graalvm.buildtools</groupId>
                    <artifactId>native-maven-plugin</artifactId>
                    <executions>
                        <execution>
                            <id>build-native</id>
                            <goals>
                                <goal>compile-no-fork</goal>
                            </goals>
                            <phase>package</phase>
                        </execution>
                    </executions>
                    <configuration>
                        <buildArgs>
                            --initialize-at-build-time=org.graalvm.community.examples.burning.PropsComponent
                        </buildArgs>
                    </configuration>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>
```

Pay attention to the initialize-at-build-time key. Now, build the app with mvn clean package -Pnative. The resulting file is in the target directory. No matter how many times we change the /tmp/my.props file afterwards, the output stays the same (the value we baked in at compilation).

On the one hand, this is an excellent tool that increases application portability if you use the properties file for code organization and not for dynamic String loading. On the other hand, it may lead to misuse and misunderstanding of the code. For example, a DevOps engineer may glance at the code, see the my.props file, and then spend the whole day trying to understand why the file doesn't pick up his or her settings. This is a fundamental concept of Native Image: the separation of data between compilation and run time. If you build app configuration based on environment variables, you should evaluate which keys will be initialized at which moment. For convenience's sake, it is possible to use different prefixes, like S_ for static compilation and D_ for dynamic values.

Native Image Limitations

Some functions work differently or don't work with GraalVM at all:

- Reflection
- Proxies
- Method Handles
- Serialization
- JNI
- Resources

One approach is to accept that they are not supported and rewrite the app accordingly.
Another way is to understand what we know at compile time and put this data into config files. Below is the example for Reflection:

```json
[
  {
    "name": "HelloWorld",
    "allDeclaredFields": true
  }
]
```

To avoid manual configuration, run the app on the standard JVM with the java -agentlib:native-image-agent=config-output-dir=./config flag. While you are using the application, all the resources you utilize are written into the config directory. The agent usually generates a whole bunch of files associated with the features mentioned above:

- jni-config.json
- predefined-classes-config.json
- proxy-config.json
- reflect-config.json
- resource-config.json
- serialization-config.json

After that, reference these files in the pom.xml Spring configuration:

```xml
<groupId>org.graalvm.buildtools</groupId>
<artifactId>native-maven-plugin</artifactId>
<configuration>
    <buildArgs>
        -H:ReflectionConfigurationFiles=reflect-config.json
    </buildArgs>
</configuration>
```

If you adjust Reflection as shown above and then try to access something you didn't define, the program won't exit with an error like "Aborting stand-alone image build due to reflection use without configuration." Instead, it will continue running, and the Reflection calls will return an empty result. This means you must cover all Reflection calls when writing your tests.

Compatibility With Legacy Libraries

The Java ecosystem has a competitive edge over C/C++. With Java, you can add a couple of lines to the pom.xml, and Maven will load a ready-to-use library. In contrast, a C++ developer must put libraries together manually for different platforms. Classical Java enables you to use third-party libraries as ready-to-go boxes without knowing how they were developed.

The situation differs with Native Image. Because Native Image uses global code analysis, it compiles the libraries together with the application code. Third-party libraries are compiled anew every time on your computer. If there is a compilation error, you will have to solve the issues related to the incompatible library. If you develop an innovative solution, these GraalVM incompatibilities are a great way to find issues in your code or discover new ways of developing your projects. If you write hardcore fintech code, determine first whether the third-party library supports GraalVM Native Image.

But let's go back to Spring: what about Native Image support there? The Spring team has done outstanding work integrating Native Image technology into the ecosystem. Just a year ago, Spring didn't support the technology. Then the Spring Native project was born, and now Spring Boot has baked-in support for Native Image. The team continues building on the momentum, and many Spring libraries and modules are already compatible with Native Image. Still, we recommend running the libraries with your code using a prototype to make sure that everything works correctly.

Development and Debugging Intricacies

Native Image compilation takes time (at least 90 seconds), so it is more practical to write and debug code using the standard JVM and turn to Native Image only when you have some coherent results. But you should always test the resulting binary separately, even if you double-checked the JAR. Why? An application compiled with Native Image can behave differently than the "classic" JVM version. For example, you may have an inexplicit Reflection call somewhere in the code: it fails without an error or message, and the code gives a different result.
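As a hypothetical sketch of such an inexplicit Reflection call (the class and property names here are made up for illustration), consider a lookup that resolves a type by name at runtime:

```java
// Works on the JVM, where classes are resolved dynamically. In a native
// image, the target class is invisible to the closed-world analysis, so
// without an entry in reflect-config.json this lookup fails (for example,
// with a ClassNotFoundException) or yields an incomplete result.
String className = System.getProperty("handler.class", "com.example.DefaultHandler");
Object handler = Class.forName(className)
        .getDeclaredConstructor()
        .newInstance();
```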
To accelerate the testing process, set up the CI server so that it builds all commits through Native Image. You can also save testers' time by providing them with Native Image binaries only, without the JARs. In addition, DevOps engineers should write a console script that can be easily started (ondemand ./project). It builds the project, compiles it with Native Image, packs it into a Docker image, and deploys it to a new virtual machine on Amazon. Fortunately, Spring Boot performs the whole build process with a single command: mvn clean package -Pnative. But virtual machine deployment remains your task.

Performance Profile

Legends and mysteries surround Native Image advantages. They claim it will make apps smaller and faster. But what do "smaller" and "faster" mean? Two decades ago, before cloud computing and microservices, developers wrote only monolithic applications (they still prevail in some industries, such as gaming). The key performance indicators for monolithic applications are raw peak performance and minimal latency. The Just-in-Time (JIT) compiler built into OpenJDK is quite good at these tasks. But performance increases only after Tier4CompileThreshold invocations trigger the C2 JIT compiler, which takes time. That is not optimal for cloud-native microservices. Key performance indicators are different in the cloud:

- Microservices have to restart rapidly for efficient scaling;
- Containers must consume fewer resources so as not to inflate cloud bills;
- The build process must be simple to make the DevOps processes easier;
- Packages must be small so that developers can rapidly solve issues and move apps between Kubernetes nodes.

The JIT compiler is not suitable for these purposes because of its long warm-up time and excessive overhead. GraalVM uses AOT (ahead-of-time) compilation, which significantly reduces startup time. As for memory consumption, the resulting native executable is not always smaller than an Uber JAR or Fat JAR, so developers should build the project (or part of it) with Native Image and verify whether it is worth the trouble.

There is one more thing to consider when selecting the compiler, namely the load patterns. AOT compilation is best suited to a "flat" load profile, where the application load is predictable. JIT is optimal for applications with sudden load peaks, because JIT can detect and optimize such loads. The choice depends on your app. Take a look at your Spring Boot app: find out which microservices return web pages, which work with a database, and which perform complex analytics. We can safely assume that the load profile of web services will be flatter than that of business analytics, so they can potentially be migrated to Native Image.

Garbage Collection in Native Image

Native Image is a relatively new project, so it doesn't offer the variety of garbage collectors available in OpenJDK. GraalVM Community Edition (and Liberica NIK) currently uses only a simple Serial GC with generations (a generational scavenger). Oracle's GraalVM Enterprise also has a G1 GC.

The first thing to note is that Native Image uses more memory than stated in the Xmx parameter. A standard JVM-based app does that too, but for different reasons. In the case of Native Image, the root cause resides in garbage collection specifics: the GC uses additional memory when performing its tasks. If you run your application in a container and set the exact amount of memory in Xmx, it will probably go down when the load increases.
Therefore, you should allocate more memory. Use a trial-and-error approach to find out how much. Furthermore, if you write a tiny program, it doesn’t mean it will automatically use less memory. As with the JVM, we have the Xmx (maximum heap size in bytes) and Xmn (young generation size) parameters. If you don’t state them, the app may devour all available memory within the limit. You can alleviate the situation with the -R:MaxHeapSize parameter, which sets the default heap size at build time. Thanks to these Native Image specifics, we can now conveniently write console applications with Spring Boot. Imagine you want to create a console client for your web service. The first thought that comes to mind is to develop a Spring Boot app and reuse the whole Java code, including the classes for the API. But such an application would take several seconds to start without a chance for acceleration with JIT because there’s too little runnable code in console apps to trigger JIT. And every application start would consume a lot of RAM. Now you can compile the app with Native Image, set -R:MaxHeapSize, and get a good result, no worse than with standard Linux console commands. For illustrative purposes, I wrote a console jls utility with the same function as ls, i.e., listing files. The algorithm is borrowed from StackOverflow. Java import java.io.File; import org.springframework.boot.SpringApplication; import org.springframework.boot.autoconfigure.SpringBootApplication; @SpringBootApplication public class JLSApplication { public static void main(String[] args) { SpringApplication.run(JLSApplication.class, args); walkin(new File(args[0])); } // Recursively walks the directory tree and prints each entry public static void walkin(File dir) { File[] listFile = dir.listFiles(); if (listFile != null) { for (File file : listFile) { if (file.isDirectory()) { System.out.println("|\t\t"); walkin(file); } else { System.out.println("+---" + file.getName()); } } } } } Define the maximum heap size in the Maven settings: XML <configuration> <buildArgs> -R:MaxHeapSize=2m </buildArgs> </configuration> According to the time utility, the execution time for the /tmp directory is about 0.02 s, which is within the margin of error. And that time includes the start of the whole Spring Boot app plus the run of the algorithm itself. The result is quite impressive compared to a JAR file. Conclusion Finally, we can compile our Spring Boot projects with Native Image! This powerful utility makes it possible to perform tasks previously unattainable for Java developers.
To put it in simple terms, HTML web forms are web elements designed to enable users to enter their information, which may include their name, age, gender, credit card number, etc., which is then sent to a server for processing. Web forms are very useful and are now a very important aspect of web development. Imagine having to travel to a foreign country just to fill out a form because you want to apply for a course at a university there. Since every modern university has a website with a form, students can sit in the comfort of their homes and apply at their convenience, saving them the trip to visit the school in person. Next, the school collects this information to decide if the candidate is qualified to study at their university. Web forms are not limited to schools; businesses such as banks and e-commerce companies, to mention a few, use web forms to collect information from their customers. This helps them decide how to serve the needs of their customers better. This is exactly what web forms are designed to do: collect information for processing. In this tutorial on CSS Forms, we will take a closer look at how to style forms with CSS and much more. Prerequisites for Styling CSS Forms By the end of this tutorial, you will be in a position to build a form in HTML that is styled using CSS. However, a basic knowledge of HTML and CSS is required to understand this article. Here is a sample of the finished project of what we’ll be building. Here is the link to CodePen for this styling CSS Form project’s source code. Creating HTML Boilerplates Let’s start by creating the website boilerplates (that is, the HTML code structure for the website). This contains the head and body tags, as seen below: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="viewport" content="width=device-width,initial-scale=1.0"> <title>How to style forms with CSS: A beginner's guide</title> <link rel="stylesheet" href="main.css"> </head> <body> </body> </html> After creating our HTML boilerplate, save it as an index.html file. I am using VS Code but you can use any IDE of your choice. Creating the Necessary HTML Tags Now, let’s create the necessary HTML tags for our styling CSS Forms project: <body> <div class="site__container"> <main class="hero__images"> <main class="card__wrapper"> <!-- background for the form --> <section class="card__forms"> <!-- Wrapper for all items of the box --> <section class="items__wrapper"> <div class="site__logo">MS</div> <div class="sign__caption"> <p>Signin for home delivery service</p> </div> <div class="user_id"> <!-- user id options for username and password --> <input type="text" name="Username" placeholder="Username"> <input type="password" name="Password" placeholder="Password"> </div> <div class="checkbox__wrapper"> <!-- Input field for checkbox and forget password --> <input type="checkbox" name="checkbox"> <label for="checkbox">stay signed in</label> <a href="#">Forget Password?</a> </div> <div class="btn__wrapper"> <!-- Sign in button --> <button class="btn__signin">sign in</button> </div> <div class="signup__option"> <!-- Sign up option for new users --> <p>Don't have an account yet? 
<a href="#">Sign Up!</a></p> </div> </section> </section> </main> </main> </div> </body> From the code sample above, let’s look at what each tag is supposed to do based on the class names assigned to them: site__container: This class is assigned to a div tag that wraps around every other tag in our HTML body. hero__images: This class is assigned to the main tag. This tag is where our hero image will be assigned using CSS. card__wrapper: This class is assigned to another main tag nested inside the hero__images tag. This tag wraps around all tags that make up our web form. card__forms: This class is assigned to the section tag, which is the main tag for our web form. items__wrapper: This tag wraps around the div, input, button, and link tags, which are the items within the web form. site__logo: This is the site logo. sign__caption: This tag helps inform the user why they should sign up/sign in using the web form. user_id: This wraps around the input tags where the user has to enter their username and password. checkbox__wrapper: This wraps around the input, a (anchor), and label tags. Here, we ask the user if they would like their user ID to be saved, by clicking the checkbox, so they don’t have to retype it the next time they visit the site. We also provide a link for users who forgot their password and need to recover it. btn__wrapper: This wraps around the main button of the form. This is the button the user clicks to sign in to the site. signup__option: This tag wraps around the paragraph tag and a link tag. Here, we provide an option for new users who don’t have an account to sign up. Now that we have the HTML boilerplate set up, save it, and run it in your favorite browser. The code in this CSS Forms tutorial is run using Google Chrome. Browser Output Your code should look like this. You will notice how plain and simple the website is from our browser output. This is because we have not added the CSS yet. In the next section of this tutorial, we’ll talk about this. Styling Common Form Elements With CSS Form elements are some of the most common elements on a website. Every site must have these elements, from login forms to search boxes and buttons, to be functional. These elements are sometimes overlooked in design and styling, which may cause them to blend into each other, making your site’s UI look dull. A good way to avoid this is by using CSS to change the appearance of form elements such as text fields and buttons. Here’s how you can style common CSS form elements: 1. In your Visual Studio Code, create a new file and name it main.css. This is going to be the CSS file. Notice from our HTML file that, in the head tag, we have a link tag that points to main.css. This link tag connects the CSS file to the HTML document so that the styles we write in main.css are applied to the page. 2. Let’s add the relevant CSS code in main.css so we can style the plain page we created earlier using HTML. 3. Open the file in your editor. Now, let’s write some CSS. Applying the Universal Selector The universal selector (*) targets every element on the page, letting you define base styles once instead of repeating them for each element. This saves us a lot of time and makes the code more maintainable. Type and run the code below for your CSS Forms project: *, *::after, *::before { padding: 0; margin: 0; box-sizing: border-box; } From the CSS code above, we use the CSS universal selector to target all the elements on the webpage. 
We set padding and margin to 0 and box-sizing to border-box. This removes the browser’s default white space on the webpage so we don’t have unnecessary white space interfering when styling CSS Forms. Applying Viewport Width (VW) and Viewport Height (VH) Viewport width (vw) and viewport height (vh) are units available in CSS3 and have quite a few uses. One vw is equal to 1% of the viewport’s width, and one vh is equal to 1% of the viewport’s height. Using these properties, you can do some cool things with your website: .site__container { width: 100vw; height: 100vh; } We target the div tag with a class of .site__container and assign it a width of 100vw and a height of 100vh. This sizes the container to take up the browser’s full width and height. You will not see any effect when you refresh your browser since the other HTML tags have not been given specific sizing or styling. Applying Hero Image The hero image is a common element in blog design. It’s a large, eye-catching image that spans the full width of the page and often a good portion of the page’s height. It is usually used to draw initial attention to an article or page and as an anchor point for future articles or pages within that site section. A hero image can also highlight content, such as images, videos, or other interactive elements, by making it the central point of focus on the page: .hero__images { height: 100%; background-image: url("./delivery-man.jpg"); background-repeat: no-repeat; background-size: cover; background-position: center; } Browser Output From the CSS code above, we assigned a height of 100% to the hero__images class. This lets the hero__images class inherit the height of its direct parent, which is 100vh, so the background image occupies the whole browser viewport. Then we set a background-image. We also added a background-repeat of no-repeat to prevent it from repeating, a background-size of cover, which sets the image to cover the entire viewport, and a background-position of center, which centers the image within the viewport or container. Centering the Form With Flexbox Centering the form with CSS Flexbox is easy: wrap the form in a parent container and let Flexbox handle the alignment, and this works in all modern browsers. You’ll need three elements: The actual form. A wrapper element (parent). An element for the actual content (child). We’ll use CSS Flexbox to center the web form in the middle of the browser. Type and run the code below: .card__wrapper { height: 100%; display: flex; justify-content: center; align-items: center; } Browser Output In this section, we target the card__wrapper class and set a height of 100%, display to flex, justify-content to center, and align-items to center. This positions the form in the center, horizontally and vertically, while styling CSS Forms. Styling the Form Element The HTML for a typical form consists of various input elements, each representing a different type of data. In CSS, you can style the input elements in various ways to create distinction among them. 
Here we apply styling to the CSS form and add a specific width and height: .card__forms { display: flex; justify-content: center; width: 400px; height: 400px; background-color: rgb(1, 32, 32, 0.4); border-radius: 0.5rem; box-shadow: 3.5px 3.5px 4px 2px rgba(0, 0, 0, 0.3); border-top: 2px solid rgb(89, 250, 156); border-bottom: 2px solid rgb(89, 250, 156); } Browser Output We target the card__forms class and apply a display of flex, justify-content of center, and a width and height of 400px each to give it a defined size, plus a background-color of rgb(1, 32, 32, 0.4). The fourth value, 0.4, is the alpha channel, which makes the background semi-transparent. We also added a border-radius of 0.5rem, a box-shadow, and a border-top and border-bottom of 2px solid rgb(89, 250, 156). This creates the solid lime color you can see at the top and bottom of our CSS Form. Styling the Form Logo Many websites reuse the logo’s color for the input fields and submit button in a form. The first reason is that the form stays consistent with the overall design. The second reason is that it makes it easier to differentiate between a regular input field and a submit button, since the accent color is used for the submit button. Here we apply styling to the logo on the form element: .site__logo { width: 40px; padding: 4px; margin: 2.0rem 5rem; text-align: center; border-radius: 50%; font-size: x-large; font-weight: bolder; font-family: 'Trebuchet MS', sans-serif; background-color: rgb(89, 250, 156); color: rgb(1, 32, 32); cursor: default; } Browser Output We targeted the site__logo class and added a width of 40px, padding of 4px, and a margin of 2rem for the top and bottom and 5rem for the left and right (to add extra white space). We also apply text-align of center (to center the logo), border-radius of 50% (to make the logo round), font-size of x-large, font-weight of bolder, font-family of “Trebuchet MS,” a background-color of rgb(89, 250, 156), a color of rgb(1, 32, 32), and a cursor of default. Styling Site Caption The site caption is a little bit of text that appears at the top of every page on your website. This can be any text you want. It is typically used to identify who created the site and possibly provide legal information about the site’s content. By styling this text, we can make it stand out more or appear in multiple places on a page. Here we apply styling to the caption on the CSS Form: .sign__caption p { color: white; font-family: calibri; font-style: italic; text-transform: lowercase; margin-bottom: 1.5rem; } Browser Output We selected the sign__caption class and targeted the p tag inside it. We apply a text color of white, font-family of calibri, font-style of italic, text-transform of lowercase, and margin-bottom of 1.5rem (to apply extra white space at the bottom). Styling the Input Tag The input tag comes with a few styles by default. It has the look of a text field, and it’s a good idea to use the default styling for the most part. The default styling provides enough contrast between elements so users can easily read and understand what they’re filling in. 
Here we apply styling to the input tag on the CSS form, where users can enter their information: .user_id input { width: 100%; display: block; outline: none; border: 0; padding: 1rem; border-radius: 20px; margin: 0.8rem 0; color: rgb(1, 32, 32); } .user_id input::placeholder{ color: rgb(1, 32, 32); } .user_id input:active { outline: 2px solid rgb(89, 250, 156); } Browser Output We apply the following values from the code sample above to the input tags nested inside the user_id class: width: 100% (so the input tag takes the full width of its container). display: block (so the tag can be positioned properly). outline: none (to remove the outline around the input tag when we click on it). border: 0 (to remove the gray border around the input tag). padding: 1rem (to add more space within the input tag to give room for the user’s input, such as usernames and passwords). border-radius: 20px (to give it a rounded curve at the edge). margin: 0.8rem 0 (0.8rem adds extra space at the top and bottom, while the 0 means no space should be added to the left and right of the input tag). color: rgb(1, 32, 32). For the placeholder, we added a text color of rgb(1, 32, 32), which is responsible for the “Username” and “Password” text. And for the active state, we added an outline of 2px solid rgb(89, 250, 156). You will see the outline color when you click on the input field of the CSS form. Styling Forget Password Next, we style the checkbox label and the forget-password link so they fit the form’s design; a combination of standard CSS properties is all we need for this. Here, we apply styling to the label and a tags, providing two options: one for users who want their account to remain signed in and one for users who forgot their password and want to recover it: .checkbox__wrapper label { color: white; font-family: calibri; text-transform: lowercase; } .checkbox__wrapper a { color: rgb(89, 250, 156); font-family: calibri; text-transform: lowercase; text-decoration: none; font-style: italic; } .checkbox__wrapper a:hover { color: rgb(255, 255, 255); font-family: calibri; text-transform: lowercase; text-decoration: none; font-style: normal; } Browser Output In this section, we targeted the label tag nested inside the .checkbox__wrapper class and applied the following styling to it: a color of white, a font-family of calibri, and a text-transform of lowercase. On the anchor tag, we applied: a color of rgb(89, 250, 156), a text-decoration of none (to remove the default underline on the anchor tag), and a font-style of italic to differentiate it from the label text. Since the anchor tag is a link that is meant to send a request, we decided to add a hover state to signal to the user that this is a clickable link. On hover, we set the text color to rgb(255, 255, 255) and the font-style back to normal. Style the Form Button The form button is the first thing a user will see on your website. A nice button can make a good impression, but a bad one can leave a user with a bad taste in their mouth before they even get to read any content. Here we apply styling to the button tag on the CSS form. 
This button enables the user to sign into the website: .btn__wrapper button { width: 100%; border: none; padding: 1rem; border-radius: 20px; text-transform: uppercase; font-weight: bolder; margin: 0.8rem 0; color: rgb(1, 32, 32); } .btn__wrapper button:hover { background-color: rgb(89, 250, 156); color: white; transition: all 0.5s ease-in-out; cursor: pointer; } Browser Output In this section, we targeted the button tag nested in the btn__wrapper class, and we applied: width: 100% to make it take the full width within the container. border: none to remove the gray border around the button. padding: 1rem to add space between the “SIGN IN” text and the edge of the button. border-radius: 20px to apply a rounded corner style to the border. text-transform: uppercase to capitalize the text. font-weight: bolder to make the text bold. margin: 0.8rem at the top and bottom to give white space around the button, and 0 at the left and right. color: rgb(1, 32, 32). On hover, we set the background-color to rgb(89, 250, 156) and the text color to white to create a kind of invert effect when we hover over it, with a transition of all 0.5s ease-in-out and a cursor of pointer. To see these changes, move your mouse pointer to hover over the button. Styling the Signup Option The signup option is styled to look like the forget-password link: simple and recognizable, so users will know it is clickable and what it does. Here we apply styling to the p and a tags, where we provide an option for users who don’t have an account yet but want to sign up: .signup__option p { color: white; font-family: calibri; text-transform: lowercase; } .signup__option a { color: rgb(89, 250, 156); font-family: calibri; text-transform: lowercase; text-decoration: none; font-style: italic; } .signup__option a:hover { color: rgb(255, 255, 255); font-family: calibri; text-transform: lowercase; text-decoration: none; font-style: normal; } Browser Output From the browser output, you will notice that the “stay signed in / forget password” and the “don’t have an account yet? / sign up!” lines look the same. Well, you guessed it right! We copied the CSS styles from the checkbox__wrapper label and applied them to .signup__option p, then copied the forget-password link styles and applied them to .signup__option a. Now we have the same effect. Here is the link to the finished project on styling CSS forms. Summary You have learned how to style forms using CSS. You also learned how to center items using Flexbox, apply transitions to buttons, and apply background images. Alright! We’ve come to the end of this tutorial. Thanks for taking your time to read this article to completion. Feel free to ask questions. I’ll gladly reply.
Welcome back to this series about uploading files to the web. If you missed the first post, I recommend you check it out because it’s all about uploading files via HTML. The full series will look like this: Upload files With HTML Upload files With JavaScript Receiving File Uploads With Node.js (Nuxt.js) Optimizing Storage Costs With Object Storage Optimizing Delivery With a CDN Securing File Uploads With Malware Scans In this article, we’ll do the same thing using JavaScript. Previous Article Info We left the project off with the form that looks like this: <form action="/api" method="post" enctype="multipart/form-data"> <label for="file">File</label> <input id="file" name="file" type="file" /> <button>Upload</button> </form> In the previous article, we learned that in order to access a file on the user’s device, we had to use an <input> with the “file” type. To create the HTTP request to upload the file, we had to use a <form> element. When dealing with JavaScript, the first part is still true. We still need the file input to access the files on the device. However, browsers have a Fetch API we can use to make HTTP requests without forms. I still like to include a form because: Progressive enhancement: If JavaScript fails for whatever reason, the HTML form will still work. I’m lazy: The form will actually make my work easier later on, as we’ll see. With that in mind, for JavaScript to submit this form, I’ll set up a “submit” event handler: const form = document.querySelector('form'); form.addEventListener('submit', handleSubmit); /** @param {Event} event */ function handleSubmit(event) { // The rest of the logic will go here. } handleSubmit Function Throughout the rest of this article, we’ll only be looking at the logic within the event handler function, handleSubmit. The first thing I need to do in this submit handler is call the event’s preventDefault method to stop the browser from reloading the page to submit the form. I like to put this at the end of the event handler so if there is an exception thrown within the body of this function, preventDefault will not be called, and the browser will fall back to the default behavior: /** @param {Event} event */ function handleSubmit(event) { // Any JS that could fail goes here event.preventDefault(); } Next, we’ll want to construct the HTTP request using the Fetch API. The Fetch API expects the first argument to be a URL, and a second, optional argument as an Object. We can get the URL from the form’s action property. It’s available on any form DOM node, which we can access using the event’s currentTarget property. If the action is not defined in the HTML, it will default to the browser’s current URL: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); fetch(url); event.preventDefault(); } Relying on the HTML to define the URL makes it more declarative, keeps our event handler reusable, and our JavaScript bundles smaller. It also maintains functionality if the JavaScript fails. By default, Fetch sends HTTP requests using the GET method, but to upload a file, we need to use a POST method. We can change the method using fetch’s optional second argument. 
I’ll create a variable for that object and assign the method property, but once again, I’ll grab the value from the form’s method attribute in the HTML: const url = new URL(form.action); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; fetch(url, fetchOptions); Now the only missing piece is including the payload in the body of the request. If you’ve ever created a Fetch request in the past, you may have included the body as a JSON string or a URLSearchParams object. Unfortunately, neither of those will work to send a file, as they don’t have access to the binary file contents. Fortunately, there is the FormData browser API. We can use it to construct the request body from the form DOM node. And conveniently, when we do so, it even sets the request’s Content-Type header to multipart/form-data, also a necessary step to transmit the binary data: const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); Recap That’s really the bare minimum needed to upload files with JavaScript. Let’s do a little recap: Access the file system using a file type input. Construct an HTTP request using the Fetch (or XMLHttpRequest) API. Set the request method to POST. Include the file in the request body. Set the HTTP Content-Type header to multipart/form-data. Today, we looked at a convenient way of doing that, using an HTML form element with a submit event handler, and using a FormData object in the body of the request. The current handleSubmit function should look like this: /** @param {Event} event */ function handleSubmit(event) { const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, body: formData, }; fetch(url, fetchOptions); event.preventDefault(); } GET and POST Requests Unfortunately, the current submit handler is not very reusable. Every request will include a body set to a FormData object and a “Content-Type” header set to multipart/form-data. This is too brittle. Bodies are not allowed in GET requests, and we may want to support different content types in other POST requests. We can make our code more robust to handle GET and POST requests, and send the appropriate Content-Type header. We’ll do so by creating a URLSearchParams object in addition to the FormData, and running some logic based on whether the request method should be POST or GET. I’ll try to lay out the logic below: Is the request using a POST method? Yes: Is the form’s enctype attribute multipart/form-data? Yes: set the body of the request to the FormData object. The browser will automatically set the “Content-Type” header to multipart/form-data. No: set the body of the request to the URLSearchParams object. The browser will automatically set the “Content-Type” header to application/x-www-form-urlencoded. No: We can assume it’s a GET request. Modify the URL to include the data as query string parameters. 
The refactored solution looks like: /** @param {Event} event */ function handleSubmit(event) { /** @type {HTMLFormElement} */ const form = event.currentTarget; const url = new URL(form.action); const formData = new FormData(form); const searchParams = new URLSearchParams(formData); /** @type {Parameters<fetch>[1]} */ const fetchOptions = { method: form.method, }; if (form.method.toLowerCase() === 'post') { if (form.enctype === 'multipart/form-data') { fetchOptions.body = formData; } else { fetchOptions.body = searchParams; } } else { url.search = searchParams; } fetch(url, fetchOptions); event.preventDefault(); } I really like this solution for a number of reasons: It can be used for any form. It relies on the underlying HTML as the declarative source of configuration. The HTTP request behaves the same as with an HTML form. This follows the principle of progressive enhancement, so file upload works the same when JavaScript is working properly or when it fails. Conclusion So, that’s it. That’s uploading files with JavaScript. I hope you found this useful and plan to stick around for the whole series. In the next article, we’ll move to the back end to see what we need to do to receive files. Thank you so much for reading. If you liked this article, please share it. It's one of the best ways to support me.
Quarkus is an open-source, full-stack Java framework designed for building cloud-native, containerized applications. As Quarkus is built for cloud applications, it is designed to be lightweight and fast, with fast startup times. A well-designed containerized application facilitates the implementation of reliable REST APIs for creating and accessing data. Data validation is often an afterthought for developers, but it is important for keeping the data consistent and valid. REST APIs need to validate the data they receive, and Quarkus provides rich built-in support for validating REST API request objects. There are situations where we need custom validation of our data objects. This article describes how we can create custom validators using the Quarkus framework. REST API Example Let’s consider the simple example below, where we have a House data object and a REST API to create a new House. The following fields need to be validated: number: should not be null. street: should not be blank. state: should only be California (CA) or Nevada (NV). Java class House { int number; String street; String city; String state; String type; } And the REST API: Java @Path("/house") public class HouseResource { @POST public String createHouse(House house) { // Additional logic to process the house object return "Valid house created"; } } Configure Quarkus Validator Quarkus provides the Hibernate validator to perform data validation. This is a Quarkus extension and needs to be added to the project. For Maven projects, add the dependency to the pom.xml: XML <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-hibernate-validator</artifactId> </dependency> For Gradle-based projects, add the following to build.gradle: Groovy implementation("io.quarkus:quarkus-hibernate-validator") Built-In Validators The commonly used validators are available as annotations that can be easily added to the data object. In our House data object, we need to ensure the number property is not null and the street is not blank. We will use the annotations @NotNull and @NotBlank: Java class House { @NotNull int number; @NotBlank(message = "House street cannot be blank") String street; String city; String state; String type; } To validate the data object in a REST API, it is necessary to include the @Valid annotation. By doing so, there is no need for manual validation, and any validation errors will result in a 400 HTTP response being returned to the caller: Java @Path("/house") public class HouseResource { @POST public String createHouse(@Valid House house) { return "Valid house received"; } } Custom Validation There are several scenarios in which the default validations are insufficient, and we must implement some form of custom validation for our data. In our example, the House data is supported only in California (CA) and Nevada (NV). Let’s create a validator for this: Java @Retention(RetentionPolicy.RUNTIME) @Target({ ElementType.FIELD }) @Constraint(validatedBy = StateValidator.class) public @interface ValidState { String message() default "State not supported"; Class<? extends Payload>[] payload() default {}; Class<?>[] groups() default {}; } Getting into the details: The name of the validator is ValidState. In class validation, it can be used as @ValidState. The default error message is defined in the message() method. This annotation is validated by StateValidator.class and is linked using the @Constraint annotation. The @Target annotation indicates where the annotation might appear in the Java program. 
In the above case, it can be applied only to fields. The @Retention annotation describes the retention policy for the annotation. In the above example, the annotation is retained at runtime. The validation logic lives in the StateValidator class: Java public class StateValidator implements ConstraintValidator<ValidState, String> { List<String> states = List.of("CA", "NV"); @Override public boolean isValid(String value, ConstraintValidatorContext context) { return value != null && states.contains(value); } } The custom validator @ValidState can now be included in the House class: Java class House { @NotNull int number; @NotBlank(message = "House street cannot be blank") String street; String city; @ValidState String state; String type; } Testing In software development, unit testing is a crucial component that offers several benefits, such as enhancing code quality and detecting defects early in the development cycle. The Quarkus framework provides a variety of tools to help developers with unit testing. Unit testing the custom validator is simple, as Quarkus allows us to inject the validator and perform manual validation on the House object: Java @QuarkusTest public class HouseResourceTest { @Inject Validator validator; @Test public void testValidState() { House h = new House(); h.state = "CA"; h.number = 1; h.street = "street1"; Set<ConstraintViolation<House>> violations = validator.validate(h); assertEquals(0, violations.size()); } @Test public void testInvalidState() { House h = new House(); h.state = "WA"; h.number = 1; h.street = "street1"; Set<ConstraintViolation<House>> violations = validator.validate(h); assertEquals(1, violations.size()); assertEquals("State not supported", violations.iterator().next().getMessage()); } } Conclusion Quarkus is a robust, well-written framework that provides various built-in validations and supports adding custom validations. Validators provide a clean and convenient way to perform REST API validation, which, in turn, supports the DRY methodology. Validation plays an increasingly crucial role in microservices architecture because every service defines and requires the validation of the data it processes. This article described a process for using built-in and custom validators.
In this article, we’re going to compare some essential metrics of web applications using two different Java stacks: Spring Boot and Eclipse MicroProfile. More precisely, we’ll implement the same web application in Spring Boot 3.0.2 and Eclipse MicroProfile 4.2, the most recent releases at the time of this writing. Since there are several implementations of Eclipse MicroProfile, we’ll be using one of the most famous: Quarkus. At the time of this writing, the most recent Quarkus release is 2.16.2. This distinction is important regarding Eclipse MicroProfile because, as opposed to Spring Boot, which isn’t based on any specification (and, consequently, the question of the implementation doesn’t arise), Eclipse MicroProfile has largely been adopted by many vendors who provide different implementations, among which Quarkus, WildFly, Open Liberty, and Payara are some of the best known. In this article, we will implement the same web application using two different technologies, Spring Boot and Quarkus, so as to compare two essential metrics: RSS (Resident Set Size) and TFR (Time to First Request). The Use Case The use case that we’ve chosen for the web application to be implemented is a quite standard one: a microservice responsible for managing press releases. A press release is an official statement delivered to members of the news media for the purpose of providing information, creating an official statement, or making a public announcement. In our simplified case, a press release consists of a set of data: a unique name describing its subject, an author, and a publisher. The microservice used to manage press releases is very straightforward. As with any microservice, it exposes a REST API allowing CRUD operations on press releases. All the required layers, like domain, model, entities, DTOs, mapping, persistence, and service, are present as well. Our point here is not to discuss the microservice’s structure and modus operandi but to propose a common use case to be implemented in the two similar technologies, Spring Boot and Quarkus, to be able to compare their respective performances through the mentioned metrics. Resident Set Size (RSS) RSS is the amount of RAM occupied by a process and consists of the sum of the following JVM spaces: Heap space Class metadata Thread stacks Compiled code Garbage collection RSS is a very accurate metric, and comparing applications based on it is a very reliable way to measure their associated performances and footprints. Time to First Request (TFR) There is a common concern about measuring and comparing applications' startup times. However, logging it, which is how this is generally done, isn’t enough. The time you’re seeing in your log file as the application startup time isn’t accurate because it represents the time your application or web server started, but not the time required before your application starts receiving requests. Application and web servers, or servlet containers, might start in a couple of milliseconds, but this doesn’t mean your application can process requests. These platforms often delay work through lazy initialization, which can give a false indication of the TFR. Hence, to accurately determine the TFR, in this report, we’re using Clément Escoffier’s script time.js, found here in the GitHub repository, which illustrates the excellent book Reactive Systems in Java by Clément Escoffier and Ken Finnigan. 
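Before diving into the two implementations, here is a minimal sketch of the press release data object around which both projects are built; the field names are inferred from the JSON payload shown in the next section, so treat this as an illustration rather than the repository’s exact code: Java public class PressRelease { private Long pressReleaseId; // primary key, generated by a database sequence private String name; // unique name describing the press release's subject private String author; private String publisher; // getters, setters, and mapping annotations omitted for brevity }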
Spring Boot Implementation To compare the metrics presented above for the two implementations, you need to clone and run the two projects. Here are the steps required to experience the Spring Boot implementation: Shell $ git clone https://github.com/nicolasduminil/Comparing-Resident-Size-Set-Between-Spring-Boot-and-Quarkus.git metrics $ cd metrics $ git checkout spring-boot $ mvn package $ java -jar target/metrics.jar Here you start by cloning the Git repository, and once this operation is finished, you go into the project’s root directory and do a Maven build. Then you start the Spring Boot application by running the über JAR created by the spring-boot-maven-plugin. Now you can test the application via its exposed Swagger UI interface by going here. Please take a moment to use the try-it-out feature that Swagger UI offers. The order of operations is as follows: First, use the POST endpoint to create a press release. Please use the editor to modify the JSON payload proposed by default. While doing this, you should leave the pressReleaseId field with a value of “0,” as this is the primary key that will be generated by the insert operation. Below, you can see an example of how to customize this payload: JSON { "pressReleaseId": 0, "name": "AWS Lambda", "author": "Nicolas DUMINIL", "publisher": "ENI" } Next, a GET /all followed by a GET /id to check that the previous operation has successfully created a press release. A PUT to modify the current press release. A DELETE /id to clean up. Note: Since the ID is automatically generated by a sequence, as explained, the first record will have the value of “1.” You can use this value in GET /id and DELETE /id requests. Notice that the press release name must be unique. Now, once you have experienced your microservice, let’s see its associated RSS. Proceed as follows: Shell $ ps aux | grep metrics nicolas 31598 3.5 1.8 13035944 598940 pts/1 Sl+ 19:03 0:21 java -jar target/metrics.jar nicolas 31771 0.0 0.0 9040 660 pts/2 S+ 19:13 0:00 grep --color=auto metrics $ ps -o pid,rss,command -p 31598 PID RSS COMMAND 31598 639380 java -jar target/metrics.jar Here, we get the PID of our microservice by looking up its name, and once we have it, we can display its associated RSS. Notice that the ps -o command above displays the PID, the RSS, and the starting command associated with the process whose PID is passed as the -p argument. And as you may see, the RSS for our process is 624 MB (639380 KB). If you’re hesitating about how to calculate this value, you can use the following command: Shell $ echo 639380/1024 | bc 624 As for the TFR, all you need to do is run the script time.js, as follows: Shell node time.js "java -jar target/metrics.jar" "http://localhost:8080/" 173 ms To summarize, our Spring Boot microservice has an RSS of 624 MB and a TFR of 173 ms. Quarkus Implementation We need to perform these same operations to experience our Quarkus microservice. Here are the required operations: Shell $ git checkout quarkus $ mvn package quarkus:dev Once our Quarkus microservice has started, you may use the Swagger UI interface here. And if you’re too tired to use the graphical interface, then you may use the curl scripts provided in the repository (post.sh, get.sh, etc.) as shown below: Shell java -jar target/quarkus-app/quarkus-run.jar & ./post.sh ./get.sh ./get-1.sh 1 ./update.sh ... 
Now, let’s see how we do concerning our RSS and TFR: Shell $ ps aux | grep quarkus-run nicolas 24776 20.2 0.6 13808088 205004 pts/3 Sl+ 16:27 0:04 java -jar target/quarkus-app/quarkus-run.jar nicolas 24840 0.0 0.0 9040 728 pts/5 S+ 16:28 0:00 grep --color=auto quarkus-run $ ps -o pid,rss,command -p 24776 PID RSS COMMAND 24776 175480 java -jar target/quarkus-app/quarkus-run.jar $ echo 175480/1024 | bc 168 $ node time.js "java -jar target/quarkus-app/quarkus-run.jar" "http://localhost:8081/q/swagger-ui" 121 ms As you can see, our Quarkus microservice uses an RSS of 168 MB, i.e., almost 500 MB less than the 624 MB with Spring Boot. Also, the TFR is slightly lower (121 ms vs. 173 ms). Conclusion Our exercise has compared the RSS and TFR metrics for the two microservices executed with the HotSpot JVM (Oracle JDK 17). Both Spring Boot and Quarkus support compilation into native executables through GraalVM. It would have been interesting to compare these same metrics for the native counterparts of the two microservices; if we didn’t do it here, that’s because Spring Boot relies heavily on Java introspection and, consequently, it’s significantly more difficult to generate native Spring Boot microservices than Quarkus ones. But stay tuned; it will come soon. The source code may be found here. The Git repository has a master branch and two specific ones, labeled spring-boot and quarkus, respectively. Enjoy!