Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with the knowledge about frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Through the use of frameworks, architectural patterns and structures are created, which help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as the Spring framework, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular because of its versatility and is preferred as the primary choice unless a specific function is needed. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used to build frameworks, and they can be used for creating, debugging, and maintaining programs — and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools and can help ensure engineers are writing clean code.
Enterprise Security
Security is everywhere: Behind every highly performant application, or even detected threat, there is a powerful security system and set of processes implemented. And on the off chance there are NOT such systems in place, that fact will quickly make itself known. We are living in an entirely new world, where bad actors grow more sophisticated the moment we make ourselves "comfortable." So how do you remain hypervigilant in this ever so treacherous environment? DZone's annual Enterprise Security Trend Report has you covered. The research and expert articles explore the fastest emerging techniques and nuances in the security space, diving into key topics like CSPM, full-stack security practices and challenges, SBOMs and DevSecOps for secure software supply chains, threat hunting, secrets management, zero-trust security, and more. It's time to expand your organization's tactics and put any future attackers in their place as you hear from industry leaders and experts on how they are facing these challenges in everyday scenarios — because if there is one thing we know about cyberspace, any vulnerabilities left to chance will always be exposed.
Writing Great Code: The Five Principles of Clean Code
Open-Source Data Management Practices and Patterns
Artificial Intelligence (AI) is modernizing industries by enabling machines to perform tasks that typically require human intelligence, such as problem-solving, natural language understanding, and image processing. For AI-related software development, Python is often used. However, Java is also a powerful option, as many organizations use it in enterprise applications due to its robustness and scalability. In this article, we explore how the Java programming language can be used for AI development, along with supporting libraries and tools. Java Programming Language for AI Java offers several features that make it suitable for AI development: 1. Platform Independence The Java philosophy of "write once, run anywhere" allows developers to create AI systems that can run on various platforms without changes. This makes Java highly portable and scalable. 2. Robust Ecosystem Java has many built-in libraries and frameworks that support AI and machine learning, making it easier to implement complex algorithms. 3. Memory Management Garbage collection is one of Java's key features: memory allocation and deallocation for objects are handled automatically, which makes resource management more efficient. This matters for AI workloads, which routinely process large datasets, so Java's automatic memory management is a real asset in AI systems. 4. Scalability AI applications deal with vast amounts of data and heavy computation. Java is highly scalable, which helps AI applications grow with demand. 5. Multi-Threading Neural network training, large-scale data processing, and other AI-related tasks require parallel processing to handle vast amounts of data. Java supports multithreading, which allows this kind of parallel processing (a short parallel-processing sketch appears at the end of this article). Java Libraries and Frameworks for AI There are many libraries available for building AI systems. Below are a few AI libraries for Java: 1. Weka Weka is a popular library used for data mining and machine learning. Weka provides a collection of algorithms for classification, regression, clustering, and feature selection. Weka also has a graphical interface, making it easier to visualize and preprocess data. Weka Key Features Vast collection of algorithms for ML Visualization and data preprocessing support Supports integration with Java applications 2. Deeplearning4j (DL4J) Deeplearning4j is specifically created for business environments to facilitate Java-based deep learning tasks. The library is compatible with distributed computing frameworks like Apache Spark and Hadoop, making it well-suited for handling large-scale data processing. DL4J offers tools for constructing neural networks, developing deep learning models, and creating natural language processing (NLP) applications. Features Apache Spark and Hadoop integration GPU support Deep neural network and reinforcement learning (RL) tools 3. MOA MOA is well-suited for streaming machine learning and big data analysis. MOA provides a framework for learning from massive data streams, which is critical for real-time AI applications like fraud detection, network intrusion detection, and recommendation systems. Features Real-time data algorithms Clustering, regression, classification Weka integration 4. Java-ML Java-ML is a library for machine learning. It has algorithms for clustering, classification, and feature selection. It's easy to use and well-suited for developers who need to embed AI algorithms in their applications.
Features Many machine-learning algorithms Lightweight and easy to embed Data processing and visualization support 5. Apache Mahout Apache Mahout is an open-source project for developing scalable machine-learning algorithms aimed at big data workloads. It focuses on math operations like linear algebra, collaborative filtering, clustering, and classification. It works with distributed computing frameworks like Apache Hadoop, so it's a good fit for big data applications. Key Features Scalable algorithms for clustering, classification, and collaborative filtering Hadoop integration for large data User-defined engine AI Application With Java Example: ML model in Java using the Weka library Step 1: Setup and Installation Add the Weka library as a dependency via Maven in pom.xml: <dependency> <groupId>nz.ac.waikato.cms.weka</groupId> <artifactId>weka-stable</artifactId> <version>3.8.0</version> </dependency> Step 2: Load Dataset Load a dataset and perform preprocessing. Java import weka.core.Instances; import weka.core.converters.ConverterUtils.DataSource; public class WekaExample { public static void main(String[] args) throws Exception { // Loading dataset DataSource source = new DataSource("data/iris.arff"); Instances data = source.getDataSet(); // Set the class attribute for classification if (data.classIndex() == -1) { data.setClassIndex(data.numAttributes() - 1); } System.out.println("Dataset loaded successfully!"); } } Step 3: Build a Classifier Use the J48 algorithm for the decision tree classifier. Java import weka.classifiers.Classifier; import weka.classifiers.trees.J48; import weka.core.Instances; import weka.core.converters.ConverterUtils.DataSource; public class WekaClassifier { public static void main(String[] args) throws Exception { DataSource source = new DataSource("data/iris.arff"); Instances data = source.getDataSet(); data.setClassIndex(data.numAttributes() - 1); // Build classifier Classifier classifier = new J48(); classifier.buildClassifier(data); System.out.println("Classifier built successfully!"); } } Step 4: Evaluate the Model To evaluate the model, you can use cross-validation to see how well the classifier performs on unseen data. Java import weka.classifiers.Evaluation; import weka.classifiers.trees.J48; import weka.core.Instances; import weka.core.converters.ConverterUtils.DataSource; public class WekaEvaluation { public static void main(String[] args) throws Exception { // Load dataset DataSource source = new DataSource("data/iris.arff"); Instances data = source.getDataSet(); data.setClassIndex(data.numAttributes() - 1); // Build classifier J48 tree = new J48(); tree.buildClassifier(data); // Perform 10-fold cross-validation Evaluation eval = new Evaluation(data); eval.crossValidateModel(tree, data, 10, new java.util.Random(1)); // Output evaluation results System.out.println(eval.toSummaryString("\nResults\n======\n", false)); } } Java vs Python for AI Python is extensively used in the automation environment and has an extensive range of libraries for AI. The popular libraries are TensorFlow, Keras, and Scikit-learn.
Java provides enterprise-grade environments for many applications and offers many libraries for AI integration. Below is a comparison between Java and Python:
Performance: Java is generally faster due to its compiled nature; Python is slower due to its interpreted nature.
Libraries: Java currently has a limited (but growing) number of AI libraries; Python has extensive libraries for AI and machine learning.
Community: Java has a large community for enterprise applications, though its AI community is still growing; Python has a larger and stronger community for AI.
Syntax: Java is verbose; Python is simpler and more intuitive.
Typical use: Java is used for large-scale applications such as enterprise applications; Python is often used for research and prototyping.
Conclusion Java is best known for enterprise and large-scale applications, but it is also a strong choice for building AI applications. Python is preferred for research and prototyping because of its simplicity and large number of libraries. Java offers scalability, robustness, and performance that allow AI systems to perform complex tasks, and it has libraries such as Weka, Deeplearning4j, and Apache Mahout that help handle complex AI tasks, from machine learning to deep learning.
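As referenced in the multi-threading point above, here is a minimal sketch (not from the original article; the class and variable names are illustrative) showing how Java's standard parallel streams can spread a simple data-preprocessing step across CPU cores:
Java
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.DoubleStream;

public class ParallelFeatureScaling {
    public static void main(String[] args) {
        // Synthetic "dataset": one million random feature values
        double[] features = ThreadLocalRandom.current().doubles(1_000_000).toArray();

        // Compute the mean in parallel across the available CPU cores
        double mean = DoubleStream.of(features).parallel().average().orElse(0.0);

        // Center every value in parallel (a typical preprocessing step)
        double[] centered = DoubleStream.of(features)
                .parallel()
                .map(v -> v - mean)
                .toArray();

        System.out.println("Mean: " + mean + ", centered values: " + centered.length);
    }
}
For heavier workloads, the libraries listed above (for example, DL4J with Apache Spark) distribute this kind of work across machines rather than just cores.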
Java 23 is finally out, and we can start migrating our projects to it. The very first pitfall shows up quickly when switching to the latest JDK 23: compilation issues when the Lombok library is used in the project. Let's begin with the symptom description first. Description The Lombok library heavily relies on annotations. It's used for removing a lot of boilerplate code; e.g., getters, setters, toString, and loggers (such as @Slf4j for simplified logging configuration). Maven compilation errors coming from Lombok and Java 23 look like this: Plain Text [INFO] --- compiler:3.13.0:compile (default-compile) @ sat-core --- [WARNING] Parameter 'forceJavacCompilerUse' (user property 'maven.compiler.forceJavacCompilerUse') is deprecated: Use forceLegacyJavacApi instead [INFO] Recompiling the module because of changed source code [INFO] Compiling 50 source files with javac [debug parameters release 23] to target\classes [INFO] ------------------------------------------------------------- [ERROR] COMPILATION ERROR : [INFO] ------------------------------------------------------------- [ERROR] spring-advanced-training\sat-core\src\main\java\com\github\aha\sat\core\aop\BeverageLogger.java:[21,2] error: cannot find symbol symbol: variable log location: class BeverageLogger ... [INFO] 16 errors [INFO] ------------------------------------------------------------- [INFO] ------------------------------------------------------------------------ [INFO] BUILD FAILURE [INFO] ------------------------------------------------------------------------ [INFO] Total time: 3.090 s [INFO] Finished at: 2024-09-26T08:45:59+02:00 [INFO] ------------------------------------------------------------------------ [ERROR] Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.13.0:compile (default-compile) on project sat-core: Compilation failure: Compilation failure: [ERROR] spring-advanced-training\sat-core\src\main\java\com\github\aha\sat\core\aop\BeverageLogger.java:[21,2] error: cannot find symbol [ERROR] symbol: variable log [ERROR] location: class BeverageLogger ... Note: The @Slf4j annotation is just an example. It's shown here because these are the first errors in the build logs. However, the same failure applies to any of the other Lombok annotations mentioned above (a minimal class that reproduces the error is sketched at the end of this article). Explanation The compilation error is caused by a change in the behavior of annotation processing in Java 23. See the JDK 23 release notes and this statement: As of JDK 23, annotation processing is only run with some explicit configuration of annotation processing or with an explicit request to run annotation processing on the javac command line. This is a change in behavior from the existing default of looking to run annotation processing by searching the class path for processors without any explicit annotation processing related options needing to be present. You can find more details about it here. Solution In order to use Lombok with the new Java 23, we need to turn annotation processing back on. It can be done in Maven as follows: Use the latest maven-compiler-plugin version (3.13.0 at the time of writing this article). Set the maven.compiler.proc property to the value full. XML <properties> ... <java.version>23</java.version> <maven-compiler-plugin.version>3.13.0</maven-compiler-plugin.version> <maven.compiler.proc>full</maven.compiler.proc> </properties> <build> <plugins> ...
<plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>${maven-compiler-plugin.version}</version> <configuration> <source>${java.version}</source> <target>${java.version}</target> </configuration> </plugin> </plugins> </build> That's all we need to make our project compilable again. Plain Text [INFO] --- compiler:3.13.0:compile (default-compile) @ sat-core --- [WARNING] Parameter 'forceJavacCompilerUse' (user property 'maven.compiler.forceJavacCompilerUse') is deprecated: Use forceLegacyJavacApi instead [INFO] Recompiling the module because of changed source code. [INFO] Compiling 50 source files with javac [debug parameters release 23] to target\classes [INFO] [INFO] --- resources:3.3.1:testResources (default-testResources) @ sat-core --- [INFO] Copying 2 resources from src\test\resources to target\test-classes Conclusion This article has covered the issue related to using the Lombok library and upgrading to JDK 23. The complete fix (along with a few other changes) is visible in this GitHub commit.
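For reference, here is a minimal, hypothetical class of the kind that triggers the error described above. Lombok's @Slf4j generates the log field at compile time, so when annotation processing does not run, javac reports "cannot find symbol: variable log":
Java
import lombok.extern.slf4j.Slf4j;

// Without annotation processing, the generated "log" field does not exist
// and the compiler fails on the log.info(...) call below.
@Slf4j
public class BeverageLoggerExample {

    public void logBeverage(String beverage) {
        log.info("Preparing beverage: {}", beverage); // "log" is generated by Lombok
    }
}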
Understanding the shared steps in the project setup is crucial before delving into the specifics of each client-augmenting technology. My requirements from the last post were quite straightforward: I'll assume the viewpoint of a backend developer. No front-end build step: no TypeScript, no minification, etc. All dependencies are managed from the backend app, i.e., Maven. It's important to note that the technologies I'll be detailing, except Vaadin, follow a similar approach. Vaadin, with its unique paradigm, really stands out among the approaches. WebJars WebJars is a technology designed in 2012 by James Ward to handle these exact requirements. WebJars are client-side web libraries (e.g., jQuery and Bootstrap) packaged into JAR (Java Archive) files. Explicitly and easily manage the client-side dependencies in JVM-based web applications Use JVM-based build tools (e.g. Maven, Gradle, sbt, etc.) to download your client-side dependencies Know which client-side dependencies you are using Transitive dependencies are automatically resolved and optionally loaded via RequireJS Deployed on Maven Central Public CDN, generously provided by JSDelivr - WebJars website A WebJar is a regular JAR containing web assets. Adding a WebJar to a project's dependencies is nothing specific: XML <dependencies> <dependency> <groupId>org.webjars.npm</groupId> <artifactId>alpinejs</artifactId> <version>3.14.1</version> </dependency> </dependencies> The framework's responsibility is to expose the assets under a URL. For example, Spring Boot does it in the WebMvcAutoConfiguration class: Java public void addResourceHandlers(ResourceHandlerRegistry registry) { if (!this.resourceProperties.isAddMappings()) { logger.debug("Default resource handling disabled"); return; } addResourceHandler(registry, this.mvcProperties.getWebjarsPathPattern(), //1 "classpath:/META-INF/resources/webjars/"); addResourceHandler(registry, this.mvcProperties.getStaticPathPattern(), (registration) -> { registration.addResourceLocations(this.resourceProperties.getStaticLocations()); if (this.servletContext != null) { ServletContextResource resource = new ServletContextResource(this.servletContext, SERVLET_LOCATION); registration.addResourceLocations(resource); } }); } The default is "/webjars/**" Inside the JAR, you can reach assets by their respective path and name. The agreed-upon structure is to store the assets inside META-INF/resources/webjars/<library>/<version>/. Here's the structure of the alpinejs-3.14.1.jar:
Plain Text
META-INF
 |_ MANIFEST.MF
 |_ maven.org.webjars.npm.alpinejs
 |_ resources.webjars.alpinejs.3.14.1
     |_ builds
     |_ dist
         |_ cdn.js
         |_ cdn.min.js
     |_ src
     |_ package.json
Within Spring Boot, you can access the non-minified version with /webjars/alpinejs/3.14.1/dist/cdn.js. Developers release client-side libraries quite often. When you change a dependency version in the POM, you must change the front-end path, possibly in multiple locations. It's boring, has no added value, and you risk missing a change. The WebJars Locator project aims to avoid all these issues by providing a path with no version, i.e., /webjars/alpinejs/dist/cdn.js. You can achieve this by adding the webjars-locator JAR to your dependencies: XML <dependencies> <dependency> <groupId>org.webjars.npm</groupId> <artifactId>alpinejs</artifactId> <version>3.14.1</version> </dependency> <dependency> <groupId>org.webjars</groupId> <artifactId>webjars-locator</artifactId> <version>0.52</version> </dependency> </dependencies> I'll use this approach for every front-end technology.
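As an aside (not part of the original post), the locator can also be used programmatically, for example to build asset links on the server side. The following is a minimal sketch assuming the WebJarAssetLocator class shipped with the webjars-locator dependency; the exact API surface may vary between locator versions:
Java
import org.webjars.WebJarAssetLocator;

public class WebJarPathLookup {

    public static void main(String[] args) {
        WebJarAssetLocator locator = new WebJarAssetLocator();

        // Resolves to something like "META-INF/resources/webjars/alpinejs/3.14.1/dist/cdn.js"
        // without hard-coding the version anywhere in our code.
        String fullPath = locator.getFullPath("alpinejs", "dist/cdn.js");

        System.out.println(fullPath);
    }
}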
I'll also add the Bootstrap CSS library to provide a better-looking user interface. Thymeleaf Thymeleaf is a server-side rendering technology. Thymeleaf is a modern server-side Java template engine for both web and standalone environments. Thymeleaf's main goal is to bring elegant natural templates to your development workflow — HTML that can be correctly displayed in browsers and also work as static prototypes, allowing for stronger collaboration in development teams. With modules for Spring Framework, a host of integrations with your favourite tools, and the ability to plug in your own functionality, Thymeleaf is ideal for modern-day HTML5 JVM web development — although there is much more it can do. - Thymeleaf I was still a consultant when I first learned about Thymeleaf. At the time, JSPs were at the end of their life. JSF was trying to replace them; IMHO, it failed. I thought Thymeleaf was a fantastic approach: it allows you to see the results in a static environment at design time and in a server environment at development time. Even better, you can seamlessly move between one and the other using the same file. I've never seen this capability used. However, Spring Boot fully supports Thymeleaf. The icing on the cake: Thymeleaf's attributes are available via an HTML namespace on the page. If you didn't buy into JSF (spoiler: I didn't), Thymeleaf is today's go-to SSR templating language. Here's the demo sample from the website: HTML <table> <thead> <tr> <th th:text="#{msgs.headers.name}">Name</th> <th th:text="#{msgs.headers.price}">Price</th> </tr> </thead> <tbody> <tr th:each="prod: ${allProducts}"> <td th:text="${prod.name}">Oranges</td> <td th:text="${#numbers.formatDecimal(prod.price, 1, 2)}">0.99</td> </tr> </tbody> </table> Here is a Thymeleaf 101, in case you need to familiarise yourself with the technology. When you open the HTML file, the browser displays the regular values inside the tags, i.e., Name and Price. When you use it on the server, Thymeleaf kicks in and renders the values computed from th:text, #{msgs.headers.name} and #{msgs.headers.price}. The $ operator queries for a Spring bean of the same name passed to the model. ${prod.name} is equivalent to model.getBean("prod").getName(). The # calls a function. th:each allows for loops. Thymeleaf Integration With the Front-End Framework Most, if not all, front-end frameworks work with a client-side model. We need to bridge between the server-side model and the client-side one. The server-side code I'm using is the following: Kotlin data class Todo(val id: Int, var label: String, var completed: Boolean = false) //1 fun config() = beans { bean { mutableListOf( //2 Todo(1, "Go to the groceries", false), Todo(2, "Walk the dog", false), Todo(3, "Take out the trash", false) ) } bean { router { GET("/") { ok().render( //3 "index", //4 mapOf("title" to "My Title", "todos" to ref<List<Todo>>()) //5 ) } } } } Define the Todo class. Add an in-memory list to the bean factory. In a regular app, you'd use a Repository to read from the database. Render an HTML template. The template is src/main/resources/templates/index.html with Thymeleaf attributes. Put the model in the page's context. Thymeleaf offers a th:inline="javascript" attribute on the <script> tag. It renders the server-side data as JavaScript variables. The documentation explains it much better than I ever could: The first thing we can do with script inlining is writing the value of expressions into our scripts, like: /* ... var username = /*[[${session.user.name}]]*/ 'Sebastian'; ...
/*]]>*/ The /*[[...]]*/ syntax instructs Thymeleaf to evaluate the contained expression. But there are more implications here: Being a javascript comment (/*...*/), our expression will be ignored when displaying the page statically in a browser. The code after the inline expression ('Sebastian') will be executed when displaying the page statically. Thymeleaf will execute the expression and insert the result, but it will also remove all the code in the line after the inline expression itself (the part that is executed when displayed statically). - Thymeleaf documentation If we apply the above to our code, we can get the model attributes passed by Spring as: HTML <script th:inline="javascript"> /*<![CDATA[*/ window.title = /*[[${title}]]*/ 'A Title' window.todos = /*[[${todos}]]*/ [{ 'id': 1, 'label': 'Take out the trash', 'completed': false }] /*]]>*/ </script> When rendered server-side, the result is: HTML <script> /*<![CDATA[*/ window.title = "My Title"; window.todos = [{"id":1,"label":"Go to the groceries","completed":false},{"id":2,"label":"Walk the dog","completed":false},{"id":3,"label":"Take out the trash","completed":false}]; /*]]>*/ </script> Summary In this post, I've described two components I'll be using throughout the rest of this series: WebJars manage client-side dependencies in your Maven POM. Thymeleaf is a templating mechanism that integrates well with Spring Boot. The complete source code for this post can be found on GitHub. Go Further WebJars Instructions for Spring Boot
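As a side note (not part of the original post), readers who prefer plain Java to the Kotlin bean DSL shown above can pass the same model attributes to the "index" Thymeleaf template with a classic Spring MVC controller; the class and record names below are illustrative:
Java
import java.util.List;
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;

@Controller
public class TodoController {

    // Illustrative record mirroring the Kotlin Todo data class
    record Todo(int id, String label, boolean completed) {}

    @GetMapping("/")
    public String index(Model model) {
        // Same attributes the Kotlin router passes to the template
        model.addAttribute("title", "My Title");
        model.addAttribute("todos", List.of(
                new Todo(1, "Go to the groceries", false),
                new Todo(2, "Walk the dog", false),
                new Todo(3, "Take out the trash", false)));
        return "index"; // resolves to src/main/resources/templates/index.html
    }
}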
There are 9 types of java.lang.OutOfMemoryError, each signaling a unique memory-related issue within Java applications. Among these, java.lang.OutOfMemoryError: Metaspace is a challenging error to diagnose. In this post, we'll delve into the root causes behind this error, explore potential solutions, and discuss effective diagnostic methods to troubleshoot this problem. Let's equip ourselves with the knowledge and tools to conquer this common adversary. JVM Memory Regions To better understand OutOfMemoryError, we first need to understand the different JVM memory regions. Here is a video clip that gives a good introduction to the different JVM memory regions. In a nutshell, the JVM has the following memory regions: Figure 1: JVM memory regions Young Generation: Newly created application objects are stored in this region. Old Generation: Application objects that live for a longer duration are promoted from the Young Generation to the Old Generation. Basically, this region holds long-lived objects. Metaspace: Class definitions, method definitions, and other metadata required to execute your program are stored in the Metaspace region. This region was added in Java 8; before that, metadata definitions were stored in PermGen, which Metaspace replaced. Threads: Each application thread requires a thread stack. The space allocated for thread stacks, which contain method call information and local variables, is stored in this region. Code cache: Memory areas where the compiled native code (machine code) of methods is stored for efficient execution. Direct buffer: ByteBuffer objects are used by modern frameworks (e.g., Spring WebClient) for efficient I/O operations; they are stored in this region. GC (Garbage Collection): Memory required for automatic garbage collection to work is stored in this region. JNI (Java Native Interface): Memory for interacting with native libraries and code written in other languages is stored in this region. misc: Areas specific to certain JVM implementations or configurations, such as internal JVM structures or reserved memory spaces, are classified as 'misc' regions. What Is java.lang.OutOfMemoryError: Metaspace? Figure 2: java.lang.OutOfMemoryError: Metaspace When more class and method definitions are created in the Metaspace region than the allocated Metaspace memory limit allows (i.e., -XX:MaxMetaspaceSize), the JVM will throw java.lang.OutOfMemoryError: Metaspace. What Causes java.lang.OutOfMemoryError: Metaspace? java.lang.OutOfMemoryError: Metaspace is triggered by the JVM under the following circumstances: Creating a large number of dynamic classes: Your application uses scripting languages such as Groovy, or Java reflection, to create new classes at runtime. Loading a large number of classes: Either your application itself has a lot of classes or it uses a lot of 3rd-party libraries/frameworks which themselves contain a lot of classes. Loading a large number of class loaders: Your application is loading a lot of class loaders. Solutions for OutOfMemoryError: Metaspace The following are potential solutions to fix this error: Increase Metaspace size: If the OutOfMemoryError surfaced due to an increase in the number of classes loaded, then increase the JVM's Metaspace size (-XX:MetaspaceSize and -XX:MaxMetaspaceSize). This solution is sufficient to fix most OutOfMemoryError: Metaspace errors, because memory leaks rarely happen in the Metaspace region.
Fix memory leak: Analyze memory leaks in your application using the approach given in this post. Ensure that class definitions are properly dereferenced when they are no longer needed so that they can be garbage collected. Sample Program That Generates OutOfMemoryError: Metaspace To better understand java.lang.OutOfMemoryError: Metaspace, let's try to simulate it. Let's leverage BuggyApp, a simple open-source chaos engineering project. BuggyApp can generate various sorts of performance problems such as memory leaks, thread leaks, deadlocks, multiple BLOCKED threads, etc. Below is the Java program from the BuggyApp project that simulates java.lang.OutOfMemoryError: Metaspace when executed. Java import java.util.UUID; import javassist.ClassPool; public class OOMMetaspace { public static void main(String[] args) throws Exception { ClassPool classPool = ClassPool.getDefault(); while (true) { // Keep creating classes dynamically! String className = "com.buggyapp.MetaspaceObject" + UUID.randomUUID(); classPool.makeClass(className).toClass(); } } } In the above program, the OOMMetaspace class's main() method contains an infinite while (true) loop. Within the loop, the thread uses the open-source library Javassist to create dynamic classes whose names start with com.buggyapp.MetaspaceObject. Class names generated by this program will look something like this: com.buggyapp.MetaspaceObjectb7a02000-ff51-4ef8-9433-3f16b92bba78. When so many such dynamic classes are created, the Metaspace memory region will reach its limit and the JVM will throw java.lang.OutOfMemoryError: Metaspace. How to Troubleshoot OutOfMemoryError: Metaspace To diagnose OutOfMemoryError: Metaspace, we need to inspect the contents of the Metaspace region. Upon inspecting the contents, you can figure out the leaking area of the application code. Here is a blog post that describes a few different approaches to inspecting the contents of the Metaspace region. You can choose the approach that suits your requirements. My favorite options are: 1. -verbose:class If you are running on Java version 8 or below, then you can use this option. When you pass the -verbose:class option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes will be printed in the standard error stream (i.e., the console, if you aren't routing your error stream to a log file).
Example: java {app_name} -verbose:class When we passed the -verbose:class flag to the above program, we started to see the following lines printed in the console: [Loaded com.buggyapp.MetaspaceObjecta97f62c5-0f71-4702-8521-c312f3668f47 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject70967d20-609f-42c4-a2c4-b70b50592198 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObjectf592a420-7109-42e6-b6cb-bc5635a6024e from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObjectdc7d12ad-21e6-4b17-a303-743c0008df87 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject01d175cc-01dd-4619-9d7d-297c561805d5 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject5519bef3-d872-426c-9d13-517be79a1a07 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject84ad83c5-7cee-467b-a6b8-70b9a43d8761 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject35825bf8-ff39-4a00-8287-afeba4bce19e from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject665c7c09-7ef6-4b66-bc0e-c696527b5810 from __JVM_DefineClass__] [Loaded com.buggyapp.MetaspaceObject793d8aec-f2ee-4df6-9e0f-5ffb9789459d from __JVM_DefineClass__] : : This is a clear indication that classes with the com.buggyapp.MetaspaceObject prefix are being loaded into memory very frequently. This is a great clue/hint for figuring out where the leak is happening in the application. 2. -Xlog:class+load If you are running on Java version 9 or above, then you can use this option. When you pass the -Xlog:class+load option to your application during startup, it will print all the classes that are loaded into memory. Loaded classes will be printed in the file path you have configured. Example: java {app_name} -Xlog:class+load=info:/opt/log/loadedClasses.txt If you are still unable to determine the origin of the leak based on the class names, then you can do a deep dive by taking a heap dump from the application. You can capture a heap dump using one of the 8 options discussed in this post. Choose the option that fits your needs. Once a heap dump is captured, you need to use tools like HeapHero, JHat, etc. to analyze the dump. What Is a Heap Dump? A heap dump is basically a snapshot of your application's memory. It contains detailed information about the objects and data structures present in memory. It will tell you what objects are present in memory, what they are referencing, what is referencing them, what actual customer data is stored in them, how much space they occupy, whether they are eligible for garbage collection, etc. Heap dumps provide valuable insights into the memory usage patterns of an application, helping developers identify and resolve memory-related issues. How to Analyze a Metaspace Memory Leak Through a Heap Dump HeapHero is available in two modes: Cloud: You can upload the dump to the HeapHero cloud and see the results. On-Prem: You can register here, get HeapHero installed on your local machine, and then do the analysis. Note: I prefer using the on-prem installation of the tool instead of the cloud edition because heap dumps tend to contain sensitive information (such as SSNs, credit card numbers, VAT numbers, etc.), and I don't want the dump to be analyzed in external locations. Once the heap dump is captured from the above program, we upload it to the HeapHero tool. The tool analyzes the dump and generates a report. In the report, go to the 'Histogram' view. This view shows all the classes that are loaded into memory.
In this view, you will notice the classes with the prefix com.buggyapp.MetaspaceObject. Right-click on the '…' next to the class name, then click on 'List Object(s) > with incoming references' as shown in the figure below. Figure 3: Histogram view showing all the loaded classes in memory Once you do this, the tool will display all the incoming references of this particular class. This shows the origin point of these classes, as shown in the figure below, and clearly indicates which part of the code is creating these class definitions. Once we know which part of the code is creating these class definitions, it is easy to fix the problem. Figure 4: Incoming references of the class Video Summary Here's a video summary of the article: Conclusion In this post, we've covered a range of topics, from understanding JVM memory regions to diagnosing and resolving java.lang.OutOfMemoryError: Metaspace. We hope you've found the information useful and insightful. But our conversation doesn't end here. Your experiences and insights are invaluable to us and to your fellow readers. We encourage you to share your encounters with java.lang.OutOfMemoryError: Metaspace in the comments below. Whether it's a unique solution you've discovered, a best practice you swear by, or even just a personal anecdote, your contributions can enrich the learning experience for everyone.
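As a small addition to the troubleshooting options above (not from the original post), Metaspace usage can also be watched programmatically with the standard java.lang.management API, which is handy for logging or alerting before the limit is hit:
Java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;

public class MetaspaceMonitor {

    public static void main(String[] args) {
        for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
            // The Metaspace pool is reported alongside the heap pools (Young/Old generations)
            if ("Metaspace".equals(pool.getName())) {
                long used = pool.getUsage().getUsed();
                long max = pool.getUsage().getMax(); // -1 when no -XX:MaxMetaspaceSize is set
                System.out.println("Metaspace used: " + (used / 1024 / 1024) + " MB, max: "
                        + (max < 0 ? "unlimited" : (max / 1024 / 1024) + " MB"));
            }
        }
    }
}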
There's a far smaller audience of folks who understand the intricacies of HTML document structure than those who understand the user-friendly Microsoft (MS) Word application. Automating HTML-to-DOCX conversions makes a lot of sense if we frequently need to generate well-formatted documents from dynamic web content, streamline reporting workflows, or convert any other web-based information into editable Word documents for a non-technical business audience. Automating HTML-to-DOCX conversions with APIs reduces the time and effort it takes to generate MS Word content for non-technical users. In this article, we'll review open-source and proprietary API solutions for streamlining HTML-to-DOCX conversions in Java, and we'll explore the relationship between HTML and DOCX file structures that makes this conversion relatively straightforward. How Similar are HTML and DOCX Structures? HTML and DOCX documents serve very different purposes, but they have more in common than we might initially think. They're both XML-based formats with similar approaches to structuring text on a page: HTML documents use an XML-based structure to organize how content appears in a web browser. DOCX documents use a series of zipped XML files to collectively define how content appears in the proprietary MS Word application. Content elements in an HTML document like paragraphs (<p>), headings (<h1>, <h2>, etc.), and tables (<table>) all roughly translate into DOCX iterations of the same concept. For example, DOCX files map HTML <p> tags to <w:p> elements, and they map <h1> tags to <w:pStyle> elements. Further, in a similar way to how HTML documents often reference CSS stylesheets (e.g., styles.css) for element styling, DOCX documents use an independent document.xml file to store content display elements and map them with Word styles and settings, stored in style.xml and settings.xml files respectively within the DOCX archive. Differences Between HTML and DOCX to Consider It's worth noting that HTML and DOCX files do handle certain types of content quite differently, despite sharing a similar derivative structure. Much of this can be attributed to differences between how web browser applications and the MS Word application interpret information. The challenges we encounter with HTML-to-DOCX conversions are largely driven by inconsistencies in the way custom styling, media content, and dynamic elements are interpreted. The styling used in native HTML and native DOCX documents is often custom/proprietary, and custom/proprietary HTML styles (e.g., custom fonts) won't necessarily translate into identical DOCX styles when we convert content between those formats. Further, in HTML files, multimedia (e.g., images, videos) are included on any given page as links, whereas DOCX files embed media objects directly. Finally, the dynamic code elements we find on some HTML pages — usually written in JavaScript — won't translate to DOCX whatsoever given that DOCX is a static format. Converting HTML to DOCX When we convert HTML to DOCX, we effectively parse content from HTML elements and subsequently map that content to appropriate DOCX elements. The same occurs in reverse when we make the opposite conversion (a process I've written about in the past). How that parsing and mapping take place depends entirely on how we structure our code — or which APIs we elect to use in our programming project. 
Open-Source Libraries for HTML-to-DOCX Conversions If we're looking for open-source libraries to make HTML-to-DOCX conversions, we'll go a long way with libraries like jsoup and docx4j. The jsoup library is designed to parse and clean HTML programmatically into a structure that we can easily work with, and the docx4j library offers features capable of mapping HTML tags to their corresponding DOCX elements. We can also finalize the creation of our DOCX documents with docx4j, literally organizing our mapped HTML elements into a series of XML files and zipping those with a .docx extension. The docx4j library is very similar to Microsoft's OpenXML SDK, only for Java developers instead of C#. (A brief sketch of this open-source route appears at the end of this article.) HTML-to-DOCX Conversion Demonstration If we're looking to simplify HTML-to-DOCX conversions, we can turn our attention to a web API solution that gets in the weeds on our behalf, parsing and mapping HTML into a consistent DOCX result without requiring us to download multiple libraries or write a lot of extra code. It's a free solution to use, requiring only a free API key. We'll now walk through example code that we can use to structure our API call. To begin, we'll install the client using Maven. We'll first add the repository to our pom.xml: XML <repositories> <repository> <id>jitpack.io</id> <url>https://jitpack.io</url> </repository> </repositories> And after that, we'll add the dependency to our pom.xml: XML <dependencies> <dependency> <groupId>com.github.Cloudmersive</groupId> <artifactId>Cloudmersive.APIClient.Java</artifactId> <version>v4.25</version> </dependency> </dependencies> Next, we'll import the necessary classes to configure the API client, handle exceptions, etc.: Java // Import classes: //import com.cloudmersive.client.invoker.ApiClient; //import com.cloudmersive.client.invoker.ApiException; //import com.cloudmersive.client.invoker.Configuration; //import com.cloudmersive.client.invoker.auth.*; //import com.cloudmersive.client.ConvertWebApi; Now we'll configure our API client with an API key for authentication: Java ApiClient defaultClient = Configuration.getDefaultApiClient(); // Configure API key authorization: Apikey ApiKeyAuth Apikey = (ApiKeyAuth) defaultClient.getAuthentication("Apikey"); Apikey.setApiKey("YOUR API KEY"); // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null) //Apikey.setApiKeyPrefix("Token"); Finally, we'll create the API instance, prepare our input request, and handle our conversion (while catching any exceptions, of course): Java ConvertWebApi apiInstance = new ConvertWebApi(); HtmlToOfficeRequest inputRequest = new HtmlToOfficeRequest(); // HtmlToOfficeRequest | HTML input to convert to DOCX try { byte[] result = apiInstance.convertWebHtmlToDocx(inputRequest); System.out.println(result); } catch (ApiException e) { System.err.println("Exception when calling ConvertWebApi#convertWebHtmlToDocx"); e.printStackTrace(); } Once our conversion is complete, we can write the resulting byte[] array to a DOCX file, and we're all finished. We can perform subsequent operations with our new DOCX document, or we can store it for business users to access directly and call it a day. Conclusion In this article, we reviewed some of the similarities between HTML and DOCX file structures that make converting between both formats relatively simple and easy to accomplish with code.
We then discussed two open-source libraries we could use in conjunction to handle HTML-to-DOCX conversions, and we learned how to call a free proprietary API to handle all our steps in one go.
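As referenced above, here is a minimal sketch of the open-source route. It is not from the original article; it assumes the jsoup library plus docx4j's separate docx4j-ImportXHTML module (which provides the XHTMLImporterImpl class), and it omits dependency versions and error handling for brevity:
Java
import java.io.File;
import org.docx4j.convert.in.xhtml.XHTMLImporterImpl;
import org.docx4j.openpackaging.packages.WordprocessingMLPackage;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class HtmlToDocxSketch {

    public static void main(String[] args) throws Exception {
        String html = "<html><body><h1>Report</h1><p>Hello, Word!</p></body></html>";

        // 1. Parse and clean the HTML with jsoup, emitting well-formed XHTML
        Document doc = Jsoup.parse(html);
        doc.outputSettings().syntax(Document.OutputSettings.Syntax.xml);
        String xhtml = doc.html();

        // 2. Map the XHTML content to DOCX elements with docx4j
        WordprocessingMLPackage wordPackage = WordprocessingMLPackage.createPackage();
        XHTMLImporterImpl importer = new XHTMLImporterImpl(wordPackage);
        wordPackage.getMainDocumentPart().getContent().addAll(importer.convert(xhtml, null));

        // 3. Zip the resulting XML parts into a .docx file
        wordPackage.save(new File("report.docx"));
    }
}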
With the rise of microservices architecture, there has been a rapid acceleration in the modernization of legacy platforms, leveraging cloud infrastructure to deliver highly scalable, low-latency, and more responsive services. Why Use Spring WebFlux? Traditional blocking architectures often struggle to maintain performance, especially under high load. Being Spring Boot developers, we know that Spring WebFlux, introduced as part of Spring 5, offers a reactive, non-blocking programming model designed to address these challenges. WebFlux leverages event-driven, non-blocking, asynchronous processing to maximize resource efficiency, making it particularly well-suited for I/O-intensive tasks such as database access, API calls, and streaming data. About This Article Adopting Spring WebFlux can significantly enhance the performance of Spring Boot applications. Under normal load, Spring Boot WebFlux applications perform excellently, but in scenarios where the source of the data is blocking, such as I/O-bound calls, the default (main) I/O thread pool can suffer contention if downstream responses are very slow, which degrades performance. In this article, I will cover how the publishOn and subscribeOn Reactor operators can come to the rescue. Understanding publishOn and subscribeOn (Note: I have used the Schedulers.boundedElastic() scheduler as a more scalable reactive thread pool group. Spring Boot WebFlux provides other schedulers that can be used based on need.) publishOn(Schedulers.boundedElastic()) and subscribeOn(Schedulers.boundedElastic()) are used in Reactor to control where certain parts of the reactive pipeline are executed, specifically on a different thread or thread pool. However, the two operators serve different purposes: 1. publishOn(Schedulers.boundedElastic()) Purpose This switches the downstream execution to the specified scheduler, meaning that any operators that come after the publishOn will be executed on the provided scheduler (in this case, boundedElastic). Use Case If you want to switch the execution thread for all the operators after a certain point in your reactive chain (for example, to handle blocking I/O or expensive computations), publishOn is the operator to use. Example Java Mono.fromSupplier(() -> expensiveBlockingOperation()) .publishOn(Schedulers.boundedElastic()) // Switch downstream to boundedElastic threads .map(result -> process(result)) .subscribe(); When To Use publishOn Use it when you need to run only the downstream operations on a specific scheduler, but want to keep the upstream on the default (or another) scheduler. It is useful for separating the concerns of upstream and downstream processing, especially if upstream operators are non-blocking and you want to handle blocking operations later in the pipeline. 2. subscribeOn(Schedulers.boundedElastic()) Purpose This changes the thread where the subscription (upstream) occurs. It moves the entire chain of operators (from subscription to completion) to the provided scheduler, meaning all the work (including upstream and downstream operators) will run on the specified scheduler. Use Case Use this if you need to run the entire chain (both upstream and downstream) on a specific thread pool or scheduler, such as for blocking I/O tasks or when you want to handle the subscription (data fetching, database calls, etc.) on a different thread.
Example Java Mono.fromSupplier(() -> expensiveBlockingOperation()) .subscribeOn(Schedulers.boundedElastic()) // Runs the entire chain on boundedElastic threads .map(result -> process(result)) .subscribe(); When To Use subscribeOn Use it when you want to run the entire pipeline (upstream + downstream) on the boundedElastic scheduler. It's particularly useful for situations where the source of the data is blocking, such as I/O-bound operations (reading from disk, network calls, database queries, etc.), and you want to move everything off the default event loop thread (if using Netty or Reactor Netty). Differences Between publishOn and subscribeOn publishOn affects only the downstream operations from the point where it is called. If placed in the middle of a chain, everything after the publishOn will be scheduled on the provided scheduler, but everything before it will stay on the previous scheduler. subscribeOn affects the entire reactive chain, both upstream and downstream. It's often used to move blocking upstream operations (like I/O) to a separate thread pool. Choosing Between the Two Use publishOn(Schedulers.boundedElastic()) if: You need fine-grained control over where specific parts of the reactive chain run. You want to switch only the downstream operations (after a certain point) to a specific scheduler. Example: You're performing non-blocking reactive operations first, and then want to handle blocking operations downstream in a different thread pool. Use subscribeOn(Schedulers.boundedElastic()) if: You want to run the entire reactive chain (from the point of subscription onward) on a different scheduler. The source operation (like a network call or database query) is blocking, and you want to move the blocking subscription and all subsequent operations to a specific scheduler like boundedElastic. In short: If you have blocking code upstream (like a blocking I/O source), use subscribeOn. If you want to isolate downstream blocking work (e.g., after some non-blocking calls), use publishOn. Common Use Case: Blocking I/O in Reactive Programming If you're working with I/O-bound operations (like file reads, database queries, etc.), and you want to offload blocking operations to a bounded thread pool, here's how you might use these: Database Call or a Blocking I/O Call Java Mono.fromCallable(() -> performBlockingDatabaseCall()) .subscribeOn(Schedulers.boundedElastic()) // Offload blocking database call to boundedElastic .map(result -> process(result)) // Further processing .subscribe(); Here, the subscribeOn ensures that the entire pipeline, including the blocking I/O, runs on the boundedElastic scheduler. Mixed Non-Blocking and Blocking Operations Java Mono.just("Initial value") .map(value -> transformNonBlocking(value)) // Non-blocking operation .publishOn(Schedulers.boundedElastic()) // Switch thread for blocking operation .flatMap(value -> Mono.fromCallable(() -> performBlockingOperation(value))) // Blocking operation .subscribe(); In this case, the publishOn ensures that only the downstream blocking work (i.e., the flatMap) is moved to a different scheduler, while the earlier non-blocking operations stay on the default one. Summary subscribeOn affects the entire reactive chain and is typically used when the source operation (like database access) is blocking. publishOn switches the scheduler for all operations downstream from where it is called and is better when you want to run only certain parts of the chain on a different thread.
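To make the difference tangible, here is a small runnable sketch (not from the original article) that prints the thread name at each stage. With publishOn, only the operators after it move to boundedElastic threads; swapping it for subscribeOn would move the source as well:
Java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;

public class SchedulerDemo {

    public static void main(String[] args) throws InterruptedException {
        Mono.fromSupplier(() -> {
                // Source: runs on the subscribing thread (here, "main") because publishOn comes later
                System.out.println("source on " + Thread.currentThread().getName());
                return "data";
            })
            .publishOn(Schedulers.boundedElastic())
            .map(value -> {
                // Downstream: runs on a boundedElastic-* thread after publishOn
                System.out.println("map on " + Thread.currentThread().getName());
                return value.toUpperCase();
            })
            .subscribe(value -> System.out.println("received " + value));

        // Give the asynchronous pipeline a moment to finish before the JVM exits
        Thread.sleep(500);
    }
}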
Developers may be aware of the lifecycle of service instances when using dependency injection, but many don't fully grasp how it works. You can find numerous articles online that clarify these concepts, but they often just reiterate definitions that you might already know. Let me illustrate with a detailed example that simplifies the explanation. When implementing dependency injection, developers have three options that determine the lifecycle of the instances: Singleton Scoped Transient While most developers recognize these terms, a significant number struggle to determine which option to choose for a service's lifetime. Definitions Let me start with definitions: Singleton lifetime service instances are created once per application from the service container. A single instance serves all subsequent requests. Singleton services are disposed of at the end of the application (i.e., upon application restart). Transient lifetime service instances are created each time they are requested from the service container. Transient services are disposed of at the end of the request. Scoped lifetime service instances are created once per client request. Scoped services are disposed of at the end of the request. When to Use Singleton - When you want to use a single instance of a service throughout the lifecycle of the application Transient - When you want to use individual instances of a service within the client request Scoped - When you want to use a single instance of a service for each request What is a client request? In very simple words, you can consider it to be an API/REST call coming to your application (for example, triggered by a user's button click) to get a response. Don't worry; let's understand this with an example. Example First, let's create the interfaces/services and classes: C# // we are declaring 3 services as below public interface ISingleton { void SetSingleton(string value); } public interface ITransient { void SetTransient(string value); } public interface IScoped { void SetScoped(string value); } Now let's write the implementation for each service interface created above. We will try to understand the concept by updating the callMeSingleton, callMeTransient, and callMeScoped variables.
Singleton class implementation: C# class SingletonImplementation : ISingleton { private string callMeSingleton = ""; // other implementation public void SetSingleton(string value) { callMeSingleton = value; } // other implementation } Transient class implementation: C# class TransientImplementation : ITransient { private string callMeTransient = ""; // other implementation public void SetTransient(string value) { callMeTransient = value; } // other implementation } Scoped class implementation: C# class ScopedImplementation : IScoped { private string callMeScoped = ""; // other implementation public void SetScoped(string value) { callMeScoped = value; } // other implementation } Let's register (ConfigureServices) with DI (Dependency Injection) to decide the lifecycle of each service instance: C# services.AddSingleton<ISingleton, SingletonImplementation>(); services.AddTransient<ITransient, TransientImplementation>(); services.AddScoped<IScoped, ScopedImplementation>(); Let's use/call these services from 3 different classes (ClassA, ClassB, and ClassC) to understand the lifecycle of each service: ClassA: C# public class ClassA { private readonly ISingleton _singleton; private readonly ITransient _transient; private readonly IScoped _scoped; // constructor to inject the 3 different services we created public ClassA(ISingleton singleton, ITransient transient, IScoped scoped) { _singleton = singleton; _transient = transient; _scoped = scoped; } public void UpdateSingletonFromClassA() { _singleton.SetSingleton("I am from ClassA"); } public void UpdateTransientFromClassA() { _transient.SetTransient("I am from ClassA"); } public void UpdateScopedFromClassA() { _scoped.SetScoped("I am from ClassA"); } // other implementation } ClassB: C# public class ClassB { private readonly ISingleton _singleton; private readonly ITransient _transient; private readonly IScoped _scoped; // constructor to inject the 3 different services we created public ClassB(ISingleton singleton, ITransient transient, IScoped scoped) { _singleton = singleton; _transient = transient; _scoped = scoped; } public void UpdateSingletonFromClassB() { _singleton.SetSingleton("I am from ClassB"); } public void UpdateTransientFromClassB() { _transient.SetTransient("I am from ClassB"); } public void UpdateScopedFromClassB() { _scoped.SetScoped("I am from ClassB"); } // other implementation } ClassC: C# public class ClassC { private readonly ISingleton _singleton; private readonly ITransient _transient; private readonly IScoped _scoped; // constructor to inject the 3 different services we created public ClassC(ISingleton singleton, ITransient transient, IScoped scoped) { _singleton = singleton; _transient = transient; _scoped = scoped; } public void UpdateSingletonFromClassC() { _singleton.SetSingleton("I am from ClassC"); } public void UpdateTransientFromClassC() { _transient.SetTransient("I am from ClassC"); } public void UpdateScopedFromClassC() { _scoped.SetScoped("I am from ClassC"); } // other implementation } Analysis Let's analyze the results and behavior of each lifecycle, one by one, from the above implementation: Singleton All the classes (ClassA, ClassB, and ClassC) will use the same single instance of the SingletonImplementation class throughout the lifecycle of the application. This means that properties, fields, and operations of the SingletonImplementation class will be shared among all calling classes. Any updates to properties or fields will override previous changes. For example, in the code above, ClassA, ClassB, and ClassC all utilize the SingletonImplementation service as a singleton instance and call SetSingleton to update the callMeSingleton variable. In this case, there will be a single value of the callMeSingleton variable for all requests trying to access this property. Whichever class accesses it last to update it will override the value of callMeSingleton.
ClassA - It will have the same instance of the SingletonImplementation service as the other classes. ClassB - It will have the same instance of the SingletonImplementation service as the other classes. ClassC - It will have the same instance of the SingletonImplementation service as the other classes. ClassA, ClassB, and ClassC are updating the same instance of the SingletonImplementation class, which will override the value of callMeSingleton. Therefore, be careful when setting or updating properties in a singleton service implementation. Singleton services are disposed of at the end of the application (i.e., upon application restart). Transient All the classes (ClassA, ClassB, and ClassC) will use their own individual instances of the TransientImplementation class. This means that if one class calls properties, fields, or operations of the TransientImplementation class, it will only update or override its own instance's values. Any updates to properties or fields are not shared among other instances of TransientImplementation. Let's understand: ClassA - It will have its own instance of the TransientImplementation service. ClassB - It will have its own instance of the TransientImplementation service. ClassC - It will have its own instance of the TransientImplementation service. Let's say you have a ClassD which calls the transient service through the ClassA, ClassB, and ClassC instances. In this case, each class instance is treated as a different/separate instance, and each class has its own value of callMeTransient. Read the inline comments below for ClassD: C# public class ClassD { // other implementation // The line below will update the value of callMeTransient to "I am from ClassA" for the instance used by ClassA only. // It will not be changed by any subsequent calls from ClassB or ClassC. ClassA.UpdateTransientFromClassA(); // The line below will update the value of callMeTransient to "I am from ClassB" for the instance used by ClassB only. // It will neither override the value for the ClassA instance nor be changed by the next call from ClassC. ClassB.UpdateTransientFromClassB(); // The line below will update the value of callMeTransient to "I am from ClassC" for the instance used by ClassC only. // It will neither override the values for the ClassA and ClassB instances nor be changed by any later call from any other class. ClassC.UpdateTransientFromClassC(); // other implementation } Transient services are disposed of at the end of each request. Use Transient when you want stateless behavior within the request. Scoped All the classes (ClassA, ClassB, and ClassC) will use a single instance of the ScopedImplementation class for each request. This means that calls to properties/fields/operations on the ScopedImplementation class happen on a single instance within the scope of the request. Any updates to properties/fields will be shared among the other classes. Let's understand: ClassA - It will have its instance of the ScopedImplementation service. ClassB - It will have the same instance of the ScopedImplementation service as ClassA. ClassC - It will have the same instance of the ScopedImplementation service as ClassA and ClassB. Let's say you have a ClassD which calls the scoped service through the ClassA, ClassB, and ClassC instances. In this case, all classes share a single instance of the ScopedImplementation class within the request. Read the inline comments for ClassD below.
C#
public class ClassD
{
    // other implementation
    // classA, classB, and classC below are instances of ClassA, ClassB, and ClassC (e.g., injected via DI)
    public void CallScoped()
    {
        // The call below updates the value of callMeScoped to "I am from ClassA".
        // Because of the scoped life cycle, there is a single ScopedImplementation instance for the request,
        // so the value can be overridden by the next call from ClassB or ClassC.
        classA.UpdateScopedFromClassA();

        // The call below updates the value of callMeScoped to "I am from ClassB" on that same single instance,
        // overriding the value set through ClassA.
        classB.UpdateScopedFromClassB();
        // If ClassA now performs any operation on ScopedImplementation,
        // it will see the latest property/field values, which were overridden through ClassB.

        // The call below updates the value of callMeScoped to "I am from ClassC",
        // overriding the values set through ClassA and ClassB.
        classC.UpdateScopedFromClassC();
        // If ClassA or ClassB now performs any operation on ScopedImplementation,
        // it will see the latest property/field values, which were overridden through ClassC.
    }
    // other implementation
}

Scoped services are disposed at the end of each request. Use Scoped when you want stateless behavior between individual requests.

Trivia Time

The lifecycle of a service can effectively be overridden by the parent service in which it gets initialized. Confused? Let me explain: let's take the same example from the classes above and inject the Transient and Scoped services into SingletonImplementation (which is a singleton), as below. That initializes the ITransient and IScoped services inside a singleton, so they are held for the lifetime of the singleton and effectively behave as singletons themselves, regardless of how they were registered. In this case, your application would not have any truly transient or scoped services (considering you just have the three services we were using in our examples). Read through the comments in the code below:

C#
public class SingletonImplementation : ISingleton
{
    private readonly ITransient _transient;
    private readonly IScoped _scoped;

    // constructor to initialize the services
    public SingletonImplementation(ITransient transient, IScoped scoped)
    {
        _transient = transient; // _transient now behaves like a singleton service, irrespective of being registered as Transient
        _scoped = scoped;       // _scoped now behaves like a singleton service, irrespective of being registered as Scoped
    }

    private string callMeSingleton = "";

    // other implementation
}

Summary

I hope the article above is helpful in understanding the topic. I recommend trying it yourself with the context set above, and you will never be confused again. Singleton is the easiest to understand: once its instance is created, it is shared across the application for the application's entire lifetime. Scoped instances mimic the same behavior, but only for the lifetime of a single request. Transient is stateless: within each request, each class instance holds its own instance of the service.
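If you want to try this quickly outside of a web application, here is a minimal, hypothetical console sketch using the Microsoft.Extensions.DependencyInjection package. It assumes the ISingleton, ITransient, and IScoped interfaces and the original implementations from the first set of examples (without the constructor injection shown in the trivia section), and it makes the lifetime differences visible by comparing instance references:

C#
using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<ISingleton, SingletonImplementation>();
services.AddTransient<ITransient, TransientImplementation>();
services.AddScoped<IScoped, ScopedImplementation>();

using var provider = services.BuildServiceProvider();

IScoped scopedFromFirstScope;

using (var scope = provider.CreateScope())
{
    var sp = scope.ServiceProvider;

    // Singleton: the same instance, no matter where it is resolved from.
    Console.WriteLine(ReferenceEquals(sp.GetRequiredService<ISingleton>(),
                                      provider.GetRequiredService<ISingleton>())); // True

    // Transient: a brand-new instance on every resolution.
    Console.WriteLine(ReferenceEquals(sp.GetRequiredService<ITransient>(),
                                      sp.GetRequiredService<ITransient>()));       // False

    // Scoped: one instance per scope (per HTTP request in ASP.NET Core).
    scopedFromFirstScope = sp.GetRequiredService<IScoped>();
    Console.WriteLine(ReferenceEquals(scopedFromFirstScope,
                                      sp.GetRequiredService<IScoped>()));          // True
}

using (var scope = provider.CreateScope())
{
    // A new scope produces a new scoped instance.
    Console.WriteLine(ReferenceEquals(scopedFromFirstScope,
                                      scope.ServiceProvider.GetRequiredService<IScoped>())); // False
}

In a web application, the framework creates and disposes the scope for you on every request; the manual CreateScope calls here only stand in for that behavior.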
Understanding some of the functional programming concepts that underpin the functions in the itertools module helps in understanding how those functions work. These concepts provide insight into how the module's functions operate and how their adherence to the paradigm makes them powerful and efficient tools in Python. This article explains some functional programming concepts through specific functions of the itertools module. The article can't possibly cover all the methods in detail. Instead, it will show how the ideas work in functions like:

takewhile
dropwhile
groupby
partial

Higher-Order Functions (HOF)

A higher-order function is a function that does at least one of the following:

Accepts one or more functions as an argument
Returns a function as a result

All other functions are first-order functions.

Example 1: HOF Accepting a Function

In the code below, the apply_operation function accepts another function named operation, which can be any mathematical operation like add, subtract, or multiply, and applies it to the variables x and y:

Python
def apply_operation(operation, x, y):
    return operation(x, y)

def add(a, b):
    return a + b

def multiply(a, b):
    return a * b

print(apply_operation(add, 5, 3))       # 8
print(apply_operation(multiply, 5, 3))  # 15

Example 2: HOF Returning a Function

Python
def get_func(func_type: str):
    if func_type == 'add':
        return lambda a, b: a + b
    elif func_type == 'multiply':
        return lambda a, b: a * b
    else:
        raise ValueError("Unknown function type")

def apply_operation(func, a, b):
    return func(a, b)

func = get_func('add')
print(apply_operation(func, 2, 3))  # 5

Advantages of Higher-Order Functions

Reusability

Higher-order functions help avoid code duplication. In the apply_operation example, the function is reusable: it currently accepts add and multiply, and we can just as easily pass a subtract function to it without any changes.

Python
def subtract(a, b):
    return a - b

print(apply_operation(subtract, 5, 3))  # 2

Functional Composition

Since higher-order functions can return functions, they enable function composition (which my other article also discusses). This is useful for creating flexible, modular code.

Python
def add_one(x):
    return x + 1

def square(x):
    return x * x

def compose(f, g):
    return lambda x: f(g(x))

composed_function = compose(square, add_one)
print(composed_function(2))  # 9

Here, add_one is applied first, and then square is applied to the result, producing 9 (square(add_one(2))).

Lazy Evaluation

Lazy evaluation is about delaying the evaluation of an expression until its value is actually needed. This allows for optimized memory usage and can handle very large datasets efficiently by only processing elements on demand. In some cases, you may only need a few elements from an iterable before a condition is met or a result is obtained. Lazy evaluation allows you to stop the iteration process as soon as the desired outcome is achieved, saving computational resources. In the itertools module, functions like takewhile, dropwhile, chain, etc. all support lazy evaluation.

Currying

Currying is all about breaking a function that takes multiple arguments into a sequence of functions, each of which takes one argument. This enables such a function to be partially applied and forms the basis of the partial function in the functools module. Python does not natively support currying the way Haskell does, but we can emulate currying in Python by using either lambda functions or functools.partial.
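As a quick illustration of the functools.partial route, here is a minimal sketch. The small add_three helper is introduced purely for this example; fixing one argument at a time produces a chain of callables, each needing fewer arguments:

Python
from functools import partial

def add_three(a, b, c):
    return a + b + c

# Fix one argument at a time; each step returns a callable that needs fewer arguments.
add_1 = partial(add_three, 1)   # still needs b and c
add_1_2 = partial(add_1, 2)     # still needs c

print(add_1_2(3))  # 6

The lambda-based emulation of the same idea looks like this: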
Python
def add_three(a, b, c):
    return a + b + c

add_curried = lambda a: lambda b: lambda c: a + b + c

result = add_curried(1)(2)(3)  # 6

Currying breaks down a function into smaller steps, making it easier to reuse parts of a function in different contexts.

Partial Functions

A partial function fixes a certain number of arguments of a function, producing a new function with fewer arguments. This is similar to currying, but with partial functions you fix some of the arguments and get back a function with fewer parameters. Both currying and partial application help with code reusability and modularity, allowing functions to be easily reused in different contexts. These techniques facilitate function composition, where simpler functions can be combined to build more complex ones. This makes it easier to create modular and adaptable systems, as demonstrated in this article through the use of the partial function.

takewhile and dropwhile

Both takewhile and dropwhile are lazily evaluated functions from the itertools module that operate on iterables based on a predicate function. They are designed to either include or skip elements from an iterable based on a condition.

1. takewhile

The takewhile function returns elements from the iterable as long as the predicate function returns True. Once the predicate returns False, it stops and does not yield any more elements, even if subsequent elements would satisfy the predicate.

Python
from itertools import takewhile

numbers = [1, 2, 3, 4, 5, 6, 7]
print(list(takewhile(lambda x: x < 3, numbers)))  # [1, 2]

2. dropwhile

The dropwhile function is the opposite of takewhile. It skips elements as long as the predicate returns True, and once the predicate returns False, it yields the remaining elements (without checking the predicate any further).

Python
from itertools import dropwhile

numbers = [1, 2, 3, 4, 5, 6, 7]
print(list(dropwhile(lambda x: x < 3, numbers)))  # [3, 4, 5, 6, 7]

Functional Programming Concepts

Both takewhile and dropwhile are higher-order functions because they take a predicate function (a lambda function in these examples) as an argument, demonstrating how functions can be passed as arguments to other functions. They also support lazy evaluation: in takewhile, the evaluation stops as soon as the first element fails the predicate; for example, when 3 is encountered, no further elements are processed. In dropwhile, elements are skipped while the predicate is True; once the first element fails the predicate, all subsequent elements are yielded without further checks.

groupby

The groupby function from the itertools module groups consecutive elements in an iterable based on a key function. It returns an iterator that produces groups of elements, where each group shares the same key (the result of applying the key function to each element). Unlike database-style GROUP BY operations, which group all similar elements regardless of their position, groupby only groups consecutive elements that share the same key. If non-consecutive elements have the same key, they will end up in separate groups.
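To see this caveat concretely, here is a minimal sketch using a small list of numbers invented for the purpose; the worked example that follows keeps equal ages adjacent for exactly this reason:

Python
from itertools import groupby

data = [1, 1, 2, 1]  # the trailing 1 is not adjacent to the leading 1s

print([(key, list(group)) for key, group in groupby(data)])
# [(1, [1, 1]), (2, [2]), (1, [1])]  -> the key 1 ends up in two separate groups

# Sorting first produces database-style grouping
print([(key, list(group)) for key, group in groupby(sorted(data))])
# [(1, [1, 1, 1]), (2, [2])]

The fuller example below applies the same idea to a list of dictionaries keyed by age: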
Python
from itertools import groupby

people = [
    {"name": "Alice", "age": 30},
    {"name": "Bob", "age": 30},
    {"name": "Charlie", "age": 25},
    {"name": "David", "age": 25},
    {"name": "Eve", "age": 35}
]

grouped_people = groupby(people, key=lambda person: person['age'])

for age, group in grouped_people:
    print(f"Age: {age}")
    for person in group:
        print(f"  Name: {person['name']}")

Functional Programming Concepts

Higher-order function: groupby accepts a key function as an argument, which determines how elements are grouped, making it a higher-order function.
Lazy evaluation: Like most itertools functions, groupby yields groups lazily as the iterable is consumed.

partial

As explained above, partial allows you to fix a certain number of arguments in a function, returning a new function with fewer arguments.

Python
from functools import partial

def create_email(username, domain):
    return f"{username}@{domain}"

create_gmail = partial(create_email, domain="gmail.com")
create_yahoo = partial(create_email, domain="yahoo.com")

email1 = create_gmail("alice")
email2 = create_yahoo("bob")

print(email1)  # Output: alice@gmail.com
print(email2)  # Output: bob@yahoo.com

partial is used to fix the domain part of the email (gmail.com or yahoo.com), so you only need to provide the username when calling the function. This reduces redundancy when generating email addresses with specific domains.

Functional Programming Concepts

Function currying: partial is a form of currying, where a function is transformed into a series of functions with fewer arguments. It allows pre-setting of arguments, creating a new function that "remembers" the initial values.
Higher-order function: Since partial returns a new function, it qualifies as a higher-order function.

Conclusion

Exploring concepts like higher-order functions, currying, and lazy evaluation can help Python developers make better use of the itertools functions. These fundamental principles help developers understand the workings of functions such as takewhile, dropwhile, groupby, and partial, enabling them to create more organized and streamlined code.
Is it possible to have a programming language that has no syntax? It sounds like a contradiction. Programming languages are all about syntax, plus a bit of code generation, optimization, run-time environment, and so on. But syntax is the most important part as far as programmers are concerned. When encountering a new programming language, it takes time to learn the syntax. Could we just make the syntax disappear, or at least make it as simple as possible? Could we also make the syntax arbitrary, so that the programmer writing the code can define it for themselves? Ouroboros is a programming language that tries to do just that.

It has the simplest syntax ever. It is so simple that it does not even have a syntax analyzer. All it has is a lexical analyzer, which is 20 lines long. At the same time, you can write complex programs, and even expressions with parentheses and operators of different precedence, assuming you write your own syntax for that in the program. In that way, no syntax also means any syntax. This article is an introduction to Ouroboros, a programming language with no syntax. It is a toy, never meant to be used in production, but it is a fun toy to play with, especially if you have ever wanted to create your own programming language.

There have been programming languages with minimal syntax before. One of the very first languages was LISP, which used only parentheses to group statements as lists. If you are familiar with TCL, you may remember how simple that language is; however, it still defines complex expressions and control structures as part of the language. Another simple language worth mentioning is FORTH. It is a stack language with minimal syntax: you either put something on the stack or call a function that works with the values on the stack. FORTH was also famous for its minimal assembly core and for the fact that the rest of the compiler was written in FORTH itself. These languages inspired the design of Ouroboros.

One might say that LISP has the simplest syntax of all programming languages, but that would be a mistake. True to its name, it uses parentheses to delimit lists, which can be either data or programming structures. As you may know, LISP stands for "Lots of Irritating Superfluous Parentheses." Ouroboros does not do that. It inherits the use of { and } from TCL, but unlike LISP, it forces you to use them only where they are really needed.

Ouroboros, although an interpreted language, can compile itself. Well, not really compile, but you can define syntax for the language in the language itself. However, it is not like the case of compilers written in the source language they compile. One of the first self-hosting compilers was the Pascal compiler, written by Niklaus Wirth in Pascal. The C compiler was also written in C, and more and more language compilers are written in the language they compile. In the case of an interpreted language, it is a bit different. It is not a separate program that reads the source code and generates machine code. It is the executing code, the application program itself, that becomes part of the interpreter. That way, you cannot look at a piece of code and say, "This code is not Ouroboros." Any code can be, depending on the syntax you define for it at the start of the code.

The Name of the Game

Before diving into what Ouroboros is, let's talk about the name itself. Ouroboros coils around itself in an endless cycle of creation and recreation.
The name "Ouroboros" is as multifaceted as the language itself, offering layers of meaning that reflect its unique nature and aspirations. The Eternal Cycle At its core, Ouroboros draws inspiration from the ancient symbol of a serpent consuming its own tail. This powerful image represents the cyclical nature of creation and destruction, perfectly encapsulating our language’s self-referential definition. Just as the serpent feeds upon itself to sustain its existence, Ouroboros the language is defined by its own constructs, creating a closed loop of logic and functionality. UR: The Essence of Simplicity Abbreviated as "UR," Ouroboros embraces the concept of fundamental simplicity. In German, "Ur—" signifies something primordial, primitive, or in its most basic form. This perfectly encapsulates the design philosophy behind Ouroboros: a language stripped down to its absolute essentials. By pushing the simplification of syntax to the extreme, Ouroboros aims to be the "ur-language" of programming — a return to the most elemental form of computation. Like the basic building blocks of life or the fundamental particles of physics, Ouroboros provides a minimal set of primitives from which complex structures can emerge. This radical simplicity is not a limitation but a feature. It challenges programmers to think at the most fundamental level, fostering a deep understanding of computational processes. In Ouroboros, every construct is essential, every symbol significant. It’s programming distilled to its purest form. Our Shared Creation The name begins with "Our-," emphasizing the collaborative nature of this language. Ouroboros is not just a tool but a shared endeavor that belongs to its community of developers and users. It’s a language crafted by us, for us, evolving through our collective efforts and insights. Hidden Treasures Delve deeper into the name, and you’ll uncover more linguistic gems: "Oro" in many Romance languages means "gold" or "prayer." Ouroboros can be seen as a golden thread of logic, or a prayer-like mantra of computational thought. "Ob-" as a prefix often means "toward" or "about," suggesting that Ouroboros is always oriented toward its own essence, constantly reflecting upon and refining itself. "Boros" could be playfully interpreted as a variation of "bytes," hinting at the language’s digital nature. Parsing the name as "our-ob-oros" reveals a delightful multilingual wordplay: "our way to the treasure." This blend of English ("our"), Latin ("ob" meaning "towards"), and Greek ("oros," which can be associated with "boundaries" or "definitions") mirrors the language’s eclectic inspirations. Just as Ouroboros draws from the diverse traditions of TCL, LISP, and FORTH, its name weaves together linguistic elements from different cultures. This multilingual, multi-paradigm approach guides us toward the treasures of computation, defining new boundaries along the way, much like how TCL offers flexibility, LISP promotes expressiveness, and FORTH emphasizes simplicity and extensibility. A Name That Bites Back Ultimately, Ouroboros is a name that challenges you to think recursively, to see the end in the beginning and the whole in every part. It’s a linguistic puzzle that mirrors the very nature of the programming language it represents — complex, self-referential, and endlessly fascinating. 
As you embark on your journey with Ouroboros, remember that you're not just writing code; you're participating in an ancient cycle of creation, where every end is a new beginning, and every line of code feeds into the greater whole of computational possibility.

What Is Ouroboros?

Ouroboros is a programming language that has no syntax. I have already said that, and now comes the moment of truth: it is a "lie." There is no programming language with absolutely no syntax. UR has a syntax, and it is defined with this sentence:

You write the lexical elements of the language one after the other.

Syntax

That is all. When the interpreter starts to execute the code, it begins reading the lexical elements one after the other. It reads as many elements as it needs to execute some code and not more. To be specific, it reads exactly one lexical element before starting execution. When the execution triggered by the element is finished, it goes on reading the next element. The execution itself can trigger more reads if the command needs more elements. We will see that in the next example soon.

A lexical element can be a number, a string, a symbol, or a word. Symbols and words can and should have an associated command to execute. For example, the command puts is borrowed shamelessly from TCL and is associated with the command that prints out a string.

Plain Text
puts "Hello, World!"

This is the simplest program in Ouroboros. When the command behind puts starts to execute, it asks the interpreter to read the next element and evaluate it. In this example, it is a constant string, so it is not difficult to calculate. The value of a constant string is the string itself. The next example is a bit more complex:

Plain Text
puts add "Hello, " "World!"

In this case, the argument to the command puts is another command: add. When puts asks the interpreter to get its argument, the interpreter reads the next element and then starts to execute it. As add starts to execute, it needs two arguments, which it asks from the interpreter. Since these arguments are strings, add concatenates them and returns the result.

Blocks

There is a special command denoted by the symbol {. The lexical analyzer recognizing this character will ask the interpreter to read the following elements until it finds the closing }. This call is recursive in nature if there are embedded blocks. The resulting command is a block command. A block command executes all the commands in it and results in the last result of the commands in the block.

Plain Text
puts add {"Hello, " "World!"}

If we enclose the two strings in a block, then the output will be a single "World!" without the "Hello, ". The block "executes" both strings, but the value of the block is only the second string.

Commands

The commands implemented are documented in the readme of the project on GitHub. The actual set of commands is not the fascinating part; every language has a set of commands. The fascinating part is that in UR there is no difference between functions and commands. Are puts or add commands or functions? How about if and while? They are all commands, and they are not part of the language per se. They are part of the implementation. The command if asks the interpreter to fetch one argument, evaluated. It will use this as the condition. After this, it will fetch the next two elements without evaluation. Based on the boolean interpretation of the condition, it will ask the interpreter to evaluate one of the two arguments. Similarly, the command while will fetch two arguments without evaluation.
It then evaluates the first as a condition, and if it is true, it evaluates the second and then goes back to the condition. It fetched the condition unevaluated because it will need to evaluate it again and again. In the case of the if command, the condition is evaluated only once, so we did not need a reference to the unevaluated version.

Many commands use the unevaluated version of their arguments. This makes it possible to use the "binary" operators as multi-argument operators. If you want to add up three numbers, you can write add add 1 2 3, or add* 1 2 3 {}, or {add* 1 2 3}. The command add fetches the first argument unevaluated and checks whether it is a *. If it is, then it fetches arguments until it encounters the end of the arguments or an empty block. This is a little syntactic sugar, which may seem peculiar for a language that has no syntax. It is really there to make experimenting and playing with the language bearable. On the other hand, it erodes the purity of the language. It is also only a technical detail, and I mention it only because we will need it to understand the first example when we discuss the metamorphic nature of the language.

Variables

UR supports variables. Variables are strings with values associated with them. The value can be any object. When the interpreter sees a symbol or a bare word (identifier) to evaluate, it checks the value associated with it. If the value is a command, then it executes the command. In other cases, it returns the value. The variables are scoped. If you set a variable in a block, then the variable is visible only in that block. If there is a variable with the same name in the parent block, then the variable in the child block shadows the variable in the parent block. Variable handling and scoping are implementation details and not strictly part of the language.

The implementation as it stands supports boolean, long, double, big integer, big decimal, and string primitive values. It also supports lists and objects. A list is a list of values, and it can be created with the list command. The argument to the command is a block. The command list asks the interpreter to fetch the argument unevaluated. Afterward, it evaluates the block from the start the same way as the block command does. However, instead of throwing away the intermediate results and returning the last one, it returns a list of all the results. An object is a map of values. It can be created with the object command. The argument to the command is the parent object. The fields of the parent object are copied to the new object. Objects also have methods: these are simply the fields that have a command as their value.

Introspection

The interpreter is open like a cracked safe after a heist. Nothing is hard-wired into the language. When I wrote that the language interpreter recognizes bare words, symbols, strings, etc., it was only true for the initial setup. The lexical analyzers implemented are UR commands, and they can be redefined. They are associated with the names $keyword, $string, $number, $space, $block, $blockClose, and $symbol. The interpreter uses the variable structures to find these commands. There is another variable named $lex that is a list of the lexical analyzers. The interpreter uses this list when it needs to read the next lexical element. It invokes the first, then the second, and so on, until one of them returns a non-null value: a lexical element, which is a command.
If you modify this list, then you can change the lexical analyzers, and that way you can change the syntax of the language. The simplest example is changing the interpretation of the end-of-line character. You may remember that we can call the binary operators with multiple arguments terminated by an empty block. It would be nice if we could omit the block and just write add* 1 2 3, with a new-line closing the argument list. We can do that by changing the lexical analysis of the end-of-line character, and this is exactly what we are going to do in this example.

Plain Text
set q add* 3 2
1 {}
puts q
insert $lex 0 '{ if { eq at source 0 "\n"} {sets substring 1 length source source '{}}
set q add* 3 2
1 {}
puts q

We insert a new lexical analyzer at the beginning of the list. If the very first character of the current state of the source code is a new-line character, then the lexical analyzer eats this character and returns an empty block. The command source returns the source code that has not been parsed by the interpreter yet. The command sets sets the source code to the specified string value. The first puts q will print 6 because, at the time of the first calculation, new-lines are simply ignored, and that way the value of q is add* 3 2 1 {}. The second puts q will print 5 because the new-line is eaten by the lexical analyzer, and the value of q is add* 3 2 {}. Here, the closing {} was the result of the lexical analysis of the new-line character. The values 1 and {} on the next line are calculated, but they do not have any effect.

This is a very simple example. If you want to see something more complex, the project file src/test/resources/samples/xpression.ur contains a script that defines a numerical expression parser. There is a special command called fixup. This command forces the interpreter to parse the rest of the source. After this point, the lexical analyzers are not used anymore. Executing this command does not give any performance benefit, and that is not its purpose. It is more like a declaration that all the code that is part of the source code introspection and the metamorphic calculation is done. A special implementation of the command could also take the parsed code and generate an executable, turning the interpreter into a compiler.

Technical Considerations

The current version is implemented in Java. Ouroboros is not a JVM language, though. We do not compile the code to Java byte-code. The Java code interprets the source and executes it. The implementation is an MVP focusing on the metamorphic nature of the language. It is meant to be an experiment. This is the reason why there are no file, network, or other I/O operations except the single puts command that writes to the standard output. The Java service loader feature is used to load the commands and to register them with their respective names in the interpreter. This means that implementing extra commands is as simple as creating them, writing a class implementing a ContextAgent to register them (see the source code), and putting them on the classpath. The whole code is open source and available on GitHub. It is licensed under the Apache License 2.0 (see the license file in the repo). It consists of exactly 100 classes at the time of writing this article, which means the source code is simple, short, and easy to understand. If you need a straightforward scripting language in your application, you can use it. It was not meant for production, though.
Going Further

There is currently no plan to extend the language with more commands. We only plan to create more metamorphic code in the language. The reason is that we do not see the language as a practical tool as of today. If it proves to be useful and gains a user base, we will certainly incorporate more commands to support I/O, file handling, networking, and so on. We also have visions of implementing the interpreter in other languages, like Rust and Go. Anyone who wants to suggest or develop commands for better usability or additional features is welcome. It can be a parallel project, or it can be merged into the main project if that makes sense.

Conclusion

In exploring Ouroboros, we delved into the concept of a programming language that minimizes syntax to the point of almost non-existence. This radical approach challenges the conventional understanding of what a programming language should be, presenting a system where syntax is both absent and infinitely customizable. By drawing inspiration from languages like LISP, TCL, and FORTH, Ouroboros embodies simplicity and introspection, allowing programmers to define their syntax and commands within the language itself. While Ouroboros is not designed for practical production use, it serves as an intriguing experiment in language design and metaprogramming. Its self-referential nature and minimalistic design offer a playground for developers interested in the fundamentals of computation, syntax design, and language interpretation. Whether it evolves into a more robust tool or remains a fascinating intellectual exercise, Ouroboros pushes the boundaries of how we think about programming languages, inviting us to consider the possibility of a language where syntax is as mutable and recursive as the Ouroboros serpent itself.
From a Java perspective, I've been the beneficiary of some pretty amazing features over the years:

Generics (Java 5)
Streams and Lambda Expressions (Java 8)
Enhanced Collection Functionality (Java 9)
Sealed Classes (Java 17)

As key features become available, I've been able to reduce development time as I implement features, while also seeing benefits in performance and supportability. However, one area that seems to have lagged behind is the adoption of a key internet protocol: HTTP/2. While the second major release has been around for over nine years, migration from the 1.x version has been slower than expected. I wanted to explore HTTP/2 to understand not only the benefits but also what it looks like to adopt this new version. In this article, we'll look at my anecdotal experience, plus some challenges I found, too.

About HTTP/2

HTTP/2 was released in May 2015 and included the following improvements over the prior version of HTTP:

Multiplexing: Allows multiple requests and responses to be sent over a single connection
Header compression: Reduces the overhead of HTTP headers by using compression
Server push: Enables servers to send resources to a client proactively
Resource prioritization: Allows consumers to specify the importance of given resources, affecting the order in which they're loaded
Binary protocol: Provides an alternative to the text-based format of HTTP/1.x

Additionally, HTTP/2 is backward compatible with HTTP/1.x.

Common Use Cases for HTTP/2

Below are just some of the strong use cases for HTTP/2:

A full-stack application that contains a chatty consumer, consistently communicating with the service tier
An application that relies heavily on content being stored inside the HTTP headers
A solution that is dependent on server-initiated events to broadcast updates to consumers
A client application that can benefit from providing prioritization instructions to the underlying service
A web client that requires large amounts of data to be retrieved from the service tier

Migrating to HTTP/2 for any of these use cases could provide noticeable improvements from a consumer perspective.

What's Involved With HTTP/2 Adoption?

When I think about a lower-than-expected adoption rate of 45%-50% (as noted in this Cloudflare blog), I wonder if developers believe the upgrade to HTTP/2 won't be easy. But I don't see why they would feel that way. After all, HTTP/2 is backward compatible with HTTP/1.x. Using Spring Boot 3.x (which requires Java 17+ and uses Tomcat as its default embedded server) as an example, upgrading to HTTP/2 is actually kind of easy. The biggest hurdle is making sure you are using SSL/TLS, which is honestly a good idea for your services anyway.

Properties files
server.port=8443
server.ssl.key-store=classpath:keystore.p12
server.ssl.key-store-password=<your_password_goes_here>
server.ssl.key-store-type=PKCS12
server.ssl.key-alias=<your_alias_goes_here>

With SSL/TLS in place, you just need to enable HTTP/2 via this property:

Properties files
server.http2.enabled=true

At this point, your service will start and utilize HTTP/2. By default, all of the features noted above will be ready for use. Is that all there is to it?

But Wait … There's More to the Story

Depending on where your service is running, the effort to upgrade to HTTP/2 might not yield the results you were expecting. This is because network infrastructure often stands between your service and the consumers wanting to take advantage of HTTP/2 greatness. That layer needs to fully support HTTP/2 as well. What does this mean?
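Before answering that, it helps to be able to observe the negotiated protocol from the client side. The sketch below is a minimal, hypothetical check using Java's built-in java.net.http.HttpClient (available since Java 11); the URL is a placeholder for the locally configured service above, and it assumes the service's certificate is trusted by the JVM:

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProtocolCheck {

    public static void main(String[] args) throws Exception {
        // Request HTTP/2; the client quietly falls back to HTTP/1.1 if it cannot be negotiated.
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        // Placeholder URL; a local self-signed keystore would need additional SSLContext setup.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://localhost:8443/"))
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Prints HTTP_2 when the whole path negotiated HTTP/2, and HTTP_1_1 when something downgraded it.
        System.out.println("Negotiated protocol: " + response.version());
    }
}

With that check in hand, back to the question.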
It means your service could receive the request and provide an HTTP/2 response, only for a router along the way to downgrade the protocol to HTTP/1.1. Here's the big takeaway: before you get all excited about using HTTP/2 and spend time upgrading your services, you should confirm that your network layer supports it.

Recently, Heroku announced support for HTTP/2 at the router level. They're addressing this exact scenario, and HTTP/2 is now available in a public beta. The illustration below demonstrates how they make HTTP/2 service responses available to consumers. This initial push from Heroku makes it possible for service developers like me to build applications that can take advantage of HTTP/2 features like header compression and multiplexing. This means faster delivery to consumers while potentially reducing compute and network loads. If your cloud provider doesn't have infrastructure that supports HTTP/2, requests to your HTTP/2-enabled service will result in an HTTP/1.x response. As a result, you won't get the HTTP/2 benefits you're looking for.

Challenges With HTTP/2

While my own experience of upgrading my Spring Boot services to leverage HTTP/2 hasn't run up against any significant challenges, especially with support now at the cloud provider network level, I am reading more about others who've struggled with the adoption. Based on some of the customer experiences I've found, here are some items to be aware of during your journey to HTTP/2:

Increase in compute cost: These features can require more processing power than you needed for HTTP/1.x.
Impact on other portions of the response: After adding SSL/TLS to your service, expect that more time will be required to perform this layer of processing.
Advanced features can be misconfigured: You'll want to understand concepts like multiplexing, stream prioritization, flow control, and header compression, as these items can negatively impact performance if not configured correctly.

If your path to production includes dev, QA, and staging environments, you should be able to identify and mitigate any of these hurdles long before your code reaches production.

Conclusion

My readers may recall my personal mission statement, which I feel can apply to any IT professional:

"Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." — J. Vester

Upgrading to HTTP/2 certainly adheres to my mission statement by giving service owners features like multiplexing, header compression, resource prioritization, and binary responses, all of which can impact the overall performance of a service or the consuming application. At the same time, cloud providers who support HTTP/2, like Heroku, also get credit for adhering to my mission statement. Without this layer of support, applications that interact with these services wouldn't be able to take advantage of these benefits.

When I reflect on my personal experience with Java, I can't imagine a world where I am writing Java code without using generics, streams, lambdas, enhanced collections, and sealed classes. All of these features are possible because I took the time to see the benefits and perform the upgrade. The question really isn't if you should upgrade to HTTP/2, but rather which upcoming development iteration will cover this enhancement.

Have a really great day!