Also known as the build stage of the SDLC, coding focuses on the writing and programming of a system. The Zones in this category take a hands-on approach to equip developers with knowledge of the frameworks, tools, and languages that they can tailor to their own build needs.
A framework is a collection of code that is leveraged in the development process by providing ready-made components. Frameworks establish architectural patterns and structures that help speed up the development process. This Zone contains helpful resources for developers to learn about and further explore popular frameworks such as Spring, Drupal, Angular, Eclipse, and more.
Java is an object-oriented programming language that allows engineers to produce software for multiple platforms. Our resources in this Zone are designed to help engineers with Java program development, Java SDKs, compilers, interpreters, documentation generators, and other tools used to produce a complete application.
JavaScript (JS) is an object-oriented programming language that allows engineers to produce and implement complex features within web browsers. JavaScript is popular for its versatility and is often the default choice for front-end work unless a project calls for a more specialized language. In this Zone, we provide resources that cover popular JS frameworks, server applications, supported data types, and other useful topics for a front-end engineer.
Programming languages allow us to communicate with computers, and they operate like sets of instructions. There are numerous types of languages, including procedural, functional, object-oriented, and more. Whether you’re looking to learn a new language or trying to find some tips or tricks, the resources in the Languages Zone will give you all the information you need and more.
Development and programming tools are used for creating, debugging, and maintaining programs, and much more. The resources in this Zone cover topics such as compilers, database management systems, code editors, and other software tools, and can help ensure engineers are writing clean code.
In this article, learn how the Dapr project can reduce the cognitive load on Java developers and decrease application dependencies.

Coding Java applications for the cloud requires not only a deep understanding of distributed systems, cloud best practices, and common patterns, but also knowledge of the Java ecosystem and how to combine many libraries to get things working. Tools and frameworks like Spring Boot have significantly improved the developer experience by curating commonly used Java libraries, for example, logging (Log4j), parsing different formats (Jackson), and serving HTTP requests (Tomcat, Netty, the reactive stack).

While Spring Boot provides a set of abstractions, best practices, and common patterns, there are still two things developers must know to write distributed applications. First, they must clearly understand which dependencies (clients/drivers) to add to their applications depending on the available infrastructure. For example, they need to understand which database or message broker they need and which driver or client to add to their classpath to connect to it. Second, they must know how to configure that connection: the credentials, connection pools, retries, and other critical parameters for the application to work as expected. Understanding these configuration parameters pushes developers to learn how these components (databases, message brokers, configuration stores, identity management tools) work to a point that goes beyond their responsibility of writing business logic for their applications. Learning best practices, common patterns, and how a large set of infrastructure components work is not bad in itself, but it takes a lot of development time away from building important features for your application.
In this short article, we will look at how the Dapr project can help Java developers not only implement best practices and distributed patterns out of the box, but also reduce the application's dependencies and the amount of knowledge required to code their applications. We will be looking at a simple example that you can find here.

This Pizza Store application demonstrates some basic behaviors that most business applications can relate to. The application is composed of three services that allow customers to place pizza orders in the system. The application stores orders in a database, in this case PostgreSQL, and uses Kafka to exchange events between the services to cover async notifications. All the asynchronous communications between the services are marked with red dashed arrows.

Let's look at how to implement this with Spring Boot, and then let's add Dapr.

The Spring Boot Way

Using Spring Boot, developers can create these three services and start writing the business logic to process the orders placed by customers. Developers can use http://start.spring.io to select which dependencies their applications will have. For example, the Pizza Store Service will need Spring Web (to host and serve the front end and some REST endpoints) and the Spring Boot Actuator extension if we aim to run these services on Kubernetes.

As with any application, if we want to store data, we will need a database/persistent storage, and we have many options to select from. If you look into Spring Data, you can see that Spring Data JPA provides an abstraction over SQL (relational) databases. As you can see in the previous screenshot, there are also NoSQL options and different layers of abstraction, depending on what your application is doing. If you decide to use Spring Data JPA, you are still responsible for adding the correct database driver to the application classpath.
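For example, picking Spring Data JPA with PostgreSQL typically means entries like these in each service's pom.xml. The coordinates are the standard ones, and versions are usually managed by the Spring Boot parent, but treat this as an illustrative sketch rather than the project's actual build file:

```xml
<!-- Illustrative dependency entries: JPA abstraction plus the PostgreSQL driver -->
<dependencies>
  <dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
  </dependency>
  <dependency>
    <groupId>org.postgresql</groupId>
    <artifactId>postgresql</artifactId>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```

The point is that the choice of infrastructure leaks into every service's build: swap PostgreSQL for another database and each service's dependency list has to change with it.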
In the case of PostgreSQL, you can also select it from the list. We face a similar dilemma when we think about exchanging asynchronous messages between the application's services: there are too many options.

Because we are developers and want to get things moving, we must make some choices here. Let's use PostgreSQL as our database and Kafka as our messaging system/broker. I am a true believer in the Spring Boot programming model, including its abstraction layers and auto-configurations. However, as a developer, you are still responsible for ensuring that the right PostgreSQL JDBC driver and Kafka client are included in your services' classpath. While this is quite common in the Java space, there are a few drawbacks when dealing with larger applications that might consist of tens or hundreds of services.

Application and Infrastructure Dependencies Drawbacks

Looking at our simple application, we can spot a couple of challenges that application and operations teams must deal with when taking this application to production. Let's start with application dependencies and their relationship to the infrastructure components we have decided to use.

The Kafka client included in all services needs to be kept in sync with the version of the Kafka instance that the application will use. This dependency pushes developers to ensure they use the same Kafka instance version for development purposes. If we want to upgrade the Kafka instance, we need to upgrade the client as well, which means releasing every service that includes it again. This is particularly hard because Kafka tends to be used as a shared component across different services.

Databases such as PostgreSQL can be hidden behind a service and never exposed to other services directly. But imagine two or more services need to store data: if they choose different database versions, operations teams will need to deal with different stack versions, configurations, and maybe certifications for each version.
Aligning on a single version, say PostgreSQL 16.x, once again couples all the services that need to store or read persistent data to their respective infrastructure components. While versions, clients, and drivers create this coupling between applications and the available infrastructure, understanding complex configurations and their impact on application behavior is an even tougher challenge. Spring Boot does a fantastic job of ensuring that all configurations can be externalized and consumed from environment variables or property files, and while this aligns perfectly with the 12-factor app principles and with container technologies such as Docker, defining the values of these configuration parameters is the core problem. Developers using different connection pool sizes, or retry and reconnection mechanisms configured differently across environments, are still, to this day, common issues when moving the same application from development environments to production. How to configure Kafka and PostgreSQL for this example will depend a lot on how many concurrent orders the application receives and how many resources (CPU and memory) it has available. Once again, learning the specifics of each infrastructure component is not a bad thing for developers. Still, it gets in the way of implementing new services and new functionality for the store.

Decoupling Infrastructure Dependencies and Reusing Best Practices With Dapr

What if we could hide best practices, configurations, and the decision of which infrastructure components we need behind a set of APIs that application developers can consume without worrying about which driver/client they need, or how to configure connections to be efficient, secure, and working across environments? This is not a new idea.
Any company dealing with complex infrastructure and multiple services that need to connect to it will sooner or later implement an abstraction layer on top of common services that developers can use. The main problem is that building those abstractions and maintaining them over time is hard, costs development time, and tends to get bypassed by developers who don't agree with or like the features provided.

This is where Dapr offers a set of building blocks to decouple your applications from infrastructure. Dapr Building Block APIs allow you to set up different component implementations and configurations without exposing developers to the hassle of choosing the right drivers or clients to connect to the infrastructure. Developers focus on building their applications by just consuming APIs. As you can see in the diagram, developers don't need to know about "infrastructure land," as they can consume and trust APIs to, for example, store and retrieve data and publish and subscribe to events. This separation of concerns allows operations teams to provide consistent configurations across environments, where we may want to use another version of PostgreSQL or Kafka, or a cloud provider service such as Google Pub/Sub. Dapr uses the component model to define these configurations without affecting the application's behavior and without pushing developers to worry about any of those parameters or the client/driver version they need to use.

Dapr for Spring Boot Developers

So, how does this look in practice? Dapr typically runs on Kubernetes, meaning you need a Kubernetes cluster to install it. Learning how Dapr works and how to configure it might be too complicated and not related at all to developer tasks like building features. For development purposes, you can use the Dapr CLI, a command-line tool designed to be language agnostic, allowing you to run Dapr locally for your applications.
I like the Dapr CLI, but once again, you will need to learn how to use it, how to configure it, and how it connects to your application. As a Spring Boot developer, adding a new command-line tool feels strange, as it is not integrated with the tools I am used to or with my IDE. If I saw that I needed to download a new CLI, or that I depended on deploying my apps to a Kubernetes cluster even to test them, I would probably step away and look for other tools and projects. That is why the Dapr community has worked so hard to integrate with Spring Boot more natively. These integrations tap seamlessly into the Spring Boot ecosystem without adding new tools or steps to your daily work.

Let's see how this works with concrete examples. You can add the following dependency to your Spring Boot application to integrate Dapr with Testcontainers:

<dependency>
  <groupId>io.diagrid.dapr</groupId>
  <artifactId>dapr-spring-boot-starter</artifactId>
  <version>0.10.7</version>
</dependency>

View the repository here. Testcontainers (now part of Docker) is a popular tool in Java for working with containers, primarily in tests, specifically integration tests that use containers to set up complex infrastructure. All three Pizza Spring Boot services share this same dependency. It allows developers to enable their Spring Boot applications to consume the Dapr Building Block APIs during local development without any Kubernetes, YAML, or configuration needed.

Once you have this dependency in place, you can start using the Dapr SDK to interact with the Dapr Building Block APIs, for example, to store an incoming order using the Statestore APIs. Here, `STATESTORE_NAME` is the name of a configured Statestore component, `KEY` is the key under which we want to store this order, and `order` is the order we received from the Pizza Store front end.
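The article's code listings for this state store call, and for the publishEvent call discussed next, are not reproduced here. As a rough sketch of what they look like with the Dapr Java SDK: the component names below are the placeholders used in the text, not real configuration, and a running Dapr sidecar is assumed, so treat this as illustrative rather than a drop-in implementation.

```java
// Sketch only: requires the Dapr Java SDK on the classpath and a running Dapr sidecar.
import io.dapr.client.DaprClient;
import io.dapr.client.DaprClientBuilder;

public class PizzaStoreDaprSketch {

    private static final String STATESTORE_NAME = "kvstore"; // configured Statestore component (assumed name)
    private static final String KEY = "order";               // key under which the order is stored
    private static final String PUBSUB_NAME = "pubsub";      // configured PubSub component (assumed name)
    private static final String PUBSUB_TOPIC = "topic";      // topic for order events (assumed name)

    private final DaprClient client = new DaprClientBuilder().build();

    // Store an incoming order via the Statestore API; the sidecar
    // talks to the actual state store (PostgreSQL in this example).
    public void storeOrder(Object order) {
        client.saveState(STATESTORE_NAME, KEY, order).block();
    }

    // Publish the order as an event via the PubSub API; the sidecar
    // routes it to the configured broker (Kafka in this example).
    public void emitOrderEvent(Object order) {
        client.publishEvent(PUBSUB_NAME, PUBSUB_TOPIC, order).block();
    }
}
```

Note that the application code names only Dapr components, never PostgreSQL or Kafka directly; that is the decoupling the article is after.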
Similarly, if you want to publish events to other services, you can use the Dapr Pub/Sub API, for example, to emit an event that contains the order as the payload. The publishEvent API publishes an event containing the `order` as payload into the Dapr PubSub component named `PUBSUB_NAME`, inside a specific topic indicated by `PUBSUB_TOPIC`.

Now, how is this going to work? How does Dapr store state when we call the saveState() API, and how are events published when we call publishEvent()? By default, the Dapr SDK calls the Dapr API endpoints on localhost, as Dapr was designed to run beside our applications.

For development purposes, to enable Dapr for your Spring Boot application, you can use one of the two built-in profiles: DaprBasicProfile or DaprFullProfile. The Basic profile provides access to the Statestore and PubSub APIs, but more advanced features such as Actors and Workflows will not work. If you want access to all the Dapr Building Blocks, use the Full profile. Both of these profiles use in-memory implementations of the Dapr components, making your applications faster to bootstrap.

The dapr-spring-boot-starter was created to minimize the amount of Dapr knowledge developers need to start using it in their applications. For this reason, besides the dependency mentioned above, a test configuration is required in order to select which Dapr profile we want to use. Since Spring Boot 3.1.x, you can define a Spring Boot application that will be used for test purposes. The idea is to allow tests to set up your application with everything it needs. From within the test packages (`src/test/<package>`), you can define a new @SpringBootApplication class, in this case configured to use a Dapr profile. As you can see, this is just a wrapper for our PizzaStore application that adds a configuration including the DaprBasicProfile.
With the DaprBasicProfile enabled, whenever we start our application for testing purposes, all the components needed for the Dapr APIs to work will be started for our application to consume. If you need more advanced Dapr setups, you can always create your own domain-specific Dapr profiles. Another advantage of using these test configurations is that we can also start the application with them for local development purposes by running `mvn spring-boot:test-run`. You can see how Testcontainers transparently starts the `daprio/daprd` container. As developers, we don't need to care how that container is configured as long as we can consume the Dapr APIs. I strongly recommend you check out the full example here, where you can run the application on Kubernetes with Dapr installed, or start each service and test locally using Maven. If this example is too complex for you, I recommend checking these blog posts, where I create a very simple application from scratch: Using the Dapr StateStore API with Spring Boot; Deploying and configuring our simple application in Kubernetes.
Real-time communication has become an essential aspect of modern applications, enabling users to interact with each other instantly. From video conferencing and online gaming to live customer support and collaborative editing, real-time communication is at the heart of today's digital experiences. In this article, we will explore popular real-time communication protocols, discuss when to use each one, and provide examples and code snippets in JavaScript to help developers make informed decisions.

WebSocket Protocol

WebSocket is a widely used protocol that enables full-duplex communication between a client and a server over a single, long-lived connection. This protocol is ideal for real-time applications that require low latency and high throughput, such as chat applications, online gaming, and financial trading platforms.

Example

Let's create a simple WebSocket server using Node.js and the ws library.

1. Install the ws library:

Shell
npm install ws

2. Create a WebSocket server in server.js:

JavaScript
const WebSocket = require('ws');
const server = new WebSocket.Server({ port: 8080 });

server.on('connection', (socket) => {
  console.log('Client connected');
  socket.on('message', (message) => {
    console.log(`Received message: ${message}`);
  });
  socket.send('Welcome to the WebSocket server!');
});

3. Run the server:

Shell
node server.js

WebRTC

WebRTC (Web Real-Time Communication) is an open-source project that enables peer-to-peer communication directly between browsers or other clients. WebRTC is suitable for applications that require high-quality audio, video, or data streaming, such as video conferencing, file sharing, and screen sharing.

Example

Let's create a simple WebRTC-based video chat application using HTML and JavaScript.
In index.html:

HTML
<!DOCTYPE html>
<html>
<head>
  <title>WebRTC Video Chat</title>
</head>
<body>
  <video id="localVideo" autoplay muted></video>
  <video id="remoteVideo" autoplay></video>
  <script src="main.js"></script>
</body>
</html>

In main.js:

JavaScript
const localVideo = document.getElementById('localVideo');
const remoteVideo = document.getElementById('remoteVideo');

// Get media constraints
const constraints = { video: true, audio: true };

// Create a new RTCPeerConnection
const peerConnection = new RTCPeerConnection();

// Set up event listeners
peerConnection.onicecandidate = (event) => {
  if (event.candidate) {
    // Send the candidate to the remote peer
  }
};

peerConnection.ontrack = (event) => {
  remoteVideo.srcObject = event.streams[0];
};

// Get user media and set up the local stream
navigator.mediaDevices.getUserMedia(constraints).then((stream) => {
  localVideo.srcObject = stream;
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));
});

MQTT

MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe protocol designed for low-bandwidth, high-latency, or unreliable networks. MQTT is an excellent choice for IoT devices, remote monitoring, and home automation systems.

Example

Let's create a simple MQTT client using JavaScript and the mqtt library.

1. Install the mqtt library:

Shell
npm install mqtt

2. Create an MQTT client in client.js:

JavaScript
const mqtt = require('mqtt');
const client = mqtt.connect('mqtt://test.mosquitto.org');

client.on('connect', () => {
  console.log('Connected to the MQTT broker');
  // Subscribe to a topic
  client.subscribe('myTopic');
  // Publish a message
  client.publish('myTopic', 'Hello, MQTT!');
});

client.on('message', (topic, message) => {
  console.log(`Received message on topic ${topic}: ${message.toString()}`);
});

3. Run the client:

Shell
node client.js

Conclusion

Choosing the right real-time communication protocol depends on the specific needs of your application.
WebSocket is ideal for low-latency, high-throughput applications; WebRTC excels at peer-to-peer audio, video, and data streaming; and MQTT is perfect for IoT devices and scenarios with limited network resources. By understanding the strengths and weaknesses of each protocol and using the JavaScript code examples provided, developers can create better, more efficient real-time communication experiences. Happy learning!
In modern application development, delivering personalized and controlled user experiences is paramount. This necessitates the ability to toggle features dynamically, enabling developers to adapt their applications in response to changing user needs and preferences. Feature flags, also known as feature toggles, have emerged as a critical tool for achieving this flexibility. These flags empower developers to activate or deactivate specific functionalities based on various criteria such as user access, geographic location, or user behavior.

React, a popular JavaScript framework known for its component-based architecture, is widely adopted for building user interfaces. Given its modular nature, React applications are particularly well suited to integrating feature flags seamlessly. In this guide, we'll explore how to integrate feature flags into your React applications using IBM App Configuration, a robust platform designed to manage application features and configurations. By leveraging feature flags and IBM App Configuration, developers can unlock enhanced flexibility and control in their development process, ultimately delivering tailored user experiences with ease. IBM App Configuration can be integrated with any framework, be it React, Angular, Java, Go, etc.

React's component-based architecture allows developers to build reusable and modular UI components, making it easier to manage complex user interfaces by breaking them down into smaller, self-contained units. Adding feature flags to React components makes those components easier to control.

Integrating With IBM App Configuration

IBM App Configuration provides a comprehensive platform for managing feature flags, environments, collections, segments, and more. Before delving into the tutorial, it's important to understand why integrating your React application with IBM App Configuration is necessary and what benefits it offers.
By integrating with IBM App Configuration, developers gain the ability to dynamically toggle features on and off within their applications. This capability is crucial for modern application development, as it allows developers to deliver controlled and personalized user experiences. With feature flags, developers can activate or deactivate specific functionalities based on factors such as user access, geographic location, or user preferences. This not only enhances user experiences but also gives developers greater flexibility and control over feature deployments. Additionally, IBM App Configuration offers segments for targeted rollouts, enabling developers to gradually release features to specific groups of users. Overall, integrating with IBM App Configuration empowers developers to adapt their applications' behavior in real time, improving agility and enhancing user satisfaction.

To begin integrating your React application with App Configuration, follow these steps:

1. Create an Instance

Start by creating an instance of IBM App Configuration on cloud.ibm.com. Within the instance, create an environment, such as Dev, to manage your configurations. Then create a collection. Collections come in handy when multiple feature flags are created for various projects: each project can have a collection in the same App Configuration instance, and you can tag feature flags with the collection they belong to.

2. Generate Credentials

Access the service credentials section and generate new credentials. These will be required to authenticate your React application with App Configuration.

3. Install SDK

In your React application, install the IBM App Configuration React SDK using npm:

Shell
npm i ibm-appconfiguration-react-client-sdk

4. Configure Provider

In your index.js or App.js, wrap your application component with AppConfigProvider to enable App Configuration within your React app.
The Provider must wrap the application at its top level to ensure the entire application has access. The AppConfigProvider requires various parameters, as shown in the screenshot below; all of these values can be found in the credentials you created.

5. Access Feature Flags

Now, within your App Configuration instance, create feature flags to control specific functionalities. Copy the feature flag ID for further integration into your code.

Integrating Feature Flags Into React Components

Once you've set up App Configuration in your React application, you can seamlessly integrate feature flags into your components.

Enable Components Dynamically

Use the feature flag ID copied from the App Configuration instance to toggle specific components based on the flag's status. This allows you to enable or disable features dynamically without redeploying your application.

Utilizing Segments for Targeted Rollouts

IBM App Configuration offers segments to target specific groups of users, enabling personalized experiences and controlled rollouts. Here's how to leverage segments effectively:

Define Segments

Create segments based on user properties, behaviors, or other criteria to target specific user groups.

Rollout Percentage

Adjust the rollout percentage to control what percentage of users receive the feature within a targeted segment. This enables gradual rollouts or A/B testing scenarios.

Example

If the rollout percentage is set to 100% and a particular segment is targeted, the feature is rolled out to all users in that segment. If the rollout percentage is set between 1% and 99%, say 60%, the feature is rolled out to a random 60% of the users in that segment. If the rollout percentage is set to 0%, the feature is rolled out to none of the users in that segment.
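The percentage behavior described above can be illustrated with a small sketch. To be clear, this is not IBM App Configuration's actual algorithm (the SDK handles bucketing internally); it is a hypothetical deterministic bucketing scheme in which a stable hash of the user ID decides whether a user falls inside the rollout percentage:

```java
// Hypothetical rollout bucketing, for illustration only:
// a stable hash of the user ID maps to a bucket from 0-99, and the
// feature is enabled when the bucket falls below the rollout percentage.
public class RolloutSketch {

    // Deterministic: the same user always lands in the same bucket,
    // so a user does not flip in and out of the rollout between requests.
    static boolean isEnabled(String userId, int rolloutPercentage) {
        int bucket = Math.floorMod(userId.hashCode(), 100); // 0..99
        return bucket < rolloutPercentage;
    }

    public static void main(String[] args) {
        // 100% targets every user in the segment; 0% targets no one.
        System.out.println(isEnabled("user-42", 100)); // always true
        System.out.println(isEnabled("user-42", 0));   // always false
        System.out.println(isEnabled("user-42", 60));  // stable per user
    }
}
```

The design point is determinism: hashing the user ID (rather than rolling a die per request) is what makes a "random 60%" of a segment stay the same 60% across sessions.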
Conclusion

Integrating feature flags with IBM App Configuration empowers React developers to implement dynamic feature toggling and targeted rollouts seamlessly. By leveraging feature flags and segments, developers can deliver personalized user experiences while maintaining control over feature deployments. Start integrating feature flags into your React applications today to unlock enhanced flexibility and control in your development process.
After JUnit 5 was released, a lot of developers simply added this awesome new library to their projects because, unlike previous versions, it is not necessary to migrate everything from JUnit 4 to 5. You just include the new library in your project and, thanks to the JUnit 5 engine, you can write new tests with JUnit 5 while the older JUnit 4 or 3 tests keep running without problems.

But what can happen in a big project, one built 10 years ago, with two versions of JUnit running in parallel? New developers start working on the project, some with JUnit experience, others without. Some new tests are created with JUnit 5, others with JUnit 4, and at some point a developer who doesn't know the difference adds a new scenario to an existing JUnit 5 test class using a JUnit 4 annotation. The test class becomes a mix: some @Test annotations from JUnit 4 and some from JUnit 5, and each day it gets harder to remove the JUnit 4 library.

So, how do you solve this problem? First of all, you need to show your team what belongs to JUnit 5 and what belongs to JUnit 4, so that new tests are created with JUnit 5 instead of JUnit 4. After that, follow the Boy Scout Rule: whenever developers touch a JUnit 4 test, they must migrate it to JUnit 5.

Let's look at the main changes introduced in JUnit 5. It all starts with the name: in JUnit 5, you don't see packages called org.junit5, but rather org.junit.jupiter. In short, everything you see with "Jupiter" in it comes from JUnit 5. They chose this name because Jupiter starts with "JU" and is the fifth planet from the sun.

Another change concerns @Test: this annotation was moved to a new package, org.junit.jupiter.api, and attributes like "expected" and "timeout" are no longer used; use extensions instead. For example, for timeouts there is now a dedicated annotation: @Timeout(value = 100, unit = TimeUnit.MILLISECONDS).
Another change is that neither test methods nor classes need to be public. Instead of using @Before and @After in your test configuration, you now use @BeforeEach and @AfterEach, and you also have @BeforeAll and @AfterAll. To ignore tests, you now use @Disabled instead of @Ignore.

A great feature released in JUnit 5 is the @ParameterizedTest annotation, which makes it possible to run one test multiple times with different arguments. For example, if you want to test a method that creates some object and validate that its fields are filled correctly, you can do the following:

Java
@ParameterizedTest
@MethodSource("getInvalidSources")
void shouldCheckInvalidFields(String name, String job, String expectedMessage) {
    Throwable exception = catchThrowable(() -> new Client(name, job));
    assertThat(exception).isInstanceOf(IllegalArgumentException.class)
        .hasMessageContaining(expectedMessage);
}

static Stream<Arguments> getInvalidSources() {
    return Stream.of(
        Arguments.arguments("Jean Donato", "", "Job is empty"),
        Arguments.arguments("", "Dev", "Name is empty"));
}

There are so many nice features in JUnit 5 that I recommend you check out the JUnit 5 User Guide to see what is useful for your project.

Now that all developers know what changed in JUnit 5, you can start the process of removing JUnit 4 from your project. If you are still using JUnit 4 in 2024 and your project is big, you will probably have some dependencies that use JUnit 4, so I recommend you analyze your libraries to check whether any of them do. In the image below, I'm using the Dependency Analyzer from IntelliJ. As you can see, jersey-test uses JUnit 4; that is, even if I remove JUnit 4 from my project, JUnit 4 will still be available because of Jersey. The easy way out would be to bump Jersey to 2.35, because JUnit 5 support was introduced in jersey-test 2.35, but I can't update the jersey-test framework because other libraries in my project would break.
So, in this case, what can I do? I can exclude JUnit 4 from Jersey with Maven's dependency exclusions (like in the image below). That way, JUnit 4 will no longer be used, but rather our JUnit 5. However, when you run tests that use Jersey, they will fail to load, because some Jersey methods use the JUnit 4 lifecycle annotations @Before and @After on setUp and tearDown. To solve this, you can create a "configuration class" that extends JerseyTest and overrides setUp and tearDown annotated with @BeforeEach and @AfterEach, calling super.setUp() and super.tearDown():

Java
public class JerseyConfigToJUnit5 extends JerseyTest {

    @BeforeEach
    public void setUp() throws Exception {
        super.setUp();
    }

    @AfterEach
    public void tearDown() throws Exception {
        super.tearDown();
    }
}

Once you have checked your libraries and none of them still depends on JUnit 4, you can finally migrate all your tests to JUnit 5. For this process, there is a good tool that saves you a lot of work: OpenRewrite, an automated refactoring ecosystem for source code. It will change all your old packages, the old annotations, and everything else to the new versions.

That's it, folks. Now you and your teammates can enjoy JUnit 5 and relax, knowing that new tests will be created with JUnit 5 and the project will not become a Frankenstein. So remember: keep your project up to date, because if you neglect your libraries, updating gets harder every day. Always use specifications and frameworks that follow them, and keep a good design in your code; this allows you to change and move with ease.
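As an appendix to the OpenRewrite mention above: the migration can be wired into a Maven build with the rewrite-maven-plugin and its JUnit 5 migration recipe. The recipe name is the standard one from OpenRewrite's rewrite-testing-frameworks module, but the version numbers below are illustrative assumptions; check the OpenRewrite documentation for current ones:

```xml
<!-- Illustrative pom.xml snippet; plugin/recipe versions are assumptions, not pinned. -->
<plugin>
  <groupId>org.openrewrite.maven</groupId>
  <artifactId>rewrite-maven-plugin</artifactId>
  <version>5.32.0</version>
  <configuration>
    <activeRecipes>
      <!-- Migrates JUnit 4 annotations, assertions, and runners to JUnit 5 -->
      <recipe>org.openrewrite.java.testing.junit5.JUnit4to5Migration</recipe>
    </activeRecipes>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.openrewrite.recipe</groupId>
      <artifactId>rewrite-testing-frameworks</artifactId>
      <version>2.6.0</version>
    </dependency>
  </dependencies>
</plugin>
```

With this in place, running `mvn rewrite:run` rewrites the test sources in place, after which you can review the diff before committing.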
Thread dump analysis is a traditional approach for analyzing performance bottlenecks in Java-based applications. In the modern era, we have APM tools that provide various metrics and screens to drill down and identify performance issues, even at the code level. But for some performance issues, thread dump analysis still stands as the best way to identify the bottleneck.

When To Use a Thread Dump

To analyze any performance issue, it is good to take a series of thread dumps with a 1-2 second gap between them. Taking 10-15 thread dumps at 1-2 second intervals helps to identify threads that are stuck or executing the same code across dumps. Thread dumps can be taken in the following scenarios:

1. The application is hung and not responding
2. The application takes time to respond
3. High CPU usage on the server where the application is running
4. An increase in active threads or in the total number of threads

Thread dumps are also sometimes generated automatically by application servers. For example, the WebSphere application server generates a thread dump during an OutOfMemoryError situation, which helps to analyze the state of each thread at that moment. For scenarios #1 and #2, focus on the threads in the blocked, parked/waiting, and runnable states. For scenario #3, focus on the threads in the runnable state: threads stuck in an infinite loop can cause high CPU usage, and looking at the runnable threads might reveal them. For scenario #4, focus on the threads in the runnable and parked/waiting states. In all scenarios, ignore the threads in a parked or timed-waiting state that are simply waiting for tasks/requests to execute.

Analysis Tool Usage

Using a tool to analyze thread dumps will give many statistics about the threads and their states. However, sometimes it may not reveal the real bottleneck in the system.
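Dumps are usually captured externally (for example, by running `jstack <pid>` in a loop with a short sleep between iterations), but the JDK's management API can also capture one from inside the JVM. A minimal sketch, not taken from the article, showing what the same information looks like programmatically:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpCapture {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true -> include locked monitors and locked synchronizers,
        // similar to what "jstack -l" reports
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info); // name, state, and a stack trace per thread
        }
    }
}
```

Running this prints one entry per live thread, including its name and state (RUNNABLE, WAITING, and so on), which is exactly the raw material the scenarios above ask you to inspect.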
It is always better to go through the thread dumps manually and do the analysis with tools like Notepad++. Tools like the IBM Thread Dump Analyzer can be used when there are many thread dumps to analyze, as it helps to see the thread dumps in an organized view and speeds up the analysis process. Though it doesn't give as many sophisticated statistics as the online analysis tools, it helps to visualize the thread dump better, provides a view of the threads blocked by another thread, and also helps to compare thread dumps. While analyzing thread dumps, it is important to know which application server the thread dump was taken from, as that helps to focus on the right threads. For example, if a thread dump was taken on the WebSphere application server, then the "Web Container" thread pool should be the first place to start the analysis, as that is the entry point of the WebSphere application server, which serves the requests that come to it.

Thread Dump Types

Generally, two kinds of threads appear in a thread dump. One category is related to the application and executes the application code. The other category consists of threads that perform operations such as reading/writing from the network, heartbeat checks, and various other operations, including JVM internals like GC. Depending on the problem, the focus should be given to one of these two categories. Most of the time, application code is the culprit for a performance bottleneck; hence, more focus should be given to the application threads.

Thread Pools

Thread dumps show the various thread pools available in the application. In the WebSphere application server, threads named "Web Container: <id>" belong to the WebSphere web container thread pool. The number of such threads should be equivalent to the defined thread pool size; if it goes beyond that, it indicates a thread leak in the thread pool.
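As a concrete illustration of such a leak (a sketch of my own, not from the article): every pool created via the Executors factory and never shut down leaves its worker threads alive, and they show up in every subsequent dump under the default pool-<n>-thread-<m> names:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolLeakDemo {
    public static void main(String[] args) throws Exception {
        // Anti-pattern: creating a new pool per "request" without shutdown()
        for (int i = 0; i < 3; i++) {
            ExecutorService pool = Executors.newFixedThreadPool(1);
            String workerName =
                    pool.submit(() -> Thread.currentThread().getName()).get();
            System.out.println(workerName); // pool-1-thread-1, pool-2-thread-1, ...
            // pool.shutdown() is never called, so its worker thread lingers
        }
        // All three leaked workers are still alive and would appear in a dump:
        long lingering = Thread.getAllStackTraces().keySet().stream()
                .filter(t -> t.getName().startsWith("pool-"))
                .count();
        System.out.println("lingering pool threads: " + lingering);
        System.exit(0); // needed: the leaked non-daemon threads keep the JVM alive
    }
}
```

Each loop iteration bumps the pool number, which is the exact growth pattern to look for when comparing successive dumps.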
The different thread pools in the thread dump need to be verified for their size. ForkJoinPool is another thread pool, used by Java's CompletableFuture to run tasks asynchronously. If there are too many asynchronous tasks in this pool, then the size of the pool needs to be increased, or another, bigger pool needs to be created; otherwise, the ForkJoinPool becomes a bottleneck for asynchronous task execution. If the application creates a thread pool using the Java Executor framework, those threads get the default name "pool-<id1>-thread-<id2>", where "id1" is the thread pool number and "id2" is the thread number within the pool. If developers keep creating new thread pools via the Executor framework without closing them, a new pool is created each time and the number of threads keeps growing. This may not cause a problem while the threads are idle, but it can end in an OutOfMemoryError once the JVM reaches the maximum number of threads it can create. While analyzing any thread dump, it is always good to look at the different thread pools and ensure that all of them are within their defined/expected limits.

Application Methods

Focusing on the application methods in the stack traces of the thread dump helps to analyze problems in the application code. If there are synchronized methods or blocks in the application, the application threads will wait to acquire the lock on an object before entering that code. This is expensive, as only one thread at a time is allowed to execute the code while the other threads wait. This situation is visible in the thread dump as threads waiting to acquire the lock on an object. If the synchronization is not needed, the code can be modified to avoid it.
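The lock contention described above shows up in a dump as threads in the BLOCKED state, annotated with "waiting to lock <0x...>". A small sketch (thread names and timings are illustrative) that reproduces the situation:

```java
public class BlockedThreadDemo {
    static final Object LOCK = new Object();

    public static void main(String[] args) throws Exception {
        Thread holder = new Thread(() -> {
            synchronized (LOCK) {
                try { Thread.sleep(1000); } catch (InterruptedException ignored) { }
            }
        }, "lock-holder");
        holder.start();
        Thread.sleep(100); // give the holder time to acquire the lock

        Thread waiter = new Thread(() -> {
            synchronized (LOCK) { /* unreachable until the holder releases */ }
        }, "lock-waiter");
        waiter.start();
        Thread.sleep(100); // give the waiter time to block on the monitor

        // In a thread dump taken now, "lock-waiter" would be in this state:
        System.out.println("lock-waiter state: " + waiter.getState());

        holder.join();
        waiter.join();
    }
}
```

This prints `lock-waiter state: BLOCKED`; finding many threads in that state on the same monitor is the signature of a synchronization bottleneck.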
Conclusion

Thread dumps contain various details about the JVM: its arguments, memory and GC-related information, the hardware it runs on, etc. It is always recommended to go through those details, as they might help the analysis.
I blogged about Java stream debugging in the past, but I skipped an important method that's worthy of a post of its own: peek. This blog post delves into the practicalities of using peek() to debug Java streams, complete with code samples and common pitfalls.

Understanding Java Streams

Java Streams represent a significant shift in how Java developers work with collections and data processing, introducing a functional approach to handling sequences of elements. Streams facilitate declarative processing of collections, enabling operations such as filter, map, reduce, and more in a fluent style. This not only makes the code more readable but also more concise compared to traditional iterative approaches.

A Simple Stream Example

To illustrate, consider the task of filtering a list of names to only include those that start with the letter "J" and then transforming each name into uppercase. Using the traditional approach, this might involve a loop and some "if" statements. However, with streams, this can be accomplished in a few lines:

```java
List<String> names = Arrays.asList("John", "Jacob", "Edward", "Emily");

// Convert list to stream
List<String> filteredNames = names.stream()
        // Filter names that start with "J"
        .filter(name -> name.startsWith("J"))
        // Convert each name to uppercase
        .map(String::toUpperCase)
        // Collect results into a new list
        .collect(Collectors.toList());

System.out.println(filteredNames);
```

Output:

```
[JOHN, JACOB]
```

This example demonstrates the power of Java streams: by chaining operations together, we can achieve complex data transformations and filtering with minimal, readable code. It showcases the declarative nature of streams, where we describe what we want to achieve rather than detailing the steps to get there.

What Is the peek() Method?

At its core, peek() is a method provided by the Stream interface, allowing developers a glance into the elements of a stream without disrupting the flow of its operations.
The signature of peek() is as follows:

```java
Stream<T> peek(Consumer<? super T> action)
```

It accepts a Consumer functional interface, which means it performs an action on each element of the stream without altering them. The most common use case for peek() is logging the elements of a stream to understand the state of the data at various points in the stream pipeline. To understand peek, let's look at a sample similar to the previous one:

```java
List<String> collected = Stream.of("apple", "banana", "cherry")
        .filter(s -> s.startsWith("a"))
        .collect(Collectors.toList());

System.out.println(collected);
```

This code filters a list of strings, keeping only the ones that start with "a". While it's straightforward, what happens during the filter operation is not visible.

Debugging With peek()

Now, let's incorporate peek() to gain visibility into the stream:

```java
List<String> collected = Stream.of("apple", "banana", "cherry")
        .peek(System.out::println) // Logs all elements
        .filter(s -> s.startsWith("a"))
        .peek(System.out::println) // Logs filtered elements
        .collect(Collectors.toList());

System.out.println(collected);
```

By adding peek() both before and after the filter operation, we can see which elements are processed and how the filter impacts the stream. This visibility is invaluable for debugging, especially when the logic within the stream operations becomes complex. We can't step over stream operations with the debugger, but peek() provides a glance into code that is normally obscured from us.

Uncovering Common Bugs With peek()

Filtering Issues

Consider a scenario where a filter condition is not working as expected:

```java
List<String> collected = Stream.of("apple", "banana", "cherry", "Avocado")
        .filter(s -> s.startsWith("a"))
        .collect(Collectors.toList());

System.out.println(collected);
```

The expected output might be ["apple"], but let's say we also wanted "Avocado" and missed it due to a misunderstanding of the startsWith method's behavior.
Since "Avocado" is spelled with an uppercase "A", the expression "Avocado".startsWith("a") returns false. Using peek(), we can observe the elements that pass the filter:

```java
List<String> debugged = Stream.of("apple", "banana", "cherry", "Avocado")
        .peek(System.out::println)
        .filter(s -> s.startsWith("a"))
        .peek(System.out::println)
        .collect(Collectors.toList());

System.out.println(debugged);
```

Large Data Sets

In scenarios involving large datasets, directly printing every element in the stream to the console for debugging can quickly become impractical. It can clutter the console and make it hard to spot the relevant information. Instead, we can use peek() in a more sophisticated way to selectively collect and analyze data without causing side effects that could alter the behavior of the stream. Consider a scenario where we're processing a large dataset of transactions, and we want to debug issues related to transactions exceeding a certain threshold:

```java
class Transaction {
    private String id;
    private double amount;

    // Constructor, getters, and setters omitted for brevity
}

List<Transaction> transactions = // Imagine a large list of transactions

// A placeholder for debugging information
List<Transaction> highValueTransactions = new ArrayList<>();

List<Transaction> processedTransactions = transactions.stream()
        // Filter transactions above a threshold
        .filter(t -> t.getAmount() > 5000)
        .peek(t -> {
            if (t.getAmount() > 10000) {
                // Collect only high-value transactions for debugging
                highValueTransactions.add(t);
            }
        })
        .collect(Collectors.toList());

// Now, we can analyze high-value transactions separately, without overloading the console
System.out.println("High-value transactions count: " + highValueTransactions.size());
```

In this approach, peek() is used to inspect elements within the stream conditionally. High-value transactions that meet a specific criterion (e.g., amount > 10,000) are collected into a separate list for further analysis.
This technique allows for targeted debugging without printing every element to the console, thereby avoiding performance degradation and clutter.

Addressing Side Effects

Streams shouldn't have side effects. In fact, such side effects would break the stream debugger in IntelliJ, which I have discussed in the past. It's crucial to note that while collecting data for debugging within peek() avoids cluttering the console, it does introduce a side effect to the stream operation, which goes against the recommended use of streams. Streams are designed to be side-effect-free to ensure predictability and reliability, especially in parallel operations. Therefore, while the above example demonstrates a practical use of peek() for debugging, it's important to use such techniques judiciously. Ideally, this debugging strategy should be temporary and removed once the debugging session is completed, to maintain the integrity of the stream's functional paradigm.

Limitations and Pitfalls

While peek() is undeniably a useful tool for debugging Java streams, it comes with its own set of limitations and pitfalls that developers should be aware of. Understanding these can help avoid common traps and ensure that peek() is used effectively and appropriately.

Potential for Misuse in Production Code

One of the primary risks associated with peek() is its potential for misuse in production code. Because peek() is intended for debugging purposes, using it to alter state or perform operations that affect the outcome of the stream can lead to unpredictable behavior. This is especially true in parallel stream operations, where the order of element processing is not guaranteed. Misusing peek() in such contexts can introduce hard-to-find bugs and undermine the declarative nature of stream processing.

Performance Overhead

Another consideration is the performance impact of using peek(). While it might seem innocuous, peek() can introduce a significant overhead, particularly in large or complex streams.
This is because every action within peek() is executed for each element in the stream, potentially slowing down the entire pipeline. When used excessively or with complex operations, peek() can degrade performance, making it crucial to use this method judiciously and remove any peek() calls from production code after debugging is complete.

Side Effects and Functional Purity

As highlighted in the enhanced debugging example, peek() can be used to collect data for debugging purposes, but this introduces side effects into what should ideally be a side-effect-free operation. The functional programming paradigm, which streams are a part of, emphasizes purity and immutability: operations should not alter state outside their scope. By using peek() to modify external state (even for debugging), you're temporarily stepping away from these principles. While this can be acceptable for short-term debugging, it's important to ensure that such uses of peek() do not find their way into production code, as they can compromise the predictability and reliability of your application.

The Right Tool for the Job

Finally, it's essential to recognize that peek() is not always the right tool for every debugging scenario. In some cases, other techniques, such as logging within the operations themselves, using breakpoints and inspecting variables in an IDE, or writing unit tests to assert the behavior of stream operations, might be more appropriate and effective. Developers should consider peek() as one tool in a broader debugging toolkit, employing it when it makes sense and opting for other strategies when they offer a clearer or more efficient path to identifying and resolving issues.

Navigating the Pitfalls

To navigate these pitfalls effectively:

- Reserve peek() strictly for temporary debugging purposes. If you have a linter as part of your CI tools, it might make sense to add a rule that blocks code from invoking peek().
- Always remove peek() calls from your code before committing it to your codebase, especially for production deployments.
- Be mindful of performance implications and the potential introduction of side effects.
- Consider alternative debugging techniques that might be more suited to your specific needs or the particular issue you're investigating.

By understanding and respecting these limitations and pitfalls, developers can leverage peek() to enhance their debugging practices without falling into common traps or inadvertently introducing problems into their codebases.

Final Thoughts

The peek() method offers a simple yet effective way to gain insight into Java stream operations, making it a valuable tool for debugging complex stream pipelines. By understanding how to use peek() effectively, developers can avoid common pitfalls and ensure their stream operations perform as intended. As with any powerful tool, the key is to use it wisely and in moderation. The true value of peek() is in debugging massive data sets; such elements are very hard to analyze even with dedicated tools, but by using peek() we can dig into the data set and understand the source of the issue programmatically.
A BPMN workflow engine based on the Jakarta EE framework forms a powerful and effective combination for developing enterprise applications with a focus on business process management. Both Jakarta EE and BPMN 2.0 are standardized and widely supported, and the scalability of Jakarta EE provides a secure foundation for building enterprise applications with robust business process management capabilities. This enables developers to leverage the strengths of both technologies to create efficient, interoperable, and maintainable BPM solutions. In the following, I will explain these aspects in more detail.

Standardization

Jakarta EE provides a standardized platform for building enterprise applications, offering a set of specifications and APIs. This standardization ensures portability and interoperability across different Jakarta EE-compliant application servers and allows developers to work within a unified framework without needing to learn proprietary techniques. This not only streamlines the development process but also promotes a broader ecosystem where developers can focus on leveraging the standardized features, enhancing the overall efficiency and maintainability of the applications. BPMN 2.0, on the other hand, is an industry-standard notation for modeling business processes. It provides a common language for business analysts and developers to collaborate on defining and refining business processes, making it easy for developers, architects, and non-technical teams to talk about the same things in a common language. Moreover, BPMN facilitates interoperability among BPMN modeling tools: models created in one tool can be seamlessly transferred and further developed in another, fostering a collaborative and flexible environment for business process modeling. BPMN effectively builds the bridge between the business and IT departments while promoting a standardized and interoperable approach to process modeling.
Integration Capabilities

The integration of business applications into the existing IT infrastructure is essential for a sustainable architecture. Jakarta EE is designed to support the integration of various enterprise components and systems, employing a robust architecture that facilitates seamless communication and collaboration. Technologies like the Java API for RESTful Web Services (JAX-RS), the Java Message Service (JMS), and Jakarta Security 3.0 provide essential building blocks for developing scalable and interoperable enterprise applications. These technologies empower BPM systems to effectively handle diverse interactions with different platforms, applications, databases, and services. Utilizing XML as its foundation, BPMN 2.0 integrates seamlessly with Jakarta EE components like the Jakarta XML Binding 4.0 API. Leveraging the BPMN 2.0 extension mechanism, a custom business process can be augmented with the technical details of integration platforms and services within a microservices architecture. This capability facilitates the orchestration of business processes spanning multiple systems and services, enabling a cohesive and efficient integration framework.

Transaction Management

Another aspect I want to talk about is transactions. Transactions are an essential prerequisite for the execution of business processes, and Jakarta EE provides a robust transaction management framework that ensures their reliability and integrity. In a BPMN workflow system, multiple tasks and events often orchestrate a single business transaction. Jakarta EE's robust transaction management capabilities help to coordinate and synchronize these steps, ensuring that either all of them succeed or none do. This atomicity is crucial for maintaining data consistency and reliability in complex business scenarios.
Jakarta EE's transaction management support thus plays a fundamental role in the development of dependable business applications by providing a framework for handling transactions in a coordinated and fault-tolerant manner.

Scalability and Performance

When we talk about scalability and performance, we usually think only of horizontal scaling in the form of more server capacity. But a well-scalable architecture is also characterized by the optimal use of the available system resources. With its micro-container architecture, Jakarta EE offers features for building scalable and high-performance enterprise applications, a critical aspect for BPM systems that often need to manage a substantial volume of concurrent processes and user interactions. Jakarta EE application servers also extend to modern cloud environments, allowing them to be seamlessly deployed in a cluster configuration within a cloud infrastructure. This cloud-ready nature of Jakarta EE enhances the flexibility and scalability of BPM systems, enabling them to efficiently handle varying workloads while ensuring optimal performance. The ability to run Jakarta EE application servers in a cluster in cloud environments underscores its relevance in supporting the development of robust and scalable BPMN-driven applications tailored to contemporary technological landscapes.

Security

Security is an ongoing topic, especially for business applications. Jakarta EE includes robust security features addressing concerns such as authentication, authorization, and secure communication. These features are pivotal for building secure BPM systems, especially given the sensitive nature of the business processes and data they often handle. In the context of BPMN applications, the processing of trusted data emerges as an exceptionally crucial aspect.
Jakarta EE's security mechanisms play a paramount role in guaranteeing that only authorized users have access to specific processes and data, providing a resilient defense against unauthorized access and potential security breaches. This emphasis on processing trusted data underscores Jakarta EE's commitment to fostering a secure environment within BPM systems, instilling confidence in the integrity and confidentiality of the information being managed.

Platforms and Tooling

Finally, let's talk about the available platforms and tools. Jakarta EE has a rich ecosystem of tools, libraries, and frameworks that can be leveraged for the development of BPMN enterprise applications. Widely used open-source server platforms for building Jakarta EE applications include JBoss WildFly, Payara/GlassFish, and Open Liberty, all of which are prepared for operation in cloud environments, and applications can be seamlessly exchanged between these platforms. For the modeling of BPMN diagrams, a variety of commercial and open-source tools are available. One free BPMN modeling tool is Open-BPMN, which can be run in different IDEs such as Visual Studio Code, the Eclipse IDE, and Eclipse Theia, as well as a standalone web application. Open-BPMN can be utilized by business analysts to design top-level business processes, as well as by architects and developers to model the technical details of complex processing logic. Built on the Eclipse Graphical Language Server Platform (GLSP), Open-BPMN provides an extension mechanism that allows the customization of the BPMN modeling platform to individual application requirements within a vertical domain. The use of the BPMN 2.0 extension mechanism ensures the continued validity of the BPMN 2.0 standard. Imixs-Workflow is an open-source BPMN workflow engine based on the Jakarta EE framework. In its latest version, it supports Jakarta EE 10 and includes a BPMN modeling extension for Open-BPMN.
Imixs-Workflow provides a comprehensive set of APIs and plug-ins that allow the integration of BPMN 2.0 into any business application. The workflow engine supports a powerful multi-level security concept with fine-grained access control, seamlessly integrated into the Jakarta EE Security API. With the event-driven modeling concept, human-centric workflows can be developed in less time.

Summary

In summary, the integration of a BPMN workflow engine with the Jakarta EE framework establishes a robust foundation for developing enterprise applications centered on business process management. The collaboration between Jakarta EE and BPMN 2.0, characterized by standardization and broad support, not only ensures the creation of efficient, interoperable, and maintainable BPM solutions but also signifies a commitment to industry standards.
NCache Java Edition with its distributed caching technique is a powerful tool that helps Java applications run faster, handle more users, and be more reliable. In today's world, where people expect apps to work quickly and without problems, knowing how to use NCache Java Edition is very important. It's a key piece of technology for developers and businesses who want to make sure their apps give users fast access to data and a smooth experience. This article is made especially for beginners, to make the ideas and steps of adding NCache to your Java applications clear and easy to understand. It doesn't matter if you've been developing for years or if you're new to caching: this article will help you get a good start with NCache Java Edition. Let's start with a step-by-step process to set up a development workstation for NCache with the Java setup.

NCache Server Installation: Java Edition

NCache has different deployment options:

- On-premises
- Cloud
- Using Docker/Kubernetes

You can check all the deployment options and the packages available for deployment here. NCache recommends at least SO-16 (16 GB RAM, 8 vCPU) to get optimum performance in a production environment; for a higher transaction load, go with SO-32, SO-64, or SO-128.

NCache Server Deployment With Docker Image

NCache provides different images (alachisoft/ncache - Docker Image | Docker Hub) of the Java edition for the Windows and Linux platforms. Let's see how to deploy the NCache server using the latest Linux Docker image. Use the Docker command below to pull the latest image:

```shell
docker pull alachisoft/ncache:latest-java
```

Now we have successfully pulled the Docker image.
Run the Docker image using the command below. For a development workstation:

```shell
docker run --name ncache -itd -p 8251:8251 -p 9800:9800 -p 8300:8300 -p 8301:8301 alachisoft/ncache:latest-java
```

For a production NCache server, use the actual host configuration:

```shell
docker run --name ncache -itd --network host alachisoft/ncache:latest-java
```

The above command runs the NCache server listening on port 8251. Now, launch the NCache Management Center in the browser (localhost:8251). You will get a modal popup to register your license key as shown below. Click on Start Free Trial to activate the free trial with the license key, using the form below. You can register your license key using this registration page form, or register it with the Docker command below:

```shell
docker exec -it ncache /opt/ncache/bin/tools/register-ncacheevaluation -firstname [registered first name] -lastname [registered last name] -company [registered company name] -email [registered e-mail id] -key [key]
```

Now, open the NCache Management Center from the browser at http://localhost:8251/.

NCache Cache Cluster

Let's install one more image on a different instance, with proper network configuration. Use the document below for the network configuration of an NCache Docker deployment: Create NCache Containers for Windows Server. I deployed one image on the 10.0.0.4 instance and another on the 10.0.0.5 instance. I then hopped into the 10.0.0.4 NCache Management Center and removed the default cluster cache created during the installation. Let's create a new clustered cache using the NCache Management Center wizard. Click on New on the Clustered Cache page as shown in the figure below. It's a 7-step process to create a clustered cache with the NCache Management Center interface, which we will go through one by one.

Step 1: In-Memory Store

In this step, you can define the in-memory store type, the name of the clustered cache, and the serialization type.
In my case, I named the clustered cache demoCache and chose JSON serialization.

Step 2: Caching Topology

Define the caching topology on this screen; in my case, I just went with the default options.

Step 3: Cache Partitions and Size

On this screen, we can define the cache partition size. In my case, I just went with the default value; with this option, the wizard skips step 4. I also added two server nodes: 10.0.0.4 and 10.0.0.5.

Step 5: Cluster TCP Parameters

Define the Cluster Port, Port Range, and Batch Interval values. In my case, I went with the default values.

Step 6: Encryption and Compression Settings

You can enable the encryption and compression settings in this step. I just went with the default values.

Step 7: Advanced Options

You can enable eviction and also check the other advanced options. In my case, I checked "start the cache on finish." Finally, click on Finish. Once the process is complete, it will create and start the clustered cache with the two nodes (10.0.0.4 and 10.0.0.5). Now the cluster is formed.

Start the Cache

You can use the start option in the NCache Management Center to start the clustered cache, as shown in the figure below. You can also use the command below to start it:

```shell
start-cache -name demoCache
```

Run a Stress Test

Click Test-Stress and select the duration to run the stress test. This is one of my favorite features in the NCache Management Center: you can initiate a stress test with ease, just by a button click. You can also use a command instead. For example, to initiate a stress test for the demoCache cluster with default settings:

```shell
test-stress -cachename demoCache
```

Click on Monitor to check the metrics; you can monitor the number of requests processed by each node. Click on Statistics to get the complete statistics of the clustered caches.
SNMP Counter To Monitor NCache

Simple Network Management Protocol (SNMP) is a key protocol for monitoring and managing network devices and their activities. Part of the Internet Protocol Suite, it helps share important information about the network's health and operations between devices like routers, switches, servers, and printers, allowing network managers to change settings, track how well the network is doing, and get alerts on any issues. SNMP is widely used and important for keeping networks running smoothly and safely, and it's a vital part of managing and fixing networks. NCache has made SNMP monitoring easier by now allowing the publication of counters through a single port; before, a separate port was needed for each cache. Make sure the NCache service and the cache(s) to monitor are up and running.

Configure NCache Service

The Alachisoft.NCache.Service.dll.config file, located in the %NCHOME%\bin\service folder, provides the ability to activate or deactivate the monitoring of cache counters via SNMP by modifying particular options, marked by specific tags. Update the values for the tags below:

```xml
<add key="NCacheServer.EnableSnmpMonitoring" value="true"/>
<add key="NCacheServer.SnmpListenersInfoPort" value="8256"/>
<add key="NCacheServer.EnableMetricsPublishing" value="true"/>
```

Set NCacheServer.EnableSnmpMonitoring to true to turn on SNMP monitoring of NCache cache counters; this tag is set to false by default. Set NCacheServer.SnmpListenersInfoPort to the port SNMP should listen on; the default is 8256, but you can adjust it to your needs. Set NCacheServer.EnableMetricsPublishing to true if you want to publish metrics to the NCache service. Remember to restart the NCache service once you've made the necessary adjustments to the service configuration files.
SNMP Monitoring

NCache ships a single MIB file called alachisoft.mib that describes the various counters that can be checked using SNMP, including the ports used for different types of caches and client activities. You can find this file at %NCHOME%\bin\resources.

To look at these counters, you can browse the MIB file with a tool such as the free MIB Browser. Use port 8256 to connect to NCache, then open the SNMP Table from the View menu to check all the attributes of NCache, as shown in the figure below.

To check specific attribute details in the SNMP Table, first pick the attributes you want to see. As a sample, I selected cacheName, cacheSize, cacheCount, fetchesPerSec, requestsPerSec, and additionPerSec. Then click View from the menu at the top before you choose the SNMP Table. You will then see the counter values in the table, as shown in the figure below.

Summary

This article provides a beginner-friendly guide to getting started with NCache Java Edition, covering essential steps such as installing the NCache server, deploying it using a Docker image, starting the cache, running a stress test to evaluate its performance, and monitoring its operation through counters. It helps you get started with enhancing your Java application's speed and reliability by implementing distributed caching with NCache.
While working on a user model, I found myself navigating best practices and diverse strategies for managing a token service, transitioning from straightforward functions to a fully fledged, independent service equipped with handy methods. I delved into the nuances of securely storing and accessing secret tokens, discerning between what should remain private and what could be public. Additionally, I explored optimal scenarios for deploying the service or function and pondered the necessity of its existence. This article chronicles my journey, illustrating the evolution from basic implementations to a comprehensive, scalable solution through a variety of examples.

Services

In a Node.js application, services are modular, reusable components responsible for handling specific business logic or functionality, such as user authentication, data access, or third-party API integration. These services abstract complex operations behind simple interfaces, allowing different parts of the application to interact with these functionalities without knowing the underlying details. By organizing code into services, developers achieve separation of concerns, making the application more scalable, maintainable, and easier to test. Services play a crucial role in structuring the application's architecture, facilitating a clean separation between the application's core logic and its interactions with databases, external services, and other application layers.

I decided to show an example with a JWT service. Let's jump to the code.

First Implementation

In our examples, we are going to use jsonwebtoken, a popular library in the Node.js ecosystem. It allows us to encode, decode, and verify JWTs easily. This library excels in situations requiring the safe and quick sharing of data between web application users, especially for login and access control.
To create a token:

```typescript
jsonwebtoken.sign(payload, JWT_SECRET)
```

and to verify one:

```typescript
jsonwebtoken.verify(token, JWT_SECRET, (error, decoded) => {
  if (error) {
    throw error;
  }
  return decoded;
});
```

To create and verify tokens we need JWT_SECRET, which lives in the environment:

```typescript
process.env.JWT_SECRET
```

That means we have to read it before we can proceed to the methods:

```typescript
if (!JWT_SECRET) {
  throw new Error('JWT secret not found in environment variables!');
}
```

So, let's sum it up into one object with methods (jwt.ts):

```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

const JWT_SECRET = process.env.JWT_SECRET!;

export const jwt = {
  verify: <Result>(token: string): Promise<Result> => {
    if (!JWT_SECRET) {
      throw new Error('JWT secret not found in environment variables!');
    }
    return new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, JWT_SECRET, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as Result);
        }
      });
    });
  },
  sign: (payload: string | object | Buffer): Promise<string> => {
    if (!JWT_SECRET) {
      throw new Error('JWT secret not found in environment variables!');
    }
    return new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, JWT_SECRET));
      } catch (error) {
        reject(error);
      }
    });
  },
};
```

This object demonstrates setting up JWT authentication functionality in a Node.js application. require('dotenv').config(); loads the environment variables, and with access to process we can read the JWT_SECRET value. Let's reduce the repetition of checking the secret:

```typescript
checkEnv: () => {
  if (!JWT_SECRET) {
    throw new Error('JWT_SECRET not found in environment variables!');
  }
},
```

Incorporating a dedicated function within the object to check the environment variable for the JWT secret can indeed make the design more modular and maintainable.
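Another way to centralize the check is to resolve the secret once through a small helper instead of guarding inside each method. This is a hedged sketch of that idea; requireEnv is a hypothetical name, not part of the code above:

```typescript
// Hypothetical helper: read a required variable from an env-like record,
// failing fast with a descriptive error when it is missing or empty.
function requireEnv(env: Record<string, string | undefined>, name: string): string {
  const value = env[name];
  if (!value) {
    throw new Error(`${name} not found in environment variables!`);
  }
  return value;
}

// Usage (sketch): resolve the secret once at module load.
// const JWT_SECRET = requireEnv(process.env, 'JWT_SECRET');
```

With this shape, sign and verify can assume the secret exists, because the module would have failed to load otherwise.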
But some repetition remains, because we still have to call it in each method: this.checkEnv();. Additionally, I have to consider the this context, because I have arrow functions: my methods have to become function declarations instead of arrow functions for verify and sign so that this.checkEnv works as intended.

Having this, we can create tokens:

```typescript
const token: string = await jwt.sign({
  id: user.id,
});
```

or verify them:

```typescript
jwt.verify(token)
```

At this point we can ask: would it not be better to create a service that handles all of this?

Token Service

By using a service we can improve scalability. I still check for the secret within the TokenService (to allow dynamic reloading of environment variables, just as an example), but I streamline it by creating a private method dedicated to this check. This reduces repetition and centralizes the logic for handling missing configuration (TokenService.ts):

```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

export class TokenService {
  private static jwt_secret = process.env.JWT_SECRET!;

  private static checkSecret() {
    if (!TokenService.jwt_secret) {
      throw new Error('JWT secret not found in environment variables!');
    }
  }

  public static verify = <Result>(token: string): Promise<Result> => {
    TokenService.checkSecret();
    return new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as Result);
        }
      });
    });
  };

  public static sign = (payload: string | object | Buffer): Promise<string> => {
    TokenService.checkSecret();
    return new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret));
      } catch (error) {
        reject(error);
      }
    });
  };
}
```

But shouldn't I consider moving the check for the presence of the necessary configuration out of the methods and into the initialization or loading phase of my application?
This ensures that my application configuration is valid before it starts up, avoiding runtime errors due to missing configuration. At this moment the word "proxy" came to mind. Who knows why, but I decided to check it out.

Service With Proxy

First, I refactor my TokenService to remove the repetitive checks from each method, assuming that the secret is always present:

```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

export class TokenService {
  private static jwt_secret = process.env.JWT_SECRET!;

  public static verify<TokenPayload>(token: string): Promise<TokenPayload> {
    return new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as TokenPayload);
        }
      });
    });
  }

  public static sign(payload: string | object | Buffer): Promise<string> {
    return new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret));
      } catch (error) {
        reject(error);
      }
    });
  }
}
```

Then I create a proxy handler that checks the JWT secret before forwarding calls to the actual service methods:

```typescript
const tokenServiceHandler = {
  get(target: typeof TokenService, propKey: PropertyKey, receiver: unknown) {
    const originalMethod = (target as any)[propKey];
    if (typeof originalMethod === 'function') {
      return function (this: unknown, ...args: unknown[]) {
        // jwt_secret is private, so a cast is needed to read it from outside
        if (!(TokenService as any).jwt_secret) {
          throw new Error('Secret not found in environment variables!');
        }
        return originalMethod.apply(this, args);
      };
    }
    return originalMethod;
  },
};
```

Looks fancy. Finally, to use the proxied token service, I create an instance of the Proxy class:

```typescript
const proxiedTokenService = new Proxy(TokenService, tokenServiceHandler);
```

Now, instead of calling TokenService.verify or TokenService.sign directly, I can use proxiedTokenService for these operations.
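The handler above is specific to TokenService, but the same pattern generalizes to any precondition on any service. A self-contained sketch, where withPrecondition is a hypothetical name of my own:

```typescript
// Hypothetical generic wrapper: run `check` before every method call on `target`.
function withPrecondition<T extends object>(target: T, check: () => void): T {
  return new Proxy(target, {
    get(obj, prop, receiver) {
      const value = Reflect.get(obj, prop, receiver);
      if (typeof value === 'function') {
        return (...args: unknown[]) => {
          check(); // precondition runs before the real method
          return (value as Function).apply(obj, args);
        };
      }
      return value;
    },
  });
}
```

Any service wrapped this way enforces its configuration check transparently, without each method repeating it.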
The proxy ensures that the JWT secret check is performed automatically before any method logic is executed:

```typescript
// inside an async function
try {
  const token = await proxiedTokenService.sign({ id: 123 });
  console.log(token);
} catch (error) {
  console.error((error as Error).message);
}

try {
  const payload = await proxiedTokenService.verify('<token>');
  console.log(payload);
} catch (error) {
  console.error((error as Error).message);
}
```

This approach abstracts the repetitive pre-execution checks into the proxy mechanism, keeping the method implementations clean and focused on their core logic. The proxy handler acts as a middleware layer for my static methods, applying the necessary preconditions transparently.

Constructor

What about using a constructor? There is a significant distinction between initializing once and checking environment variables on each method call: the former approach does not account for changes to environment variables after the initial setup.

```typescript
export class TokenService {
  private jwt_secret: string;

  constructor() {
    if (!process.env.JWT_SECRET) {
      throw new Error('JWT secret not found in environment variables!');
    }
    this.jwt_secret = process.env.JWT_SECRET;
  }

  public verify(token: string) {
    // Implementation...
  }

  public sign(payload: string | object | Buffer) {
    // Implementation...
  }
}

const tokenService = new TokenService();
```

The way the service is used stays consistent; the only change lies in the timing of the service's initialization.

Service Initialization

We have reached the stage of initialization, where we can perform the necessary checks before using the service. This is a beneficial practice with extensive scalability options.
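The essence of this stage can be shown with a minimal fail-fast guard: a service that refuses to do anything until it has been explicitly initialized. This is a hedged sketch with hypothetical names (ConfiguredService, doWork), not the token service itself:

```typescript
// Hypothetical sketch: a service that fails fast if used before initialize().
class ConfiguredService {
  private static secret: string | null = null;

  static initialize(secret: string): void {
    if (!secret) {
      throw new Error('Secret not found!');
    }
    ConfiguredService.secret = secret;
  }

  static doWork(): string {
    // Guard: make the initialization dependency explicit at every entry point.
    if (ConfiguredService.secret === null) {
      throw new Error('ConfiguredService.initialize() must be called first');
    }
    return `working with ${ConfiguredService.secret}`;
  }
}
```

The guard turns a subtle misconfiguration into an immediate, descriptive error at the first call site.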
```typescript
require('dotenv').config();
import jsonwebtoken from 'jsonwebtoken';

export class TokenService {
  private static jwt_secret: string = process.env.JWT_SECRET!;

  static initialize = () => {
    this.jwt_secret = process.env.JWT_SECRET!;
    if (!this.jwt_secret) {
      throw new Error('JWT secret not found in environment variables!');
    }
  };

  public static verify = <Result>(token: string): Promise<Result> =>
    new Promise((resolve, reject) => {
      jsonwebtoken.verify(token, TokenService.jwt_secret, (error, decoded) => {
        if (error) {
          reject(error);
        } else {
          resolve(decoded as Result);
        }
      });
    });

  public static sign = (payload: string | object | Buffer): Promise<string> =>
    new Promise((resolve, reject) => {
      try {
        resolve(jsonwebtoken.sign(payload, TokenService.jwt_secret));
      } catch (error) {
        reject(error);
      }
    });
}
```

Initialization acts as a crucial dependency without which the service cannot function. To use this approach effectively, I need to call TokenService.initialize() early in my application's startup sequence, before any other part of the application attempts to use the TokenService. This ensures that the service is properly configured and ready to use.

```typescript
import { TokenService } from 'src/services/TokenService';

TokenService.initialize();
```

This approach assumes that my environment variables and any other required setup do not change while the application is running. But what if the application needs to support dynamic reconfiguration? Then I might consider additional mechanisms to refresh or update the service configuration without restarting the application.

Dynamic Reconfiguration

Supporting dynamic reconfiguration, especially for critical components like TokenService that rely on configuration such as JWT_SECRET, requires a strategy that lets the service update its configuration at runtime without a restart.
For that, we need something like configuration management, which allows us to refresh configuration dynamically from a centralized place. A dynamic configuration refresh mechanism could be a method on the service that reloads its configuration without restarting the application:

```typescript
export class TokenService {
  private static jwt_secret = process.env.JWT_SECRET!;

  public static refreshConfig = () => {
    this.jwt_secret = process.env.JWT_SECRET!;
    if (!this.jwt_secret) {
      throw new Error('JWT secret not found in environment variables!');
    }
  };

  // our verify and sign methods stay the same
}
```

I also need a way to monitor my configuration sources for changes. This could be as simple as watching a file or as complex as subscribing to events from a configuration service. This is just an example:

```typescript
import fs from 'fs';

fs.watch('config.json', (eventType, filename) => {
  if (filename) {
    console.log('Configuration file changed, reloading configurations.');
    TokenService.refreshConfig();
  }
});
```

If active monitoring is not feasible or reliable, we can schedule periodic checks to refresh the configuration instead. This approach is less responsive but can be sufficient, depending on how frequently the configuration changes.

Cron Job

Another valuable option is using a cron job within the Node.js application to periodically check and refresh configuration for services such as the TokenService. This is a practical way to ensure the application adapts to configuration changes without needing a restart, and it is especially useful in environments where configuration may change dynamically (e.g., in cloud environments or when using external configuration management services).
For that, we can use the node-cron package to perform the periodic check:

```typescript
import cron from 'node-cron';
import { TokenService } from 'src/services/TokenService';

cron.schedule(
  '0 * * * *',
  () => {
    TokenService.refreshConfig();
  },
  {
    scheduled: true,
    timezone: 'America/New_York',
  }
);

console.log('Cron job scheduled to refresh TokenService configuration every hour.');
```

The cron job periodically picks up the latest configuration. In this setup, cron.schedule defines a task that calls TokenService.refreshConfig every hour ('0 * * * *' is a cron expression that means "at minute 0 of every hour").

Conclusion

Proper initialization ensures the service is configured with essential environment variables, like the JWT secret, safeguarding against runtime errors and security vulnerabilities. By employing best practices for dynamic configuration, such as periodic checks or on-demand reloading, applications can adapt to changes without downtime. Effectively integrating and managing the TokenService enhances the application's security, maintainability, and flexibility in handling user authentication. I trust this exploration has provided you with meaningful insights and enriched your understanding of service configuration.
Angular, a powerful framework for building dynamic web applications, is known for its component-based architecture. However, one aspect that often puzzles new developers is that Angular components do not have a display: block style by default. This article explores the implications of this design choice, its impact on web development, and how developers can effectively work with it.

The world of front-end development is replete with frameworks that aim to provide developers with robust tools to build interactive and dynamic web applications. Among these, Angular stands out as a powerful platform, known for its comprehensive approach to constructing applications' architecture. Particularly noteworthy is the way Angular handles components — the fundamental building blocks of Angular applications.

Understanding Angular Components

In Angular, components are the fundamental building blocks that encapsulate data binding, logic, and template rendering. They play a crucial role in defining the structure and behavior of your application's interface.

Definition and Role

A component in Angular is a TypeScript class decorated with @Component(), where you define its application logic. Accompanying this class is a template, typically an HTML file, that determines the component's visual representation, and optionally CSS files for styling. The component's role is multifaceted: it manages the data and state necessary for the view, handles user interactions, and can be reused throughout the application.

```typescript
import { Component } from '@angular/core';

@Component({
  selector: 'app-my-component',
  templateUrl: './my-component.component.html',
  styleUrls: ['./my-component.component.css'],
})
export class MyComponent {
  // Component logic goes here
}
```

Angular's Shadow DOM

Angular components use a feature known as the Shadow DOM (emulated by default via ViewEncapsulation.Emulated) to encapsulate their markup and styles, ensuring that they are independent of other components.
This means that styles defined in one component will not leak out and affect other parts of the application. Shadow DOM allows for style encapsulation by creating a boundary around the component. As a developer, it's essential to understand the structure and capabilities of Angular components to fully leverage the power of the framework. Recognizing the inherent encapsulation provided by Angular's Shadow DOM is particularly important when considering how components are displayed and styled within an application.

Display Block: The Non-Default in Angular Components

Angular components differ from standard HTML elements in many ways, one of which is their default display property. Unlike common HTML elements, which come with a default display value of block or inline, Angular does not assign any display rule to a component's host element. Browsers therefore fall back to their default for unknown elements, which is inline rather than block. This is intentional and plays an important role in Angular's encapsulation philosophy and component rendering process.

Comparison With HTML Elements

Standard HTML elements like <div>, <p>, and <h1> come with default styling that can include the CSS display: block property. This means that when you drop a <div> into your markup, it naturally takes up the full width available to it, creating a "block" on the page.

```html
<!-- Standard HTML div element -->
<div>This div is a block-level element by default.</div>
```

In contrast, Angular components start without any explicit display styling of their own. They don't inherently behave as block elements; they are essentially "display-agnostic" until you specify otherwise.

Rationale Behind the Non-Block Default

Angular's choice to diverge from the typical block behavior of HTML elements is deliberate. One reason for this is to encourage developers to consciously decide how each component should be displayed within the application's layout.
It prevents unexpected layout shifts and the overwriting of global styles that may occur when components with block-level styles are introduced into existing content. By not setting a display property by default, Angular invites developers to think responsively and adapt their components to various screen sizes and layout requirements by setting explicit display styles that suit each component's purpose within the context of the application. In the following section, we will explore how to work with the display properties of Angular components, ensuring that they fit seamlessly into your application's design through explicit and intentional styling choices.

Working With Angular's Display Styling

When building applications with Angular, understanding and properly implementing display styling is crucial for achieving the desired layout and responsiveness. Since Angular components come without a preset display rule, it's up to the developer to define how each component should be displayed within the context of the application.

1. Explicitly Setting Display Styles

You have complete control over how an Angular component is displayed by explicitly setting the CSS display property. This can be defined in the component's stylesheet, inline, or even dynamically through component logic.

```css
/* app-example.component.css */
:host {
  display: block;
}
```

```html
<!-- Inline style -->
<app-example-component style="display: block;"></app-example-component>
```

```typescript
// Component logic setting display dynamically
export class ExampleComponent {
  @HostBinding('style.display') displayStyle: string = 'block';
}
```

Choosing to set your component's display style via the stylesheet ensures that you can leverage CSS's full power, including media queries for responsiveness.

2. Responsive Design Considerations

Angular's adaptability allows you to create responsive designs by combining explicit display styles with modern CSS techniques.
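One way to combine the dynamic @HostBinding approach with responsive behavior is to compute the display value from the viewport width in plain, testable code. A minimal sketch — displayFor and the 768px breakpoint are my own choices, not prescribed by Angular:

```typescript
// Hypothetical helper: pick a host display value from the viewport width.
// A component could bind the result, e.g.:
//   @HostBinding('style.display') display = displayFor(window.innerWidth);
function displayFor(viewportWidth: number, breakpoint = 768): 'block' | 'grid' {
  // Below the breakpoint, stack content as a block; otherwise use a grid layout.
  return viewportWidth <= breakpoint ? 'block' : 'grid';
}
```

Keeping the decision in a pure function makes the display logic easy to unit-test independently of the DOM.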
Using media queries, flexbox, and CSS Grid, you can responsively adjust the layout of your components based on the viewport size.

```css
/* app-example.component.css */
:host {
  display: grid;
  grid-template-columns: repeat(auto-fill, minmax(150px, 1fr));
}

@media (max-width: 768px) {
  :host {
    display: block;
  }
}
```

By setting explicit display values in stylesheets and using Angular's data-binding features, you can create a responsive and adaptive user interface. This level of control over styling reflects the thoughtful consideration that Angular brings to the development process, enabling you to create sophisticated, maintainable, and scalable applications. Next, we will wrap up our discussion and revisit the key takeaways from working with Angular components and their display styling strategies.

Conclusion

Throughout this exploration of Angular components and their display properties, it's become apparent that Angular's choice of a non-block default for components is a purposeful design decision. This approach promotes a more thoughtful application of styles and supports encapsulation, a core principle of Angular's architecture. It steers developers toward crafting intentional and adaptive layouts, a necessity in the diverse landscape of devices and screen sizes. By understanding Angular's component architecture and the reasoning behind its display styling choices, developers are better equipped to make informed decisions. Explicit display settings and responsive design considerations are not afterthoughts but integral parts of the design and development process when working with Angular. Embracing these concepts allows developers to fully leverage the framework's capabilities, leading to well-structured, maintainable, and responsive applications that stand the test of time and technology evolution.
The information provided in this article aims to guide Angular developers to harness these tools effectively, ensuring that the user experiences they create are as robust as the components they comprise.