Observability and Application Performance
Data-driven decisions, whether business-critical or technical, come down first to the accuracy, depth, and usability of the data itself. To build the most performant and resilient applications, teams must stretch beyond monitoring into the world of data, telemetry, and observability. As a result, you'll gain a far deeper understanding of system performance, enabling you to tackle key challenges that arise from the distributed, modular, and complex nature of modern technical environments.

Today, and moving into the future, it's no longer about monitoring logs, metrics, and traces alone; instead, it's more deeply rooted in a performance-centric team culture, end-to-end monitoring and observability, and the thoughtful use of data analytics.

In DZone's 2023 Observability and Application Performance Trend Report, our original research delves into emerging trends, covering everything from site reliability and app performance monitoring to observability maturity and AIOps. Readers will also find insights from members of the DZone Community, who cover a selection of hand-picked topics, including the benefits and challenges of managing modern application performance, distributed cloud architecture considerations and design patterns for resiliency, observability vs. monitoring and how to practice both effectively, SRE team scalability, and more.
These days, writing tests is a standard part of development. Unfortunately, we still have to deal from time to time with a situation where a static method is invoked by a tested component. Our goal is to mock out this part and avoid the third-party component's behavior. This article sheds light on the mocking of static methods by using the "inline mock maker" introduced in Mockito 3.4. In other words, this article explains the Mockito.mockStatic method in order to help us with unwanted invocations of static methods.

In This Article, You Will Learn

How to mock and verify static methods with the mockStatic feature
How to set up mockStatic in different Mockito versions

Introduction

Many times, we have to deal with a situation when our code invokes a static method. It can be our own code (e.g., some utility class) or a class from a third-party library. The main concern in unit testing is to focus on the tested component and ignore the behavior of any other component (including static methods). An example is when a tested method in component A is calling an unrelated static method from component B. Even though using static methods is not recommended, we see them a lot (e.g., utility classes). The reasoning for avoiding the usage of static methods is summarized very well in Mocking Static Methods With Mockito. Generally speaking, some might say that when writing clean object-oriented code, we shouldn't need to mock static classes. This could typically hint at a design issue or code smell in our application. Why? First, a class depending on a static method has tight coupling, and second, it nearly always leads to code that is difficult to test. Ideally, a class should not be responsible for obtaining its dependencies, and if possible, they should be externally injected. So, it's always worth investigating whether we can refactor our code to make it more testable. Of course, this is not always possible, and sometimes we need to mock static methods.

A Simple Utility Class

Let's define a simple SequenceGenerator utility class used in this article as a target for our tests. This class has two "dumb" static methods (there's nothing fancy about them). The first method, nextId, generates a new ID with each invocation, and the second, nextMultipleIds, generates as many IDs as requested by the passed argument.

Java

@UtilityClass
public class SequenceGenerator {

    private static AtomicInteger counter;

    static {
        counter = new AtomicInteger(1);
    }

    public static int nextId() {
        return counter.getAndIncrement();
    }

    public static List<Integer> nextMultipleIds(int count) {
        var newValues = new ArrayList<Integer>(count);
        for (int i = 0; i < count; i++) {
            newValues.add(counter.getAndIncrement());
        }
        return newValues;
    }
}

MockedStatic Object

In order to be able to mock static methods, we need to wrap the impacted class with the "inline mock maker." The mocking of static methods from our SequenceGenerator class introduced above is achievable via a MockedStatic instance retrieved from the Mockito.mockStatic method. This can be done as:

Java

try (MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class)) {
    ...
}

Or

Java

MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class);
...
seqGeneratorMock.close();

The created MockedStatic instance always has to be closed. Otherwise, we risk ugly side effects in subsequent tests running in the same thread when the same static method is involved (i.e., SequenceGenerator in our case).
Therefore, the first option seems better, and it is used in most articles on this topic. The explanation can be found on the JavaDoc site (chapter 48):

When using the inline mock maker, it is possible to mock static method invocations within the current thread and a user-defined scope. This way, Mockito assures that concurrently and sequentially running tests do not interfere. To make sure a static mock remains temporary, it is recommended to define the scope within a try-with-resources construct.

To learn more about this topic, check out these useful links: the official site, the JavaDoc site, and the GitHub repository.

Mock Method Invocation

Static methods (e.g., our nextId or nextMultipleIds methods defined above) can be mocked with MockedStatic.when. This method accepts a functional interface defined by MockedStatic.Verification. There are two cases we can deal with.

Mocked Method With No Argument

The simplest case is mocking a static method with no argument (the nextId method in our case). In this case, it's sufficient to pass only a method reference to the seqGeneratorMock.when method. The returned value is specified in the standard way (e.g., with the thenReturn method).

Java

@Test
void whenWithoutArgument() {
    try (MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class)) {
        int newValue = 5;

        seqGeneratorMock.when(SequenceGenerator::nextId).thenReturn(newValue);

        assertThat(SequenceGenerator.nextId()).isEqualTo(newValue);
    }
}

Mocked Method With One or More Arguments

Usually, we have a static method with some arguments (nextMultipleIds in our case). Then, we need to use a lambda expression instead of the method reference. Again, we can use the standard methods (e.g., then, thenReturn, thenThrow, etc.) to handle the response with the desired behavior.

Java

@Test
void whenWithArgument() {
    try (MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class)) {
        int newValuesCount = 5;

        seqGeneratorMock.when(() -> SequenceGenerator.nextMultipleIds(newValuesCount))
                .thenReturn(List.of(1, 2, 3, 4, 5));

        assertThat(SequenceGenerator.nextMultipleIds(newValuesCount)).hasSize(newValuesCount);
    }
}

Verify Method Invocation

Similarly, we can also verify calls to the mocked component by calling the seqGeneratorMock.verify method with a method reference:

Java

@Test
void verifyUsageWithoutArgument() {
    try (MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class)) {
        var person = new Person("Pamela");

        seqGeneratorMock.verify(SequenceGenerator::nextId);
        assertThat(person.getId()).isEqualTo(0);
    }
}

Or with a lambda expression:

Java

@Test
void verifyUsageWithArgument() {
    try (MockedStatic<SequenceGenerator> seqGeneratorMock = mockStatic(SequenceGenerator.class)) {
        List<Integer> nextIds = SequenceGenerator.nextMultipleIds(3);

        seqGeneratorMock.verify(() -> SequenceGenerator.nextMultipleIds(ArgumentMatchers.anyInt()));
        assertThat(nextIds).isEmpty();
    }
}

Note: please be aware that seqGeneratorMock doesn't provide any value here, as the static methods are still mocked with the defaults. There's no spy version so far. Therefore, any expected return value has to be mocked, or the default value is returned.

Setup

The mockStatic feature is enabled in Mockito 5.x by default. Therefore, no special setup is needed. But we need to set up Mockito for the older versions (e.g., 4.x).

Mockito 5.x+

As already mentioned, we don't need to set up anything in version 5.x.
See the statement in the GitHub repository: Mockito 5 switches the default mock maker to mockito-inline, and now requires Java 11.

Old Mockito Versions

When an older version is used and we use the inline mock maker via mockStatic, we can see an error like this:

Plain Text

org.mockito.exceptions.base.MockitoException:
The used MockMaker SubclassByteBuddyMockMaker does not support the creation of static mocks

Mockito's inline mock maker supports static mocks based on the Instrumentation API.
You can simply enable this mock mode, by placing the 'mockito-inline' artifact where you are currently using 'mockito-core'.
Note that Mockito's inline mock maker is not supported on Android.

	at com.github.aha.poc.junit.person.StaticUsageTests.mockStaticNoArgValue(StaticUsageTests.java:15)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)
	at java.base/java.util.ArrayList.forEach(ArrayList.java:1596)

Generally, there are two options to enable it for such Mockito versions (see all Mockito versions here).

Use MockMaker Resource

The first option is based on adding <project>\src\test\resources\mockito-extensions\org.mockito.plugins.MockMaker to our Maven project with this content:

Plain Text

mock-maker-inline

Use mockito-inline Dependency

The other, and probably better, option is adding the mockito-inline dependency:

XML

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-inline</artifactId>
    <version>5.2.0</version>
    <scope>test</scope>
</dependency>

Note: this dependency already contains the MockMaker resource mentioned above. Therefore, this option seems more convenient.

Maven Warning

No matter what version is used (see above), the Maven build can produce these warnings:

Plain Text

WARNING: A Java agent has been loaded dynamically (<user_profile>\.m2\repository\net\bytebuddy\byte-buddy-agent\1.12.9\byte-buddy-agent-1.12.9.jar)
WARNING: If a serviceability tool is in use, please run with -XX:+EnableDynamicAgentLoading to hide this warning
WARNING: If a serviceability tool is not in use, please run with -Djdk.instrument.traceUsage for more information
WARNING: Dynamic loading of agents will be disallowed by default in a future release
OpenJDK 64-Bit Server VM warning: Sharing is only supported for boot loader classes because bootstrap classpath has been appended

Mockito works correctly even with these warnings. Whether they appear probably depends on the used tool, JDK version, etc.

Conclusion

In this article, the mocking of static methods with the help of the Mockito inline mock maker was covered. The article started with the basics of static mocking and then followed with a demonstration of when and verify usage (either with a method reference or a lambda expression). In the end, the setup of the Mockito inline mock maker was shown for different Mockito versions. The used source code can be found here.
Docker Extensions was announced as a beta at DockerCon 2022 and became generally available in January 2023. Developing a performance-tooling extension had been on my to-do list for a long time, but due to my master's degree, I couldn't spend time learning the Docker Extensions SDK. I expected someone would have created such an extension by now, considering it's almost 2024. It's surprising to me that none has been developed, as far as I know. But no more. Introducing the Apache JMeter Docker Extension. Now, you can run Apache JMeter tests in Docker Desktop without installing JMeter locally. In this blog post, we will explore how to get started with this extension and understand its functionality. We will also cover generating HTML reports and other related topics.

About Docker Extensions

Docker Extensions enable third parties to extend the functionality of Docker by integrating their tools. Think of it like a mobile app store, but for Docker. I frequently use the official Docker Disk Usage extension to analyze disk usage and free up unused space. Extensions enhance the productivity and workflow of developers. Check out the Docker Extension marketplace for some truly amazing extensions. Go see it for yourself!

Prerequisite for Docker Extension

The only prerequisite is to have Docker Desktop 4.8.0 or later installed locally.

Apache JMeter Docker Extension

The Apache JMeter Docker Extension is an open-source, lightweight extension and the only one of its kind available as of this writing. It helps you run JMeter tests on Docker without installing JMeter locally. This extension simplifies the process of setting up and executing JMeter tests within Docker containers, streamlining your performance testing workflow. Whether you're a seasoned JMeter pro or just getting started, this tool can help you save time and resources.

Features

Includes the base image qainsights/jmeter:latest by default
Lightweight and secured container
Supports JMeter plugins
Mounts a volume for easy management
Supports property files
Supports proxy configuration
Generates logs and results
Intuitive HTML report
Displays runtime console logs
Timely notifications

How To Install Apache JMeter Docker Extension

Installation is a breeze. There are two ways you can install the extension.

Command Line

Run docker extension install qainsights/jmeter-docker-extension:0.0.2 in your terminal and follow the prompts. IMPORTANT: Before you install, make sure you are using the latest version tag. You can check the latest tags in Docker Hub.

Shell

$> docker extension install qainsights/jmeter-docker-extension:0.0.1
Extensions can install binaries, invoke commands, access files on your machine and connect to remote URLs. Are you sure you want to continue? [y/N] y
Image not available locally, pulling qainsights/jmeter-docker-extension:0.0.1...
Extracting metadata and files for the extension "qainsights/jmeter-docker-extension:0.0.1"
Installing service in Desktop VM...
Setting additional compose attributes
Installing Desktop extension UI for tab "JMeter"...
Extension UI tab "JMeter" added.
Starting service in Desktop VM......
Service in Desktop VM started
Extension "JMeter" installed successfully

Web

Here is the direct link to install the JMeter extension: Install JMeter Docker Extension. Follow the prompts, then click on Install anyway to install the extension.
How To Get Started With JMeter Docker Extension

After installing the JMeter Docker extension, navigate to the left sidebar and click on JMeter. Now, it is time to execute our first tests on Docker using the JMeter extension. The following are the prerequisites to execute JMeter tests:

a valid JMeter test plan
optional proxy credentials
an optional JMeter properties file

The user interface is pretty simple, intuitive, and self-explanatory. All it has is text fields, buttons, and the output console log. The extension has the following sections:

Image and Volume: This extension works well with the qainsights/jmeter:latest image. Other images might not work; I have not tested them. Mapping the volume from the host to the Docker container is crucial for sharing the test plan, CSV test data, other dependencies, property files, results, and other files.
Test Plan: A valid test plan must be kept inside the shared volume.
Property Files: This section helps you pass runtime parameters to the JMeter test plan.
Logs and Results: This section helps you configure the logs and results. After each successful test, logs and an HTML report will be generated and saved in the shared volume.
Proxy and its credentials: Optionally, you can configure a proxy and its credentials. This is helpful when you are on a corporate network, so that the container can access the application being tested.

Below is an example test where the local volume /Users/naveenkumar/Tools/apache-jmeter-5.6.2/bin/jmeter-tests is mapped to the container volume jmeter-tests. The artifacts in that local folder will be shared with the Docker container once it is up and running. In this example, /jmeter-tests/CSVSample.jmx will be executed inside the container, using the loadtest.properties file from the mapped volume. Once all the values are configured, hit the Run JMeter Test button.

During the test, you can pay attention to a couple of sections. One is the console logs: for each test, the runtime logs will be streamed from the Docker container. In case there are any errors, you can check them under the Notifications section. Once the test is done, Notifications will display the status and the location of the auto-generated HTML report (in your mapped volume).

How the JMeter Docker Extension Works and Its Architecture

On a high level, this extension is simple. Once you click on the Run button, the extension first validates all the input and the required fields. If the validation check passes, the extension looks up the artifacts from the mapped volume. Then, it passes all respective JMeter arguments to the image qainsights/jmeter:latest. If the image is not present, it will be pulled from the Docker container registry. Then, the container will be created by Docker and the test execution performed. During the test execution, container logs will be streamed to the output console logs. To stop the test, click the Terminate button to nuke the container. This action is irreversible and will not generate any test results. Once the test is done, the HTML report and the logs will be shared with the mapped volume.

How To Uninstall the Extension

There are two ways to uninstall the extension: using the CLI, issue docker extension uninstall qainsights/jmeter-docker-extension:0.0.1, or use Docker Desktop.
Navigate to Docker Desktop > Extensions > JMeter, then click on the menu to uninstall.

Known Issues

There are a couple of known issues (or more, if you find them):

You can start the test as many times as you want, generating more load on the target under test.
Only frequently used JMeter arguments are supported. If you would like to add more arguments, please raise an issue in the GitHub repo.

Upcoming Features

There are a couple of features I am planning to implement based on the reception:

Add a dashboard to track the tests
Display graphs/charts at runtime
A way to add JMeter plugins on the fly

If you have any other exciting ideas, please let me know: JMeter Docker Extension GitHub Repo.

Conclusion

In conclusion, the introduction of the Apache JMeter Docker Extension is a significant step forward for developers and testers looking to streamline their performance testing workflow. With this open-source and lightweight extension, you can run JMeter tests in Docker without the need to install JMeter locally, saving you time and resources. Despite a few known issues and limitations, such as supporting only frequently used JMeter arguments, the extension holds promise for the future. In summary, the Apache JMeter Docker Extension provides a valuable tool for developers and testers, enabling them to perform JMeter tests efficiently within Docker containers, and it's a welcome addition to the Docker Extension ecosystem. It's worth exploring for anyone involved in performance testing and looking to simplify their workflow.
Coding is not just about strings of code — it's an art form, a creative endeavor where developers use Agile methodologies as their paintbrushes and palettes to craft software masterpieces. In this article, we'll explore how Agile strategies align with the creative spirit of developers, turning the process of coding into a journey of craftsmanship.

The Agile Developer's Canvas

Imagine the Agile Developer's Canvas as an artist's sketchbook. It's a place where developers lay the foundation for their creations. User stories, sprint planning, and retrospectives are the vivid colors on this canvas, guiding developers through the creative process. This visual representation ensures that the development journey is not just a technical process but a canvas filled with the aspirations and visions of the creators.

Practical Considerations

When filling the Agile Developer's Canvas, consider it not just as a planning tool but as a living document that evolves with the project. Regularly revisit and update the canvas during retrospectives to reflect new learnings and changes in project goals. This dynamic approach ensures that the canvas stays relevant and continues to guide the team through the entire creative journey.

Encourage teams to use the Agile Developer's Canvas as a communication tool. Display it prominently in team spaces to serve as a constant reminder of the project's vision and priorities. This visual representation can align team members, fostering a shared understanding and commitment to the creative process.

Refactoring as Sculpting

Think of refactoring as a sculptor refining their masterpiece. Developers, like artists, continuously shape and refine their code to enhance its quality and elegance. It's not just about fixing bugs; it's about transforming the code into a work of art. This section encourages developers to view refactoring as a form of creative expression, providing practical tips for chiseling away imperfections and creating code that's not just functional but beautiful.

Practical Considerations

Incorporate refactoring into the definition of done (DoD) for user stories. This ensures that refactoring isn't treated as a separate task but is an integral part of delivering quality code with each iteration.

Promote a mindset shift from viewing refactoring as merely fixing technical debt to considering it an opportunity for creative expression. Encourage developers to share their refactoring stories, celebrating instances where code was transformed into something more elegant and maintainable.

Suggest organizing occasional "refactoring workshops" where team members collaboratively explore and refactor specific parts of the codebase. This not only enhances collective knowledge but also fosters a sense of shared responsibility for the overall craftsmanship of the code.

The Symphony of Continuous Integration

In the world of software development, Continuous Integration is like the conductor of a symphony, bringing harmony to the integration of various code contributions. Automated testing becomes the virtuoso, ensuring the quality of the software symphony. Real-world examples paint a picture of a well-coordinated orchestra of developers, each contributing to a harmonious composition of code that resonates with excellence.

Practical Considerations

Consider integrating Continuous Integration practices into sprint planning. Devote time to discuss integration strategies, potential challenges, and how each team member's contributions will harmonize within the larger development symphony.
Highlight the role of feedback loops within the Continuous Integration process. Quick feedback on integration results helps developers course-correct early, preventing dissonance in the overall symphony.

Encourage the use of visual dashboards displaying the status of builds and tests. This transparency allows team members to appreciate the collaborative effort and progress made in orchestrating a seamless software symphony.

User Stories as Narrative Threads

User stories are the threads that weave through the fabric of software development. Instead of dry technicalities, think of them as the characters in a story. This section delves into the art of crafting compelling user stories, transforming them from mere requirements into engaging narratives. By infusing creativity into these stories, developers can create software that tells a meaningful tale, capturing the attention and satisfaction of end-users.

Practical Considerations

Advocate for user story workshops that involve cross-functional team members. This collaborative approach ensures that diverse perspectives contribute to crafting meaningful narratives, fostering a sense of inclusivity in the creative process.

Promote storytelling techniques within the team. Encourage developers to share their favorite user stories, emphasizing the impact on end-users and how each story contributes to the overarching narrative of the software.

Consider implementing a "user story showcase" during sprint reviews, where developers present the narratives behind completed user stories. This not only celebrates achievements but also reinforces the connection between code and the meaningful stories it helps tell.

Conclusion

As developers embark on the Agile journey, they're not just writing code; they're creating something meaningful. The Agile Developer's Canvas, refactoring as sculpting, the symphony of continuous integration, and user stories as narrative threads provide the tools for developers to transcend the technicalities and embrace the artistry of their craft. This journey isn't just about delivering exceptional software; it's about contributing to the evolving narrative of software development as a creative and human-centered art form. By embracing this perspective, developers become not just coders but artists, each line of code a stroke on the canvas of digital innovation.
We can agree that decoupling is a good practice that simplifies the code and the maintainability of the project. A common way of decoupling the code is to divide the responsibilities into different layers. A very common division is:

View layer: In charge of rendering HTML and interacting with the user
Domain layer: In charge of the business logic
Infra layer: In charge of getting the data from the backend and returning it to the domain layer. (Here, it is very common to use the repository pattern, which is just a contract to get the data. The contract is unique, but you can have multiple implementations; for example, one for a REST API and another for a GraphQL API. You should be able to change the implementation without changing other pieces of the code.)

Let's see a couple of use cases where it is very typical to put performance over decoupling. (Spoiler: we can have both.)

Imagine you have an endpoint that returns the list of products, and one of the fields is the category_id. The response can be something like this (I removed other fields to keep the example simple):

JSON

[
  { id: 1, name: "Product 1", category_id: 1 },
  { id: 2, name: "Product 2", category_id: 2 },
  ...
]

We need to show the category name in the frontend (not the ID), so we need to call another endpoint to get the category name. That endpoint returns something like this:

JSON

[
  { id: 1, name: "Mobile" },
  { id: 2, name: "TVs" },
  { id: 3, name: "Keyboards" },
  ...
]

You might think the backend should do the join and return everything in one request, but that is not always possible. We can do the join in the frontend, in the function or method in charge of recovering the products: we do both requests and join the information. For example:

TypeScript

async function getProductList(): Promise<Product[]> {
  const products = await fetchProducts()
  const categories = await fetchCategories()

  return products.map(product => {
    const category = categories.find(category => category.id === product.category_id);
    return {
      ...product,
      category_name: category.name
    }
  })
}

Our application doesn't need to know that two calls are needed to recover the information, and we can use the category_name in the frontend without any problem. Now imagine you need to show the list of categories, for example, in a dropdown. You can reuse the fetchCategories function, as it does exactly what you need. In your view, the code is something like this:

Vue.js Component

<template>
  <dropdown :options="categories" />
  <product-list :products="products" />
</template>

<script lang="ts" setup>
import { fetchCategories, getProductList } from '@/repositories';

const categories = await fetchCategories();
const products = await getProductList();
</script>

At that point, you realize you are making two calls to the same endpoint to recover the same data - data you already recovered to compose the product list - and that is not good in terms of performance, network load, backend load, etc. At this moment, you start to think about how to reduce the number of calls to the backend; in this case, just to reuse the category list. You may be tempted to move the calls to the view and do the joining of the products and the categories there.
Vue.js Component

// ❌❌❌ Not a nice solution
<template>
  <dropdown :options="categories" />
  <product-list :products="products" />
</template>

<script lang="ts" setup>
import { fetchCategories, fetchProducts } from '@/repositories';

const categories = await fetchCategories();
const products = await fetchProducts().map(product => {
  const category = categories.find(category => category.id === product.category_id);
  return {
    ...product,
    category_name: category.name
  };
});
</script>

With that, you resolved the performance problems, but you added another BIG problem: infra, view, and domain coupling. Now your view knows the shape of the data in the infra (backend), which makes it hard to reuse the code. We can go deeper and make things even worse: what happens if your header bar is in another component that needs the list of categories? You need to think about the application in a global way. Imagine something more complex: a scenario where you need the categories in the header, product list, filters, and footer. With the previous approach, your app layer (Vue, React, etc.) needs to think about how to get the data to minimize the requests. And that is not good, as the app layer should be focused on the view, not on the infra.

Using a Global Store

One solution to this problem is to use a global store (Vuex, Pinia, Redux, etc.) to delegate the requests and just use the store in the view. The store should only load the data if it is not loaded yet, and the view should not care about how the data is loaded. This sounds like a cache, right? We solved the performance issue, but we still have the infra and the view coupled.

Infra Cache to the Rescue

To decouple the infra and the view as much as possible, we should move the cache to the infra layer (the layer in charge of getting the data from the backend). By doing that, we can call the infra methods at any time while doing just a single request to the backend; the important concept is that the domain, the application, and the view know nothing about the cache, the network speed, the number of requests, etc. The infra layer is just a layer to get the data with a contract (how to ask for the data and how the data is returned). Following the decoupling principles, we should be able to change the infra-layer implementation without changing the domain, application, or view layers. For example, we can replace the backend that uses REST with a backend that uses GraphQL, and we can get the products with the category names without doing two requests. But again, this is something the infra layer should care about, not the view.

There are different strategies you can follow to implement the cache in the infra layer, such as HTTP caching (a proxy or the browser's internal cache); but for better flexibility in invalidating the caches in the frontend, it's better that our application (the infra layer again) manages the cache. If you are using Axios, you can use Axios Cache Interceptor to manage the cache in the infra layer. This library makes caching very simple:

TypeScript

// Example from the axios-cache-interceptor page
import Axios from 'axios';
import { setupCache } from 'axios-cache-interceptor';

// same object, but with updated typings.
const axios = setupCache(Axios);

const req1 = axios.get('https://api.example.com/');
const req2 = axios.get('https://api.example.com/');

const [res1, res2] = await Promise.all([req1, req2]);

res1.cached; // false
res2.cached; // true

You only need to wrap the axios instance with the cache interceptor, and the library will take care of the rest.
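Building on the snippet above, here is a minimal sketch of how a short cache lifetime could be configured; it assumes axios-cache-interceptor's ttl option (in milliseconds) and foreshadows the TTL discussion below:

TypeScript

import Axios from 'axios';
import { setupCache } from 'axios-cache-interceptor';

// Assumption: the `ttl` option sets the default cache lifetime in milliseconds.
// A low value (10 seconds here) keeps responses fresh while still
// de-duplicating the requests fired during a single page render.
const axios = setupCache(Axios, { ttl: 10 * 1000 });

// Both components can now ask for the categories independently;
// only the first call within the 10-second window hits the backend.
const categories1 = await axios.get('https://api.example.com/categories');
const categories2 = await axios.get('https://api.example.com/categories');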
TTL

TTL is the time the cache is valid. After that time, the cache is invalidated and the next request goes to the backend. The TTL is a very important concept, as it defines how fresh the data is. When you are caching data, a challenging problem is data inconsistency. In our example, think of a shopping cart: if it's cached and the user adds a new product, and your app then makes a request to get the updated version of the cart, it will get the cached version, and the user will not see the new product. There are strategies to invalidate the cache and solve this problem, but that is out of the scope of this post. However, you need to know that this is not a trivial problem: different use cases need different strategies. The longer the TTL, the bigger the data inconsistency problem, as more events can happen in that time. But for the goal we are looking for (allowing us to decouple the code easily), a very low TTL (e.g., 10 seconds) is enough to remove the data inconsistency problem.

Why Is a Low TTL Enough?

Think about how the user interacts with the application:

The user asks for a URL (it can be part of a SPA or an SSR page).
The application creates the layout of the page, mounting the independent components: the header, the footer, the filters, and the content (the product list in the example).
Each component asks for the data it needs.
The application renders the page with the recovered data and sends it to the browser (SSR) or injects/updates it in the DOM (SPA).

All those steps are repeated on each page change (maybe partially in a SPA), and most importantly, they are executed in a short period of time (maybe milliseconds). So with a low TTL, we can be pretty sure we will make only one request to the backend, and we will not have data inconsistency problems, as by the next page change or user interaction the cache has expired and we will get fresh data.

Summarizing

This low-TTL caching strategy is a very good solution to decouple the infra and the view:

Developers don't need to think about how to get the data to minimize the requests in the view layer. If you need the list of categories in a sub-component, you ask for it and don't need to care whether another component is requesting the same data.
Avoids maintaining a global app state (stores).
Makes it more natural to do multiple requests: follow the contract in a repository pattern to get the data you need in the repository layer, and do the join in the infra layer.
In general terms, simplifies the code complexity.
No cache invalidation challenges, as the TTL is very low (except maybe for some very specific use cases).
What Is a Micro Frontend?

Micro frontends are an architectural approach that extends the concept of microservices to the front end of web applications. In a micro frontend architecture, a complex web application is broken down into smaller, independently deployable, and maintainable units called micro frontends. Each micro frontend is responsible for a specific part of the user interface and its related functionality. Key characteristics and concepts of micro frontends include:

Independence: Micro frontends are self-contained and independently developed, tested, and deployed. This autonomy allows different teams to work on different parts of the application with minimal coordination.
Technology agnostic: Each micro frontend can use different front-end technologies (e.g., Angular, React, Vue.js) as long as they can be integrated into the parent or Shell application.
Isolation: Micro frontends are isolated from each other, both in terms of code and dependencies. This isolation ensures that changes in one micro frontend do not impact others.
Integration: A container or Shell application is responsible for integrating and orchestrating the micro frontends. It provides the overall structure of the user interface and handles routing between micro frontends.
Independent deployment: Micro frontends can be deployed independently, allowing for continuous delivery and faster updates. This reduces the risk of regression issues and accelerates the release cycle.
Loose coupling: Micro frontends communicate through well-defined APIs and shared protocols, such as HTTP, allowing them to be loosely coupled. This separation of concerns simplifies development and maintenance.
User Interface composition: The container application assembles the user interface by composing the micro frontends together. This composition can be done on the server side (Server-Side Includes) or client side (Client-Side Routing).
Scaling and performance: Micro frontends enable the horizontal scaling of specific parts of an application, helping to optimize performance for frequently accessed areas.
Decentralized teams: Different teams or development groups can work on individual micro frontends. This decentralization is advantageous for large or distributed organizations.

Micro frontend architectures are particularly useful in large, complex web applications, where a monolithic approach might lead to development bottlenecks, increased complexity, and difficulties in maintaining and scaling the application. By using micro frontends, organizations can achieve greater flexibility, agility, and maintainability in their front-end development processes, aligning with the broader trend of microservices in the world of software architecture.

Micro Frontends Hosted in a Single Shell UI

Let's look at how two Angular micro frontends can be hosted in a single Shell UI. To do this, you can use a micro frontend framework like single-spa or qiankun. These frameworks enable you to integrate multiple independently developed micro frontends into a single application Shell. Here's a high-level overview of how to set up such an architecture:

1. Create the Shell Angular Application

Set up your Shell Angular application as the main container for hosting the micro frontends. You can create this application using the Angular CLI or any other preferred method.

2. Create the Micro Frontends

Create your two Angular micro frontends as separate Angular applications.
Each micro frontend should have its own routing and functionality.

3. Configure Routing for Micro Frontends

In each micro frontend application, configure the routing so that each micro frontend has its own set of routes. You can use Angular routing for this (see the sketch at the end of this section).

4. Use a Micro Frontend Framework

Integrate a micro frontend framework like single-spa or qiankun into your Shell Angular application. Here's an example of how to use single-spa in your Shell Angular application. Install single-spa:

Shell

npm install single-spa

Shell Angular Application Code

In your Shell Angular application, configure single-spa to load the micro frontends.

TypeScript

import { registerApplication, start } from 'single-spa';

// Register the micro frontends
registerApplication({
  name: 'customer-app',
  app: () => System.import('customer-app'), // Load customer-app
  activeWhen: (location) => location.pathname.startsWith('/customer-app'),
});

registerApplication({
  name: 'accounts-app',
  app: () => System.import('accounts-app'), // Load accounts-app
  activeWhen: (location) => location.pathname.startsWith('/accounts-app'),
});

// Start single-spa
start();

5. Host Micro Frontends

Configure your Shell Angular application's routing to direct to the respective micro frontends based on the URL. For example, when a user accesses /customer-app, the shell should load the customer micro frontend, and for /accounts-app, it should load the accounts micro frontend.

6. Development and Build

Develop and build your micro frontends separately. Each should be a standalone Angular application.

7. Deployment

Deploy the Shell Angular application along with the micro frontends, making sure they are accessible from the same domain.

With this setup, your Shell Angular application will act as the main container for hosting the micro frontends, and you can navigate between the micro frontends within the shell's routing. This allows you to have a single Angular UI that hosts multiple micro frontends, each with its own functionality.
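To make step 3 a bit more concrete, here is a minimal sketch of what one micro frontend's own route set could look like; the component names are hypothetical and only illustrate that each micro frontend owns the routes under its URL prefix:

TypeScript

import { Routes } from '@angular/router';
// Hypothetical components of the customer micro frontend.
import { CustomerListComponent } from './customer-list.component';
import { CustomerDetailComponent } from './customer-detail.component';

// All routes live under the /customer-app prefix that the shell's
// activeWhen check matches, so the shell and the micro frontend agree on URLs.
export const routes: Routes = [
  { path: 'customer-app', component: CustomerListComponent },
  { path: 'customer-app/:id', component: CustomerDetailComponent },
];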
On the 19th of September, 2023, Java 21 was released. It is time to take a closer look at the changes since the last LTS release, which is Java 17. In this blog, some of the changes between Java 17 and Java 21 are highlighted, mainly by means of examples. Enjoy!

Introduction

First of all, the short introduction above is not entirely correct, because Java 21 is casually mentioned as being an LTS release. An elaborate explanation is given in this blog of Nicolai Parlog. In short, Java 21 is a set of specifications defining the behaviour of the Java language, the API, the virtual machine, etc. A reference implementation of Java 21 is implemented by OpenJDK. Updates to the reference implementation are made in this OpenJDK repository. After the release, a fork is created, jdk21u. This jdk21u fork is maintained and will receive updates for a longer time than the regular 6-month cadence. Even with jdk21u, there is no guarantee that fixes are made during a longer time period. This is where the different vendor implementations make a difference: they build their own JDKs and make them freely available, often with commercial support. So, it is better to say "JDK 21 is a version for which many vendors offer support."

What has changed between Java 17 and Java 21? A complete list of the JEPs (JDK Enhancement Proposals) can be found at the OpenJDK website. There you can read the nitty-gritty details of each JEP. For a complete list of what has changed per release since Java 17, the Oracle release notes give a good overview. In the next sections, some of the changes are explained by example, but it is mainly up to you to experiment with these new features in order to get acquainted with them. Do note that no preview or incubator JEPs are considered here. The sources used in this post are available at GitHub. Check out an earlier blog if you want to know what has changed between Java 11 and Java 17. The last thing to mention in this introduction is the availability of a Java playground, where you can experiment with Java from within your browser.

Prerequisites

Prerequisites for this blog are:

You must have JDK 21 installed;
You need some basic Java knowledge.

JEP444: Virtual Threads

Let's start with the most important new feature in JDK 21: virtual threads. Virtual threads are lightweight threads that dramatically reduce the effort of writing, maintaining, and observing high-throughput concurrent applications. Up till now, threads were implemented as wrappers around Operating System (OS) threads. OS threads are costly, and if you send an HTTP request to another server, you will block this thread until you have received the answer from the server. The processing part (creating the request and processing the answer) is just a small portion of the entire time the thread is blocked: sending the request and waiting for the answer takes up much more time than the processing part. A way to circumvent this is to use an asynchronous style, but the disadvantage of that approach is a more complex implementation. This is where virtual threads come to the rescue. You are able to keep the implementation as simple as before and still have the scalability of the asynchronous style.
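Before the executor-based demo below, here is a minimal sketch of creating single virtual threads directly, using the Thread.ofVirtual builder and the Thread.startVirtualThread shorthand (both part of the JDK 21 API; the class and thread names are illustrative):

Java

public class HelloVirtualThread {

    public static void main(String[] args) throws InterruptedException {
        // The builder API: optionally name the thread and start it with a task.
        Thread vt1 = Thread.ofVirtual().name("my-virtual-thread").start(
                () -> System.out.println("Hello from " + Thread.currentThread()));

        // The shorthand factory method does the same without a builder.
        Thread vt2 = Thread.startVirtualThread(
                () -> System.out.println("Hello from " + Thread.currentThread()));

        // Wait for both virtual threads to finish before the JVM exits.
        vt1.join();
        vt2.join();
    }
}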
The Java application PlatformThreads.java demonstrates what happens when creating 1.000, 10.000, 100.000 and 1.000.000 threads concurrently. The threads only wait for one second. Depending on your machine, you will get different results because the threads are bound to the OS threads.

Java

public class PlatformThreads {

    public static void main(String[] args) {
        testPlatformThreads(1000);
        testPlatformThreads(10_000);
        testPlatformThreads(100_000);
        testPlatformThreads(1_000_000);
    }

    private static void testPlatformThreads(int maximum) {
        long time = System.currentTimeMillis();
        try (var executor = Executors.newCachedThreadPool()) {
            IntStream.range(0, maximum).forEach(i -> {
                executor.submit(() -> {
                    Thread.sleep(Duration.ofSeconds(1));
                    return i;
                });
            });
        }
        time = System.currentTimeMillis() - time;
        System.out.println("Number of threads = " + maximum + ", Duration(ms) = " + time);
    }
}

The output of running this application is the following:

Shell

Number of threads = 1000, Duration(ms) = 1094
Number of threads = 10000, Duration(ms) = 1625
Number of threads = 100000, Duration(ms) = 5292
[21,945s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f8525d00000-0x00007f8525d04000).
#
# A fatal error has been detected by the Java Runtime Environment:
# Native memory allocation (mprotect) failed to protect 16384 bytes for memory to guard stack pages
# An error report file with more information is saved as:
# /home/<user_dir>/MyJava21Planet/hs_err_pid8277.log
[21,945s][warning][os,thread] Attempt to protect stack guard pages failed (0x00007f8525c00000-0x00007f8525c04000).
[thread 82370 also had an error]
[thread 82371 also had an error]
[21,946s][warning][os,thread] Failed to start thread "Unknown thread" - pthread_create failed (EAGAIN) for attributes: stacksize: 1024k, guardsize: 0k, detached.
[21,946s][warning][os,thread] Failed to start the native thread for java.lang.Thread "pool-4-thread-32577"
...

What do you see here? The application takes about 1s for 1.000 threads, 1.6s for 10.000 threads, 5.3s for 100.000 threads, and it crashes with 1.000.000 threads. The boundary for the maximum number of OS threads on my machine lies somewhere between 100.000 and 1.000.000 threads.

Change the application by replacing Executors.newCachedThreadPool with the new Executors.newVirtualThreadPerTaskExecutor (VirtualThreads.java).

Java

try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    IntStream.range(0, maximum).forEach(i -> {
        executor.submit(() -> {
            Thread.sleep(Duration.ofSeconds(1));
            return i;
        });
    });
}

Run the application again. The output is the following:

Shell

Number of threads = 1000, Duration(ms) = 1020
Number of threads = 10000, Duration(ms) = 1056
Number of threads = 100000, Duration(ms) = 1106
Number of threads = 1000000, Duration(ms) = 1806
Number of threads = 10000000, Duration(ms) = 22010

The application takes about 1s for 1.000 threads (similar to the OS threads), 1s for 10.000 threads (better than OS threads), 1.1s for 100.000 threads (also better), 1.8s for 1.000.000 threads (does not crash), and even 10.000.000 threads are no problem, taking about 22s to execute. This is quite amazing and incredible, isn't it?

JEP431: Sequenced Collections

Sequenced Collections fill the lack of a collection type that represents a sequence of elements with a defined encounter order. Besides that, a uniform set of operations that apply to such collections was absent. There have been quite some complaints from the community about this topic, and this is now solved by the introduction of some new collection interfaces, whose hierarchy is based on the overview created by Stuart Marks. Besides the newly introduced interfaces, some unmodifiable wrappers are available now.
Java

Collections.unmodifiableSequencedCollection(sequencedCollection)
Collections.unmodifiableSequencedSet(sequencedSet)
Collections.unmodifiableSequencedMap(sequencedMap)

The next sections show these new interfaces based on the application SequencedCollections.java.

SequencedCollection

A sequenced collection is a Collection whose elements have a defined encounter order. The new interface SequencedCollection is:

Java

interface SequencedCollection<E> extends Collection<E> {
    // new method
    SequencedCollection<E> reversed();
    // methods promoted from Deque
    void addFirst(E);
    void addLast(E);
    E getFirst();
    E getLast();
    E removeFirst();
    E removeLast();
}

In the following example, a list is created and reversed. The first and last items are retrieved, and a new first and last item are added.

Java

private static void sequencedCollection() {
    List<String> sc = Stream.of("Alpha", "Bravo", "Charlie", "Delta").collect(Collectors.toCollection(ArrayList::new));

    System.out.println("Initial list: " + sc);

    System.out.println("Reversed list: " + sc.reversed());

    System.out.println("First item: " + sc.getFirst());
    System.out.println("Last item: " + sc.getLast());

    sc.addFirst("Before Alpha");
    sc.addLast("After Delta");
    System.out.println("Added new first and last item: " + sc);
}

The output is:

Shell

Initial list: [Alpha, Bravo, Charlie, Delta]
Reversed list: [Delta, Charlie, Bravo, Alpha]
First item: Alpha
Last item: Delta
Added new first and last item: [Before Alpha, Alpha, Bravo, Charlie, Delta, After Delta]

As you can see, no real surprises here; it just works.

SequencedSet

A sequenced set is a Set that is a SequencedCollection containing no duplicate elements. The new interface is:

Java

interface SequencedSet<E> extends Set<E>, SequencedCollection<E> {
    SequencedSet<E> reversed();    // covariant override
}

In the following example, a SortedSet is created and reversed. The first and last items are retrieved, and an attempt is made to add a new first and last item.

Java

private static void sequencedSet() {
    SortedSet<String> sortedSet = new TreeSet<>(Set.of("Charlie", "Alpha", "Delta", "Bravo"));

    System.out.println("Initial list: " + sortedSet);

    System.out.println("Reversed list: " + sortedSet.reversed());

    System.out.println("First item: " + sortedSet.getFirst());
    System.out.println("Last item: " + sortedSet.getLast());

    try {
        sortedSet.addFirst("Before Alpha");
    } catch (UnsupportedOperationException uoe) {
        System.out.println("addFirst is not supported");
    }

    try {
        sortedSet.addLast("After Delta");
    } catch (UnsupportedOperationException uoe) {
        System.out.println("addLast is not supported");
    }
}

The output is:

Shell

Initial list: [Alpha, Bravo, Charlie, Delta]
Reversed list: [Delta, Charlie, Bravo, Alpha]
First item: Alpha
Last item: Delta
addFirst is not supported
addLast is not supported

The only difference with a SequencedCollection is that the elements are sorted alphabetically in the initial list and that the addFirst and addLast methods are not supported. This is obvious, because you cannot guarantee that the first element will remain the first element when added to the set (it will be sorted again anyway).

SequencedMap

A sequenced map is a Map whose entries have a defined encounter order.
The new interface is:

Java

interface SequencedMap<K,V> extends Map<K,V> {
    // new methods
    SequencedMap<K,V> reversed();
    SequencedSet<K> sequencedKeySet();
    SequencedCollection<V> sequencedValues();
    SequencedSet<Entry<K,V>> sequencedEntrySet();
    V putFirst(K, V);
    V putLast(K, V);
    // methods promoted from NavigableMap
    Entry<K, V> firstEntry();
    Entry<K, V> lastEntry();
    Entry<K, V> pollFirstEntry();
    Entry<K, V> pollLastEntry();
}

In the following example, a LinkedHashMap is created, some elements are added, and the map is reversed. The first and last entries are retrieved, and new first and last items are added.

Java

private static void sequencedMap() {
    LinkedHashMap<Integer,String> hm = new LinkedHashMap<Integer,String>();

    hm.put(1, "Alpha");
    hm.put(2, "Bravo");
    hm.put(3, "Charlie");
    hm.put(4, "Delta");

    System.out.println("== Initial List ==");
    printMap(hm);

    System.out.println("== Reversed List ==");
    printMap(hm.reversed());

    System.out.println("First item: " + hm.firstEntry());
    System.out.println("Last item: " + hm.lastEntry());

    System.out.println(" == Added new first and last item ==");
    hm.putFirst(5, "Before Alpha");
    hm.putLast(3, "After Delta");
    printMap(hm);
}

The output is:

Shell

== Initial List ==
1 Alpha
2 Bravo
3 Charlie
4 Delta
== Reversed List ==
4 Delta
3 Charlie
2 Bravo
1 Alpha
First item: 1=Alpha
Last item: 4=Delta
 == Added new first and last item ==
5 Before Alpha
1 Alpha
2 Bravo
4 Delta
3 After Delta

Also here, no surprises.

JEP440: Record Patterns

Record patterns enhance the Java programming language in order to deconstruct record values. This makes it easier to navigate into the data. Let's see how this works with the application RecordPatterns.java. Assume the following GrapeRecord, which consists of a color and a number of pits.

Java

record GrapeRecord(Color color, Integer nbrOfPits) {}

When you needed to access the number of pits, the object was implicitly cast to GrapeRecord via the instanceof pattern, and you accessed the nbrOfPits member using the grape variable.

Java

private static void singleRecordPatternOldStyle() {
    Object o = new GrapeRecord(Color.BLUE, 2);
    if (o instanceof GrapeRecord grape) {
        System.out.println("This grape has " + grape.nbrOfPits() + " pits.");
    }
}

With record patterns, you can add the record members as part of the instanceof check and access them directly.

Java

private static void singleRecordPattern() {
    Object o = new GrapeRecord(Color.BLUE, 2);
    if (o instanceof GrapeRecord(Color color, Integer nbrOfPits)) {
        System.out.println("This grape has " + nbrOfPits + " pits.");
    }
}

Introduce a record SpecialGrapeRecord, which consists of a record GrapeRecord and a boolean.

Java

record SpecialGrapeRecord(GrapeRecord grape, boolean special) {}

You have created a nested record. Record patterns also support nested records, as can be seen in the following example:

Java

private static void nestedRecordPattern() {
    Object o = new SpecialGrapeRecord(new GrapeRecord(Color.BLUE, 2), true);
    if (o instanceof SpecialGrapeRecord(GrapeRecord grape, boolean special)) {
        System.out.println("This grape has " + grape.nbrOfPits() + " pits.");
    }
    if (o instanceof SpecialGrapeRecord(GrapeRecord(Color color, Integer nbrOfPits), boolean special)) {
        System.out.println("This grape has " + nbrOfPits + " pits.");
    }
}

JEP441: Pattern Matching for Switch

Pattern matching for instanceof was introduced in Java 17. Pattern matching for switch expressions allows you to test expressions against a number of patterns.
This leads to several new and interesting possibilities, as demonstrated in the application PatternMatchingSwitch.java.

Pattern Matching Switch

When you wanted to verify whether an object is an instance of a particular type, you needed to write something like the following:

Java

private static void oldStylePatternMatching(Object obj) {
    if (obj instanceof Integer i) {
        System.out.println("Object is an integer:" + i);
    } else if (obj instanceof String s) {
        System.out.println("Object is a string:" + s);
    } else if (obj instanceof FruitType f) {
        System.out.println("Object is a fruit: " + f);
    } else {
        System.out.println("Object is not recognized");
    }
}

This is quite verbose, and the reason is that you could not test whether the value is of a particular type in a switch expression. With the introduction of pattern matching for switch, you can refactor the code above into the following, less verbose code:

Java

private static void patternMatchingSwitch(Object obj) {
    switch(obj) {
        case Integer i -> System.out.println("Object is an integer:" + i);
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> System.out.println("Object is a fruit: " + f);
        default -> System.out.println("Object is not recognized");
    }
}

Switches and Null

When the object argument in the previous example happens to be null, a NullPointerException is thrown. Therefore, you needed to check for null values before evaluating the switch expression. The following code uses pattern matching for switch, but if obj is null, a NullPointerException is thrown.

Java

private static void oldStyleSwitchNull(Object obj) {
    try {
        switch (obj) {
            case Integer i -> System.out.println("Object is an integer:" + i);
            case String s -> System.out.println("Object is a string:" + s);
            case FruitType f -> System.out.println("Object is a fruit: " + f);
            default -> System.out.println("Object is not recognized");
        }
    } catch (NullPointerException npe) {
        System.out.println("NullPointerException thrown");
    }
}

However, now it is possible to test against null and determine in your switch what to do when the value happens to be null.

Java

private static void switchNull(Object obj) {
    switch (obj) {
        case Integer i -> System.out.println("Object is an integer:" + i);
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> System.out.println("Object is a fruit: " + f);
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

Case Refinement

What if you need to add extra checks based on a specific FruitType in the previous example? This would lead to extra if-statements in order to determine what to do.

Java

private static void inefficientCaseRefinement(Object obj) {
    switch (obj) {
        case String s -> System.out.println("Object is a string:" + s);
        case FruitType f -> {
            if (f == FruitType.APPLE) {
                System.out.println("Object is an apple");
            }
            if (f == FruitType.AVOCADO) {
                System.out.println("Object is an avocado");
            }
            if (f == FruitType.PEAR) {
                System.out.println("Object is a pear");
            }
            if (f == FruitType.ORANGE) {
                System.out.println("Object is an orange");
            }
        }
        case null -> System.out.println("Object is null");
        default -> System.out.println("Object is not recognized");
    }
}

This type of problem is solved by allowing when-clauses in switch blocks to specify guards for pattern case labels. The case label is called a guarded case label, and the boolean expression is called the guard. The above code becomes the following code, which is much more readable.
Java private static void caseRefinement(Object obj) { switch (obj) { case String s -> System.out.println("Object is a string:" + s); case FruitType f when (f == FruitType.APPLE) -> { System.out.println("Object is an apple"); } case FruitType f when (f == FruitType.AVOCADO) -> { System.out.println("Object is an avocado"); } case FruitType f when (f == FruitType.PEAR) -> { System.out.println("Object is a pear"); } case FruitType f when (f == FruitType.ORANGE) -> { System.out.println("Object is an orange"); } case null -> System.out.println("Object is null"); default -> System.out.println("Object is not recognized"); } } Enum Constants Enum types can be used in switch expressions, but the evaluation is limited to the enum constants of the specific type. What if you want to evaluate based on multiple enum constants? Introduce a new enum CarType. Java public enum CarType { SUV, CABRIO, EV } Now that it is possible to use case refinement, you could write something like the following. Java private static void inefficientEnumConstants(Object obj) { switch (obj) { case String s -> System.out.println("Object is a string:" + s); case FruitType f when (f == FruitType.APPLE) -> System.out.println("Object is an apple"); case FruitType f when (f == FruitType.AVOCADO) -> System.out.println("Object is an avocado"); case FruitType f when (f == FruitType.PEAR) -> System.out.println("Object is a pear"); case FruitType f when (f == FruitType.ORANGE) -> System.out.println("Object is an orange"); case CarType c when (c == CarType.CABRIO) -> System.out.println("Object is a cabrio"); case null -> System.out.println("Object is null"); default -> System.out.println("Object is not recognized"); } } This code would be more readable with a separate case for every enum constant instead of a long list of guarded patterns. This turns the above code into the following, much more readable code. Java private static void enumConstants(Object obj) { switch (obj) { case String s -> System.out.println("Object is a string:" + s); case FruitType.APPLE -> System.out.println("Object is an apple"); case FruitType.AVOCADO -> System.out.println("Object is an avocado"); case FruitType.PEAR -> System.out.println("Object is a pear"); case FruitType.ORANGE -> System.out.println("Object is an orange"); case CarType.CABRIO -> System.out.println("Object is a cabrio"); case null -> System.out.println("Object is null"); default -> System.out.println("Object is not recognized"); } } JEP413: Code Snippets Code snippets simplify the inclusion of example source code in API documentation. Until now, code snippets were often added by means of the <pre> HTML tag. See application Snippets.java for the complete source code. Java /** * this is an example in Java 17 * <pre>{@code * if (success) { * System.out.println("This is a success!"); * } else { * System.out.println("This is a failure"); * } * } * </pre> * @param success */ public void example1(boolean success) { if (success) { System.out.println("This is a success!"); } else { System.out.println("This is a failure"); } } Generate the javadoc: Shell $ javadoc src/com/mydeveloperplanet/myjava21planet/Snippets.java -d javadoc In the root of the repository, a javadoc directory is created. Open the index.html file with your favorite browser and click the snippets URL. The above code produces the following javadoc.
This approach has some shortcomings: no source code validation, no way to add comments (because the fragment is already located in a comment block), no code syntax highlighting, etc. Inline Snippets In order to overcome these shortcomings, a new @snippet tag has been introduced. The code above can be rewritten as follows. Java /** * this is an example for inline snippets * {@snippet : * if (success) { * System.out.println("This is a success!"); * } else { * System.out.println("This is a failure"); * } * } * * @param success */ public void example2(boolean success) { if (success) { System.out.println("This is a success!"); } else { System.out.println("This is a failure"); } } The generated javadoc is the following. Notice that the code snippet is now visibly marked as source code and a copy-source-code icon is added. As an extra test, you can remove a semicolon in the javadoc of methods example1 and example2, introducing a compiler error. In example1, the IDE simply accepts this compiler error. However, in example2, the IDE will warn you about it. External Snippets An interesting feature is the ability to move your code snippets to an external file. Create a directory snippet-files in package com.mydeveloperplanet.myjava21planet. Create a class SnippetsExternal in this directory and mark the code snippets by means of an @start tag and an @end tag. With the region parameter, you can give a code snippet a name to refer to. The example4 method also contains the @highlight tag, which allows you to highlight certain elements in the code. Many more formatting and highlighting options are available; too many to cover them all here. Java public class SnippetsExternal { public void example3(boolean success) { // @start region=example3 if (success) { System.out.println("This is a success!"); } else { System.out.println("This is a failure"); } // @end } public void example4(boolean success) { // @start region=example4 if (success) { System.out.println("This is a success!"); // @highlight substring="println" } else { System.out.println("This is a failure"); } // @end } } In your code, you refer to the SnippetsExternal file and the region you want to include in your javadoc. Java /** * this is an example for external snippets * {@snippet file="SnippetsExternal.java" region="example3" } * * @param success */ public void example3(boolean success) { if (success) { System.out.println("This is a success!"); } else { System.out.println("This is a failure"); } } /** * this is an example for highlighting * {@snippet file="SnippetsExternal.java" region="example4" } * * @param success */ public void example4(boolean success) { if (success) { System.out.println("This is a success!"); } else { System.out.println("This is a failure"); } } When you generate the javadoc as before, you will notice in the output that the javadoc tool cannot find the SnippetsExternal file. Shell src/com/mydeveloperplanet/myjava21planet/Snippets.java:48: error: file not found on source path or snippet path: SnippetsExternal.java * {@snippet file="SnippetsExternal.java" region="example3" } ^ src/com/mydeveloperplanet/myjava21planet/Snippets.java:62: error: file not found on source path or snippet path: SnippetsExternal.java * {@snippet file="SnippetsExternal.java" region="example4" } You need to add the path to the snippet files by means of the --snippet-path argument.
Shell $ javadoc src/com/mydeveloperplanet/myjava21planet/Snippets.java -d javadoc --snippet-path=./src/com/mydeveloperplanet/myjava21planet/snippet-files The javadoc for method example3 contains the defined snippet. The javadoc for method example4 contains the highlighted section. JEP408: Simple Web Server Simple Web Server is a minimal HTTP server for serving a single directory hierarchy. Its goal is to provide a web server that computer science students can use for testing or prototyping purposes. Create an httpserver directory in the root of the repository, containing a simple index.html file. HTML Welcome to Simple Web Server You can start the web server programmatically as follows (see SimpleWebServer.java). The path to the directory must be an absolute path. Java private static void startFileServer() { var server = SimpleFileServer.createFileServer(new InetSocketAddress(8080), Path.of("/<absolute path>/MyJava21Planet/httpserver"), SimpleFileServer.OutputLevel.VERBOSE); server.start(); } Verify the output. Shell $ curl http://localhost:8080 Welcome to Simple Web Server You can change the contents of the index.html file on the fly, and the server will serve the new contents immediately after a refresh of the page. It is also possible to create a custom HttpHandler in order to intercept the response and change it. Java class MyHttpHandler implements com.sun.net.httpserver.HttpHandler { @Override public void handle(HttpExchange exchange) throws IOException { if ("GET".equals(exchange.getRequestMethod())) { OutputStream outputStream = exchange.getResponseBody(); String response = "It works!"; exchange.sendResponseHeaders(200, response.length()); outputStream.write(response.getBytes()); outputStream.flush(); outputStream.close(); } } } Start the web server on a different port and add a context path and the HttpHandler. Java private static void customFileServerHandler() { try { var server = HttpServer.create(new InetSocketAddress(8081), 0); server.createContext("/custom", new MyHttpHandler()); server.start(); } catch (IOException ioe) { System.out.println("IOException occurred"); } } Run this application and verify the output. Shell $ curl http://localhost:8081/custom It works! Conclusion In this blog, you took a quick look at some features added since the last LTS release, Java 17. It is now up to you to start thinking about your migration plan to Java 21 and about how to learn more about these new features so you can apply them in your daily coding habits. Tip: IntelliJ IDEA will help you with that!
The INSERT INTO ... RETURNING SQL clause inserts one or more records into a table and immediately retrieves specified values from the newly inserted rows, or additional data from expressions. This is particularly useful when you need to get values generated by the database upon insertion, such as auto-incremented IDs, calculated fields, or default values. Is this useful? Are there any actual use cases for this SQL clause? Don't ORM frameworks make it obsolete? I don't have definitive answers to these questions. However, I recently found it useful when I created a demo to explain how read/write splitting works (see this article). I needed a SQL query that inserted a row and returned the "server ID" of the node that performed the write (to demonstrate that the primary node, as opposed to the replicas, always performs the writes). INSERT INTO ... RETURNING was perfect for this demo, and it got me thinking about other possible scenarios for this feature. After speaking with colleagues, it was clear that there actually are real-world use cases where INSERT INTO ... RETURNING is a good fit. These use cases include situations in which efficiency, simplicity, readability, direct access to the database, or database-specific features are needed, not to mention when limitations in ORMs get in the way. Even though you might still feel the urge to implement this in application code, it's worth looking at how others use this SQL construct and evaluating whether it's useful in your project or not. Let's dig in. Case: E-Commerce Order Processing Scenario: Generating and retrieving an order ID during order placement. This is very likely handled by an ORM, but it is still useful for scripts, in the absence of an ORM, or when you hit limitations of the ORM. SQL Example: MariaDB SQL INSERT INTO orders (customer_id, product_id, quantity) VALUES (123, 456, 2) RETURNING order_id; Outcome: Instantly provides the unique order_id to the customer. Case: Inventory Management Scenario: Updating and returning the stock count after adding new inventory. SQL Example: MariaDB SQL INSERT INTO inventory (product_name, quantity_added) VALUES ('New Product', 50) RETURNING current_stock_count; Outcome: Offers real-time stock updates for effective tracking. Case: User Registration in Web Applications Scenario: Creating a new user account and returning a confirmation message plus the user ID. Here, we are returning a string, but any other kind of computed data can be returned. This is similar to the use case that I found for my demo (returning MariaDB's @@server_id). SQL Example: MariaDB SQL INSERT INTO users (username, password, email) VALUES ('new_user', 'Password123!', 'user@example.com') RETURNING user_id, 'Registration Successful'; Outcome: Confirms account creation (or returns computed data instead of having to process it later in application code) and provides the user ID for immediate use. Never store passwords in plain text like in this example! Case: Personalized Welcome Messages in User Onboarding Scenario: Customizing a welcome message based on the user's profile information during account creation. This is a more elaborate use case, similar to the one shown in the previous section. SQL Example: MariaDB SQL INSERT INTO users (username, favorite_genre) VALUES ('fantasyfan', 'Fantasy') RETURNING CONCAT('Welcome, ', username, '! Explore the latest in ', favorite_genre, '!'); Outcome: Produces a personalized welcome message for the user, enhancing the onboarding experience.
The message (or some sort of message template) could be provided from outside the SQL statement, of course. Case: Calculating and Displaying Order Discounts Scenario: Automatically calculating a discount on a new order based on, for example, customer loyalty points. SQL Example: MariaDB SQL INSERT INTO orders (customer_id, total_amount, loyalty_points) VALUES (123, 200, 50) RETURNING total_amount - (loyalty_points * 0.1) AS discounted_price; Outcome: Instantly provides the customer with the discounted price of their order, incentivizing loyalty. Obviously, let your boss know about this. Case: Aggregating Survey Responses for Instant Summary Scenario: Compiling survey responses and instantly providing a summary of the collective responses. It is worth mentioning at this point that even though the SQL examples show "hardcoded" values for IDs, they can be parameters for prepared statements instead. SQL Example: MariaDB SQL INSERT INTO survey_responses (question_id, response) VALUES (10, 'Very Satisfied') RETURNING ( SELECT CONCAT(COUNT(*), ' responses, ', ROUND(AVG(rating), 2), ' average rating') FROM survey_responses WHERE question_id = 10 ); Outcome: Offers a real-time summary of responses, fostering immediate insights. Case: Generating Custom Event Itineraries Scenario: Selecting sessions for a conference event and receiving a personalized itinerary. SQL Example: MariaDB SQL INSERT INTO event_selections (attendee_id, session_id) VALUES (789, 102) RETURNING (SELECT CONCAT(session_name, ' at ', session_time) FROM event_sessions WHERE session_id = 102); Outcome: Immediately creates a custom itinerary for the attendee, improving the event experience right from the registration moment. Conclusion Get to know your database. In my case, the more I continue to explore MariaDB, the more I realize the many possibilities it has. The same applies to other databases. In application code, avoid implementing things at which databases excel: namely, handling data.
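As a closing illustration of how this plays out in application code: when you issue INSERT INTO ... RETURNING from Java, the returned columns arrive as an ordinary result set. The following is a minimal JDBC sketch, not a definitive implementation; the connection URL and credentials are placeholders, it assumes the orders table from the first use case, and it assumes a driver (such as a recent MariaDB Connector/J) that exposes the RETURNING columns as a regular ResultSet.

Java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class InsertReturningDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust URL, user, and password for your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mariadb://localhost:3306/demo", "demo_user", "demo_password");
             PreparedStatement stmt = conn.prepareStatement(
                     "INSERT INTO orders (customer_id, product_id, quantity) "
                     + "VALUES (?, ?, ?) RETURNING order_id")) {
            stmt.setInt(1, 123); // customer_id as a parameter instead of a hardcoded value
            stmt.setInt(2, 456); // product_id
            stmt.setInt(3, 2);   // quantity
            // RETURNING makes the insert produce rows, so we read it like a query.
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    System.out.println("New order ID: " + rs.getLong("order_id"));
                }
            }
        }
    }
}

Note how this avoids a second round trip to fetch the generated key, which is precisely the efficiency argument made above.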
This is an article from DZone's 2023 Observability and Application Performance Trend Report. For more: Read the Report From cultural and structural challenges within an organization to balancing daily work and dividing it between teams and individuals, scaling teams of site reliability engineers (SREs) comes with many challenges. However, fostering a resilient site reliability engineering (SRE) culture can facilitate the gradual and sustainable growth of an SRE team. In this article, we explore the challenges of scaling and review a successful scaling framework. This framework is suitable for guiding emerging teams and startups as they cultivate an evolving SRE culture, as well as for established companies with firmly entrenched SRE cultures. The Challenges of Scaling SRE Teams As teams scale, complexity increases, and it can become more difficult to communicate, coordinate, and maintain a team's coherence. Below is a list of challenges to consider as your team and/or organization grows: Rapid growth – Rapid growth leads to more complex systems, which can outpace the capacity of your SRE team, leading to bottlenecks and reduced reliability. Knowledge-sharing – Maintaining a shared understanding of systems and processes may become difficult, making it challenging to onboard new team members effectively. Tooling and automation – Scaling without appropriate tooling and automation can lead to increased manual toil, reducing the efficiency of the SRE team. Incident response – Coordinating incident responses can become more challenging, and miscommunications or delays can occur. Maintaining a culture of innovation and learning – This can be challenging as SREs may become more focused on solving critical daily problems and less focused on new initiatives. Balancing operational and engineering work – Since SREs are responsible for both operational tasks and engineering work, it is important to ensure that these teams have enough time to focus on both areas. A Framework for Scaling SRE Teams Scaling may come naturally if you do the right things in the right order. First, identify your current state in terms of infrastructure. How well do you understand the systems? Determine which existing SRE processes need improvement. For the SRE processes that are necessary but not yet employed, find the tools and the metrics necessary to start. Collaborate with the appropriate stakeholders, use feedback, iterate, and improve. Step 1: Assess Your Current State Understand your system and create a detailed map of your infrastructure, services, and dependencies. Identify all the components in your infrastructure, including servers, databases, load balancers, networking equipment, and any cloud services you utilize. It is important to understand how these components are interconnected and dependent on each other; this includes understanding which services rely on others and the flow of data between them. It's also vital to identify and evaluate existing SRE practices and assess their effectiveness: Analyze historical incident data to identify recurring issues and their resolutions. Gather feedback from your SRE team and other relevant stakeholders. Ask them about pain points, challenges, and areas where improvements are needed. Assess the performance metrics related to system reliability and availability. Identify any trends or patterns that indicate areas requiring attention. Evaluate how incidents are currently being handled. Are they being resolved efficiently?
Are post-incident reviews being conducted effectively to prevent recurrences? Step 2: Define SLOs and Error Budgets Collaborate with stakeholders to establish clear and meaningful service-level objectives (SLOs) by determining the acceptable error rate and creating error budgets based on the SLOs. SLOs and error budgets can guide resource allocation optimization: computing resources can be allocated to areas that directly impact the achievement of the SLOs. SLOs set clear, achievable goals for the team and provide a measurable way to assess the reliability of a service. By defining specific targets for uptime, latency, or error rates, SRE teams can objectively evaluate whether the system is meeting the desired standards of performance. Using specific targets, a team can prioritize its efforts and focus on areas that need improvement, thus fostering a culture of accountability and continuous improvement. Error budgets provide a mechanism for managing risk and making trade-offs between reliability and innovation. They allow SRE teams to determine an acceptable threshold for service disruptions or errors, enabling them to balance the deployment of new features and changes against the need to maintain a reliable service. Step 3: Build and Train Your SRE Team Identify talent according to the needs of each and every step of this framework. Look for the right skillset and cultural fit, and be sure to provide comprehensive onboarding and training programs for new SREs. Beware of the golden rule that culture eats strategy for breakfast: Having the right strategy and processes is important, but without the right culture, no strategy or process will succeed in the long run. Step 4: Establish SRE Processes, Automate, Iterate, and Improve Implement incident management procedures, including incident command and post-incident reviews. Define a process for safe and efficient changes to the system. Figure 1: Basic SRE process One of the cornerstones of SRE is identifying and handling incidents through monitoring, alerting, remediation, and incident management. Swift incident identification and management are vital in minimizing downtime, which can prevent minor issues from escalating into major problems. By analyzing incidents and their root causes, SREs can identify patterns and make necessary improvements to prevent similar issues from occurring in the future. This continuous improvement process is crucial for enhancing overall reliability and performance while ensuring the efficiency of systems at scale. Improving and scaling your team can go hand in hand. Monitoring Monitoring is the first step in ensuring the reliability and performance of a system. It involves the continuous collection of data about the system's behavior, performance, and health. This can be broken down into: Data collection – Monitoring systems collect various types of data, including metrics, logs, and traces, as shown in Figure 2. Real-time observability – Monitoring provides real-time visibility into the system's status, enabling teams to identify potential issues as they occur. Proactive vs. reactive – Effective monitoring allows for proactive problem detection and resolution, reducing the need for reactive firefighting. Figure 2: Monitoring and observability Alerting This is the process of notifying relevant parties when predefined conditions or thresholds are met. It's a critical prerequisite for incident management.
This can be broken down into: Thresholds and conditions – Alerts are triggered based on predefined thresholds or conditions. For example, an alert might be set to trigger when CPU usage exceeds 90% for five consecutive minutes. Notification channels – Alerts can be sent via various notification channels, including email, SMS, or pager, or even integrated into incident management tools. Severity levels – Alerts should be categorized by severity levels (e.g., critical, warning, informational) to indicate the urgency and impact of the issue. Remediation This involves taking action to address issues detected through monitoring and alerting. The goal is to mitigate or resolve problems quickly to minimize the impact on users. Automated actions – SRE teams often implement automated remediation actions for known issues. For example, an automated scaling system might add more resources to a server when CPU usage is high. Playbooks – SREs follow predefined playbooks that outline steps to troubleshoot and resolve common issues. Playbooks ensure consistency and efficiency during remediation efforts. Manual interventions – In some cases, manual intervention by SREs or other team members may be necessary for complex or unexpected issues. Incident Management Effective communication, knowledge-sharing, and training are crucial during an incident, and most incidents can be reproduced in staging environments for training purposes. Regular updates are provided to stakeholders, including users, management, and other relevant teams. Incident management includes a culture of learning and continuous improvement: The goal is not only to resolve the incident but also to prevent it from happening again. Figure 3: Handling incidents A robust incident management process ensures that service disruptions are addressed promptly, thus enhancing user trust and satisfaction. In addition, by effectively managing incidents, SREs help preserve the continuity of business operations and minimize potential revenue losses. Incident management plays a vital role in the scaling process since it establishes best practices and promotes collaboration, as shown in Figure 3. As the system scales, the frequency and complexity of incidents are likely to increase. A well-defined incident management process enables the SRE team to manage the growing workload efficiently. Conclusion SRE is an integral part of the SDLC. At the end of the day, your SRE processes should be integrated into the entire process of development, testing, and deployment, as shown in Figure 4. Figure 4: Holistic view of development, testing, and the SRE process Iterating on and improving the steps above will inevitably lead to more work for SRE teams; however, this work can pave the way for sustainable and successful scaling of SRE teams at the right pace. By following this framework and overcoming the challenges, you can effectively scale your SRE team while maintaining system reliability and fostering a culture of collaboration and innovation. Remember that SRE is an ongoing journey, and it is essential to stay committed to the principles and practices that drive reliability and performance.
This is an article from DZone's 2023 Observability and Application Performance Trend Report. For more: Read the Report Employing cloud services can incur a great deal of risk if not planned and designed correctly. In fact, this is really no different from the challenges inherent in a single on-premises data center implementation. Power outages and network issues are common examples of challenges that can put your service, and your business, at risk. For AWS cloud services, we have seen large-scale regional outages that are documented on the AWS Post-Event Summaries page. To gain a broader look at other cloud providers and services, the danluu/post-mortems repository provides a more holistic view of the cloud in general. It's time for service owners relying (or planning to rely) on a single region to think hard about the best way to design resilient cloud services. While I will utilize AWS for this article, that is solely because of my level of expertise with the platform and not because one cloud platform should be considered better than another. A Single-Region Approach Is Doomed to Fail A cloud-based service implementation can be designed to leverage multiple availability zones. Think of availability zones as distinct locations within a specific region that are isolated from the other availability zones in that region. Consider the following cloud-based service running on AWS inside the Kubernetes platform: Figure 1: Cloud-based service utilizing Kubernetes with multiple availability zones In Figure 1, inbound requests are handled by Route 53, arrive at a load balancer, and are directed to a Kubernetes cluster. The controller routes requests to the service, which has three instances running, each in a different availability zone. For persistence, an Aurora Serverless database has been adopted. While this design protects from the loss of one or two availability zones, the service is considered at risk when a region-wide outage occurs, similar to the AWS outage in the US-EAST-1 region on December 7, 2021. A common mitigation strategy is to implement stand-by patterns that can become active when unexpected outages occur. However, these stand-by approaches can lead to bigger issues if they are not consistently participating in handling a portion of all requests. Transitioning to More Than Two With single-region services at risk, it's important to understand how best to proceed. For that, we can draw upon the simple example of a trucking business. If you have a single driver who operates a single truck, your business is down when the truck or driver is unable to fulfill their duties. The immediate thought here is to add a second truck and driver. However, the better answer is to increase the fleet by two, which keeps the business running even when an unexpected issue complicates the original situation. This is known as the "n + 2" rule, which becomes important when there are expectations set between you and your customers. For the trucking business, it might be a guaranteed delivery time. For your cloud-based service, it will likely be measured in service-level objectives (SLOs) and service-level agreements (SLAs). It is common to set SLOs as four nines, meaning your service is operating as expected 99.99% of the time.
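As a quick sanity check on what four nines actually buys you, the error budget for any time window is simply the window length multiplied by the allowed failure fraction. Here is a minimal sketch of that arithmetic; the SLO value and windows are illustrative:

Java
import java.time.Duration;

public class ErrorBudget {
    // Allowed downtime for a given window at a given SLO (e.g., 0.9999 for four nines).
    static Duration allowedDowntime(Duration window, double slo) {
        return Duration.ofMillis((long) (window.toMillis() * (1.0 - slo)));
    }

    public static void main(String[] args) {
        double slo = 0.9999; // four nines
        System.out.println("Day:  " + allowedDowntime(Duration.ofDays(1), slo));  // PT8.64S
        System.out.println("Week: " + allowedDowntime(Duration.ofDays(7), slo));  // PT1M0.48S
        System.out.println("30d:  " + allowedDowntime(Duration.ofDays(30), slo)); // PT4M19.2S
    }
}

Note that the monthly figure depends on how many days your "month" has; the figure below assumes a calendar month slightly longer than 30 days.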
For a four-nines SLO, this translates to the following error budgets, or downtime, for the service: Month = 4 minutes and 21 seconds Week = 1 minute and 0.48 seconds Day = 8.6 seconds If your SLAs include financial penalties, the importance of implementing the n + 2 rule becomes critical to making sure your services are available in the wake of an unexpected regional outage. Remember, that December 7, 2021 outage at AWS lasted more than eight hours. The cloud-based service from Figure 1 can be expanded to employ a multi-region design: Figure 2: Multi-region cloud-based service utilizing Kubernetes and multiple availability zones With a multi-region design, requests are handled by Route 53 but are directed to the best region to handle the request. The ambiguous term "best" is used intentionally, as the criteria could be based upon geographical proximity, least latency, or both. From there, the in-region Kubernetes cluster handles the request, still with three different availability zones. Figure 2 also introduces the observability layer, which provides the ability to monitor cloud-based components and establish SLOs at the country and regional levels. This will be discussed in more detail shortly. Getting Out of the Toil Game Google Site Reliability Engineering's Eric Harvieux defined toil as noted below: "Toil is the kind of work that tends to be manual, repetitive, automatable, tactical, devoid of enduring value, and that scales linearly as a service grows." When designing services that run in multiple regions, the amount of toil that exists with a single region becomes dramatically larger. Consider the example of creating a manager-approved change request every time code is deployed into the production instance. In the single-region example, the change request might be a bit annoying, but it is something a software engineer is willing to tolerate. Now, with two additional regions, this will translate to three times the number of change requests, all with at least one human-based approval being required. An attainable and desirable end-state should still include change requests, but these requests should become part of the continuous delivery (CD) lifecycle and be created automatically. Additionally, the observability layer introduced in Figure 2 should be leveraged by the CD tooling in order to monitor deployments, rolling back in the event of any unforeseen circumstances. With this approach, the need for human-based approvals is diminished, and unnecessary toil is removed from both the software engineer requesting the deployment and the approving manager. Harnessing the Power of Observability Observability platforms measure a system's state by leveraging metrics, logs, and traces. This means that a given service can be measured by the outputs it provides. Leading observability platforms go a step further and allow for the creation of synthetic API tests that can be used to exercise resources for a given service. Tests can include assertions that introduce expectations, such as that a particular GET request will respond with an expected response code and payload within a given time period. Otherwise, the test will be marked as failed. SLOs can be attached to each synthetic test, and each test can be executed in multiple geographical locations, all monitored from the observability platform. Taking this approach gives service owners the ability to understand service performance from multiple entry points.
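To make the synthetic test idea concrete, here is a minimal sketch of such a check in plain Java; the endpoint, expected status code, and latency budget are illustrative assumptions, and a real observability platform would run a richer version of this from multiple geographic locations on a schedule:

Java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SyntheticCheck {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://service.example.com/health")) // hypothetical endpoint
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        Duration latency = Duration.ofNanos(System.nanoTime() - start);

        // Assertions: expected status code and an illustrative latency budget of 500 ms.
        boolean passed = response.statusCode() == 200 && latency.toMillis() < 500;
        System.out.println((passed ? "PASS" : "FAIL") + " in " + latency.toMillis() + " ms");
    }
}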
With the multi-region model, tests can be created and performance thereby monitored separately at the regional and global levels, producing a high degree of certainty about the level of performance in each region. In every case, the power of observability can eliminate the need for the manual, human-based change approvals noted above. Bringing It All Together From the 10,000-foot level, the multi-region service implementation from Figure 2 can be placed onto a United States map. In Figure 3, the database connectivity is mapped to demonstrate the inter-region communication, while the observability and cloud metrics data are gathered from AWS and the observability platform globally. Figure 3: Multi-region service adoption placed near the respective AWS regions Service owners have peace of mind that their service is fully functional in three regions by implementing the n + 2 rule. In this scenario, the implementation is prepared to survive two complete region outages. As an example, the eight-hour AWS outage referenced above would not have an impact on the service's SLOs/SLAs during the time when one of the three regions is unavailable. Charting a Plan Toward Multi-Region Implementing a multi-region footprint for your service without increasing toil is possible, but it does require planning. Some high-level action items are noted below: Understand your persistence layer – Understanding your persistence layer early on is key. If multiple-write regions are not a possibility, alternative approaches will be required. Adopt Infrastructure as Code – The ability to define your cloud infrastructure via code is critical to eliminating toil and increasing the ability to adopt additional regions, or even zones. Use containerization – The underlying service is best when containerized. Build the container you wish to deploy during the continuous integration stage and scan for vulnerabilities within every layer of the container for added safety. Reduce time to deploy – Get into the habit of releasing often, as it only makes your team stronger. Establish SLOs and synthetics – Take the time to set SLOs for your service and write synthetic tests to constantly measure your service across every environment. Automate deployments – Leverage observability during the CD stage to deploy when a merge-to-main event occurs. If a dev deploys and no alerts are emitted, move on to the next environment and continue all the way to production. Conclusion It's important to understand the limitations of the platform where your services are running. Leveraging a single region offered by your cloud provider is only successful when there are zero region-wide outages. Based upon prior history, that is no longer a safe assumption; region-wide outages are certain to happen again. No cloud provider is ever going to be 100% immune from a region-wide outage. A better approach is to utilize the n + 2 rule and increase the number of regions your service is running in by two additional regions. In taking this approach, the service will still be able to respond to customer requests in the event of not only one regional outage but also any form of outage in a second region where the service is running. By adopting the n + 2 approach, there is a far better chance of meeting the SLAs set with your customers. Getting to this point will certainly present challenges but should also provide the opportunity to cut down (or even eliminate) toil within your organization.
In the end, your customers will benefit from increased service resiliency, and your team will benefit from significant productivity gains. Have a really great day! Resources AWS Post-Event Summaries, AWS Summary of the AWS Service Event in the Northern Virginia (US-EAST-1) Region, AWS danluu/post-mortems, GitHub "Identifying and Tracking Toil Using SRE Principles" by Eric Harvieux, 2020 "Failure Recovery: When the Cure Is Worse Than the Disease" by Guo et al., 2013
This is an article from DZone's 2023 Observability and Application Performance Trend Report. For more: Read the Report In today's digital landscape, the growing importance of monitoring and managing application performance cannot be overstated. With businesses increasingly relying on complex applications and systems to drive their operations, ensuring optimal performance has become a top priority. In essence, efficient application performance management can mean the difference between business success and failure. To better understand and manage these sophisticated systems, two key components have emerged: telemetry and observability. Telemetry, at its core, is a method of gathering and transmitting data from remote or inaccessible areas to equipment for monitoring. In the realm of IT systems, telemetry involves collecting metrics, events, logs, and traces from software applications and infrastructure. This wealth of data is invaluable, as it provides insight into system behavior, helping teams identify trends, diagnose problems, and make informed decisions. In simpler terms, think of telemetry as the heartbeat monitor of your application, providing continuous, real-time updates about its health. Observability takes this concept one step further. It's important to note that while it does share some similarities with traditional monitoring, there are distinct differences. Traditional monitoring involves checking predefined metrics or logs for anomalies. Observability, on the other hand, is a more holistic approach. It not only involves gathering data but also understanding the "why" behind system behavior. Observability provides a comprehensive view of your system's internal state based on its external outputs. It helps teams understand the overall health of the system, detect anomalies, and troubleshoot potential issues. Simply put, if telemetry tells you what is happening in your system, observability explains why it's happening. The Emergence of Telemetry and Observability in Application Performance In the early days of information systems, understanding what a system was doing at any given moment was a challenge. However, the advent of telemetry played a significant role in mitigating this issue. Telemetry, derived from the Greek roots tele (remote) and metron (measure), is fundamentally about measuring data remotely. This technique has been used extensively in various fields such as meteorology, aerospace, and healthcare, long before its application in information technology. As the complexity of systems grew, so did the need for a more nuanced understanding of their behavior. This is where observability, a term borrowed from control theory, entered the picture. In the context of IT, observability is not just about collecting metrics, logs, and traces from a system, but about making sense of that data to understand the internal state of the system based on its external outputs. Initially, these concepts were applied within specific software or hardware components, but with the evolution of distributed systems and the challenges they presented, the application of telemetry and observability became more systemic. Nowadays, telemetry and observability are integral parts of modern information systems, helping operators and developers understand, debug, and optimize their systems. They provide the necessary visibility into system performance, usage patterns, and potential bottlenecks, enabling proactive issue detection and resolution.
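To make the telemetry side tangible, here is a minimal sketch that emits a custom metric using the OpenTelemetry API (an open standard that comes up again later in this article). The meter and counter names are illustrative, and a configured OpenTelemetry SDK with an exporter is assumed; on its own, the API records nothing.

Java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.metrics.LongCounter;
import io.opentelemetry.api.metrics.Meter;

public class CheckoutTelemetry {
    // Meter and counter names are illustrative, not from any particular product.
    private static final Meter METER = GlobalOpenTelemetry.getMeter("checkout-service");
    private static final LongCounter ORDERS_PLACED = METER
            .counterBuilder("orders.placed")
            .setDescription("Number of orders placed")
            .setUnit("1")
            .build();

    public void placeOrder() {
        // ... business logic ...
        ORDERS_PLACED.add(1); // one more data point in the service's "heartbeat"
    }
}

The point is simply that telemetry starts with small, cheap signals emitted from inside the application; an observability platform can later correlate and explain them.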
Emerging Trends and Innovations With cloud computing taking center stage in the digital transformation journey of many organizations, providers like Amazon Web Services (AWS), Azure, and Google Cloud have integrated telemetry and observability into their services. They provide a suite of tools that enable users to collect, analyze, and visualize telemetry data from their workloads running in the cloud. These tools don't just focus on raw data collection but also provide features for advanced analytics, anomaly detection, and automated responses. This allows users to transform the collected data into actionable insights. Another trend we observe in the industry is the adoption of open-source tools and standards for observability, like OpenTelemetry, which provides a set of APIs, libraries, agents, and instrumentation for telemetry and observability. The landscape of telemetry and observability has come a long way since its inception and continues to evolve with technology advancements and changing business needs. The incorporation of these concepts into cloud services by providers like AWS and Azure has made it easier for organizations to gain insights into their application performance, thereby enabling them to deliver better user experiences. The Benefits of Telemetry and Observability The world of application performance management has seen a paradigm shift with the adoption of telemetry and observability. This section delves into the advantages provided by these emerging technologies. Enhanced Understanding of System Behavior Together, telemetry and observability form the backbone of understanding system behavior. Telemetry, which involves the automatic recording and transmission of data from remote or inaccessible parts of an application, provides a wealth of information about the system's operations. Observability, in turn, derives meaningful insights from this data, allowing teams to comprehend the internal state of the system from its external outputs. This combination enables teams to proactively identify anomalies, trends, and potential areas of improvement. Improved Fault Detection and Resolution Another significant advantage of implementing telemetry and observability is the enhanced ability to detect and resolve faults. There are tools that allow users to collect and track metrics, collect and monitor log files, set alarms, and automatically react to changes in configuration. This level of visibility hastens the detection of operational issues, enabling quicker resolution and reducing system downtime. Optimized Resource Utilization These modern application performance techniques also facilitate optimized resource utilization. By understanding how resources are used and identifying any inefficiencies, teams can make data-driven decisions to optimize resource allocation. An auto-scaling feature, which adjusts capacity to maintain steady, predictable performance at the lowest possible cost, is a prime example of this benefit. Challenges in Implementing Telemetry and Observability Implementing telemetry and observability in existing systems is not a straightforward task. It involves a myriad of challenges, ranging from the complexity of modern applications to the sheer volume of data that needs to be managed. Let's delve into these potential pitfalls and roadblocks. Potential Difficulties and Roadblocks The first hurdle is the complexity of modern applications.
They are typically distributed across multiple environments: cloud, on-premises, hybrid, and even multi-cloud setups. This distribution makes it harder to understand system behavior, as the data collected could be disparate and disconnected, complicating telemetry efforts. Another challenge is the sheer volume, speed, and variety of data. Modern applications generate massive amounts of telemetry data. Collecting, storing, processing, and analyzing this data in real time can be daunting. It requires robust infrastructure and efficient algorithms to handle the load and provide actionable insights. Also, integrating telemetry and observability into legacy systems can be difficult. These older systems may not be designed with telemetry and observability in mind, making it challenging to retrofit them without impacting performance. Strategies To Mitigate Challenges Despite these challenges, there are ways to overcome them. For the complexity and diversity of modern applications, adopting a unified approach to telemetry can help. This involves using a single platform that can collect, correlate, and analyze data from different environments. To tackle the issue of data volume, implementing automated analytics and machine learning algorithms can be beneficial. These technologies can process large datasets in real time, identifying patterns and providing valuable insights. For legacy system integration issues, it may be worthwhile to invest in modernizing these systems. This could mean refactoring the application or adopting new technology stacks that are more conducive to telemetry and observability. Finally, investing in training and upskilling teams on tools and best practices can be immensely beneficial. Practical Steps for Gaining Insights Both telemetry and observability have become integral parts of modern application performance management. They offer in-depth insights into our systems and applications, enabling us to detect and resolve issues before they impact end users. Importantly, these concepts are not just theoretical; they're put into practice every day across services provided by leading cloud providers such as AWS and Google Cloud. In this section, we'll walk through a step-by-step guide to harnessing the power of telemetry and observability. I will also share some best practices to maximize the value you gain from these insights. Step-By-Step Guide The following are steps to implement performance management of a modern application using telemetry and observability on AWS, though this is also possible with other cloud providers: Step 1 – Start by setting up AWS CloudWatch. CloudWatch collects monitoring and operational data in the form of logs, metrics, and events, providing you with a unified view of AWS resources, applications, and services. Step 2 – Use AWS X-Ray for analyzing and debugging your applications. This service provides an end-to-end view of requests as they travel through your application, showing a map of your application's underlying components. Step 3 – Implement AWS CloudTrail to keep track of user activity and API usage. CloudTrail enhances visibility into user and resource activity by recording AWS Management Console actions and API calls. You can identify which users and accounts called AWS, the source IP address from which the calls were made, and when the calls occurred. Step 4 – Don't forget to set up alerts and notifications.
AWS SNS (Simple Notification Service) can be used to send you alerts based on the metrics you define in CloudWatch. Figure 1: An example of observability on AWS Best Practices Now that we've covered the basics of setting up the tools and services for telemetry and observability, let's shift our focus to some best practices that will help you derive maximum value from these insights: Establish clear objectives – Understand what you want to achieve with your telemetry data, whether it's improving system performance, troubleshooting issues faster, or strengthening security measures. Ensure adequate training – Make sure your team is adequately trained in using the tools and interpreting the data provided. Remember, the tools are only as effective as the people who wield them. Be proactive rather than reactive – Use the insights gained from telemetry and observability to predict potential problems before they happen instead of merely responding to them after they've occurred. Conduct regular reviews and assessments – Make it a point to regularly review and update your telemetry and observability strategies as your systems evolve. This will help you stay ahead of the curve and maintain optimal application performance. Conclusion The rise of telemetry and observability signifies a paradigm shift in how we approach application performance. With these tools, teams are no longer just solving problems; they are anticipating and preventing them. In the complex landscape of modern applications, telemetry and observability are not just nice-to-haves; they are essentials that empower businesses to deliver high-performing, reliable, and user-friendly applications. As applications continue to evolve, so will the tools that manage their performance. We can anticipate more advanced telemetry and observability solutions equipped with AI and machine learning capabilities for predictive analytics and automated anomaly detection. These advancements will further streamline application performance management, making it more efficient and effective over time.