Microservices architecture has become increasingly popular in recent years due to its ability to enable flexibility, scalability, and rapid deployment of applications. However, designing and implementing microservices can be complex, and it requires careful planning and architecture to ensure the success of the system. This is where design patterns for microservices come in.

Design patterns provide proven solutions to common problems in software architecture. They establish best practices and guidelines for designing and implementing microservices, making it easier to create scalable and maintainable systems.

In this article, we will focus on three design patterns for microservices: Ambassador, Anti-Corruption Layer, and Backends for Frontends. We will discuss their definitions, implementations, advantages, and disadvantages, as well as their use cases. By the end of this article, you will have a solid understanding of these three design patterns, as well as other popular patterns for microservices. Additionally, we will provide best practices and guidelines for designing microservices with these patterns, along with common pitfalls to avoid. So, let's dive in and explore the world of design patterns for microservices.

Ambassador Pattern

Definition and Purpose of the Ambassador Pattern

The Ambassador Pattern is a design pattern for microservices that enables communication between clients and microservices while minimizing the complexity of that communication. The pattern introduces a separate service, called the "ambassador," that sits between the client and the microservices. The ambassador handles the communication between the client and the microservices, which reduces the complexity of the client's requests.

The main purpose of the Ambassador Pattern is to reduce the complexity of communication between microservices. It is particularly useful when microservices use different protocols or are updated at different times, as it allows for flexibility in communication without requiring changes to the microservices themselves. Another key benefit is that it can improve the reliability and scalability of microservices: by separating communication responsibilities from the microservices, the pattern allows for better fault tolerance and easier scaling of individual services.

Implementation of the Ambassador Pattern

To implement the Ambassador Pattern, you must create an ambassador service between the client and the microservices. The ambassador service acts as a proxy for the microservices, handling communication and translating requests between the client and the microservices. The following steps can be followed:

- Define the APIs: First, define the APIs that will be exposed to the client and implemented by the microservices. This ensures that the client and the microservices agree on data formats and communication protocols.
- Create the ambassador service: The ambassador service should be a separate service that handles communication between the client and the microservices. It should be responsible for routing requests from the client to the appropriate microservice and translating data formats, if necessary.
- Deploy the ambassador service: Once created, the ambassador service should be deployed to a separate container or server that is easily accessible to the client and the microservices.
- Route requests: The ambassador service should route requests from the client to the appropriate microservice. This can be done using various techniques, such as routing based on the request URL or using a service registry to locate the microservice.
- Translate data formats: If the microservices use different data formats, the ambassador service should translate them as necessary, either with a data transformation tool or with custom code.
- Implement fault tolerance: The ambassador service should implement fault tolerance so that it can handle failures in the microservices or the client, using techniques such as retries, circuit breakers, or fallbacks.

Advantages and Disadvantages of the Ambassador Pattern

Like any design pattern, the Ambassador Pattern has its advantages and disadvantages. Understanding these pros and cons can help you determine whether the pattern is a good fit for your microservices architecture.

Advantages:

- Reduced complexity: Communication between microservices is handled by a dedicated service, simplifying the clients and the services themselves.
- Protocol agnostic: The pattern can be used with any protocol, allowing microservices to use different protocols as needed.
- Better reliability: The ambassador can provide fault tolerance and better handling of failures.
- Easier scalability: Individual microservices are easier to scale because the communication between them is handled by a separate service.

Disadvantages:

- Added complexity: While the pattern reduces the complexity of communication between microservices, it adds an extra layer to the overall architecture.
- Performance overhead: The additional hop through the ambassador service can add latency.
- Additional management: The ambassador service requires its own management and maintenance, which can increase the overall cost and complexity of the architecture.
- Potential single point of failure: The ambassador service can become a single point of failure, with significant impact on the system's reliability.
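To make the implementation steps above concrete, here is a minimal sketch of an ambassador-style proxy written as a Spring Boot controller. It is illustrative only: the downstream service name, URL, and paths are hypothetical, and it assumes Spring WebFlux's WebClient and Reactor's retry support are on the classpath.

Kotlin
import java.time.Duration
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.PathVariable
import org.springframework.web.bind.annotation.RestController
import org.springframework.web.reactive.function.client.WebClient
import reactor.core.publisher.Mono
import reactor.util.retry.Retry

@RestController
class AmbassadorController {
    // Hypothetical internal service address; in practice this would come
    // from configuration or service discovery.
    private val downstream = WebClient.create("http://orders-service:8080")

    // Route the client's request to the appropriate microservice,
    // translating the public path to the internal API and adding
    // simple fault tolerance (retries with backoff).
    @GetMapping("/api/orders/{id}")
    fun getOrder(@PathVariable id: String): Mono<String> =
        downstream.get()
            .uri("/internal/v2/orders/{id}", id)
            .retrieve()
            .bodyToMono(String::class.java)
            .retryWhen(Retry.backoff(3, Duration.ofMillis(200)))
}

A real ambassador would typically layer circuit breaking, service discovery, and payload translation on top of this skeleton.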
Use Cases for the Ambassador Pattern

The Ambassador Pattern can be used in a variety of scenarios within a microservices architecture. Here are some common use cases:

- Service discovery: The ambassador service can act as a service registry, allowing microservices to register themselves and discover other services in a decentralized and scalable manner.
- Protocol translation: The ambassador service can translate between the different protocols used by microservices, allowing them to use different protocols as needed.
- Load balancing: The ambassador service can route requests to the least busy instance of a microservice, improving performance and reliability.
- Security: The ambassador service can act as a security gateway, enforcing authentication and authorization policies and protecting microservices from unauthorized access.
- API management: The ambassador service can act as an API gateway, providing a single entry point for clients to access microservices APIs and enforcing API policies such as rate limiting and request throttling.

Anti-Corruption Layer Pattern

Definition and Purpose of the Anti-Corruption Layer Pattern

The Anti-Corruption Layer (ACL) Pattern is a design pattern used to isolate and insulate a microservice from another service with a different domain model or communication protocol. It prevents the spread of "corruption" from one service to another, where corruption refers to the introduction of foreign concepts or terminology into a microservice's domain model.

The purpose of the Anti-Corruption Layer Pattern is to provide a translation layer between microservices that have different domain models or communication protocols. This layer allows microservices to communicate with each other without having to understand each other's domain models, reducing complexity and increasing maintainability.

The pattern is especially useful in large-scale microservices architectures where services may be developed by different teams using different technologies and domain models. By providing a translation layer, the ACL pattern allows these services to communicate without having to be modified or refactored to accommodate each other's domain models.
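Before walking through the implementation steps, a minimal sketch shows what such a translation layer can look like in code. The legacy DTO, its field names, and the mapping rules below are all hypothetical:

Kotlin
// Hypothetical legacy representation, with the cryptic naming and
// encoded types typical of an older system.
data class LegacyCustomerDto(val cust_nm: String, val cust_tp: Int)

// The microservice's own clean domain model.
enum class CustomerType { RETAIL, BUSINESS }
data class Customer(val name: String, val type: CustomerType)

// Hypothetical client interface for the legacy system.
interface LegacyCrmClient {
    fun fetchCustomer(id: String): LegacyCustomerDto
}

// The anti-corruption layer: the only place that knows both models,
// so legacy naming and conventions never leak into the domain.
class LegacyCustomerAdapter(private val legacyClient: LegacyCrmClient) {
    fun findCustomer(id: String): Customer {
        val dto = legacyClient.fetchCustomer(id)
        return Customer(
            name = dto.cust_nm.trim(),
            type = if (dto.cust_tp == 1) CustomerType.RETAIL else CustomerType.BUSINESS
        )
    }
}

The rest of the microservice depends only on Customer and LegacyCustomerAdapter; if the legacy model changes, only the adapter needs updating.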
Implementation of the Anti-Corruption Layer Pattern

Implementing the Anti-Corruption Layer Pattern involves creating an intermediary layer between two microservices that have different domain models or communication protocols. This layer acts as a translator, allowing the two microservices to communicate without having to understand each other's domain models. To implement the pattern, you can follow these steps:

- Identify the services that need to communicate with each other but have different domain models or communication protocols.
- Create an intermediary layer between the two services. This layer can be implemented as a separate microservice, a library, or a set of classes within a microservice.
- Define a translation layer within the intermediary layer that maps data between the two services. This translation layer should isolate the microservices from each other, preventing corruption from spreading between them.
- Implement the translation layer using a combination of techniques such as data mapping, data transformation, and data validation.
- Test the intermediary layer to ensure that it functions as expected and handles all possible scenarios.
- Deploy the intermediary layer and configure the microservices to use it for communication.

The ACL pattern can be implemented in various ways, and the specific implementation details may vary depending on your use case and the technologies you're using.

Advantages and Disadvantages of the Anti-Corruption Layer Pattern

Like any design pattern, the Anti-Corruption Layer Pattern has its advantages and disadvantages.

Advantages:

- Isolation: The pattern isolates microservices from each other, preventing the spread of corruption between them and maintaining the integrity of each microservice's domain model.
- Flexibility: The pattern makes it possible to integrate microservices with different domain models or communication protocols, so they can be developed using different technologies and frameworks.
- Maintainability: By isolating microservices from each other, the pattern makes it easier to maintain each microservice independently. This reduces the risk of unintended changes and makes it easier to update or replace microservices.
- Scalability: The pattern can improve the scalability of your architecture by allowing you to add or remove microservices without affecting the rest of the system.

Disadvantages:

- Complexity: The pattern adds an additional layer to your architecture, which can make it harder to understand and maintain, especially if the translation layer is not well designed.
- Performance: The pattern can introduce additional latency and overhead to your microservices communication, especially if the translation layer requires significant processing.
- Development time: Implementing the pattern requires additional development time and effort to design, implement, and test the intermediary layer.
- Additional infrastructure: The pattern may require additional infrastructure to support the intermediary layer, which can add to the cost and complexity of your architecture.

When deciding whether to use the ACL pattern, carefully weigh these factors and determine whether the benefits outweigh the costs in your specific use case.

Use Cases for the Anti-Corruption Layer Pattern

The Anti-Corruption Layer Pattern can be useful in a variety of scenarios where microservices need to communicate with each other but have different domain models or communication protocols. Here are some examples:

- Legacy systems integration: When integrating microservices with legacy systems, it's common to encounter different domain models and communication protocols. The ACL pattern can translate data between the microservices and the legacy systems, allowing them to communicate without having to understand each other's models.
- Multi-tenant applications: In multi-tenant applications, different tenants may have different data models or requirements. The ACL pattern can translate data between the microservices and the tenants, allowing them to communicate without affecting each other's data models.
- Service reusability: A microservice may need to be reused in different contexts with different domain models or communication protocols. The ACL pattern can isolate the microservice from the different contexts, allowing it to be reused without modification.
- System migration: When migrating from one system to another, it's common to encounter different domain models and communication protocols. The ACL pattern can translate data between the old and new systems, allowing them to communicate during the migration process.
- Vendor integration: When integrating with third-party services or vendors, the ACL pattern can translate data between the microservices and the vendors, allowing them to communicate without having to understand each other's models.

In general, the ACL pattern is useful in any scenario where microservices need to communicate but have different domain models or communication protocols. By using it, you can achieve greater flexibility, maintainability, and scalability in your microservices architecture.

Backends for Frontends Pattern

Definition and Purpose of the Backends for Frontends Pattern

The Backends for Frontends (BFF) Pattern is a microservices architecture pattern that involves creating multiple backends to serve different client applications, such as web or mobile applications. Each backend is tailored to a specific client application's needs, providing optimized data and functionality for that application. The purpose of the BFF pattern is to improve the performance and user experience of client applications by providing them with optimized backends.
By tailoring the backends to the needs of each client application, you can ensure that the application has access to the data and functionality it needs without having to make multiple requests or process unnecessary data. This can lead to faster load times, reduced latency, and a more responsive user experience.

Another benefit of the BFF pattern is that it allows you to maintain the separation of concerns between client applications and microservices. Instead of exposing the entire microservices architecture to each client application, you can create tailored backends that provide only the necessary data and functionality. This helps to improve security and reduce the risk of unauthorized access to sensitive data.

Implementation of the Backends for Frontends Pattern

To implement the BFF pattern, you will need to create multiple backends that are tailored to the needs of each client application. Here are the key steps:

- Identify the client applications: The first step is to identify the client applications that will use your microservices architecture. For each client application, you will create a tailored backend.
- Define the backend APIs: Once you have identified the client applications, define the APIs for each backend. The APIs should provide the data and functionality needed by the specific client application.
- Implement the backend services: Next, implement the backend services that provide that data and functionality.
- Implement the BFF layer: The BFF layer is a middleware layer that sits between the client applications and the backend services. Its role is to receive requests from a client application, process them, and forward them to the appropriate backend service, providing the necessary data and functionality without exposing the entire microservices architecture.
- Deploy and test the BFF layer: Finally, deploy the BFF layer and test it to ensure that it works as expected. Test each backend service individually, as well as the entire BFF layer in conjunction with the client application.

Advantages and Disadvantages of the Backends for Frontends Pattern

Advantages:

- Tailored backends: Each client application has access to exactly the data and functionality it needs without making multiple requests or processing unnecessary data, leading to faster load times, reduced latency, and a more responsive user experience.
- Separation of concerns: The BFF pattern maintains the separation of concerns between client applications and microservices.
  Instead of exposing the entire microservices architecture to each client application, only the necessary data and functionality are exposed, which improves security and reduces the risk of unauthorized access to sensitive data.
- Improved performance: Optimized backends can reduce load times, minimize network traffic, and improve response times.
- Scalability: The BFF layer can be scaled horizontally by adding additional instances, ensuring that client applications have access to the necessary resources even during periods of high traffic.

Disadvantages:

- Increased complexity: The pattern involves creating multiple tailored backends and a middleware layer, which can make the architecture harder to manage and maintain, particularly with multiple client applications.
- Increased development time: Creating tailored backends for each client application can be time-consuming and can require additional development resources, increasing the development time and cost of the architecture.
- Increased testing requirements: With multiple backends and a middleware layer, testing becomes more complex and time-consuming. Each backend service and the BFF layer must be tested individually and in conjunction with the client applications.
- Increased infrastructure requirements: Multiple backends and a middleware layer can require additional infrastructure resources, such as servers and databases, increasing the infrastructure requirements and cost of the architecture.

Overall, the BFF pattern can significantly improve the performance and user experience of client applications while maintaining separation of concerns and improving security. However, the additional complexity, development time, testing, and infrastructure should be carefully considered before adopting it.

Use Cases for the Backends for Frontends Pattern

The BFF pattern can be particularly useful in microservices architectures that serve multiple client applications with different requirements for data and functionality. Here are some use cases:

- Mobile applications: Mobile applications often have specific requirements, such as optimized performance and reduced data usage. Tailored backends can provide optimized access to the necessary data and functionality while maintaining security and separation of concerns, and can offload processing and optimization tasks from mobile devices.
- Web applications: Web applications may require personalized content and real-time updates. Tailored backends can provide this while maintaining security and separation of concerns, and can reduce the load on the web browser by handling processing and optimization tasks server-side.
- Third-party integrations: Microservices architectures often need to integrate with third-party services, such as payment gateways and social media platforms. Tailored backends for each integration provide access to the necessary data and functionality while reducing the risk of exposing the entire microservices architecture to third-party services.
- IoT applications: IoT applications often require real-time data processing and device management. Tailored backends can provide this while reducing the load on IoT devices, as the backend handles processing and optimization tasks.

Overall, the Backends for Frontends pattern is a useful approach for creating tailored backends for different client applications and use cases. By maintaining the separation of concerns and providing optimized access to the necessary data and functionality, it can improve the performance, scalability, and security of microservices architectures.

Other Design Patterns for Microservices

In addition to the Ambassador, Anti-Corruption Layer, and Backends for Frontends patterns, several other design patterns are commonly used in microservices architectures:

- Circuit Breaker: Prevents cascading failures in distributed systems by monitoring the availability of a service and, on detecting a failure, temporarily halting requests to that service. This allows the system to recover and prevents it from being overwhelmed with failed requests (see the sketch after this list).
- Service Registry: Keeps track of available services in a microservices architecture. Each service registers itself with a central registry, which can then be used to look up and locate services as needed, allowing more flexible and dynamic communication between services.
- API Gateway: Provides a single point of entry for client applications, routing requests to the appropriate services and handling authentication, caching, and other cross-cutting concerns.
- Saga: Manages distributed transactions by breaking a transaction into a series of smaller, local transactions managed by each individual service. If any part of the transaction fails, the Saga rolls back or compensates for the changes made by the other services.
- Event Sourcing: Captures and stores all changes made to a system as a sequence of events, which can be used to reconstruct the state of the system at any point in time. This is useful for auditing, debugging, and replaying events.

These are just a few of the many design patterns used in microservices architectures. Each has its own strengths and weaknesses, and the appropriate pattern(s) to use will depend on the specific requirements and constraints of your system.
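To illustrate the Circuit Breaker mechanics referenced above, here is a deliberately simplified, hand-rolled sketch. Production systems would normally use a library such as Resilience4j; the thresholds and names here are arbitrary.

Kotlin
import java.time.Duration
import java.time.Instant

// Opens after `failureThreshold` consecutive failures; while open,
// calls fail fast. After `openDuration`, one trial call is let through
// (the "half-open" state): success closes the circuit, failure reopens it.
class CircuitBreaker(
    private val failureThreshold: Int = 3,
    private val openDuration: Duration = Duration.ofSeconds(30)
) {
    private var consecutiveFailures = 0
    private var openedAt: Instant? = null

    fun <T> call(operation: () -> T): T {
        openedAt?.let {
            if (Duration.between(it, Instant.now()) < openDuration)
                throw IllegalStateException("Circuit open: failing fast")
            // Cooldown elapsed: fall through and allow a trial call.
        }
        return try {
            val result = operation()
            consecutiveFailures = 0 // success closes the circuit
            openedAt = null
            result
        } catch (e: Exception) {
            consecutiveFailures++
            if (consecutiveFailures >= failureThreshold) openedAt = Instant.now()
            throw e
        }
    }
}

A caller wraps remote calls, e.g. breaker.call { client.fetchOrders() }, and treats the fail-fast exception as a signal to fall back or degrade gracefully.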
Tips and Guidelines for Effectively Designing Microservices Using Design Patterns

Designing a microservices architecture using design patterns can be complex, but several tips and guidelines can help ensure a successful implementation. Here are some best practices to keep in mind:

- Choose the right pattern(s) for your system: Each design pattern has its own strengths and weaknesses, so choose the pattern(s) that best fit the specific requirements and constraints of your system, considering factors such as scalability, maintainability, and performance.
- Use a consistent set of patterns: A consistent set of design patterns throughout your system makes it easier to understand and maintain, and helps with testing and debugging, since issues in one part of the system can often be traced to patterns used elsewhere.
- Use automated testing and monitoring: Automated testing and monitoring are crucial for ensuring the reliability and performance of a microservices architecture. Test each service and the system as a whole, and use monitoring tools to track performance and detect issues in real time.
- Avoid tightly coupled services: Tightly coupled services are difficult to maintain and scale and can lead to cascading failures. Use design patterns such as the Anti-Corruption Layer and Circuit Breaker to decouple services and prevent failures from spreading.
- Design for resilience: Microservices architectures are inherently distributed and complex. Use patterns such as the Circuit Breaker and Saga to manage failures, and design services to be resilient to network latency and other issues.
- Ensure security and privacy: Make sure each service is secure and that sensitive data is protected. Use patterns such as the API Gateway and Access Token to control access to services, and ensure that each service follows secure coding practices.

By following these tips and guidelines, you can help ensure that your microservices architecture is reliable, scalable, and secure.

Common Pitfalls to Avoid

While design patterns can help address many of the challenges of designing microservices architectures, there are also several common pitfalls to avoid:

- Over-engineering: It's easy to over-engineer a microservices architecture by using too many patterns or building overly complex services, leading to reduced performance, increased maintenance costs, and a higher risk of failure.
- Underestimating testing and monitoring: Testing and monitoring are critical for reliability and performance, but they can be time-consuming and complex. Don't underestimate the effort required, and use automated tools to help manage these tasks.
- Ignoring security and privacy: Security and privacy should be a top priority but are often overlooked. Design services to be secure by default, and use patterns such as the API Gateway and Access Token to manage access to services.
- Failing to consider non-functional requirements: Non-functional requirements, such as scalability, maintainability, and performance, are just as important as functional requirements. Consider them when choosing design patterns and designing services.
- Choosing patterns based on hype: With so many design patterns available, it can be tempting to choose patterns based on hype or popularity. Choose patterns based on the specific requirements and constraints of your system rather than blindly following trends.

By being aware of these common pitfalls, you can help ensure that your microservices architecture is well designed, reliable, and secure.

Summary of Key Takeaways

In this article, we have explored several key design patterns for microservices, including the Ambassador Pattern, Anti-Corruption Layer Pattern, and Backends for Frontends Pattern. Here are the key takeaways:

- Design patterns can help address common challenges in microservices architectures, such as service discovery, communication, and scalability.
- The Ambassador Pattern can provide a proxy for service discovery and add functionality to services, while the Anti-Corruption Layer Pattern helps isolate services from external dependencies and legacy systems.
- The Backends for Frontends Pattern can provide customized APIs for different types of clients and simplify communication between services.
- While design patterns offer many advantages, they also have disadvantages, such as increased complexity and potential performance issues.
- When designing microservices architectures, consider non-functional requirements, such as scalability, maintainability, and security, and avoid common pitfalls such as over-engineering and neglecting testing and monitoring.
- Choosing the right design patterns depends on the specific requirements and constraints of your system and should not be based solely on hype or popularity.

Final Thoughts on the Importance of Design Patterns for Microservices

In today's fast-paced software development world, microservices have emerged as a popular approach to building scalable, maintainable, and flexible software systems. However, implementing a microservices architecture can be challenging, particularly as the number of services and interdependencies increases. This is where design patterns play a crucial role.

By applying design patterns, you can address common challenges in microservices architectures and make your system more robust, scalable, and maintainable. Design patterns also provide a common language for developers to communicate and share best practices. However, design patterns are not a silver bullet: they should be applied judiciously based on the specific requirements and constraints of your system, and adapted to your unique needs rather than applied blindly without consideration of the larger context.

In conclusion, design patterns can be a valuable tool for designing and implementing microservices architectures. Still, they should not be seen as a replacement for sound architectural principles and good engineering practices.
By combining design patterns with solid engineering practices, you can build microservices architectures that are robust, scalable, and maintainable, and that can evolve over time to meet changing business needs.
Using WireMock for integration testing of Spring-based (micro)services can be hugely valuable. However, it usually requires significant effort to write and maintain the stubs needed for WireMock to take a real service's place in tests. What if generating WireMock stubs was as easy as adding @GenerateWireMockStub to your controller? Like this:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

What if that meant that you then just instantiate your producer's controller stub in consumer-side tests…

Kotlin
val myControllerStub = MyControllerStub()

Stub the response…

Kotlin
myControllerStub.getData(MyServerResponse("id", "message"))

And verify calls to it with no extra effort?

Kotlin
myControllerStub.verifyGetData()

Surely, it couldn't be that easy?! Before I explain the framework that does this, let's first look at the various approaches to creating WireMock stubs.

The Standard Approach

While working on a number of projects, I observed that the writing of WireMock stubs most commonly happens on the consumer side. What I mean by this is that the project that consumes the API contains the stub setup code required to run tests. The benefit is that it's easy to implement: there is nothing else the consuming project needs to do. Just import the stubs into the WireMock server in tests, and the job is done.

However, there are also some significant downsides to this approach. What if the API changes? What if the resource mapping changes? In most cases, the tests for the service will still pass, and the project may get deployed only to fail to actually use the API — hopefully during the build's automated integration or end-to-end tests. Limited visibility of the API can lead to incomplete stub definitions as well. Another downside is duplicated maintenance effort: in the worst-case scenario, each client ends up updating the same stub definitions. Leakage of API-specific information, in particular sensitive information, from the producer to the consumer leads to the consumers being aware of API characteristics they shouldn't be — for example, the endpoint mappings or, sometimes even worse, API security keys. Maintaining stubs on the client side can also increase test setup complexity.

The Less Common Approach

A more sophisticated approach that addresses some of the above disadvantages is to make the producer of the API responsible for providing the stubs. So, how does it work when the stubs live on the producer side? In a poly-repo environment, where each microservice has its own repository, the producer generates an artifact containing the stubs and publishes it to a common repository (e.g., Nexus) so that the clients can import it and use it. In a mono-repo, the dependencies on the stubs may not require the artifacts to be published in this way, but this will depend on how your project is set up.

- The stub source code is written manually and subsequently published to a repository as a JAR file.
- The client imports the JAR as a dependency and downloads it from the repository.
- Depending on what is in the JAR, the test loads the stub directly into WireMock or instantiates the dynamic stub (see the next section for details) and uses it to set up WireMock stubs and verify the calls.

This approach improves the accuracy of the stubs and removes the duplicated-effort problem, since there is only one set of stubs maintained.
There is no issue with visibility either, since the stubs are written with full access to the API definition, which ensures better understanding. Consistency is ensured by the consumers always loading the latest version of the published stubs every time the tests are executed.

However, preparing stubs manually on the producer's side has its own shortcomings. It tends to be quite laborious and time-consuming. As with any handwritten code intended to be used by third parties, it should be tested, which adds even more effort to development and maintenance. Another problem that may occur is inconsistency: different developers may write the stubs in different ways, which may mean different ways of using the stubs. This slows development down when developers maintaining different services need to first learn how the stubs have been written — in the worst-case scenario, uniquely for each service. Also, when writing stubs on the consumer's side, all that is required are stubs for the specific parts of the API that the consumer actually uses. Providing them on the producer's side means preparing all of them, for the entire API, as soon as the API is ready — which is great for the client but not so great for the provider.

Overall, writing stubs on the provider side has several advantages over the client-side approach. For example, if the stub publishing and API testing are well integrated into the CI pipeline, it can serve as a simpler version of Consumer-Driven Contracts. But it is also important to consider the possible implications, like the requirement for the producer to keep the stubs in sync with the API.

Dynamic Stubbing

Some developers define stubs statically in the form of JSON, which is additional maintenance. Alternatively, you can create helper classes that introduce a layer of abstraction — an interface that determines what stubbing is possible. Usually, they are written in one of the higher-level languages like Java/Kotlin. Such stub helpers enable the clients to set up stubs within the constraints set out by the author, typically by parameterizing them with values of various types. Hence, I call them dynamic stubs for short.

An example of such a dynamic stub could be a function with a signature along the lines of:

Kotlin
fun get(url: String, response: String)

One could expect that such a method could be called like this:

Kotlin
get(url = "/someResource", response = "{ \"key\" = \"value\" }")

And a potential implementation using the WireMock Java library:

Kotlin
fun get(url: String, response: String) {
    stubFor(get(urlPathEqualTo(url))
        .willReturn(aResponse().withBody(response)))
}

Such dynamic stubs provide a foundation for the solution described below.

Auto-Generating Dynamic WireMock Stubs

I have been working predominantly in the Java/Kotlin Spring environment, which relies on the SpringMVC library to support HTTP endpoints. The newer versions of the library provide the @RestController annotation to mark classes as REST endpoint providers. It's these endpoints that I tend to stub most often using the above-described dynamic approach. I came to the realization that the dynamic stubs should provide only as much functionality as set out by the definition of the endpoints. For example, if a controller defines a GET endpoint with a query parameter and a resource name, the code enabling you to dynamically stub the endpoint should only allow the client to set the value of the parameter, the HTTP status code, and the body of the response.
There is no point in stubbing a POST method on that endpoint if the API doesn't provide it. With that in mind, I believed there was an opportunity to automate the generation of the dynamic stubs by analyzing the definitions of the endpoints described in the controllers.

Obviously, nothing is ever easy. A proof of concept showed how little I knew about the build tool that I have been using for years (Gradle), the SpringMVC library, and Java annotation processing. Nevertheless, in spite of the steep learning curve, I managed to achieve the following:

- parse the smallest meaningful subset of the relevant annotations (e.g., a single basic resource)
- design and build a data model of the dynamic stubs
- generate the source code of the dynamic stubs (in Java)
- make Gradle build an artifact containing only the generated code and publish it (I also tested the published artifact by importing it into another project)

In the end, here is what was achieved:

- The annotation processor iterates through all relevant annotations and generates the dynamic stub source code.
- Gradle compiles and packages the generated source into a JAR file and publishes it to an artifact repository (e.g., Nexus).
- The client imports the JAR as a dependency and downloads it from the repository.
- The test instantiates the generated stubs and uses them to set up WireMock stubs and verify the calls made to WireMock.

With a mono-repo, the situation is slightly simpler, since there is no need to package the generated code and upload it to a repository; the compiled stubs become available to the depending subprojects immediately. These end-to-end scenarios proved that it could work.

The Final Product

I developed a library with a custom annotation, @GenerateWireMockStub, that can be applied to a class annotated with @RestController. The annotation processor included in the library generates the Java code for dynamic stub creation in tests. The stubs can then be published to a repository or, in the case of a mono-repo, used directly by the project(s). For example, adding the following dependencies (Kotlin project):

Groovy
kapt 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'io.github.lsd-consulting:spring-wiremock-stub-generator:2.0.3'
compileOnly 'com.github.tomakehurst:wiremock:2.27.2'

and annotating a controller having a basic GET mapping with @GenerateWireMockStub:

Kotlin
@GenerateWireMockStub
@RestController
class MyController {
    @GetMapping("/resource")
    fun getData() = MyServerResponse(id = "someId", message = "message")
}

will result in generating a stub class with the following methods:

Java
public class MyControllerStub {
    public void getData(MyServerResponse response) { ... }
    public void getData(int httpStatus, String errorResponse) { ... }
    public void verifyGetData() { ... }
    public void verifyGetData(final int times) { ... }
    public void verifyGetDataNoInteraction() { ... }
}

The first two methods set up stubs in WireMock, whereas the other methods verify the calls depending on the expected number of calls — either once, the given number of times, or no interaction at all. That stub class can be used in a test like this:

Kotlin
// Create the stub for the producer's controller
val myControllerStub = MyControllerStub()
// Stub the controller method with the response
myControllerStub.getData(MyServerResponse("id", "message"))

callConsumerThatTriggersCallToProducer()

myControllerStub.verifyGetData()

The framework now supports most HTTP methods, with a variety of ways to verify interactions.
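The generated stubs assume a WireMock server is running and that WireMock's static DSL points at it. As a rough sketch of that wiring (assuming JUnit 5 and WireMock 2.x; the port and lifecycle choices here are illustrative, not prescribed by the library):

Kotlin
import com.github.tomakehurst.wiremock.WireMockServer
import com.github.tomakehurst.wiremock.client.WireMock
import com.github.tomakehurst.wiremock.core.WireMockConfiguration.options
import org.junit.jupiter.api.AfterAll
import org.junit.jupiter.api.BeforeAll

class ConsumerContractTest {
    companion object {
        private val wireMockServer = WireMockServer(options().port(8080))

        @JvmStatic
        @BeforeAll
        fun startServer() {
            wireMockServer.start()
            // Route the static stubFor/verify DSL (which the generated
            // stubs build on) to this server instance.
            WireMock.configureFor("localhost", 8080)
        }

        @JvmStatic
        @AfterAll
        fun stopServer() = wireMockServer.stop()
    }

    // Tests would instantiate the generated stubs here, as shown above.
}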
@GenerateWireMockStub makes maintaining these dynamic stubs effortless. It increases accuracy and consistency, making maintenance easier and enabling your build to catch breaking API changes before your code hits production. More details can be found on the project's website. Full examples of how the library can be used in a multi-project setup and in a mono-repo:

- spring-wiremock-stub-generator-example
- spring-wiremock-stub-generator-monorepo-example

Limitations

The library's limitations mostly come from WireMock's limitations. More specifically, multi-value and optional request parameters are not fully supported by WireMock, so the library uses some workarounds to handle them. For more details, please check out the project's README.

Note

The client must have access to the API classes used by the controller. Usually, this is achieved by exposing them in separate API modules that are published for consumers to use.

Acknowledgments

I would like to express my sincere gratitude to the reviewers who provided invaluable feedback and suggestions to improve the quality of this article and the library. A special thank you to Antony Marcano for his feedback, repeated reviews, and direct contributions to this article, which were crucial in ensuring that it provides clear and concise documentation for the spring-wiremock-stub-generator library. I would like to extend my heartfelt thanks to Nick McDowall and Nauman Leghari for their time, effort, and expertise in reviewing the article and providing insightful feedback to improve its documentation and readability. Finally, I would also like to thank Ollie Kennedy for his careful review of the initial pull request and his suggestions for improving the codebase.
Building and deploying microservices with Spring Boot and Docker has become a popular approach for developing scalable and resilient applications. Microservices architecture involves breaking down an application into smaller, individual services that can be developed and deployed independently. This approach allows for faster development, easier maintenance, and better scalability.

Spring Boot is a popular Java framework for building microservices. It provides a simple, efficient way to create standalone, production-grade Spring-based applications. Docker, on the other hand, is a containerization platform that allows developers to package their applications and dependencies into lightweight containers that can run on any platform. This article will discuss how to build and deploy microservices with Spring Boot and Docker.

Setting Up the Environment

Before we can start building our microservices, we need to set up our development environment by installing the following tools:

- Java Development Kit (JDK)
- Spring Boot
- Docker

Once we have installed these tools, we can start building our microservices.

Building Microservices With Spring Boot

Spring Boot provides a variety of tools and features that make it easy to build microservices. We can use Spring Initializr to generate a new Spring Boot project, selecting the dependencies we want to include, such as Spring Web, Spring Data JPA, and Spring Cloud Config. Once we have generated our project, we can start building our microservice: create a RESTful API using Spring Web, connect to a database using Spring Data JPA, and manage our application configuration with Spring Cloud Config.

Containerizing Microservices With Docker

Once we have built our microservices, we can containerize them using Docker. Docker provides a simple way to package our application and its dependencies into a lightweight container that can be easily deployed. To containerize our microservice, we need to create a Dockerfile. The Dockerfile contains instructions on how to build our container image: we specify the base image, copy our application files, and define the commands to run our application. Once we have created our Dockerfile, we can build our container image using the docker build command and then run our container using the docker run command.

Deploying Microservices With Docker Compose

Deploying microservices can be a complex task, especially if we have multiple services that need to be deployed together. Docker Compose provides a simple way to define and run multi-container Docker applications. We can create a docker-compose.yml file that defines our microservices and their dependencies, specifying the container images to use, the ports to expose, and any environment variables to set. Once we have defined our Docker Compose file, we can run our application using the docker-compose up command. Docker Compose will start our containers and set up any networking between them (a minimal example follows at the end of this article).

Conclusion

Building and deploying microservices with Spring Boot and Docker provides a powerful and flexible way to create scalable and resilient applications. Spring Boot provides a simple and efficient way to build microservices, while Docker provides a lightweight and portable way to package and deploy them. By following the steps outlined in this article, you can start building and deploying microservices with Spring Boot and Docker today.
With the right tools and approach, you can create applications that are more scalable, more reliable, and easier to maintain.
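For illustration, here is a minimal sketch of the docker-compose.yml described above. The service names, image tags, ports, and environment variables are hypothetical:

YAML
version: "3.8"
services:
  order-service:               # a hypothetical Spring Boot microservice
    image: example/order-service:1.0.0
    ports:
      - "8080:8080"            # expose the service's HTTP port
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/orders
    depends_on:
      - db                     # start the database before the service
  db:
    image: postgres:15
    environment:
      POSTGRES_DB: orders
      POSTGRES_PASSWORD: example

Running docker-compose up from the directory containing this file starts both containers and places them on a shared network, so order-service can reach the database at the hostname db.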
Security was mostly perimeter-based while building monolithic applications: securing the network perimeter and controlling access with firewalls. With the advent of microservices architecture, static and network-based perimeters are no longer effective. Nowadays, applications are deployed and managed by container orchestration systems like Kubernetes, which are spread across the cloud. Zero trust network (ZTN) is a different approach to securing data across cloud-based networks. In this article, we will explore how Istio, with the ZTN philosophy, can help secure microservices.

What Is Zero Trust Network (ZTN)?

"Zero trust network" is a security paradigm that does not grant implicit trust to users, devices, and services, and continuously verifies their identity and authorization to access resources. In a microservices architecture, if a service (server) receives a request from another service (client), the server should not assume the trustworthiness of the client. The server should continuously authenticate and authorize the client first, and only then allow the communication to happen securely (refer to fig. A below).

Fig. A: A Zero Trust Network (ZTN) environment where continuous authentication and authorization are enforced between microservices across multicloud.

Why Is a Zero Trust Network Environment Inevitable for Microservices?

The importance of securing the network and data in a distributed network of services cannot be stressed enough. Below are a few challenges that point to why a ZTN environment is necessary for microservices:

- Lack of ownership of the network: With microservices, applications moved from perimeter-based deployments to multiple clouds and data centers. As a result, the network has also become distributed, giving intruders a larger attack surface.
- Increased network and security breaches: Data and security breaches among cloud providers have become increasingly common since applications moved to public clouds. In 2022, nearly half of all data breaches occurred in the cloud.
- Managing multicluster network policies has become tedious: Organizations deploy hundreds of services across multiple Kubernetes clusters and environments. Network policies are local to clusters and do not usually work across multiple clusters; they need a lot of customization and development to define and implement security and routing policies for multicluster and multicloud traffic. Thus, configuring and managing consistent network policies and firewall rules for each service becomes an everlasting and frustrating process.
- Service-to-service connections are not inherently secure in K8s: By default, one service can talk to another service inside a cluster. So, if a service pod is hacked, an attacker can quickly compromise other services in that cluster (also known as a vector attack). Kubernetes does not provide out-of-the-box encryption or authentication for communication between pods or services. Although K8s offers additional security features like enabling mTLS, it is a complex process that has to be implemented manually for each service.
- Lack of visibility into the network traffic: If there is a security breach, the Ops and SRE teams should be able to react to the incident quickly. Poor real-time visibility into network traffic across environments becomes a bottleneck for SREs in diagnosing issues in time. This impedes incident response, leading to a high mean time to recovery (MTTR) and catastrophic security risks.

In theory, a zero trust network (ZTN) philosophy solves all the above challenges.
In practice, Istio service mesh can help Ops and SREs implement ZTN and secure microservices across the cloud.

How Istio Service Mesh Enables ZTN for Microservices

Istio is a popular open-source service mesh implementation that provides a way to manage and secure communication between microservices. Istio abstracts the network into a dedicated layer of infrastructure and provides visibility and control over all communication between microservices. Istio works by injecting an Envoy proxy (a small sidecar daemon) alongside each service in the mesh (refer to fig. B). Envoy is an L4 and L7 proxy that helps ensure secure connections and network connectivity among the microservices. The Istio control plane allows users to manage all these Envoy proxies, for example by defining and cascading security and network policies.

Fig. B: Istio using Envoy proxy to secure connections between services across clusters and clouds.

Istio simplifies enforcing a ZTN environment for microservices across the cloud. Inspired by Gartner Zero Trust Network Access, I have outlined four pillars of zero trust networking that can be implemented using Istio.

Four pillars of zero trust network enforced by Istio service mesh.

1. Enforcing Authentication With Istio

Security teams would otherwise be required to create authentication logic for each service to verify the identity of the users (humans or machines) that send requests. This process is necessary to ensure the trustworthiness of the user. In Istio, it can be done by configuring peer-to-peer and request authentication policies using the PeerAuthentication and RequestAuthentication custom resources (CRDs).

Peer authentication policies authenticate service-to-service communication using mTLS: certificates are issued to both the client and the server so they can verify each other's identity. Below is a sample PeerAuthentication resource that enforces strict mTLS authentication for all workloads in the foo namespace:

YAML
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: foo
spec:
  mtls:
    mode: STRICT

Request authentication policies let the server verify whether the client is allowed to make the request at all. Here, the client attaches a JWT (JSON Web Token) to the request for server-side authentication. Below is a sample RequestAuthentication policy created in the foo namespace. It specifies that incoming requests to the my-app service must contain a JWT issued, and verified using public keys, by the entities mentioned under jwtRules:

YAML
apiVersion: security.istio.io/v1beta1
kind: RequestAuthentication
metadata:
  name: jwt-example
  namespace: foo
spec:
  selector:
    matchLabels:
      app: my-app
  jwtRules:
  - issuer: "https://issuer.example.com"
    jwksUri: "https://issuer.example.com/keys"

Both authentication policies are stored in Istio configuration storage.

2. Implementing Authorization With Istio

Authorization is verifying whether the authenticated user is allowed to access a server (access control) and perform a specific action. Continuous authorization prevents malicious users from accessing services, ensuring their safety and integrity. AuthorizationPolicy is another Istio CRD that provides access control for services deployed in the mesh. It helps in creating policies to deny, allow, and also perform custom actions against an inbound request. Istio allows setting multiple policies with different actions for granular access control to the workloads.
The following AuthorizationPolicy denies POST requests from workloads in the dev namespace to workloads in the foo namespace.

YAML
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: httpbin
  namespace: foo
spec:
  action: DENY
  rules:
  - from:
    - source:
        namespaces: ["dev"]
    to:
    - operation:
        methods: ["POST"]

3. Multicluster and Multicloud Visibility With Istio Another important pillar of ZTN is network and service visibility. SRE and Ops teams require real-time monitoring of the traffic flowing between microservices across cloud and cluster boundaries. Deep visibility into the network helps SREs quickly identify the root cause of anomalies, develop a resolution, and restore the applications. Istio provides visibility into traffic flow and application health by collecting the following telemetry data from the data plane and the control plane. Logs: Istio collects all kinds of logs, such as service logs, API logs, access logs, and gateway logs, which help in understanding the behavior of an application. Logs also enable faster troubleshooting and diagnosis of network incidents. Metrics: Metrics describe the real-time performance of services, which helps in identifying anomalies and fine-tuning services at runtime. Istio provides many metrics beyond the four golden signals: latency, traffic, errors, and saturation. Distributed tracing: Tracing and visualizing requests as they flow through multiple services in a mesh. Distributed tracing helps in understanding the interactions between microservices and provides a holistic view of service-to-service communication in the mesh. 4. Network Auditing With Istio Auditing is the analysis of a process's logs over a period of time with the goal of optimizing the overall process. Audit logs give auditors valuable insights into network activity, including details on each access, the methods used, traffic patterns, etc. This information is useful for understanding the communication in and out of the data center and public clouds. Istio records who accessed (or requested) what resources and when, which is important for auditors investigating faulty situations. Auditors need this information to suggest steps that improve the overall performance of the network and the security of cloud-native applications. Deploy Istio for a Better Security Posture The challenges around securing networks and data in a microservices architecture are only going to grow more complex. Attackers are always ahead in finding vulnerabilities and exploiting them before anyone on the SRE team has time to notice. Implementing a zero trust network will provide visibility and secure Kubernetes clusters from internal and external threats. Istio service mesh can lead this endeavor from the front, with its ability to implement zero trust out of the box.
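If you want to see the visibility pillar in action quickly, the sample observability addons bundled with the Istio release are a convenient starting point. A rough sketch, assuming you are in the root of the extracted Istio release directory (the sample addons are not production-hardened):

Shell
# install the bundled Prometheus and Kiali samples
kubectl apply -f samples/addons/prometheus.yaml
kubectl apply -f samples/addons/kiali.yaml
# open the Kiali dashboard to visualize service-to-service traffic in the mesh
istioctl dashboard kiali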
Why Do You Need Istio Ambient Mesh? It is a given that Istio is somewhat resource-intensive due to its sidecar proxies. And although there are a lot of compelling security features that can be used, the whole of Istio (including the sidecars) has to be deployed from day one. Recently, the Istio community has reimagined a new data plane, ambient mode, which will be far less resource-intensive. Istio ambient mesh is a modified, sidecar-less data plane developed for enterprises that want to deploy mTLS and other security features first and adopt advanced networking later. Ambient mesh has two layers: The L4 secure overlay layer, or Ztunnel, for implementing mTLS for communication between services (nodes). Note that Ztunnel is a Rust-based proxy. The L7 processing layer, or waypoint proxy, for advanced L7 security and networking processing, thus unlocking the full range of Istio capabilities. In this blog, we will explain how to implement Istio ambient mesh (with L4 and L7 authorization policies) in Google Kubernetes Engine and/or Azure AKS. Prerequisites Please ensure you have the following software and infrastructure on your machine (I've used the following): Kubernetes 1.23 or later (version 1.25.6 was used for this implementation) Istio 1.18.0-alpha.0 Note: The current version of Istio ambient mesh (1.18.0) is in alpha; a few features might not work, and it may not be 100% stable for production. At the time of writing, ambient mesh does not work with the Calico CNI, so make the corresponding change in Google Kubernetes Engine and Azure Kubernetes Service (refer to the images below). Steps To Implement Istio Ambient Mesh We will implement Istio ambient mesh in five major steps: Installing Istio ambient mesh Creating and configuring services in the Kubernetes cluster Implementing Istio ambient mode and verifying Ztunnel and HBONE Enabling L4 authorization for services using ambient mesh Enabling L7 authorization for services using ambient mesh Steps for Installing Istio Ambient Mesh Step #1: Download and Extract Istio Ambient Mesh From the Git Repo You can go to the Git repo and download and extract the Istio ambient mesh setup on your local system. (I've used the Windows version.) Add the <extracted path of Istio installation package>/bin path to the PATH environment variable. Step #2: Install Istio Ambient Mesh Use the following command to install Istio ambient mesh in your cluster.

Shell
istioctl install --set profile=ambient

Istio will install the following components: Istio core, Istiod, Istio CNI, ingress gateways, and Ztunnel. Step #3: Check if Ztunnel and Istio CNI Are Installed at the Node Level After installation, a new namespace named istio-system is created. You can check the pods by running the command below.

Shell
kubectl get pods -n istio-system -o wide

Since I have created two nodes, there are two Ztunnel pods (a DaemonSet) running here. Similarly, you can use the following command to verify that Istio CNI is installed at the node level.

Shell
kubectl get pods -n kube-system

Note: istio-cni is deployed in the istio-system namespace in the case of AKS. Steps To Create and Configure Services in the Kubernetes Cluster Step #1: Create a Namespace Named ambient for the Deployments

Shell
kubectl create namespace ambient

Step #2: Create Two Services in Separate Nodes I have used the following YAML files for creating the deployment, service, and service account. You can refer to the files in the GitHub repo.
Code for demo-deployment-1.yaml:

YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echoserver-depl-1
  namespace: ambient
  labels:
    app: echoserver-depl-1
spec:
  replicas: 2
  selector:
    matchLabels:
      app: echoserver-app-1
  template:
    metadata:
      labels:
        app: echoserver-app-1
    spec:
      serviceAccountName: echo-service-account-1
      containers:
      - name: echoserver-app-1
        image: imeshai/echoserver
        ports:
        - containerPort: 80

Code for demo-service-1.yaml:

YAML
apiVersion: v1
kind: Service
metadata:
  name: echoserver-service-1
  namespace: ambient
spec:
  selector:
    app: echoserver-app-1
  ports:
  - port: 80
    targetPort: 80

Code for demo-service-account-1.yaml:

YAML
apiVersion: v1
kind: ServiceAccount
metadata:
  name: echo-service-account-1
  namespace: ambient
  labels:
    account: echo-one

Similarly, you can create deployment, service, and service-account files for creating the 2nd service. Deploy the two services in the Kubernetes cluster by using the following commands:

Shell
kubectl apply -f demo-service-account-1.yaml
kubectl apply -f demo-deployment-1.yaml
kubectl apply -f demo-service-1.yaml

You can verify that your pods and svc are running by executing the following commands:

Shell
kubectl get pods -n <<namespace>>
kubectl get svc -n <<namespace>>

Note: Since I have selected two replicas for each service, Kubernetes automatically created the pods on each node to balance the load. However, you can also explicitly specify in the deployment YAML that pods should be created on two different nodes. Step #3: Create Istio Gateway and Virtual Services To Allow External Traffic to the Newly Created Services Once the two services are created, we can create an ingress gateway to allow internet traffic to the newly created services. (The names of my services are echoserver-service-1 and echoserver-service-2, respectively.) I have created a demo-gateway.yaml file (code below) to link to the Istio ingress gateway.

YAML
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: echoserver-gateway
  namespace: ambient
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"

Code for the Istio VirtualService YAML file, which routes traffic to service-1 and service-2 when the URL matches /echo1 and /echo2, respectively:

YAML
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: echoserver-virtual-service
  namespace: ambient
spec:
  hosts:
  - "*"
  gateways:
  - echoserver-gateway
  http:
  - match:
    - uri:
        exact: /echo1
    route:
    - destination:
        host: echoserver-service-1
        port:
          number: 80
  - match:
    - uri:
        exact: /echo2
    route:
    - destination:
        host: echoserver-service-2
        port:
          number: 80

Apply the YAML files in the Kubernetes cluster to create the Istio ingress gateway and virtual service objects. You can check the status of the Istio ingress gateway resource in the istio-system namespace by running:

Shell
kubectl get service -n istio-system

Step #4: Access the Services From the Browser You can use the external IP address of the Istio gateway to access the services. By default, the communication will not go through the Ztunnel of the Istio ambient mesh. So we have to make it active by applying certain commands. Steps To Verify Communication Through Ztunnel (mTLS) in Ambient Mesh Step #0 (Optional): Log the Ztunnel and Istio CNI This is an optional step you can use to observe the logs of Ztunnel and Istio CNI while transitioning service communication to Istio ambient mode.
You can apply these commands:

Shell
kubectl logs -f <<istio-cni-pod-name>> -n kube-system
kubectl logs -f <<ztunnel-pod-name>> -n istio-system

Step #1: Apply Ambient Mesh to the Namespace You need to apply Istio ambient mesh to the namespace by using the following command:

Shell
kubectl label namespace ambient istio.io/dataplane-mode=ambient

Both services are now part of the Istio ambient service mesh. You can verify this by accessing them again from the browser. Step #2: Verify the Communication of External Traffic Through Ztunnel If you open the browser and try to access the services (echoserver-service-1 and echoserver-service-2 in my case), you will see the communication is already happening through the Ztunnel. Step #3: Verify the HBONE of Service-To-Service Communication You can also verify that your service-to-service communication is secured by letting one pod communicate with another (and then checking the logs of the Ztunnel pods). Log into one of the pods of a service (say, echoserver-service-1) and use bash to send requests to another service (say, echoserver-service-2). You can use the following command to open a bash shell in one pod:

Shell
kubectl exec -it <<pod name of service-1>> -n <<namespace>> -- bash

Use curl to send the request to another service.

Shell
curl <<service-2>>

You will see in the logs of one of the Ztunnel pods that the communication is already happening over HBONE (a secure overlay tunnel for communication between two pods on different nodes). Step #4: Verify mTLS-Based Service-To-Service Communication Connect over SSH to one of the nodes to dump TCP packets and analyze the traffic; this tells us whether the communication between two nodes goes through the secure channel or not. Execute the following command in the node's SSH session (port 15008 is used for HBONE communication in Istio ambient mesh); the capture is written to node1.pcap:

Shell
sudo tcpdump -nAi ens4 port 9080 or port 15008 -w node1.pcap

You can curl a service from one pod and check the node capture (download the node1.pcap file); when you open the file in a network analyzer, it will show something like the below: You will observe that all the application data exchanged between the two nodes is secured using mTLS encryption. Steps To Create L4 Authorization Policies in Istio Ambient Mesh Step #1: Create an Authorization Policy YAML in Istio Create a demo-authorization-L4.yaml file with a policy that allows traffic to the service-1 pods only from the Istio ingress gateway and not from any other service. We have specified this in the rules below.

YAML
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: echoserver-policy
  namespace: ambient
spec:
  selector:
    matchLabels:
      app: echoserver-app-1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]

Use the command to apply the YAML file.

Shell
kubectl apply -f demo-authorization-L4.yaml

Note: If you try to reach service-1 (echoserver-service-1) from the browser, you can access it without any problem. But if you curl from one of the pods of service-2, it will fail (refer to the screenshot). Steps To Create L7 Authorization Policies Using Waypoint Proxy For L7 authorization policies, we have to create a waypoint proxy. The waypoint proxy can be configured using the K8s Gateway API.
Note: By default, the Gateway API CRDs might not be available on most cloud providers, so we need to install them. Step #1: Download Kubernetes Gateway API CRDs Use the command to download the Gateway API CRDs using Kustomize.

Shell
kubectl kustomize "github.com/kubernetes-sigs/gateway-api/crd?ref=v0.6.1" > gateway-api.yaml

Step #2: Apply Kubernetes Gateway API Use the command to apply the Gateway API CRDs.

Shell
kubectl apply -f gateway-api.yaml

Step #3: Create a Waypoint Proxy of the Kubernetes Gateway API Kind We can create a waypoint proxy of the Gateway API kind with a YAML file. You can use demo-waypoint-1.yaml. We have basically created a waypoint proxy for service-1 (echoserver-service-1).

YAML
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: echoserver-gtw-1
  namespace: ambient
  annotations:
    istio.io/for-service-account: echo-service-account-1
spec:
  gatewayClassName: istio-waypoint
  listeners:
  - allowedRoutes:
      namespaces:
        from: Same
    name: imesh.ai
    port: 15008
    protocol: ALL

And apply this to the K8s cluster.

Shell
kubectl apply -f demo-waypoint-1.yaml

Step #4: Create an L7 Authorization Policy To Declare the Waypoint Proxy for Traffic Create an L7 authorization policy to define the rules for when the waypoint proxy (echoserver-gtw-1) applies to traffic. You can use the following demo-authorization-L7.yaml file to write the policy.

YAML
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: echoserver-policy
  namespace: ambient
spec:
  selector:
    matchLabels:
      istio.io/gateway-name: echoserver-gtw-1
  action: ALLOW
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
    to:
    - operation:
        methods: ["GET"]

Use the command to apply the YAML file.

Shell
kubectl apply -f demo-authorization-L7.yaml

Step #5: Verify the L7 Authorization Policy As we have created a waypoint proxy for service-1 and applied a policy that allows GET traffic from the Istio ingress gateway, you will see that you can still access service-1 (echoserver-service-1) from the browser. However, if you try to access service-1 from one of the pods of service-2 (echoserver-service-2), the waypoint proxy will not allow the traffic, as per the policy (refer to the screenshot below). Ambient Mode: The Future of Istio Service Mesh I feel that ambient mesh will drive Istio adoption to new heights in the coming years. With its ability to simplify application onboarding to Istio and reduce infrastructure costs, the ambient data plane mode is poised to become the future of Istio service mesh.
Multi-tiered architecture is an architectural pattern that divides an application into separate logical layers or tiers, each with a distinct responsibility and function. The layers typically include a presentation layer (or user interface), an application layer, and a data storage layer. The presentation layer is responsible for presenting data to the user and receiving input from the user. This layer often includes web or mobile interfaces, and it communicates with the application layer to retrieve or submit data. The application layer encapsulates the business logic and processes the user requests. This layer often includes middleware and application servers, which provide the necessary infrastructure to manage and process user requests. The data storage layer is responsible for storing and retrieving the data used by the application. This layer often includes databases, file systems, and other storage technologies. By separating the application into these logical layers, multi-tiered architecture provides several benefits, including: Scalability: Each layer can be scaled independently, allowing the application to handle more users and data as needed. Maintainability: Changes to one layer do not affect the other layers, making it easier to modify or update the application. Security: By using firewalls and other security measures to separate the layers, multi-tiered architecture can provide an additional layer of security for the application. Multi-tiered architecture is a popular and effective way to design complex applications that are scalable, maintainable, and secure. Developing firewalls and multi-tiered architectures using WebSphere involves the following steps. Identifying the system requirements: This step involves identifying the requirements of the system that you are building, including the number of servers needed, the type of database required, and the number of users that will be accessing the system. Designing the architecture: Based on the requirements identified in the first step, design the multi-tiered architecture of the system, including the number of tiers, the functions of each tier, and the communication protocol between the tiers. Installing WebSphere: Install the WebSphere application server on the server machines that you will be using for the system. WebSphere is a powerful application server that supports multiple programming languages and provides several features that are required for building complex systems. Configuring the firewall: Configure the firewall to ensure that only authorized traffic is allowed to enter the system. This involves creating rules that allow traffic based on the source, destination, and port number, as sketched below. Configuring the multi-tiered architecture: Configure the different tiers of the architecture to ensure that they can communicate with each other. This involves configuring the application server, the web server, and the database server. Testing the system: Test the system to ensure that it is functioning as expected. This involves testing the different tiers of the architecture to ensure that they are communicating properly and that the firewall is working as intended. Monitoring and maintenance: Monitor the system to ensure that it is running smoothly and perform regular maintenance tasks to keep it running efficiently. Multi-tiered architecture using WebSphere is a powerful and flexible approach to building enterprise applications.
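To make the firewall-configuration step concrete, here is an illustrative iptables sketch; the tier subnets and the database port are assumptions, while 9080 is WebSphere's default HTTP transport port:

Shell
# allow the web tier (10.0.1.0/24, assumed) to reach the application tier
# (10.0.2.0/24, assumed) on WebSphere's default HTTP transport port 9080
iptables -A FORWARD -s 10.0.1.0/24 -d 10.0.2.0/24 -p tcp --dport 9080 -j ACCEPT
# allow the application tier to reach the database tier (10.0.3.0/24 and port 5432, both assumed)
iptables -A FORWARD -s 10.0.2.0/24 -d 10.0.3.0/24 -p tcp --dport 5432 -j ACCEPT
# drop all other traffic crossing tiers
iptables -A FORWARD -j DROP

In practice, the same rules would be expressed in whatever firewall product fronts each tier; the point is that each tier accepts only the traffic its upstream tier is expected to send.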
However, there are some limitations and challenges that you may encounter when using this approach: Complexity: Multi-tiered architecture using WebSphere can be complex to design, develop, deploy, and maintain, especially for large and complex applications. It requires expertise in several technologies and platforms, including Java, web servers, application servers, databases, and security protocols. Performance: Multi-tiered architecture using WebSphere can be slower than other architectures, especially if there is a high volume of data or transactions between the tiers. This is due to the additional overhead and latency introduced by the communication between the tiers. Scalability: Although multi-tiered architecture using WebSphere provides flexibility in scaling and managing the application, it can be challenging to scale the application horizontally across multiple servers or clusters. This is due to the complexity of managing the application state, data consistency, and load balancing. Cost: Multi-tiered architecture using WebSphere can be expensive to develop, deploy, and maintain, especially for small or medium-sized businesses. It requires licenses, hardware, and expertise, which can be a significant investment. Vendor lock-in: Multi-tiered architecture using WebSphere can lead to vendor lock-in, as it requires expertise in IBM technologies and platforms. It can be challenging to switch to another platform or vendor without significant investments in re-architecture and retraining. Multi-tiered architecture using WebSphere provides a powerful and flexible approach to building enterprise applications.
In this blog, you will learn more about JHipster and how it can help you with developing modern web applications. Enjoy! 1. Introduction JHipster is a development platform that helps you quickly set up an application. It goes beyond setting up a project structure: JHipster will generate code, database tables, CRUD (Create, Read, Update, Delete) webpages, unit and integration tests, etc. You are not bound to a specific framework or technology. Many options are available for server-side technologies (Spring Boot, Micronaut, Quarkus, etc.), databases (MySQL, MariaDB, PostgreSQL, MongoDB, etc.), build tooling (Maven, Gradle, Jenkins, GitLab CI, etc.), client-side technologies (Angular, React, Vue, etc.) and deployment (Docker, Kubernetes, AWS, GCP, etc.). The complete list can be found on the JHipster website. The list is huge and impressive. JHipster has existed since 2013, so it is definitely not a new kid on the block. Enough for the introduction; let's get started and see how it works and what it looks like! The sources used in this blog are available on GitHub. 2. Prerequisites It is very useful to have basic knowledge of how applications are built with Spring Boot, Vue.js, PostgreSQL, Liquibase, and Maven. If you do not have knowledge of these technologies, JHipster can be a stepping stone for you to see what such an application looks like. The JHipster installation steps below follow the instructions provided on the JHipster website. It is good to check the official instructions in addition to the ones mentioned below. Installation of JHipster requires Java (Java 17 is used in this blog; you can use SDKMAN to install Java), Git, and Node.js (Node.js v18.15.0 is used in this blog). See the respective websites for installation instructions. Install JHipster:

Shell
$ sudo npm install -g generator-jhipster

Create a new directory myjhipsterplanet and navigate into the directory:

Shell
$ mkdir myjhipsterplanet
$ cd myjhipsterplanet

3. Create JHipster Application Creating a JHipster application requires answering some questions after running the jhipster command. However, at the time of writing, executing the jhipster command resulted in an error.

Shell
$ jhipster
INFO! Using bundled JHipster
node:internal/modules/cjs/loader:571
      throw e;
      ^
Error [ERR_PACKAGE_PATH_NOT_EXPORTED]: Package subpath './lib/util/namespace' is not defined by "exports" in /usr/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/package.json
    at new NodeError (node:internal/errors:399:5)
    at exportsNotFound (node:internal/modules/esm/resolve:361:10)
    at packageExportsResolve (node:internal/modules/esm/resolve:697:9)
    at resolveExports (node:internal/modules/cjs/loader:565:36)
    at Module._findPath (node:internal/modules/cjs/loader:634:31)
    at Module._resolveFilename (node:internal/modules/cjs/loader:1061:27)
    at Module._load (node:internal/modules/cjs/loader:920:27)
    at Module.require (node:internal/modules/cjs/loader:1141:19)
    at require (node:internal/modules/cjs/helpers:110:18)
    at Object.<anonymous> (/usr/lib/node_modules/generator-jhipster/utils/blueprint.js:19:25) {
  code: 'ERR_PACKAGE_PATH_NOT_EXPORTED'
}

An issue exists for this problem, and a workaround is provided. Open the package.json file with a text editor.
Shell
$ sudo vi /usr/lib/node_modules/generator-jhipster/node_modules/yeoman-environment/package.json

Add the following line to the exports section:

JSON
"exports": {
  ...,
  "./lib/util/namespace": "./lib/util/namespace.js"
}

Executing the jhipster command will be successful now. A list of questions will be asked in order for you to choose the technologies you want to use in the application. The answers used for this blog are provided after each question.

May JHipster anonymously report usage statistics to improve the tool over time? n
Which type of application would you like to create? Monolithic application
What is the base name of your application? (myjhipsterplanet)
Do you want to make it reactive with Spring WebFlux? n
What is your default Java package name? com.mydeveloperplanet.myjhipsterplanet
Which type of authentication would you like to use? HTTP Session Authentication (stateful, default Spring Security mechanism)
Which type of database would you like to use? SQL (H2, PostgreSQL, MySQL, MariaDB, Oracle, MSSQL)
Which production database would you like to use? PostgreSQL
Which development database would you like to use? H2 with disk-based persistence
Which cache do you want to use? (Spring cache abstraction) No cache – Warning, when using an SQL database, this will disable the Hibernate 2nd level cache!
Would you like to use Maven or Gradle for building the backend? Maven
Do you want to use the JHipster Registry to configure, monitor and scale your application? Yes
Which other technologies would you like to use? API first development using OpenAPI-generator
Which Framework would you like to use for the client? Vue
Do you want to generate the admin UI? Yes
Would you like to use a Bootswatch theme (https://bootswatch.com/)? Default JHipster
Would you like to enable internationalization support? No
Please choose the native language of the application English
Besides JUnit and Jest, which testing frameworks would you like to use? Do not choose any
Would you like to install other generators from the JHipster Marketplace? No

In the end, the application is generated after a few minutes, and the changes are committed in Git.

Shell
$ git log
commit f03bf340c15315ffbeb59c56eec2b4da777f4e53
Author: mydeveloperplanet <gunter@mydeveloperplanet.com>
Date:   Sun Mar 19 13:53:18 2023 +0100

    Initial version of myjhipsterplanet generated by generator-jhipster@7.9.3

Build the project.

Shell
$ ./mvnw clean verify

Run the application.

Shell
$ java -jar target/myjhipsterplanet-0.0.1-SNAPSHOT.jar
...
----------------------------------------------------------
Application 'myjhipsterplanet' is running! Access URLs:
Local:      http://localhost:8080/
External:   http://127.0.1.1:8080/
Profile(s): [dev, api-docs]
----------------------------------------------------------

4. Create Entities The above-generated application is quite empty because it contains no domain entities. JHipster provides the JDL-Studio for creating the domain entities. There are also IDE plugins available for JDL-Studio. JDL stands for JHipster Domain Language, and it is quite intuitive how to model your domain. In the example application, the domain consists of a Company with a Location. A Company has one or more Customers, and a Customer also has an Address. The JDL for this domain is the following. As you can see, it is not very complicated.
Plain Text
entity Company {
  companyName String required
}
entity Location {
  streetAddress String,
  postalCode String,
  city String
}
entity Customer {
  customerName String required
}
entity Address {
  streetAddress String,
  postalCode String,
  city String
}
relationship OneToOne {
  Location{company} to Company
}
relationship OneToMany {
  Company to Customer{company}
}
relationship OneToOne {
  Address{customer} to Customer
}

Add the customer.jdl file to the root of the repository and generate the code with JHipster. While generating the files, you will be asked whether to overwrite some files; you can answer yes every time.

Shell
$ jhipster jdl customer.jdl

As a result, a .jhipster directory is created containing JSON files for the domain entities.

Shell
.jhipster/
├── Address.json
├── Company.json
├── Customer.json
└── Location.json

Besides that, quite some code is generated.

Shell
$ git status
On branch master
Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   .yo-rc.json
	modified:   src/main/resources/config/liquibase/master.xml
	modified:   src/main/webapp/app/entities/entities-menu.vue
	modified:   src/main/webapp/app/entities/entities.component.ts
	modified:   src/main/webapp/app/router/entities.ts
Untracked files:
  (use "git add <file>..." to include in what will be committed)
	.jhipster/
	customer.jdl
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/domain/Address.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/domain/Company.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/domain/Customer.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/domain/Location.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/repository/AddressRepository.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/repository/CompanyRepository.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/repository/CustomerRepository.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/repository/LocationRepository.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/AddressResource.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/CompanyResource.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/CustomerResource.java
	src/main/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/LocationResource.java
	src/main/resources/config/liquibase/changelog/20230319134139_added_entity_Company.xml
	src/main/resources/config/liquibase/changelog/20230319134140_added_entity_Location.xml
	src/main/resources/config/liquibase/changelog/20230319134140_added_entity_constraints_Location.xml
	src/main/resources/config/liquibase/changelog/20230319134141_added_entity_Customer.xml
	src/main/resources/config/liquibase/changelog/20230319134141_added_entity_constraints_Customer.xml
	src/main/resources/config/liquibase/changelog/20230319134142_added_entity_Address.xml
	src/main/resources/config/liquibase/changelog/20230319134142_added_entity_constraints_Address.xml
	src/main/resources/config/liquibase/fake-data/
	src/main/webapp/app/entities/address/
	src/main/webapp/app/entities/company/
	src/main/webapp/app/entities/customer/
	src/main/webapp/app/entities/location/
	src/main/webapp/app/shared/model/address.model.ts
	src/main/webapp/app/shared/model/company.model.ts
	src/main/webapp/app/shared/model/customer.model.ts
	src/main/webapp/app/shared/model/location.model.ts
	src/test/java/com/mydeveloperplanet/myjhipsterplanet/domain/
	src/test/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/AddressResourceIT.java
	src/test/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/CompanyResourceIT.java
	src/test/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/CustomerResourceIT.java
	src/test/java/com/mydeveloperplanet/myjhipsterplanet/web/rest/LocationResourceIT.java
	src/test/javascript/spec/app/entities/

What has been generated or updated? Liquibase database migration scripts, including fake test data; domain objects for the entities; repositories for the domain objects; REST endpoints for the entities; entities and pages for the frontend; integration tests for the REST endpoints; and tests for the frontend pages. 5. Generated Application You have generated an application with JHipster based on a basic domain model. What does this look like? Build and run the application with the above-generated code and open the application. A welcome page is shown. Click at the top right and choose Sign In. Sign in with user admin and password admin. In the menu at the top, choose Entities and you will see items for each entity you created. Select Companies. A page is shown with the fake data, and as you can see, CRUD operations are available. Navigate to the Customer entity and edit a Customer. An edit screen is shown. A Customer belongs to a Company. However, the generated code lets you select the Company based on the database identifier. This is not very user-friendly, of course, so you will need to change that. Because you are logged in as an administrator user, you also have the Administration pages. Let's take a look at the different administration pages. User Management is used for managing the users in the application. Metrics shows all kinds of metrics, for example, the counts for each request. Health gives information about the state of the application. Configuration shows how the application is configured, basically the application properties. Logs provides easy access to the logs and their respective log levels. API shows the Swagger documentation of the API. Database provides access to the database. In this application, you do not need to provide any credentials. Just click the Connect button. And now you can browse the database. The Administration pages provide quite a lot of information out of the box. You did not need to develop anything for this. 6. Sonar Analysis Sonar analysis is also provided. Therefore, you need to start a local Sonar server.

Shell
$ docker compose -f src/main/docker/sonar.yml up -d

And run the Sonar analysis.

Shell
$ ./mvnw -Pprod clean verify sonar:sonar

As a result, you can browse the Sonar results. As you can see, the result for the overall code is quite good, and there is a test coverage of 72.9%. That's not bad at all. 7. Update Dependencies The generated JHipster code used Java 11 and some older dependencies. The versions are listed in the pom file.

XML
<properties>
    <!-- Build properties -->
    <maven.version>3.2.5</maven.version>
    <java.version>11</java.version>
    <node.version>v16.17.0</node.version>
    <npm.version>8.19.1</npm.version>
    ...
</properties>

Change these to a more recent version.

XML
<properties>
    <!-- Build properties -->
    <maven.version>3.8.7</maven.version>
    <java.version>17</java.version>
    <node.version>v18.5.0</node.version>
    <npm.version>8.19.4</npm.version>
    ...
</properties>

Build and run the application (a local Maven installation was used for this instead of the Maven wrapper). Everything still worked fine after this update.
8. Conclusion In this blog, you learned how to create a basic application generated with JHipster. Quite a lot of code and functionality is generated, which gives you a good start for changing the application so that it fits your needs. You have only scratched the surface of JHipster in this blog, so do read the official documentation to get better acquainted with JHipster.
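As a quick command-line smoke test of the generated application, you can probe the actuator health endpoint, which JHipster exposes under /management and which the generated security configuration typically permits without authentication (an illustrative check, assuming the application is running locally):

Shell
# probe the health endpoint of the running application
$ curl -s http://localhost:8080/management/health
# expected response shape (may include more detail): {"status":"UP"}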
The system design of the presence platform depends on the design of the real-time platform. I highly recommend reading the related article to improve your system design skills. What Is the Real-Time Presence Platform? The presence status is a key feature for making the real-time platform engaging and interactive for the users (clients). In layman's terms, the presence status shows whether a particular client is currently online or offline. The presence status is popular on real-time messaging applications and social networking platforms such as LinkedIn, Facebook, and Slack [2]. The presence status represents the availability of the client for communication on a chat application or a social network. Figure 1: Online presence status; Offline presence status Usually, a green-colored circle is shown adjacent to the profile image of the client to indicate that the client's presence status is online. The presence status can also show the last active timestamp of the client [1], [9]. The presence status feature offers enormous value on multiple platforms by supporting the following use cases [10]: Enabling accurate virtual waiting rooms for efficient staffing and scheduling in telemedicine Logging and viewing real-time activity in a logistics application Identifying the online users in a chat application or a multiplayer game Enabling monitoring of Internet of Things (IoT) devices Terminology The following terminology might be helpful for you: Node: a server that provides functionality to other services Data replication: a technique of storing multiple copies of the same data on different nodes to improve the availability and durability of the system High availability: the ability of a service to remain reachable and not lose data even when a failure occurs Connections: the list of friends or contacts of a particular client How Does the Real-Time Presence Platform Work? The real-time presence platform leverages the heartbeat signal to check the status of the client in real time. The presence status is broadcast to the clients using the persistent server-sent events (SSE) connections on the real-time platform. Questions to Ask the Interviewer

Candidate: What are the primary use cases of the system?
Interviewer: Clients can view the presence status of their friends (connections) in real time.
Candidate: Are the clients distributed across the globe?
Interviewer: Yes.
Candidate: What is the total count of clients on the platform?
Interviewer: 700 million.
Candidate: What is the average number of concurrent online clients?
Interviewer: 100 million.
Candidate: How many times does the presence status of a client change on average during the day?
Interviewer: 10.
Candidate: What is the anticipated read:write ratio of a presence status change?
Interviewer: 10:1.
Candidate: Should the client be able to see the list of all online connections?
Interviewer: Yes, the connections should be grouped into lists, and the online connections should be displayed at the top of the list.

Requirements Functional Requirements Display the real-time presence status of a client Display the last active timestamp of an offline client The connections should be able to see the presence status of the client The client should be able to view the list of online clients (connections) Non-Functional Requirements Scalable Reliable Highly available Low latency Real-Time Presence Platform Data Storage The timestamp of the latest heartbeat signal received must be stored in the presence database to identify the last active timestamp of the client.
A relational database with support for transactions and atomicity, consistency, isolation, and durability (ACID) compliance can be overkill for keeping presence status data. A NoSQL database such as Apache Cassandra offers high write throughput at the expense of slower read operations due to the usage of an LSM-based storage engine. Hence, Cassandra cannot be used to store the presence status data. Figure 2: Data schema for user presence status A distributed key-value store that can support both extremely high read and extremely high write operations must be used for the real-time presence database [1]. Redis is a fast, open-source, in-memory key-value data store that offers high-throughput read-write operations. Redis can be provisioned as the presence database. The hash data type in Redis will efficiently store the presence status of a client. The hash key will be the user ID, and the value will be the last active timestamp. Real-Time Presence Platform High-Level Design A trivial approach to implementing the presence platform is to take advantage of clickstream events in the system. The presence service can track the client status through clickstream events and change the presence status to offline when the server has not received any clickstream events from the client for a defined time threshold. The downside of this approach is that clickstream events might not be available on every system. Besides, the change in the client's presence status will not be accurate due to the dependency on clickstream events. Prototyping the Presence Platform With Redis Sets The sets data type in Redis is an unordered collection of unique members, with no duplicates. The sets data type can be used to store the presence status of the clients at the expense of not showing the last active timestamp of the client. The user IDs of the connections of a particular client can be stored in a set named connections, and the user IDs of every online user on the platform can be stored in a set named online. The sets data type in Redis supports the intersection operation between multiple sets. The intersection between the online set and the connections set can be computed to identify the list of connections of a particular client who are currently online. The set operations, such as adding, removing, or checking whether an item is a set member, take constant time, O(1). The time complexity of the set intersection is O(n*m), where n is the cardinality of the smallest set and m is the number of sets. Alternatively, a Bloom filter or cuckoo filter can reduce memory usage at the expense of approximate results [4]. Figure 3: Key expiration pattern with sliding window Client-side failures or jittery client connections can be handled through the key expiration pattern. A sliding window of sets with time-scoped keys can be used to implement the key expiration pattern. In layman's terms, a new set is created periodically to keep track of online clients. In addition, two sets named current and next, with distinct expiry times, are kept simultaneously in the Redis server. When a client changes the status to online, the user ID of the particular client is added to both the current set and the next set. The presence status of the client is identified by querying only the current set. The current set is eventually removed on expiry as time elapses. The trivial implementation of the system is the primary benefit of the current architecture with the sliding window key expiration.
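As a rough redis-cli sketch of the sliding-window pattern described above (the key names, the user IDs, and the window lengths are all assumptions; in production, the key names would typically embed the window's start timestamp so the sets roll over naturally):

Shell
# a client heartbeat adds the user to both the current and the next window
redis-cli SADD online:current user:42
redis-cli SADD online:next user:42
# the current window expires sooner than the next one
redis-cli EXPIRE online:current 60
redis-cli EXPIRE online:next 120
# presence check: query only the current window
redis-cli SISMEMBER online:current user:42
# online connections of user 123 = intersection with their connection set
redis-cli SINTER online:current connections:123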
The limitation of the current prototype is that the status of a client who gets disconnected abruptly is not reflected in real time, because the change in presence status depends on the sliding window length [5]. Figure 4: Presence platform with Redis sets The Redis server can make use of Redis keyspace notifications to notify the clients (subscribers) connected to the real-time platform when the presence status changes. A server can subscribe to data-change events in Redis in near real time through keyspace notifications. Key expiration in Redis might not occur in real time because Redis uses either lazy expiration on read operations or a background cleanup process; a keyspace notification is only triggered when Redis actually removes the key-value pair. The limitations of keyspace notifications for detecting changes in presence status are the following [12]: Redis keyspace notifications consume CPU power; key expiration by Redis is not real-time; and subscribing to keyspace notifications on a Redis cluster is relatively complex. The heartbeat signal updates the expiry time of a key in the Redis set. The real-time platform can broadcast the change in the status of a particular client (publisher) to subscribers over SSE. In conclusion, do not use the Redis sets approach for implementing the presence platform. Presence Platform With Pub-Sub Server The publisher (client) can broadcast the presence status to multiple subscribers through a publish-subscribe (pub-sub) server. A subscriber who was disconnected during the broadcast operation should not see the status history of a publisher when the subscriber reconnects later to the platform. Figure 5: Presence platform with pub-sub server The message bus in the pub-sub server should be configured in fire-and-forget (ephemeral) mode to ensure that the presence status history is not stored, reducing storage needs. There is a risk with the fire-and-forget mode that some subscribers might not receive the changes in client status. Redis pub-sub or Apache Kafka can be configured as the message bus. The limitations of using the pub-sub server in the ephemeral mode are the following: no guaranteed at-least-once message delivery; degraded latency because consumers use a pull-based model; and the operational complexity of a message bus such as Apache Kafka is relatively high. In summary, do not use the pub-sub approach for implementing the presence platform. An Abstract Presence Platform The real-time platform is a critical component for the implementation of the presence feature. Both the publisher and the subscriber maintain a persistent SSE connection with the real-time platform. The bandwidth usage to fan out the client's presence status can be reduced by reusing the existing SSE connection. Simply put, the real-time platform is a publish-subscribe service for streaming the client's presence status to the subscribers over the persistent SSE connection [1], [7], [8], [15]. The presence platform should track the following events to identify any change in the status of the client [9], [10]: Online: published when a client connects to the platform Offline: published when a client disconnects from the platform Timeout: published when a client is disconnected from the platform for over a minute Figure 6: Presence platform; High-level design The presence status of a client connected to the real-time platform must be shown as online.
The client should also subscribe to the real-time platform for notifications on the status of the client's connections (friends). At a very high level, the following operations are executed by the presence platform [1]: 1. The subscriber (client) queries the presence service to fetch the status of a publisher over the HTTP GET method. 2. The presence service queries the presence database to identify the presence status. 3. The client subscribes to the status of a publisher through the real-time platform and creates an SSE connection. 4. The publisher comes online and makes an SSE connection with the real-time platform. 5. The real-time platform sends a heartbeat signal to the presence service over UDP. 6. The presence service queries the presence database to check if the publisher just came online. 7. The presence service publishes an online event to the real-time platform over the HTTP PUT method. 8. The real-time platform broadcasts the change in the presence status of the publisher to subscribers over SSE. The presence service should return the last active timestamp of an offline publisher by querying the presence database. In summary, the current architecture can be used to implement a real-time presence platform. Design Deep Dive How Does the Presence Platform Identify Whether a User Is Online? The real-time platform can be leveraged by the presence platform for streaming the change in status of a particular client to the subscribers in real time [1], [7], [8], [15]. The subscriber establishes an SSE connection with the real-time platform and also subscribes to any change in the status of the connections (clients). The heartbeat signal is used by the presence platform to detect the current status of a client (publisher). The presence platform publishes an online event to the real-time platform to notify the subscribers when the client status changes to online [10]. The client who just came online can query the presence platform through the Representational State Transfer (REST) API to check the presence status of a particular client. Figure 7: Presence platform checking whether a user is online The following operations are executed by the presence platform for notifying the subscribers when a client changes the status to online [1]: 1. The publisher (client) creates an SSE connection with the real-time platform. 2. The real-time platform sends a heartbeat signal to the presence service over UDP. 3. The presence service queries the presence database to check whether an unexpired record for the publisher exists. The presence service infers that the publisher just changed the status to online if there is no database record or if the previous record has expired. 4. The presence platform publishes an online event to the real-time platform over the HTTP PUT method. 5. The real-time platform broadcasts the change in the presence status to subscribers over SSE. 6. The presence service subsequently inserts a record into the presence database with an expiry value slightly greater than the timestamp of the successive heartbeat. Figure 8: Flowchart; Presence platform processing a heartbeat signal The presence service only updates the last active timestamp of the publisher in the presence database when an unexpired record already exists in the presence database, because there was no change in the status of the publisher. How Does the Presence Platform Identify When a User Goes Offline?
When the publisher doesn't reconnect to the real-time platform within a defined time interval, the presence platform should detect the absence of the heartbeat signals. The presence platform will subsequently publish an offline event over HTTP to the real-time platform for broadcasting the change in presence status to all the subscribers. The offline event must include the last active timestamp of the publisher [1]. Figure 9: Presence platform checking whether a user is offline The web browser can trigger an unload event to change the presence status when the publisher closes the application [10]. A delayed trigger can be configured on the presence service to identify the absence of a heartbeat signal. The delayed trigger guarantees accurate detection of status changes. The delayed trigger must schedule a timer that gets executed when the time interval for the successive heartbeat elapses. The delayed trigger execution should query the presence database to check whether the database record for a specific publisher has expired. The following operations are executed by the presence platform for notifying the subscribers when a client changes the status to offline [1]: 1. The delayed trigger queries the presence database to check whether the database record of the publisher has expired. 2. The presence service publishes an offline event to the real-time platform over HTTP when the database record has expired. 3. The real-time platform broadcasts the change in status, along with the last active timestamp, to the subscribers over SSE. Figure 10: Flowchart; Presence platform using a delayed trigger The presence service creates a delayed trigger if the trigger doesn't already exist when the heartbeat is processed. The delayed trigger should be reset in case the trigger already exists [1]. Figure 11: Actor model in the presence platform The actor model can be used to implement the presence service for improved performance. An actor is an extremely lightweight object that can receive messages and take actions to handle them. A thread is assigned to an actor when a message must be processed. The thread is released once the message is processed and is subsequently assigned to the next actor. The total count of actors in the presence platform will be equal to the total count of online users. The lifecycle of an actor depends on the online status of the corresponding client. The following operations are executed when the presence service receives a heartbeat signal [1]: 1. Create an actor in the presence service if an actor doesn't already exist for the particular client. 2. Set a delayed trigger on the actor for publishing an offline event when the timeout interval elapses. 3. The actor publishes an offline event when the delayed trigger gets executed. Every delayed trigger should be drained before decommissioning the presence service, for improved reliability of the real-time presence platform. How to Handle Jittery Connections of the Client A client signing off and a client timing out will likely map to the same status on a chat application. Therefore, the offline and timeout actions of a client can both be indicated by the offline event. For IoT at transportation companies, a longer timeout interval must be set to prevent excessive offline events from being published, because the region of IoT operation might have poor network connectivity. By contrast, IoT in a home security system needs a very short timeout interval so that alerts fire when the monitoring service is down.
The offline event can be published by the presence platform for the following reasons [10]: The client lost internet connectivity. The client left the platform abruptly. The clients connected to the real-time platform through mobile devices are often on unpredictable networks. A client might disconnect and reconnect to the platform randomly. The presence platform should be able to handle jittery client connections gracefully to prevent constant fluctuations in the client's presence status, which might result in a poor user experience and unnecessary bandwidth usage [1]. Figure 12: Presence platform; Heartbeat signal The real-time platform sends periodic heartbeat signals to the presence platform with the user ID of the connected publisher and a timestamp of the heartbeat in the payload. The presence platform will show the status of the client as online as long as periodic heartbeats are received. The presence status can be kept online even if the client gets disconnected from the network, as long as the successive heartbeat is received by the presence platform within the defined timeout interval [1], [3]. Scalability Serverless functions can be used to implement the presence service for scalability and reduced operational complexity. The REST API endpoints of the platform can also be implemented using serverless functions for easy horizontal scaling [10], [11]. Figure: Scaling the presence platform The presence status, including the last active timestamp of the clients, is stored in the distributed presence database. The presence service should be replicated for scalability and high availability. Consistent hashing can be used to redirect the heartbeats from a particular client to the same set of nodes (sticky routing) of the presence service to prevent the creation of duplicate delayed triggers [1]. The presence platform should be replicated across data centers for scalability, low latency, and high availability. The presence database can make use of conflict-free replicated data types (CRDTs) for active-active geo-distribution. Reliability The presence database (Redis) should not lose the current status of the clients on a node failure. The following methods can be used to persist Redis data on persistent storage such as solid-state drives (SSDs) [13], [14]: Redis Database (RDB) persistence performs point-in-time snapshots of the dataset at periodic intervals; Append Only File (AOF) persistence logs every write operation on the server for fault tolerance. The RDB method is optimal for disaster recovery. However, there is a risk of data loss on unpredictable node failure because the snapshots are taken periodically. The AOF method is relatively more durable through an append-only log, at the expense of larger storage needs. The general rule of thumb for improved reliability with Redis is to use both RDB and AOF persistence methods simultaneously [13]. Latency The network hops in the presence platform are very few because the client SSE connections on the real-time platform are reused for the implementation of the presence feature. On top of that, the pipelining feature in Redis can be used to batch the query operations on the presence database to reduce the round-trip time (RTT) [16]. Summary The real-time presence platform might seem conceptually trivial. However, orchestrating the real-time presence platform at scale while maintaining accuracy and reliability can be challenging.
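As a closing, concrete note on the Redis persistence recommendation in the Reliability section above, enabling both RDB and AOF on a self-managed Redis node is a matter of a few configuration commands (the snapshot thresholds are illustrative):

Shell
# RDB: snapshot if at least 1 key changed in 900s, or 10 keys changed in 300s
redis-cli CONFIG SET save "900 1 300 10"
# AOF: log every write, fsync once per second (a common durability/latency trade-off)
redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET appendfsync everysec
# persist the settings back to redis.conf
redis-cli CONFIG REWRITE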
Designing an application architecture is never complete. Regularly, all decisions and components need to be reviewed, validated, and possibly updated. Stakeholders require that complex applications be delivered more quickly. It's a challenge for even the most senior technologists. A strategy is required, and it needs to be nimble. Strategy combines processes, which aid in keeping a team focused, and principles and patterns, which provide best practices for implementation. Regardless, it's a daunting task requiring organizational commitment. Development, Design, and Architectural Processes Application development without any process is chaos. A team that invents their own process and sticks to it is much better off than a team using no process. At the same time, holding a project hostage to a process can be just as detrimental. Best practices and patterns are developed over multiple years of teams looking for better ways to produce quality software in a timely manner. Processes are the codification of those best practices and patterns. By codifying best practices and patterns into processes, the processes can be scaled out to more organizations and teams. For example, when an organization selects a development process, a senior leader may subscribe to a test-first development pattern. It becomes much easier for an organization to adopt a pattern by finding a process that outlines how the pattern is organizationally implemented. In the case of the test-first development pattern, test-driven development (TDD) may be selected as the development process. Another technical leader in the same organization may choose to lead their team using domain-driven design (DDD), a pattern by which software design is communicated across technical teams as well as other stakeholders. Can these two design philosophies coexist? Yes, they can. Here, TDD defines how software is constructed, while DDD defines the concepts that describe the software. Software architecture works to remain neutral to specific development and design processes, and it is the specification of how an abstract pattern is implemented. The term "abstract pattern" is used because most software architecture patterns can be applied across any development process and any tech stack. For example, many architectures employ inversion of control (or dependency injection). How Java, JavaScript, C#, etc. implement inversion of control is specific to the tech stack, but it accomplishes the same goal. Avoiding Dogmatic Adherence Regardless of development, design, or architectural process, it's key that strict adherence to a given process does not become the end goal. Unfortunately, this happens more often than it should. Remember that the intent of a process is to codify best practices in a way that allows teams to scale using the same goals and objectives. To that end, when implementing processes, here are some points to consider: There's no one size fits all. Allow culture to mold the process. Maturity takes time. Keep focused on what you're really doing: building quality software in a timely manner. Cross-Cutting Concerns Software architecture can be designed, articulated, and implemented in several ways. Regardless of approach, most software architecture plans address two key points: simplicity and evolution. Simplicity is a relative term in that an architectural approach needs to be easily understood within the context of the business domain.
Avoiding Dogmatic Adherence

Regardless of the development, design, or architectural process, it's key that strict adherence to a given process does not become the end goal. Unfortunately, this happens more often than it should. Remember that the intent of a process is to codify best practices in a way that allows teams to scale using the same goals and objectives. To that end, when implementing processes, here are some points to consider:

- There's no one size fits all.
- Allow culture to mold the process.
- Maturity takes time.
- Keep focused on what you're really doing: building quality software in a timely manner.

Cross-Cutting Concerns

Software architecture can be designed, articulated, and implemented in several ways. Regardless of approach, most software architecture plans address two key points: simplicity and evolution. Simplicity is a relative term in that an architectural approach needs to be easily understood within the context of the business domain. Team members should look at an architectural plan and say, "Of course, that's the obvious design." It may have taken several months to develop the plan, but a team responding this way is a sign that the plan is on the right track.

Evolution is very important and can be the trickiest aspect of an architectural plan. It may sound ambitious, but an architectural plan should be able to last ten-plus years. That can be challenging to comprehend, but with the right design principles and patterns in place, it's not as hard as one might think. At its core, good software architecture does its best to not paint itself into a corner.

Figure 1 below contains no new revelations. However, each point is critical to a lasting software architecture:

- Building architecture that endures. This is the end goal. It entails using patterns that support the remaining points.
- Multiple platform and deployment support. What exists today will very likely look different five years from now. An application needs to be able to adapt to changes in platform and deployment models, wherever the future takes it.
- Enforceable, standard patterns and compliance. It's not that there's nothing new, but the software industry has decades of patterns to adopt and compliance initiatives to adhere to. Changes in both are gradual, so keeping an eye on the horizon is important.
- Reuse and extensibility from the ground up. Implementation patterns for reuse and extensibility vary, but these points have been building blocks for many years.
- Collaboration with independent, external modules. The era of microservices helps enforce this principle. Watch for integrations that become convoluted; that is a red flag in the architecture.
- Evolutionary module compatibility and upgrade paths. Everything in a software's architecture will evolve. Consider how compatibility and upgrades are managed.
- Design for obsolescence. Understand that many components within a software's architecture will eventually need to be replaced entirely. At the beginning of each project or milestone, ask the question, "How much code are we getting rid of this release?" The effect of regular code pruning is no different than the effect of pruning plants.

Figure 1: Key architectural principles

Developing microservices is a combination of following these key architectural principles and segmenting components into areas of responsibility. A microservice provides a unit of business functionality. Alone, it provides little value to a business; it's in the assembly of and integration with other microservices that business value is realized. Good microservices assembly and integration implementations follow a multi-layered approach.

Horizontal and Vertical Slices

Simply stated, slicing an application is about keeping things where they belong. In addition to adhering to relevant design patterns in a codebase, slicing applies the same patterns at the application level. Consider an application architecture as depicted by a Lego® brick structure in the figure below:

Figure 2: Microservices architecture

Each section of bricks is separated by a thin Lego® brick, indicating a strict separation of responsibility between layers. Layers interact only through provided contracts/interfaces. Figure 2 depicts three layers, each with a distinct purpose; a minimal sketch of such a layer contract follows.
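As a hypothetical illustration in Java (all names invented), the interface below is the contract the layer above depends on; the implementation behind it can be rewritten or replaced without disrupting callers:

// Contract published by the business layer; layers above see only this
interface OrderService {
    OrderStatus statusOf(String orderId);
}

// Status values shared as part of the contract
enum OrderStatus { PENDING, SHIPPED, DELIVERED }

// One implementation; it can be replaced behind the unchanged contract
class WarehouseOrderService implements OrderService {
    public OrderStatus statusOf(String orderId) {
        // Real logic would consult the layer below through its own contract
        return OrderStatus.SHIPPED;
    }
}

public class ContractDemo {
    public static void main(String[] args) {
        OrderService service = new WarehouseOrderService();
        System.out.println(service.statusOf("A-100")); // prints SHIPPED
    }
}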
Whether it's integration with devices such as a laptop or tablet, or microservices integrating with other microservices, the point at which service requests are received remains logically the same. Here, there are several entry points, ranging from web services and messaging services to an event bus.

Horizontal Slices

Horizontal slices of an application architecture are layers where, starting from the bottom, each layer provides services to the layer above it. Typically, each layer of the stack refines the scope of underlying services to meet business use case logic. Services in lower layers can make no assumptions about how the services above interact with them. As mentioned, this is accomplished with well-defined contracts. In addition, services within a layer interact with one another through that layer's contracts. Maintaining strict adherence to contracts allows components at each layer to be replaced with new or enhanced versions with no disruption in interoperability.

Figure 3: Horizontal slices

Vertical Slices

Vertical slices are where everything comes together. A vertical slice is what delivers an application business objective. It starts with an entry point and drills through the entire architecture. As depicted in Figure 4, business services can be exposed in multiple ways. Entry points are commonly exposed through some type of network protocol. However, there are cases where a network protocol doesn't suffice; in these cases, a business service may offer a native library supporting direct integration. Regardless of the use case, strict adherence to contracts must be maintained. A brief code sketch of such a slice appears below.

Figure 4: Vertical slices
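As a rough, self-contained sketch (again with invented names), a vertical slice might drill from an entry point through the business and data layers, touching each layer only through its contract:

// Business-layer contract exposed to the entry point
interface InventoryService {
    int unitsInStock(String sku);
}

// Data-layer contract consumed by the business layer
interface InventoryRepository {
    int countBySku(String sku);
}

// Business implementation delegates downward only through the data contract
class DefaultInventoryService implements InventoryService {
    private final InventoryRepository repo;
    DefaultInventoryService(InventoryRepository repo) { this.repo = repo; }
    public int unitsInStock(String sku) { return repo.countBySku(sku); }
}

// Entry point at the top of the slice (an HTTP handler would delegate here)
class StockEndpoint {
    private final InventoryService inventory;
    StockEndpoint(InventoryService inventory) { this.inventory = inventory; }
    String handle(String sku) {
        return sku + ": " + inventory.unitsInStock(sku) + " in stock";
    }
}

public class VerticalSliceDemo {
    public static void main(String[] args) {
        InventoryRepository repo = sku -> 42; // stubbed data layer
        StockEndpoint endpoint = new StockEndpoint(new DefaultInventoryService(repo));
        System.out.println(endpoint.handle("ABC-123"));
    }
}

The same slice could equally be exposed through a native library entry point; the contracts underneath remain unchanged.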
Obvious, Yet Challenging

Microservices have become a predominant pattern by which large applications are assembled. Each microservice is concerned with a very specific set of functionality. By their very nature, microservices dictate that well-defined contracts are in place with which other microservices and systems can integrate. Microservices designed and implemented for cloud-native deployments can leverage cloud-native infrastructure to support several of the patterns discussed.

The patterns and diagrams presented here will look obvious to most. As mentioned, good architecture is "obvious." The challenge is adhering to it. Often, the biggest enemy of adherence is time. The pressure to meet delivery deadlines is real, and that is where cracks in the contracts appear. Given the multiple factors in play, there are times when compromises need to be made. Make a note, create a ticket, add a comment, and leave a trail so that the compromise gets addressed as quickly as possible.

Well-designed application architecture married with good processes supports longevity, which from a business perspective provides an excellent return on investment. Greenfield opportunities are fewer than opportunities to evolve existing applications. Regardless, bringing all of this to bear can look intimidating. The key is to start somewhere. As a team, develop a plan and "make it so"!

In recent years, the rise of microservices has drastically changed the way we build and deploy software. The most important aspect of this shift has been the move from traditional API architectures driven by monolithic applications to containerized microservices. This shift has not only improved the scalability and flexibility of our systems but has also given rise to new approaches to software development and deployment. In this article, we will explore the path from APIs to containers and examine how microservices have paved the way for enhanced API development and software integration.

The Two API Perspectives: Consumer and Provider

The inherent purpose of building an API is to exchange information. Therefore, APIs require two parties: consumers and providers of the information. However, the two have completely different views. For an API consumer, an API is nothing more than an interface definition and a URL. It does not matter to the consumer whether the URL points to a mainframe system or a tiny IoT device hosted on the edge. Their main concerns are ease of use, reliability, and security.

An API provider, on the other hand, is more focused on the scalability, maintainability, and monetization aspects of an API. Providers also need to be acutely aware of the infrastructure behind the API interface. This is where APIs actually live, and it can have a large impact on their overall behavior. For example, an API serving millions of consumers has drastically different infrastructure requirements than a single-consumer API. The success of an API offering often depends on how well it performs in a production-like environment with real users.

With the explosion of the internet and the rise of always-online applications like Netflix, Amazon, and Uber, API providers had to find ways to meet the increasing demand. They could not rely on large monolithic systems that were difficult to change and scale as needed. This increased focus on scalability and maintainability led to the rise of microservices architecture.

The Rise of Microservices Architecture

Microservices are not a completely new concept. They have been around for many years under various names, but the term itself was coined by a group of software architects at a workshop near Venice in 2011/2012. The goal of microservices has always been to make a system flexible and maintainable. This is an extremely desirable target for API providers and led to the widespread adoption of microservices architecture styles across a wide variety of applications.

The adoption of microservices to build and deliver APIs addressed several challenges by providing important advantages:

- Since microservices are developed and deployed independently, they allow developers to work on different parts of the API in parallel. This reduces the time to market for new features.
- Microservices can be scaled up or down to meet the varying demands of specific API offerings. This helps improve resource use and cost savings.
- API ownership is better distributed, as different teams can focus on different sets of microservices.
- By breaking an API down into smaller, more manageable services, it becomes, at least in theory, easier to manage outages and downtime: one service going down does not mean the entire application goes down.
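From the consumer's side, each such service is still just an interface definition and a URL, as described above. A minimal consumer sketch using Java 11's built-in HttpClient, with a hypothetical endpoint URL:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderStatusClient {
    public static void main(String[] args) throws Exception {
        // The consumer only knows the URL and the response shape;
        // whether a monolith or a microservice serves it is invisible
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/orders/1234/status"))
                .header("Accept", "application/json")
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}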
API consumers also benefit from microservices-based APIs. In general, consumer applications can model better interactions by integrating several smaller services rather than interfacing with a giant monolith.

Figure 1: API perspectives for consumer and provider

Since each microservice has a smaller scope than a monolith, changes to API endpoints have less impact on the client application. Moreover, testing individual interactions becomes much easier.

Ultimately, the rise of microservices enhanced the API-development landscape. Building an API was no longer a complicated affair; in fact, APIs became the de facto method of communication between different systems. Nonetheless, despite the many benefits of microservices-based APIs, they also brought initial challenges in deployment and dependency management.

Streamlining Microservices Deployment With Containers

The twin challenges of deployment and dependency management in a microservices architecture led to the rise of container technologies. Over the years, containers have become increasingly popular, particularly in the context of microservices. With containers, we can package the software with its dependencies and configuration parameters into a container image and deploy it on a platform. This makes it trivial to manage and isolate dependencies in a microservices-based application. Containers can be deployed in parallel, and each deployment is predictable since everything the application needs is present within the container image. Containers also make it easier to scale and load balance resources, further boosting the scalability of microservices and APIs. Figure 2 showcases the evolution from monolithic applications to containerized microservices:

Figure 2: Evolution of APIs from monolithic to containerized microservices

Due to rapid advancements in cloud computing, container technologies and orchestration frameworks are now natively available on almost all cloud platforms. In a way, the growing need for microservices and APIs boosted the use of containers to deploy them in a scalable manner.

The Future of Microservices and APIs

Although APIs and microservices have been around for many years, they have yet to reach their full potential. Both are going to evolve together in this decade, leading to some significant trends.

One major trend is API governance. Proper API governance is essential to make your APIs discoverable, reusable, secure, and consistent. In this regard, OpenAPI, a language-agnostic interface description for RESTful APIs, has more or less become the standard way of documenting APIs. It can be used by both humans and machines to discover and understand an API's capabilities without access to the source code.

Another important trend is the growth of API-powered capabilities in fields such as NLP, image recognition, sentiment analysis, predictive analytics, and chatbots. With the increasing sophistication of models, this trend is only going to grow stronger, and we will see many more applications of APIs in the coming years. The rise of tools like ChatGPT and Google Bard shows that we are only at the beginning of this journey.

A third trend is the increased use of API-driven DevOps for deploying microservices. With the rise of cloud computing and DevOps, managing infrastructure is an extremely important topic in most organizations. API-driven DevOps is a key enabler for Infrastructure as Code tools to provision infrastructure and deploy microservices. Under the covers, these tools rely on APIs exposed by the platforms.
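As a rough sketch of what relying on platform APIs means in practice, an Infrastructure as Code tool ultimately issues requests like the one below; the endpoint, payload, and token here are hypothetical stand-ins for a real provider's provisioning API:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProvisionInstance {
    public static void main(String[] args) throws Exception {
        // Declarative tools translate configuration files into calls like this
        String payload = "{\"image\":\"my-service:1.4.2\",\"replicas\":3}";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://cloud.example.com/v1/deployments"))
                .header("Authorization", "Bearer <token>")
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println("Provisioning response: " + response.statusCode());
    }
}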
Apart from these major trends, there are other important developments in the future of microservices and APIs:

- There is a growing role for API enablement on edge networks to power millions of IoT devices.
- API security practices have become more important than ever in a world of unprecedented integration and security threats.
- API ecosystems are expanding as more companies develop suites of APIs that can be used in a variety of situations to build applications; think of suites like the Google Maps API.
- There is increased use of API gateways and service meshes to improve the reliability, observability, and security of microservices-based systems.

Conclusion

The transition from traditional APIs delivered via monolithic applications to microservices running on containers has opened up a world of possibilities for organizations. The change has enabled developers to build and deploy software faster and more reliably without compromising on scalability. It has made it possible to build extremely complex applications and operate them at unprecedented scale. Developers and architects working in this space should first focus on key API trends such as governance and security. As those foundations become reliable, they should explore cutting-edge areas such as API usage in artificial intelligence and DevOps. This will keep them abreast of the latest innovations. Despite the maturity of the API and microservices ecosystem, there is still a lot of growth potential in this area. With more advanced capabilities arriving every day, and DevOps practices making it easier to manage the underlying infrastructure, the future of APIs and microservices looks bright.

References:

- "A Brief History of Microservices" by Keith D. Foote
- "The Future of APIs: 7 Trends You Need to Know" by Linus Håkansson
- "Why Amazon, Netflix, and Uber Prefer Microservices over Monoliths" by Nigel Pereira
- "Google Announces ChatGPT Rival Bard, With Wider Availability in 'Coming Weeks'" by James Vincent
- "Best Practices in API Governance" by Janet Wagner
- "APIs Impact on DevOps: Exploring APIs Continuous Evolution," xMatters Blog
Nuwan Dias, VP and Deputy CTO, WSO2
Christian Posta, VP, Global Field CTO, Solo.io
Rajesh Bhojwani, Development Architect, SAP Labs
Ray Elenteny, Solution Architect, SOLTECH