Database Systems
In 2024, the focus around databases is on their ability to scale and perform in modern data architectures. It's not just centered on distributed, cloud-centric data environments anymore, but rather on databases built and managed in a way that allows them to be used optimally in advanced applications. This modernization of database architectures allows developers and organizations to be more flexible with their data. With the advancements in automation and the proliferation of artificial intelligence, the way data capabilities and databases are built, managed, and scaled has evolved at an exponential rate. This Trend Report explores database adoption and advancements, including how to leverage time series databases for analytics, why developers should use PostgreSQL, modern real-time streaming architectures, database automation techniques for DevOps, how to take an AI-focused pivot within database systems practices, and more. The goal of this Trend Report is to equip developers and IT professionals with tried-and-true practices alongside forward-looking industry insights to allow them to modernize and future-proof their database architectures.
When teams get their variance right, everything else falls into place. Variance is a measure of whether teams are doing what they say they are going to do. A team with high variance is over-committing or under-delivering. A team with low variance is delivering on its plans. When variance is low, stakeholders can feel confident in the team, the team can celebrate at the end of each sprint, and longer-term planning is likely to be accurate. In this article, we will look at what variance is, how it is calculated, and, most importantly, what Scrum Masters, coaches, and team members can do to get it right.

What Is Variance?

Variance is the difference between story points Committed and story points Done. A rule of thumb is to aim at a variance of no more than 20%. Points committed are calculated when the sprint starts: the points on all work items in the sprint are summed. Points done are calculated when the sprint ends: all points attached to items that have been done are summed. The two counts can then be displayed on a velocity chart (see below):

Typically, Scrum Masters will do an at-a-glance analysis of variance. If the two bars are close together, all is good. If Committed is, say, double the Done, then some action is required. So in the example above, we can see that variance looks too high in the first three sprints, but much better in the following three. See the Appendix for how to calculate variance.

Why Is Variance So Important?

Here is why variance matters so much for Agile teams:

1. Barometer of Team Health

If a team has low variance — i.e., it is doing what it is committing to do — it is an indicator that many other things are going well. For example:

Backlog refinement and estimation: A low variance suggests the team is effectively managing the backlog and doing accurate estimation.
Sprint planning: If the team has low variance, they are doing what they are committing to do. So during sprint planning, they must be taking a manageable amount of work into the upcoming sprint. They must be managing their capacity.
Definition of Ready (DoR): Low variance suggests that the team has a solid DoR and is applying it to all items put into a sprint (more on DoR below).

Conversely, high variance suggests there are issues with some of the above.

2. Impact on Other Metrics

Burndowns
If a sprint has low variance, it means the team has done what it planned to do. Of course, it won't determine the shape of the burndown line, but it does mean the line should end up close to zero.

Cycle Times
I am thinking here about the average time a work item takes to get done; i.e., from the time it gets put into a sprint to the time it is put into a done state. If variance is low, it means that all or most of the work items in a sprint are getting done by the end of it. So, cycle times will be around the same as the duration of the sprint. This is normally the target for cycle times.

3. Longer-Term Planning

Increasingly across the industry, organizations are making efforts to align the work of the Scrum teams to longer-term plans. This can be done in a variety of ways:

Product goals: To align the goals to long-term plans, they can be set to be time-bound and specific.
PI planning: Organizations doing some form of scaled agile often use the idea of PI (Program Increment) planning. This is where the team and stakeholders (who are working on the same product) meet periodically (typically quarterly) to align to a shared vision, plan deliverables, identify risks and dependencies, etc.
Roadmaps: Some organizations are aligning Scrum teams to roadmaps. While this may feel a bit non-agile, I would argue that as long as these roadmaps are based on empirical data coming from the squads, it is most likely beneficial for the organization.

No matter what form of long-term planning is being used, teams with low variance will be able to accurately plan for the future.

Four Ways to Improve Variance

1. Definition of Ready (DoR)

When a team has high variance, they are typically taking items into the sprint that are not ready. In this short video by Jeff Sutherland, he walks us through some of the key aspects of the DoR. They include:

Immediately actionable: The team should be able to start work on this item on Day 1 of the sprint. If they are waiting on some input from a stakeholder or a deliverable from another team, it is not ready and does not belong in a sprint.
Doable in one sprint: If it is too big to get Done in one sprint, it is not ready and should be broken down into smaller items.
Understood: Discussions with the PO and the stakeholder should have already happened. The team should know exactly what needs to get done to deliver the item. Otherwise, it is not ready.

When teams take in items that are not ready, they are likely to get blocked, delayed, and put on hold. Variance will be impacted. The best practice is to rigorously check all work items against the DoR during sprint planning.

2. Push Back on Pressure From Stakeholders

As deadlines loom and pressure builds, Agile teams are sometimes pressured into taking on too much work in a sprint. They will often justify their actions with heroic phrases: "We will pull out all the stops," "We will go the extra mile," etc. Sadly, it often doesn't happen. The team cannot deliver on an unrealistic workload and has to admit to stakeholders that they are not going to get what was promised – an uncomfortable situation for all concerned. There are many concerns about over-committing:

Stability
In Agile, we are aiming for stability. One of the principles of the Agile Manifesto states: "Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely."

The False Promise of Longer Working Hours
Numerous studies have suggested that an increase in working hours actually reduces productivity. Encouraging people to spend long hours working actually decreases the work they get done.

The Downward Spiral
Typically, working long hours sets up a downward spiral: work/life balance is impacted, physical and mental health are impacted, people get demotivated and frustrated with their work, productivity goes down, quality goes down, and people start looking for other jobs. High staff turnover always has a major negative impact on productivity.

One solution lies in the fundamentals of Agile. One of the pillars of Scrum is transparency. Teams must make their work visible. They must be open and honest about what they can do, and also about what they can't do. One of the Scrum values is courage. It takes some courage to say to managers: "I am sorry, but we don't have the capacity to complete these items this sprint." But it is far better than committing to doing something and then failing to deliver on it.

3. Avoid Overly Optimistic Estimation

There is a tendency for team members to be overly optimistic when estimating and deciding how much they can get done in a sprint. There are several root causes:

Happy Day Scenario: They are only considering the Happy Day Scenario.
They assume that everything will go according to plan and that they will not face any issues with the work.
Entirety of the work item: They are not considering the entirety of the work item. Most work items consist of doing – reviewing/testing – re-working – re-reviewing/acceptance. There is a tendency to only consider the doing bit.
Historical data: They are not considering historical data and looking at how long it took to do similar items in previous sprints.

Scrum Masters have a role to play here. They can coach the team on estimation, helping them to take a holistic approach and consider the overall sizing of the item, not just the doing bit. They can bring in historical data to inspect how long similar items have taken to do in the past.

4. Do Capacity Planning

It is dangerous to assume that all team members will be working on all the days of the sprint. In distributed teams, there may be public holidays in various locations, and team members may be taking personal time off. It is important to track this data, as it will impact the capacity of the team in the upcoming sprint. This was a mistake I made when I first became a Scrum Master. We would meticulously do our sprint planning, basing the committed story points on the velocity of previous sprints. Then midway through the sprint, I would realize that several team members had gone on vacation and forgotten to tell me, or there were public holidays in other countries that I didn't know about. I very quickly implemented a team calendar (I used the one in Confluence, but there are many tools that will do the job). I regularly reminded the team members to put in personal leave and any public holidays in their country or region. During sprint planning, one of the first activities was to review the calendar and determine the capacity of the upcoming sprint.

Conclusions

We have seen that a team with low variance is most likely a high-performing team that delivers on its plans. And we have looked at several techniques teams can use to reduce their variance. As variance decreases, stakeholders gain confidence in the team; they know they are likely to get what has been planned. Best of all, at the end of each sprint, the team can celebrate delivering what they set out to deliver.

Appendix: Variance Calculation

Variance can be calculated using the following formula:

Variance (%) = (Committed – Done) x 100 / Committed

In the example above, in Sprint 1:
Committed = 56
Done = 20
Variance = (56 – 20) x 100 / 56 = 64%

As mentioned above, a good rule of thumb is to aim to keep variance below 20%.
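To make the formula concrete, here is a minimal sketch in Java (the class name is mine; the numbers are the Sprint 1 figures from the appendix) that computes variance and checks it against the 20% rule of thumb:

Java
public final class VarianceCalculator {

    // Variance (%) = (Committed - Done) x 100 / Committed
    public static double variance(int committedPoints, int donePoints) {
        if (committedPoints <= 0) {
            throw new IllegalArgumentException("Committed points must be positive");
        }
        return (committedPoints - donePoints) * 100.0 / committedPoints;
    }

    public static void main(String[] args) {
        // The Sprint 1 example from the appendix: Committed = 56, Done = 20
        double v = variance(56, 20);
        System.out.printf("Variance: %.0f%%%n", v); // Prints "Variance: 64%"
        System.out.println(v <= 20 ? "Within the 20% rule of thumb" : "Above the 20% rule of thumb");
    }
}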
There is a great post on c2.com. c2.com is one of those golden blogs of the past, just like Coding Horror and Joel on Software. You might have stumbled upon them before, especially if you have been around for a long time. In the past, it was the norm to encourage individuals to read the source code and be able to figure out how things work. I see a trend against it from time to time, including ranting on open source software and its documentation, which feels weird since having the source code available is essentially the ultimate form of documentation. Apart from being something that is encouraged as a good practice, I believe it's the natural way for your troubleshooting to evolve. Over the last years, I've caught myself relying mostly on reading the source code, instead of StackOverflow, a Generative AI solution, or a Google search. Going straight to the repository of interest has been waaaaaay faster. There are various reasons for that.

Your Problems Get More Niche

One of the reasons we get the search results we get is popularity. More individuals are searching for Spring Data JPA repositories than for NamedQueries in Hibernate. The more the software product you develop advances, the more specific the issues you need to tackle become. If you want to understand how the Pub/Sub thread pool is used, chances are you will get tons of search results on getting started with Pub/Sub but none answering your question. And that's ok; the more things advance, the more niche a situation gets. The same thing applies to Gen AI-based solutions. These solutions have been of great help, especially the ones that crunched vast amounts of open-source repositories, but still, the results are influenced by the average data they have ingested. We could spend hours battling with search and prompts, but going for the source would be way faster.

Buried Under Search Engine Optimization

The moment you go for the second page on a search engine, you know it's over. The information you are looking for is nowhere to be found. On top of that, you get bombarded with sites popping up with information irrelevant to your request. This affects your attention span, and it's also frustrating, since a hefty amount of time is spent sorting out the results with the hope of maybe getting your answer.

You Want the Truth

LLMs are great. We are privileged to have this technology in this era. Getting a result from an LLM is based on the training data used. Since ChatGPT has crunched GitHub, the results can be way closer to what I am looking for. This can get me far in certain cases. Not in cases where accuracy is needed. LLMs make stuff up, and that's ok; we are responsible adults, and it's our duty to validate the output of a prompt's response as well as extract the value that is there. If you are interested in how many streams the BigQuery connector for Apache Beam opens on the stream API, there's no alternative to reading the source code. The source code is the source of truth. The same applies to that exotic tool you recently found out about, which synchronizes data between two cloud buckets. When you want to know how many operations occur so you can keep the bills low, you have no alternative to checking the source code.

The Quality of the Code Is Great

It's mind-blowing how easy it is to navigate the source code of open-source projects nowadays. The quality of the code and the practices employed are widespread. Most projects have a structure that is pretty much predictable.
Also, the presence of extensive testing assists a lot, since the test cases act as a specification of how a component should behave. If you think about it: on the one hand, I have the choice of issuing multiple search requests or various prompts and then refining them until I get the result of choice; on the other hand, all I have to do is search a project with a predictable structure.

There's Too Much Software Out There

Overall, there is way too much software out there, and it would be a Herculean effort to document it all fully. Also, no matter how many software blogs are out there, they won't focus on that specific need of yours. The more specialized a piece of software is, the less likely it is to be widely documented. I don't see this as a negative; actually, it's a positive that we can have software components available to tackle niche use cases. Having that software is already a win; having to read its source is part of using it.

Devil Is in the Details

It ain't what you don't know that gets you into trouble. It's what you know for sure that just ain't so.

It is common to assume that a software component operates in a specific way and to proceed based on this assumption. The same assumption can also be found in other sources. But what if that module that you thought was thread-safe is not? What if you have to commit a transaction while you were assuming that the transaction is auto-committed once you exit a block? Usually, if something is not spelled out in the documentation in bold letters, we rely on certain assumptions. Checking the source is the one thing that can protect you from false assumptions. It's all about understanding how things work and respecting their peculiarities.

Overall, the more I embraced checking the source code, the less frustrating things became. Somehow it has become my shortcut of choice. Tools and search can fail you, but the source code can't let you down; it's the source of truth, after all.
Imagine inheriting a codebase where classes are clean and concise, and developers don't have to worry about boilerplate code because they can get automatically generated getters, setters, constructors, and even builder patterns. Meet Lombok, a library used for accelerating development by "cleaning" boilerplate code and injecting it automatically at compile time. But is Lombok a hero or a villain in disguise? Let's explore the widespread perceived benefits and potential drawbacks of its adoption in enterprise Java solutions.

Overview

Enterprise software is designed as a stable and predictable solution. When adopting Lombok, a framework that modifies code at compile time, we are navigating in the opposite direction, through seas of unpredictable results and hidden complexities. It's a choice that may put an enterprise application's long-term success at risk. Architects have the tough responsibility of making decisions that will reflect throughout a software's life cycle - from development to maintenance. During development phases, when considering ways to improve developer productivity, it's crucial to balance the long-term impacts of each decision on code complexity, predictability, and maintainability, while also considering that software will rely on multiple frameworks that must be able to function correctly with each other, without incompatibilities that directly interfere with each other's behaviors. Let's have a close look at the different ways Lombok is used and the common thoughts around it, and explore the associated trade-offs.

In-Depth Look

Let's explore practical use cases and some developers' statements I've heard over the years, and examine the ideas around them.

“Lombok Creates Getters and Constructors, Saving Me Time on Data Classes”

Nowadays, we can use powerful IDEs and their code-generation features to create our getters, setters, and builders. It's best to use them to generate code zealously and consciously. Lombok's annotations can lead to unexpected mutability: @Data, by default, generates public setters, which violates encapsulation principles. Lombok offers ways to mitigate this through annotations such as @Value and @Getter(AccessLevel.NONE), although this is an error-prone approach, as now your code is "vulnerable by default," and it's up to you to remember to adjust this every time. Given the fact that code generation to some degree reduces the thought processes during implementation, these configurations can be overlooked by developers who might happen to forget, or who may not know enough about Lombok to be aware of this need. @Builder generates a mutable builder class, which can lead to inconsistent object states. Remember the quote from Joshua Bloch in his book, Effective Java: "Classes should be immutable unless there's a very good reason to make them mutable."
See an example of an immutable class, which is not an anemic model:

Java
public final class Customer { // final class
    private final String id;
    private final String name;
    private final List<Order> orders;

    private Customer(String id, String name, List<Order> orders) {
        this.id = Objects.requireNonNull(id);
        this.name = Objects.requireNonNull(name);
        this.orders = List.copyOf(orders); // Defensive copy
    }

    public static Customer create(String id, String name, List<Order> orders) {
        return new Customer(id, name, orders);
    }

    // Getters (no setters, for immutability)
    public String getId() { return id; }
    public String getName() { return name; }
    public List<Order> getOrders() { return List.copyOf(orders); }

    // Explicit methods for better control
    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Customer)) return false;
        Customer customer = (Customer) o;
        return id.equals(customer.id);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id);
    }
}

“Utility Classes and Exceptions Are a Breeze With Lombok”

Developers may often use Lombok to accelerate exception class creation:

Java
@Getter
@RequiredArgsConstructor // Needed so the final fields can be initialized
public class MyAppGenericException extends RuntimeException {
    private final String error;
    private final String message;
}

While this approach reduces boilerplate code, you may end up with overly generic exceptions and add difficulties for those wanting to create proper exception handling. A suggestion for a better approach is to create specific exception classes with meaningful constructors. In this case, it's essential to keep in mind that, as discussed before, hidden code leads to reduced clarity and creates uncertainty about how exceptions should be used and extended properly. In the example, if the service was designed to use MyAppGenericException as the main parent exception, developers would now rely on a base class that can be confusing, since all constructors and methods are hidden. This particular characteristic may result in worse productivity in larger teams, as the level of understanding of Lombok will differ across developers, not to mention the increased difficulty for new developers or code maintainers to understand how everything fits together. For the reasons presented so far, Lombok's @UtilityClass can also be misleading:

Java
@UtilityClass
public class ParseUtils {
    public static CustomerId parseCustomerId(String customerIdentifier) {
        //...
    }
}

Instead, a standard-based approach is recommended:

Java
public final class ParseUtils {
    private ParseUtils() {
        throw new AssertionError("This class should not be instantiated.");
    }

    public static CustomerId parseCustomerId(String customerIdentifier) {
        //...
    }
}

"Logging Is Effortless With @Slf4j in My Classes"

Another usage of the auto-generation capabilities of Lombok is for boilerplate logging setup within classes through the @Slf4j annotation:

Java
@Slf4j
public class MyService {
    public void doSomething() {
        log.info("log when i'm doing something");
    }
}

You have just tightly coupled the implementation of logging capabilities using a particular framework (Slf4j) with your code implementation.
Instead, consider using CDI for a more flexible approach:

Java
public class SomeService {
    private final Logger logger;

    public SomeService(Logger logger) {
        this.logger = Objects.requireNonNull(logger);
    }

    public void doSomething() {
        logger.info("Doing something");
    }
}

“Controlling Access and Updating an Attribute To Reflect a DB Column Change, for Example, Is Way Simpler With Lombok”

Developers argue that addressing some types of changes in the code can be way faster when not having boilerplate code. For example, in Hibernate entities, changes in database columns could mean updating the code of attributes and getters/setters. Instead of tightly coupling the database and code implementation (e.g., attribute name and column name), consider alternatives that provide proper abstraction between these two layers, such as the Hibernate annotations for customizing column names. Finally, you may also want better control over persistence behaviors instead of hidden generated code. Another popular annotation in Lombok is @With. It's used to create a copy of an object with a single field changed, and it may result in excessive object creation, without any validation of business rules.

“@Builder Simplifies Creating and Working With Complex Objects”

Oversimplified domain models and anemic models are expected results for projects that rely on Lombok. On the generation of the equals, hashCode, and toString methods, be aware of the following:

@EqualsAndHashCode may conflict with entity identity in JPA, resulting in unexpected behaviors in comparisons between detached entities or in collections' operations.

Java
@Entity
@EqualsAndHashCode
public class Order {
    @Id
    @GeneratedValue
    private Long id;
    private String orderNumber;
    // Other fields...
}

@Data automatically creates toString() methods that by default expose all attributes, including sensitive information. Consider carefully implementing these methods based on domain requirements:

Java
@Entity
public class User {
    @Id
    private Long id;
    private String username;
    private String passwordHash; // Sensitive information

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof User)) return false;
        User user = (User) o;
        return Objects.equals(id, user.id);
    }

    @Override
    public int hashCode() {
        return Objects.hash(id);
    }

    @Override
    public String toString() {
        // Note: passwordHash is deliberately excluded
        return "User{id=" + id + ", username='" + username + "'}";
    }
}

“Lombok Lets Me Use Inheritance, Unlike Java Records”

It's true that when using records, we can't use inheritance. However, this is a limitation that often has us delivering better code design. Here's how to address this need through the use of composition and interfaces (note that the interface methods match the record component names, so the generated accessors implement them):

Java
public interface Vehicle {
    String registrationNumber();
    int numberOfWheels();
}

public record Car(String registrationNumber, String model) implements Vehicle {
    @Override
    public int numberOfWheels() {
        return 4;
    }
}

public record Motorcycle(String registrationNumber, boolean hasSidecar) implements Vehicle {
    @Override
    public int numberOfWheels() {
        return hasSidecar ? 3 : 2;
    }
}

“Lombok Streamlines My Build Process and Maintenance”

Lombok's magic comes at the cost of code clarity and goes against SOLID principles:

Hidden implementation: You cannot see the generated methods in the source code. Developers may face challenges in fully understanding all the class's behaviors without dedicating time to learn how Lombok works behind the scenes.
Debugging complications: Debugging the code may not work consistently, as the source code you have is often not a reflection of the behavior at runtime.

Final Thoughts

"The ratio of time spent reading versus writing is well over 10 to 1... Making it easy to read makes it easier to write." - Robert C. Martin, Clean Code

While Lombok offers short-term productivity gains, its use in enterprise Java development introduces significant risks to code maintainability, readability, and long-term project health. To avoid the challenges we've explored that derive from Lombok usage, consider alternative options that give you much higher chances of creating more stable, maintainable, and predictable code. Developers who seek to deliver successful, long-term enterprise software projects in critical domains stand a better chance of succeeding by embracing best practices and good principles of Java development for creating robust, maintainable, and secure software.

Learn More

"Unraveling Lombok’s Code Design Pitfalls: Exploring the Pros and Cons," Otavio Santana
Managing database connection strings securely for any microservice is critical; we often secure the username and password using environment variables but never factor in masking or hiding the database hostname. For reader and writer database instances, some organizations mandate not disclosing the hostname and instead passing it through an environment variable at runtime during application start. This article discusses configuring the hostname through environment variables in the properties file.

Database Configurations Through Environment Variables

We would typically configure the default connection string for Spring microservices in the below manner, with the database username and password getting passed as environment variables:

Properties
server.port=8081
server.servlet.context-path=/api/e-sign/v1
spring.esign.datasource.jdbc-url=jdbc:mysql://localhost:3306/e-sign?allowPublicKeyRetrieval=true&useSSL=false
spring.esign.datasource.username=${DB_USER_NAME}
spring.esign.datasource.password=${DB_USER_PASSWORD}
spring.esign.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.esign.datasource.minimumIdle=5
spring.esign.datasource.maxLifetime=120000

If our microservice connects to a secure database with limited access, and the database administrator or the infrastructure team does not want the database hostname disclosed, then we have an issue. Typically, the production database hostname would be something like below:

Properties
spring.esign.datasource.jdbc-url=jdbc:mysql://prod-db.fabrikam.com:3306/e-sign?allowPublicKeyRetrieval=true&useSSL=false
spring.esign.datasource.username=${DB_USER_NAME}
spring.esign.datasource.password=${DB_USER_PASSWORD}

Using @Configuration Class

In this case, the administrator or the cloud infrastructure team wants us to provide the hostname as an environment variable at runtime when the container starts. One of the options is to build and concatenate the connection string in the configuration class, as below:

Java
@Configuration
public class DatabaseConfig {

    private final Environment environment;

    public DatabaseConfig(Environment environment) {
        this.environment = environment;
    }

    @Bean
    public DataSource databaseDataSource() {
        String hostForDatabase = environment.getProperty("ESIGN_DB_HOST", "localhost:3306");
        String dbUserName = environment.getProperty("DB_USER_NAME", "user-name");
        String dbUserPassword = environment.getProperty("DB_USER_PASSWORD", "user-password");
        String url = String.format("jdbc:mysql://%s/e-sign?allowPublicKeyRetrieval=true&useSSL=false", hostForDatabase);

        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName("com.mysql.cj.jdbc.Driver");
        dataSource.setUrl(url);
        dataSource.setUsername(dbUserName); // Replace with your actual username
        dataSource.setPassword(dbUserPassword); // Replace with your actual password
        return dataSource;
    }
}

The above approach works, but the application.properties approach is easier to use and quite flexible. The properties file allows you to collate all configurations in a centralized manner, making it easier to update and manage. It also improves readability by separating configuration from code. The DevOps team can update the environment variable values without making code changes.

Environment Variable for Database Hostname

Commonly, we use environment variables for the database username and password and use the corresponding placeholder expressions ${} in the application properties file.
Properties
spring.esign.datasource.username=${DB_USER_NAME}
spring.esign.datasource.password=${DB_USER_PASSWORD}

However, for the database URL, we need to use the environment variable only for the hostname and not for the whole connection string, as connection strings for different microservices would have different parameters. To address this, Spring allows you to have the placeholder expression within the connection string, as shown below; this gives flexibility and the ability to stick with the approach of using the application.properties file instead of doing it through the database configuration class.

Properties
spring.esign.datasource.jdbc-url=jdbc:mysql://${ESIGN_DB_HOST}:3306/e-sign?allowPublicKeyRetrieval=true&useSSL=false

Once we have decided on the above approach, and if we need to troubleshoot any issue for whatever reason in lower environments, we can use the ApplicationListener interface to see the resolved URL:

Java
@Component
public class ApplicationReadyLogger implements ApplicationListener<ApplicationReadyEvent> {

    private final Environment environment;

    public ApplicationReadyLogger(Environment environment) {
        this.environment = environment;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        String jdbcUrl = environment.getProperty("spring.esign.datasource.jdbc-url");
        System.out.println("Resolved JDBC URL: " + jdbcUrl);
    }
}

If there is an issue with the hostname configuration, it will show as an error when the application starts. However, after the application has started, thanks to the above ApplicationReadyLogger implementation, we can see the database URL in the application logs. Please note that we should not do this in production environments where the infrastructure team wants to maintain secrecy around the database writer hostname.

Using the above steps, we can configure the database hostname as an environment variable in the connection string inside the application.properties file.

Conclusion

Using environment variables for database hostnames to connect to data-sensitive databases can enhance security and flexibility and give the cloud infrastructure and DevOps teams more power. Using placeholder expressions ensures that our configuration remains clear and maintainable.
In today's rapidly evolving go-to-market landscape, organizations with diverse product portfolios face intricate pricing and discounting challenges. The implementation of a robust, scalable pricing framework has become paramount to maintaining competitive edge and operational efficiency. This study delves into the strategic utilization of Salesforce CPQ's advanced features, specifically price rules and Quote Calculator Plugins (QCP), to address complex dynamic pricing scenarios. This guide presents an in-depth analysis of ten sophisticated use cases, demonstrating how these automation tools can be harnessed to create agile, responsive pricing models. By emphasizing a low-code, declarative configuration methodology, this comprehensive guide provides software developers and solution architects with a blueprint to accelerate development cycles and enhance the implementation of nuanced pricing strategies.

What Are Price Rules and QCP?

Price Rules in Salesforce CPQ

Price Rules are a feature in Salesforce CPQ that allows users to define automated pricing logic. They apply discounts, adjust prices, or add charges based on specified conditions, enabling complex pricing scenarios without custom code. To implement these complex rules in Salesforce CPQ, you'll often need to combine multiple features such as Price Rules, Price Conditions, Price Actions, Custom Fields, Formula Fields, Product Rules, and Lookup Query objects. Set the evaluation event (Before/On/After calculation) and the evaluation order of price rules appropriately to avoid row locks or incorrect updates.

QCP (Quote Calculator Plugin)

QCP is a JavaScript-based customization tool in Salesforce CPQ that allows for advanced, custom pricing calculations. It provides programmatic access to the quote model, enabling complex pricing logic beyond standard CPQ features. First, you'll need to enable the QCP in your Salesforce CPQ settings. Then, you can create a new QCP script or modify an existing one. When needed, make sure the QCP has access to the quote, line items, and other CPQ objects. QCP has a character limit; therefore, it is advised that it only be used for logic that cannot be implemented with any declarative CPQ method. Additionally, you may need to use Apex code for more complex calculations or integrations with external systems.

Use Case Examples Using Price Rules and QCP

Use Case 1: Volume-Based Tiered Discounting

Apply different discount percentages based on quantity ranges. For example:

Label | Minimum_Quantity__c | Maximum_Quantity__c | Discount_Percentage__c
Tier 1 | 1 | 10 | 0
Tier 2 | 11 | 50 | 5
Tier 3 | 51 | 100 | 10
Tier 4 | 101 | 999999 | 15

Price Rule Implementation

Use Price Rules with Lookup Query objects to define tiers and corresponding discounts.

Create New Price Rule:
Name: Volume-Based Tiered Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Lookup Query to Price Rule:
Name: Volume Discount Tier Lookup
Lookup Object: Volume Discount Tier (the above table represents this Lookup Object)
Match Type: Single
Input Field: Quantity
Operator: Between
Low-Value Field: Minimum_Quantity__c
High-Value Field: Maximum_Quantity__c
Return Field: Discount_Percentage__c

Add Price Action to Price Rule:
Type: Discount (Percent)
Value Source: Lookup
Lookup Object: Volume Discount Tier
Lookup Source Variable: Return Value
Target Object: Line
Target Field: Discount

With this configuration, any number of discount tiers can be supported as per the volume being ordered.
Lookup tables/objects provide a great way to handle a dynamic pricing framework.

QCP Implementation

Now, let's see how the same use case can be implemented with the QCP script. The code can be invoked with Before/On/After calculating events as per the need of the use case.

JavaScript
function applyVolumeTieredDiscount(lineItems) {
    lineItems.forEach(item => {
        let discount = 0;
        if (item.Quantity > 100) {
            discount = 15;
        } else if (item.Quantity > 50) {
            discount = 10;
        } else if (item.Quantity > 10) {
            discount = 5;
        }
        item.Discount = discount;
    });
}

Use Case 2: Bundle Pricing

Offer special pricing when specific products are purchased together. For instance, a computer, monitor, and keyboard might have a lower total price when bought as a bundle vs individual components.

Price Rule Implementation

Create Product Bundles and use Price Rules to apply discounts when all components are present in the quote.

Create a new Price Rule:
Name: Bundle Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Conditions:
Condition 1:
Field: Product Code
Operator: Equals
Filter Value: PROD-A
Condition 2:
Field: Quote.Line Items.Product Code
Operator: Contains
Filter Value: PROD-B
Condition 3:
Field: Quote.Line Items.Product Code
Operator: Contains
Filter Value: PROD-C

Add Price Action:
Type: Discount (Absolute)
Value: 100 // $100 discount for the bundle
Apply To: Group
Apply Immediately: True

QCP Implementation

JavaScript
function applyBundlePricing(lineItems) {
    const bundleComponents = ['Product A', 'Product B', 'Product C'];
    const allComponentsPresent = bundleComponents.every(component =>
        lineItems.some(item => item.Product.Name === component)
    );
    if (allComponentsPresent) {
        const bundleDiscount = 100; // $100 discount for the bundle
        lineItems.forEach(item => {
            if (bundleComponents.includes(item.Product.Name)) {
                item.Additional_Discount__c = bundleDiscount / bundleComponents.length;
            }
        });
    }
}

Use Case 3: Cross-Product Conditional Discounting

Apply discounts on one product based on the purchase of another. For example, offer a 20% discount on software licenses if the customer buys a specific hardware product.

Price Rule Implementation

Use Price Conditions to check for the presence of the conditional product and Price Actions to apply the discount on the target product.

Create a new Price Rule:
Name: Product Y Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Conditions:
Condition 1:
Field: Product Code
Operator: Equals
Filter Value: PROD-Y
Condition 2:
Field: Quote.Line Items.Product Code
Operator: Contains
Filter Value: PROD-X

Add Price Action:
Type: Discount (Percent)
Value: 20
Apply To: Line
Apply Immediately: True

QCP Implementation

JavaScript
function applyCrossProductDiscount(lineItems) {
    const hasProductX = lineItems.some(item => item.Product.Name === 'Product X');
    if (hasProductX) {
        lineItems.forEach(item => {
            if (item.Product.Name === 'Product Y') {
                item.Discount = 20;
            }
        });
    }
}

Use Case 4: Time-Based Pricing

Adjust prices based on subscription length or contract duration. For instance, offer a 10% discount for 2-year contracts and 15% for 3-year contracts.

Price Rule Implementation

Use Quote Term fields and Price Rules to apply discounts based on the contract duration. This use case demonstrates the use of another important feature, the Price Action Formula.
Create a new Price Rule:
Name: Contract Duration Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Condition: (to avoid invocation of the price action for every calculation)
Type: Custom
Advanced Condition: Quote.Subscription_Term__c >= 24

Add Price Action:
Type: Discount (Percent)
Value Source: Formula
Apply To: Line
Apply Immediately: True
Formula:

JavaScript
CASE(
  FLOOR(Quote.Subscription_Term__c / 12),
  2, 10,
  3, 15,
  4, 20,
  5, 25,
  0
)

This approach offers several advantages:
It combines multiple tiers into a single price rule, making it easier to manage.
It's more flexible and can easily accommodate additional tiers by adding more cases to the formula.
It uses a formula-based approach, which can be modified without needing to create multiple price rules for each tier.

QCP Implementation

JavaScript
function applyTimeBasedPricing(quote, lineItems) {
    const contractDuration = quote.Contract_Duration_Months__c;
    let discount = 0;
    if (contractDuration >= 36) {
        discount = 15;
    } else if (contractDuration >= 24) {
        discount = 10;
    }
    lineItems.forEach(item => {
        item.Additional_Discount__c = discount;
    });
}

Use Case 5: Customer/Market Segment-Specific Pricing

Set different prices for various customer categories. For example, enterprise customers might get a 25% discount, while SMBs get a 10% discount.

Price Rule Implementation

Use Account fields to categorize customers and Price Rules to apply segment-specific discounts.

Create a new Price Rule:
Name: Customer Segment Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Condition:
Type: Custom
Advanced Condition: Quote.Account.Customer_Segment__c is not blank

Add Price Action:
Type: Discount (Percent)
Value Source: Formula
Apply To: Line
Apply Immediately: True
Formula:

JavaScript
CASE(
  Quote.Account.Customer_Segment__c,
  'Enterprise', 25,
  'Strategic', 30,
  'SMB', 10,
  'Startup', 5,
  'Government', 15,
  0
)

QCP Implementation

JavaScript
function applyCustomerSegmentPricing(quote, lineItems) {
    const customerSegment = quote.Account.Customer_Segment__c;
    let discount = 0;
    switch (customerSegment) {
        case 'Enterprise':
            discount = 25;
            break;
        case 'SMB':
            discount = 10;
            break;
    }
    lineItems.forEach(item => {
        item.Additional_Discount__c = discount;
    });
}

Use Case 6: Competitive Pricing Rules

Automatically adjust prices based on competitors' pricing data. For instance, always price your product 5% below a specific competitor's price.

Price Rule Implementation

Create custom fields to store competitor pricing data on the product object and use Price Rules with formula fields to calculate and apply the adjusted price.
Create a new Price Rule:
Name: Competitive Pricing
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Condition:
Field: Competitor_Price__c
Operator: Is Not Null

Add Price Actions:
Action 1:
Type: Custom
Value Field: Competitor_Price__c * 0.95
Target Field: Special_Price__c
Action 2 (to ensure the price doesn't go below the floor price):
Type: Price
Value Source: Formula
Formula: MAX(Special_Price__c, Floor_Price__c)
Target Field: Special_Price__c

QCP Implementation

JavaScript
function applyCompetitivePricing(lineItems) {
    lineItems.forEach(item => {
        if (item.Competitor_Price__c) {
            const ourPrice = item.Competitor_Price__c * 0.95; // 5% below competitor
            const minimumPrice = item.Floor_Price__c || item.ListPrice * 0.8; // 20% below list price as floor
            item.Special_Price__c = Math.max(ourPrice, minimumPrice);
        }
    });
}

Use Case 7: Multi-Currency Pricing

Apply different pricing rules based on the currency used in the transaction. For example, offer a 5% discount for USD transactions but a 3% discount for EUR transactions. The discounted prices can be maintained directly in the Pricebook entry of a particular product; however, price rules can extend the conditional logic further to add a dynamic pricing element based on various conditions on quote- and quote-line-specific data.

Price Rule Implementation

Use the Multi-Currency feature in Salesforce and create Price Rules that consider the Quote Currency field. The lookup table approach provides further flexibility:

Label | Currency_Code__c | Discount_Percentage__c
USD | USD | 5
EUR | EUR | 3
GBP | GBP | 4
JPY | JPY | 2
CAD | CAD | 4.5
AUD | AUD | 3.5
CHF | CHF | 2.5

Create Price Rule:
Name: Multi-Currency Discount
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Lookup Query to Price Rule (the above table represents the structure of the Currency Discount object):
Name: Currency Discount Lookup
Lookup Object: Currency Discount
Match Type: Single
Input Field: CurrencyIsoCode
Operator: Equals
Comparison Field: Currency_Code__c
Return Field: Discount_Percentage__c

Add Price Action to Price Rule:
Type: Discount (Percent)
Value Source: Lookup
Lookup Object: Currency Discount
Lookup Source Variable: Return Value
Target Object: Line
Target Field: Discount

QCP Implementation

JavaScript
function applyMultiCurrencyPricing(quote, lineItems) {
    const currency = quote.CurrencyIsoCode;
    let discount = 0;
    switch (currency) {
        case 'USD':
            discount = 5;
            break;
        case 'EUR':
            discount = 3;
            break;
    }
    // add more currencies as needed
    lineItems.forEach(item => {
        item.Additional_Discount__c = discount;
    });
}

Use Case 8: Margin-Based Pricing

Dynamically adjust prices to maintain a specific profit margin. For instance, ensure a minimum 20% margin on all products.

Price Rule Implementation

Create custom fields for cost data and use Price Rules with formula fields to calculate and enforce minimum prices based on desired margins.
Create a new Price Rule:
Name: Minimum Margin
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Condition:
Field: (List Price - Cost__c) / List Price
Operator: Less Than
Filter Value: 0.20

Add Price Action:
Type: Custom
Value Field: Cost__c / (1 - 0.20)
Target Field: Special_Price__c

QCP Implementation

JavaScript
function applyMarginBasedPricing(lineItems) {
    const desiredMargin = 0.20; // 20% margin
    lineItems.forEach(item => {
        if (item.Cost__c) {
            const minimumPrice = item.Cost__c / (1 - desiredMargin);
            if (item.NetPrice < minimumPrice) {
                item.Special_Price__c = minimumPrice;
            }
        }
    });
}

Use Case 9: Geolocation-Based Pricing

Set different prices based on the customer's geographical location, with multiple levels: apply different pricing adjustments based on the following hierarchy.

Price Rule Implementation

Use Account, User, or Quote fields to store location data and create Price Rules that apply location-specific adjustments.

Label | Sales_Region__c | Area__c | Sub_Area__c | Price_Adjustment__c
NA_USA_CA | North America | USA | California | 1.1
NA_USA_NY | North America | USA | New York | 1.15
NA_Canada | North America | Canada | null | 1.05
EU_UK_London | Europe | UK | London | 1.2
EU_Germany | Europe | Germany | null | 1.08
APAC_Japan | Asia-Pacific | Japan | null | 1.12

Create the Price Rule:
Name: Geolocation Based Pricing
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Lookup Query to Price Rule:
Name: Geo Pricing Lookup
Lookup Object: Geo Pricing
Match Type: Single
Input Field 1: Quote.Account.Sales_Region__c
Operator: Equals
Comparison Field: Sales_Region__c
Input Field 2: Quote.Account.BillingCountry
Operator: Equals
Comparison Field: Area__c
Input Field 3: Quote.Account.BillingState
Operator: Equals
Comparison Field: Sub_Area__c
Return Field: Price_Adjustment__c

Add Price Action to Price Rule:
Type: Percent Of List
Value Source: Lookup
Lookup Object: Geo Pricing
Lookup Source Variable: Return Value
Target Object: Line
Target Field: Special Price

QCP Implementation

JavaScript
export function onBeforeCalculate(quote, lines, conn) {
    applyGeoPricing(quote, lines);
}

function applyGeoPricing(quote, lines) {
    const account = quote.record.Account;
    const salesRegion = account.Sales_Region__c;
    const area = account.BillingCountry;
    const subArea = account.BillingState;

    // Fetch the geo pricing adjustment
    const geoPricing = getGeoPricing(salesRegion, area, subArea);
    if (geoPricing) {
        lines.forEach(line => {
            line.record.Special_Price__c = line.record.ListPrice * geoPricing.Price_Adjustment__c;
        });
    }
}

function getGeoPricing(salesRegion, area, subArea) {
    // This is a simplified version. In a real scenario, you'd query the Custom Metadata Type.
    // For demonstration, we're using a hardcoded object.
    const geoPricings = [
        { Sales_Region__c: 'North America', Area__c: 'USA', Sub_Area__c: 'California', Price_Adjustment__c: 1.10 },
        { Sales_Region__c: 'North America', Area__c: 'USA', Sub_Area__c: 'New York', Price_Adjustment__c: 1.15 },
        { Sales_Region__c: 'North America', Area__c: 'Canada', Sub_Area__c: null, Price_Adjustment__c: 1.05 },
        { Sales_Region__c: 'Europe', Area__c: 'UK', Sub_Area__c: 'London', Price_Adjustment__c: 1.20 },
        { Sales_Region__c: 'Europe', Area__c: 'Germany', Sub_Area__c: null, Price_Adjustment__c: 1.08 },
        { Sales_Region__c: 'Asia-Pacific', Area__c: 'Japan', Sub_Area__c: null, Price_Adjustment__c: 1.12 }
    ];

    // Find the most specific match
    return geoPricings.find(gp =>
        gp.Sales_Region__c === salesRegion && gp.Area__c === area && gp.Sub_Area__c === subArea
    ) || geoPricings.find(gp =>
        gp.Sales_Region__c === salesRegion && gp.Area__c === area && gp.Sub_Area__c === null
    ) || geoPricings.find(gp =>
        gp.Sales_Region__c === salesRegion && gp.Area__c === null && gp.Sub_Area__c === null
    );
}

Use Case 10: Usage-Based Pricing

Implement complex calculations for pricing based on estimated or actual usage. For instance, price cloud storage based on projected data volume and access frequency.

Price Rule Implementation

A tiered pricing model for a cloud storage service based on estimated monthly usage. The pricing has a base price and additional charges for usage tiers. This implementation demonstrates another approach: leveraging custom metadata and configuration settings along with native price rule functionality.

Pricing Model:
Base Price: $100 per month
0-1000 GB: Included in base price
1001-5000 GB: $0.05 per GB
5001-10000 GB: $0.04 per GB
10001+ GB: $0.03 per GB

Step 1: Create Custom Metadata Type in Salesforce setup:
Go to Setup > Custom Metadata Types
Click "New Custom Metadata Type"
Label: Usage Pricing Tier
Plural Label: Usage Pricing Tiers
Object Name: Usage_Pricing_Tier__mdt
Add custom fields:
Minimum_Usage__c (Number)
Maximum_Usage__c (Number)
Price_Per_GB__c (Currency)

Step 2: Add records to the Custom Metadata Type:

Label | Minimum_Usage__c | Maximum_Usage__c | Price_Per_GB__c
Tier 1 | 0 | 1000 | 0
Tier 2 | 1001 | 5000 | 0.05
Tier 3 | 5001 | 10000 | 0.04
Tier 4 | 10001 | 999999999 | 0.03

Create the Price Rule:
Name: Usage-Based Pricing
Active: True
Evaluation Event: On Calculate
Calculator: Default Calculator
Conditions Met: All

Add Price Condition:
Field: Product.Pricing_Model__c
Operator: Equals
Filter Value: Usage-Based

Add Lookup Query to Price Rule:
Name: Usage Pricing Tier Lookup
Lookup Object: Usage Pricing Tier
Match Type: Single
Input Field: Estimated_Monthly_Usage__c
Operator: Between
Low-Value Field: Minimum_Usage__c
High-Value Field: Maximum_Usage__c
Return Field: Price_Per_GB__c

Add Price Action to Price Rule:
Type: Custom
Value Source: Formula
Target Object: Line
Target Field: Special_Price__c
Formula:

JavaScript
100 + (MAX(Estimated_Monthly_Usage__c - 1000, 0) * Usage_Pricing_Tier_Lookup.Price_Per_GB__c)

QCP Implementation

JavaScript
export function onBeforeCalculate(quote, lines, conn) {
    applyUsageBasedPricing(quote, lines);
}

function applyUsageBasedPricing(quote, lines) {
    lines.forEach(line => {
        if (line.record.Product__r.Pricing_Model__c === 'Usage-Based') {
            const usage = line.record.Estimated_Monthly_Usage__c || 0;
            const basePrice = 100;
            let additionalCost = 0;
            if (usage > 1000) {
                additionalCost += calculateTierCost(usage, 1001, 5000, 0.05);
            }
            if (usage > 5000) {
                additionalCost += calculateTierCost(usage, 5001, 10000, 0.04);
            }
            if (usage > 10000) {
                additionalCost += calculateTierCost(usage, 10001, usage, 0.03);
            }
            line.record.Special_Price__c = basePrice + additionalCost;
        }
    });
}

function calculateTierCost(usage, tierStart, tierEnd, pricePerGB) {
    const usageInTier = Math.min(usage, tierEnd) - tierStart + 1;
    return Math.max(usageInTier, 0) * pricePerGB;
}

// Optional: Add a function to provide usage tier information to the user
export function onAfterCalculate(quote, lines, conn) {
    lines.forEach(line => {
        if (line.record.Product__r.Pricing_Model__c === 'Usage-Based') {
            const usage = line.record.Estimated_Monthly_Usage__c || 0;
            const tierInfo = getUsageTierInfo(usage);
            line.record.Usage_Tier_Info__c = tierInfo;
        }
    });
}

function getUsageTierInfo(usage) {
    if (usage <= 1000) {
        return 'Tier 1: 0-1000 GB (Included in base price)';
    } else if (usage <= 5000) {
        return 'Tier 2: 1001-5000 GB ($0.05 per GB)';
    } else if (usage <= 10000) {
        return 'Tier 3: 5001-10000 GB ($0.04 per GB)';
    } else {
        return 'Tier 4: 10001+ GB ($0.03 per GB)';
    }
}

Likewise, a plethora of use cases can be implemented using price rule configuration. The recommendation is to always use a declarative approach before turning to QCP, which is specifically available as an extension to the price rule engine.

Note: The rules and scripts above have not been compiled or tested; they are included for demonstration purposes.

Conclusion

Salesforce CPQ's Price Rules and Quote Calculator Plugin (QCP) offer a powerful combination for implementing dynamic pricing strategies. Price Rules provide a declarative approach for straightforward pricing logic, while QCP enables complex, programmatic calculations. When used with Custom Metadata Types or custom lookup objects, these tools create a flexible, scalable, and easily maintainable pricing system. Together, they can address a wide range of pricing needs, from simple to highly sophisticated, allowing businesses to adapt quickly to market changes and implement nuanced pricing strategies. This versatility enables organizations to optimize sales processes, improve profit margins, and respond effectively to diverse customer needs within the Salesforce CPQ ecosystem.
Imagine millions of customers trying to book last-minute deals on a hotel or flight during one of the biggest sale events of the year; while some customers can book, others see failures while making their bookings. This inconsistency results in frustrated customers and logistical nightmares. This typical scenario highlights a fundamental challenge in distributed systems and databases: how do you balance consistency and availability? This article aims to highlight the nuances of this balancing act, along with the complexities and trade-offs in play.

CAP Theorem, Consistency, and Availability

To understand the nuances better, it's important to understand the CAP theorem. As there are several other articles on the internet explaining this, we will refrain from going into details. However, per Eric Brewer (who formulated the CAP theorem), a distributed system can achieve only two of the three guarantees: Consistency, Availability, and Partition Tolerance. In simple words, during a network partition (when communication between nodes is disrupted), a system must choose between being consistent (all the nodes showing the same data) and being available (all requests receiving a response).

Consistency in distributed databases means that each read receives the most recent write. This ensures that the data is accurate and reliable, which matters especially in systems built for financial transactions, where even a slight discrepancy can lead to a major issue. However, as highlighted by the CAP theorem, strong consistency comes with increased latency and complexity. A perfect example of a database that prioritizes high consistency is Google Spanner, as it covers scenarios that require high data integrity. Google Spanner achieves this with the help of an innovative API called TrueTime, which provides globally synchronized timestamps with bounded uncertainty. It achieves globally consistent timestamps with the help of GPS and atomic clocks for time synchronization across different availability zones and data centers. It also offers synchronous replication and strong consensus protocols to achieve high consistency.

Availability, as we already know, ensures that the system continues to operate even if certain nodes fail. This is crucial for system reliability and for high-traffic applications. Prioritizing availability, as mentioned earlier, can result in eventual consistency problems, where different nodes show different data. Cassandra and DynamoDB (DDB) exemplify this approach, helping you handle massive, distributed workloads efficiently.

Trade-Off: Real-World Applications

It's crucial to understand the implications of prioritizing either consistency or availability based on the needs of your application. Financial institutions or organizations that handle payment data often need to prioritize consistency to ensure transactional accuracy, whereas social media applications or organizations that cater to continuous engagement might want to tilt toward availability. A consistency-first system might leverage a globally synchronized clock to ensure strong consistency across data centers regardless of higher latency, whereas an availability-first system might leverage Amazon DDB's tunable consistency levels to allow developers to pick between high and low consistency based on their requirements.
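As an illustration of such tunable reads, here is a minimal sketch using the AWS SDK for Java v2 (the table name and key are hypothetical). Setting consistentRead(true) requests a strongly consistent read, while leaving it unset gives DDB's default eventually consistent read, which also consumes less read capacity:

Java
import java.util.Map;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.GetItemRequest;
import software.amazon.awssdk.services.dynamodb.model.GetItemResponse;

public class TunableReadExample {
    public static void main(String[] args) {
        try (DynamoDbClient dynamoDb = DynamoDbClient.create()) {
            GetItemRequest request = GetItemRequest.builder()
                    .tableName("Bookings") // hypothetical table
                    .key(Map.of("bookingId", AttributeValue.builder().s("B-1001").build()))
                    .consistentRead(true) // strongly consistent; omit for eventual consistency
                    .build();

            GetItemResponse response = dynamoDb.getItem(request);
            System.out.println(response.item());
        }
    }
}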
It’s equally important to understand the consistency models that can inform design decisions:

- Strong consistency: Guarantees that all nodes see the same data at the same time.
- Eventual consistency: Ensures that all nodes will eventually converge to the same state; this works well where immediate consistency is not crucial.
- Other models: Causal consistency (which preserves the order of causally related operations), read-your-writes consistency (which ensures users see their own updates), and session consistency (which provides read-your-writes guarantees within a single session).

Strategies for Achieving the Right Balance

Hybrid Approaches

Many systems adopt a hybrid model with tunable consistency levels. For example, Amazon's DDB allows users to select a consistency level per request: applications can set the ConsistentRead parameter on read requests, offering more flexibility.

Context-Driven Decisions

The choice between consistency and availability must be based on the requirements of your application. Prioritize strong consistency if you need accurate and reliable transactions. Prioritize availability if your application requires high user engagement and high transaction volumes.

CRDTs and Advanced Consensus Algorithms

Emerging techniques like Conflict-Free Replicated Data Types (CRDTs) and advanced consensus algorithms offer promising ways to mitigate the trade-off between consistency and availability. CRDTs allow concurrent updates to be made without conflicts, achieving both availability and (eventual) consistency, which makes them an ideal choice for applications like live document editing or distributed file systems. Raft and Multi-Paxos are two consensus algorithms that enhance fault tolerance and consistency in distributed systems; they ensure that all nodes agree on the same value even in the case of network failures or node partitions. Google Spanner, mentioned earlier, leverages a combination of Multi-Paxos and TrueTime (its globally synchronized clock) to provide strong consistency and data integrity across geographic regions.

Conclusion

Balancing consistency and availability requires a good understanding of your application's needs and trade-offs. Whether through a hybrid approach, context-driven decisions, or an emerging technology, the right strategy lets you optimize your system to meet its performance, scalability, and data integrity requirements. At the end of the day, whether you pick between strong and eventual consistency in DDB or leverage an advanced consensus algorithm, the goal is to design a reliable and robust system that meets customer needs and expectations.
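To make the CRDT idea above concrete, here is a minimal grow-only counter (G-Counter) sketch in JavaScript. It is an illustrative toy, not a production library: each replica increments only its own slot, and merging takes the per-replica maximum, so concurrent updates commute and every replica converges to the same total.

// Toy G-Counter CRDT: replicas accept writes independently and converge on merge.
class GCounter {
  constructor(replicaId) {
    this.replicaId = replicaId;
    this.counts = {}; // replicaId -> count
  }
  increment(by = 1) {
    this.counts[this.replicaId] = (this.counts[this.replicaId] || 0) + by;
  }
  value() {
    return Object.values(this.counts).reduce((sum, n) => sum + n, 0);
  }
  merge(other) {
    // Taking the per-replica max makes merge idempotent, commutative, and associative.
    for (const [id, n] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, n);
    }
  }
}

const a = new GCounter("a");
const b = new GCounter("b");
a.increment(3); // accepted while partitioned from b
b.increment(2); // accepted while partitioned from a
a.merge(b);
b.merge(a);
// a.value() === b.value() === 5: both replicas stayed available and still converged.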
In my previous article, Managing Architectural Tech Debt, I talked about understanding and managing architectural technical debt: an often-ignored, but ironically one of the most damaging, categories of technical debt. In this article, I want to dive deeper into one way to manage architectural technical debt (and technical debt as a whole): architectural observability (AO). AO is a new category of observability that I believe is just as important as, if not more important than, application performance management (APM). I believe we need to shift observability left, to the architectural stage, where we can not just see symptoms but fix core problems. Let's take a look.

APM Is (Half) the Answer

You already know that APM is important. Gartner defines it as "software that enables the observation and analysis of application health, performance, and user experience." IDC reports that companies using APM solutions see a 2.5x improvement in mean time to resolution (MTTR) and a 50% reduction in the number of incidents. APM and observability:

- Help ensure better user experiences by monitoring performance and responsiveness in real time.
- Can help identify defects.
- Can provide teams with data such as usage patterns, bottlenecks, and overall health to keep systems healthy.

Overall, APM has become a necessary tool for troubleshooting and fixing issues in enterprise environments. APM has become table stakes. And APM works! In my current role, we use APM to observe the usage of our APIs and understand the breakdown of URI (uniform resource identifier) requests across all of our consumers. When our APIs are not functioning as expected, we lean on APM to gain visibility into performance bottlenecks. When an alert is triggered, the same interface can be used for initial troubleshooting to help pin down the root cause.

But even though APM works, there's a problem. APM identifies the symptoms of defects, not the actual defects themselves. It's up to the team to track down why the problems are occurring. And with the pressure we often feel in prod to "just fix the problem as fast as you can," I often see that while symptoms may be addressed, teams don't have the time (or organizational support) to find and fix the actual core problems. Imagine taking aspirin because you get a headache every night, but never taking the time to figure out why you keep having headaches. To find, and address, the why of our defects, we need architectural observability.

Architectural Observability: Getting to the Real Answers

We need to shift our processes left, stop focusing on symptoms, and instead focus on the root cause of these problems, actually reducing the number of incidents caught with APM. That's where architectural observability comes in. Architectural observability is the ability to analyze an application's architecture (both statically and dynamically), understand how it works, observe changes, and identify and fix architectural technical debt. It is the next step in observability tools. Architectural observability gives you visibility into your application architecture, helping you solve problems (not just identify symptoms) earlier in the SDLC by identifying architectural issues. You probably already have the data you need to implement AO; it uses the same data sources as APM (for example, OpenTelemetry (OTel)).
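For example, a Node.js service already instrumented for APM with OpenTelemetry is, in principle, already emitting the kind of telemetry an AO tool can analyze. A minimal sketch, assuming the standard @opentelemetry/sdk-node and @opentelemetry/auto-instrumentations-node packages and a hypothetical service name:

// Minimal OTel bootstrap; "orders-service" is a hypothetical service name, and the
// serviceName option assumes a recent version of @opentelemetry/sdk-node.
import { NodeSDK } from "@opentelemetry/sdk-node";
import { getNodeAutoInstrumentations } from "@opentelemetry/auto-instrumentations-node";

const sdk = new NodeSDK({
  serviceName: "orders-service",
  // Auto-instruments common libraries (HTTP, Express, database clients, etc.),
  // so traces capture the service-to-service flows an AO tool reasons about.
  instrumentations: [getNodeAutoInstrumentations()],
});

sdk.start();

The same traces that power APM dashboards then double as raw material for architecture analysis.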
But AO takes that data and applies a layer of intelligence focused on analyzing the architecture and the sources of architectural technical debt. For example, an AO tool might analyze:

- Architectural complexity: The interdependence and relationships of services within the architecture, the number of flows in a service, and the identification of multi-hop and circular flows.
- Dependency mappings: Relationships among services, including circular dependencies.
- Architectural drift: What has changed since your last release, what new domains/services were added, and what new dependencies and flows were introduced.
- Technical debt: Such as resource exclusivity, service dependencies, duplicate services that should be merged, and complexity. Technical debt is a huge problem in the industry: 70% of organizations say that technical debt is a major obstacle to innovation.
- Database-related issues: For example, whether multiple services are accessing the same tables.

Architectural observability is proactive and strategic. Where APM tools alert on the leaks in the roof when it is already raining, AO identifies the architectural issues that can lead to those leaks, well before they actually occur.

Using Tools To Gain Architectural Observability

Used well, AO doesn't just help you find issues earlier; it helps you:

- Truly discover and understand your architecture and its relationships and dependencies.
- Prevent issues caused by architectural changes.
- Make systems more resilient and scalable by continually monitoring, modernizing, and strengthening your architecture.
- Minimize technical debt.

I love that last one. As I pointed out in my last article, architectural debt is a foe I have been battling for over a decade. Architectural observability is a new field and is starting to gain traction as something teams must have. There aren't many tools yet built around the concept, but let's look at how your team might use one of the first AO tools, vFunction, to gain AO. Once you've connected to your applications (through the OTel connector or similar), the tool analyzes your system (in vFunction's case, using AI to understand and analyze your architecture). Then you get a report on the current state of your architecture. You'll see details such as:

- A visualization of the architecture across your entire app portfolio
- A map of services and entry points, cross-service APIs, and external APIs
- Exclusivity of database tables and other resources (Kafka, Redis, MongoDB)
- Complexity scores
- And more

And you can use architectural observability to monitor your architecture not just in its present state, but dynamically as it changes. Changes to your architecture (a.k.a. architectural drift)? You'll know right away. Added a dependency that impacts resiliency? You'll catch it before migrating. Did you create a circular flow? Did you significantly increase the complexity of your system? Find out early. Incurring even more technical debt? You can't hide from it now. With architectural observability, you have hard proof of how architectural debt is affecting your systems, and you can identify and prioritize the actual problems in your systems early and often, rather than only addressing the symptoms in production. AO makes applications more resilient and more scalable, and it helps your team move faster. And AO can help proactively. Is your team moving from a monolith to microservices? AO can give you a plan to move your architecture forward.
With AO, you could analyze your monolith, understand how its domains and functionality are structured and connected, and then get actionable steps for modularizing and moving that functionality to microservices. And once you have a distributed architecture (microservices or distributed monoliths), you'll want to make sure it doesn't drift or grow so complex that things start to break, you lose control, and you have to slow down your engineering velocity. AO keeps your applications in check, whether monolith or microservices.

Architectural Observability Gives You Better Systems

My readers may recall that I have been focused on the following mission statement, which I feel can apply to any IT professional:

"Focus your time on delivering features/functionality that extends the value of your intellectual property. Leverage frameworks, products, and services for everything else." — J. Vester

Architectural observability adheres to this mission statement perfectly. By moving from plain-vanilla observability to architectural observability, we can shift left for more reliable, more resilient, and better-performing systems. This shift affords us more time to spend on the business problems our teams understand best. As John F. Kennedy once said: "The time to repair the roof is when the sun is shining." Architectural observability should be placed on the same level as (or higher than) APM. With AO, teams can truly understand their architecture, gain a broader and more insightful view of their applications than with APM alone, and fix problems earlier. Have a really great day!
In today's software development and deployment, maintaining agility, stability, and security is crucial. Centralized configuration management and feature flags are tools that help achieve these goals, and integrating them into an organization's DevSecOps process provides the flexibility needed to respond to changes quickly and effectively.

Feature Flag Driven Development (FFDD)

Feature Flag Driven Development (FFDD) is a software development approach built around feature flags, or toggles, that control the rollout of new features. The main goal of FFDD is to separate the development of new features from their release. This gives development teams the flexibility to roll out features gradually and to choose specific user groups to test them.

Key Principles of FFDD

- FFDD provides control of feature visibility at the individual user level. This enables targeted testing and a gradual, more controlled rollout of new features.
- With FFDD, teams can continuously deploy code to production without revealing new features to end users. This allows flexible deployments with faster feedback loops.
- FFDD allows teams to quickly turn off released features in case of issues or negative user feedback. This reduces the risk associated with deploying untested or unstable features and provides a fast way to return an application to its previous working state.
- FFDD allows incremental rollout of new features by releasing them to a small percentage of users first. Based on the performance and feedback gathered, a feature can then be gradually rolled out to all users.
- With FFDD, features can be released to a subset of users to gather feedback, allowing teams to make informed decisions about improvements.

Benefits of FFDD

- FFDD separates development from deployment, allowing teams to roll out features more quickly.
- By testing features directly in production, teams can identify and resolve issues, ensuring high-quality software and faster releases.
- Quickly toggling off features minimizes the need for rollbacks and redeployments, reducing downtime and negative impact on users.
- FFDD improves collaboration between development, QA, and product teams, helping everyone gain a better understanding of feature requirements.
- Feature flags provide the flexibility to adjust features at runtime without code changes or redeployment.

Centralized Configuration Management (CCM)

Centralized configuration management (CCM) is the practice of managing configuration settings for applications from a central location. Here are the key principles and benefits of CCM:

Key Principles

- CCM ensures that all application instances use the same configuration settings by acting as a single source of truth.
- CCM includes version control (GitOps), which helps developers track configuration changes easily.
- CCM guarantees consistent configurations across all environments, significantly reducing the risk of errors.
- CCM automates the management of configurations, minimizing manual effort and reducing the risk of human error.

Benefits

- CCM improves the deployment process by providing centralized management, reducing human error, and ensuring consistency.
- CCM allows quicker deployments, reducing time to market.
- With centralized configurations, maintenance becomes easier and more efficient.
- CCM makes collaboration between development, operations, and QA teams easier by offering a centralized location for managing configurations, ensuring consistency across teams.
- CCM often includes features for change management and tracking, supporting compliance with regulatory requirements like GDPR or PCI DSS.
- By reducing the manual effort needed to manage configurations, CCM results in significant cost savings.

Advanced Deployment Strategies

Rolling Updates

FFDD and CCM play a major role in enabling rolling updates by adding control and flexibility to the deployment process. With FFDD, teams can roll out changes gradually, monitoring performance and user feedback before performing a full rollout. CCM ensures that configurations are consistent across environments and makes deployments easier to control. Combined, FFDD and CCM let teams deploy updates more safely and effectively: they reduce the risk of manual errors and downtime, and they speed up debugging when users hit errors, since the configuration is centralized.

Case Study

Here is a hypothetical case study of how rolling updates can be implemented with feature flags and centralized configuration management:

Scenario: An e-commerce platform is planning to release a new feature that allows users to save items to their wish list directly from the product listing page. The development team wants to roll out this feature gradually.

Implementation: The team uses feature flags to enable the new wish-list feature for a small percentage of users. They also use centralized configuration management to manage configuration settings such as the button's appearance and behavior.

Rollout strategy: The team starts with a 5% rollout and monitors the feature's performance. After gathering user feedback, they gradually increase the rollout percentage until the feature reaches all users, monitoring feedback at every stage.

Benefits: By using feature flags and centralized configuration management, the team can roll out the new feature in a controlled way, identify and address any issues, and deliver a better experience to users.

Canary Releases

FFDD and CCM facilitate canary releases by providing controlled, flexible, and consistent deployments. FFDD allows teams to gradually release new features to a small subset of users, giving them the chance to identify issues or bugs early and reducing the impact on the wider user base. CCM ensures that configurations are centrally managed, making changes faster and easier to roll out while keeping environments consistent. This combination enables teams to deploy canaries efficiently and mitigate the risks of new releases early, using feedback from the subset of users.

Case Study

Here is a hypothetical case study of how a canary release can be implemented with feature flags and centralized configuration management:

Scenario: A social media platform wants to introduce a new algorithm for ranking the posts in users' feeds. The platform wants to make sure the new algorithm improves user engagement.

Implementation: The development team uses feature flags to enable the new algorithm for a subset of users, while the rest of the users continue with the existing algorithm.
The team also uses centralized configuration management to manage the configuration settings for the new algorithm, such as the weighting of the factors that determine post visibility.

Rollout strategy: The team monitors user engagement, post visibility metrics, and any user feedback. Based on these, the team decides whether to increase the percentage of users who see posts ranked by the new algorithm.

Benefits: By using feature flags and centralized configuration management, the team can test the new algorithm with a subset of users before rolling it out to everyone. This minimizes the risk of negative user impact and enables a smooth transition to the new algorithm.

A/B Testing

FFDD and CCM play a key role in conducting A/B tests and help control the experiments efficiently. FFDD allows teams to easily enable or disable different versions of a feature for different user groups, so the performance and feedback of each implementation can be compared. CCM keeps the configuration for the A/B testing environments centrally managed, making it easy to adjust test parameters. Together, FFDD and CCM enable teams to conduct A/B tests with precision and gather feedback, helping them make data-driven decisions to improve their products and services.

Case Study

Here is a hypothetical case study of how A/B testing can be implemented with feature flags and centralized configuration management:

Scenario: A news website's mobile app is testing two different headline styles to see which one attracts more readers. The team wants to test the impact of headline style on user engagement.

Implementation: The team uses feature flags to create two variations of headline styles, Style A and Style B. They also use centralized configuration management to manage the configuration settings for each headline style, such as font size, color, and placement.

A/B testing strategy: The team displays headlines in either Style A or Style B to users and tracks click-through rates along with other metrics, such as time spent on the article and social shares, for each style. Based on this data, the impact of each headline style is measured and the final style to roll out is decided.

Benefits: By using feature flags and centralized configuration management, the team can easily test different headline styles and gather data on user engagement, helping it make informed decisions in the future.

Dark Launch

FFDD and CCM support dark launching new features by providing a controlled way to test them in production without exposing them to end users. FFDD allows teams to deploy new features that can be toggled on at runtime for testers only, while CCM manages the configuration for the dark launch centrally, making the eventual rollout easier. This approach allows teams to test new features in a production setting and address any issues before fully releasing them to all users.

Case Study

Here is a hypothetical case study of how a dark launch can be implemented with feature flags and centralized configuration management:

Scenario: A financial services platform is developing a new feature that allows users to apply for loans online. The platform wants to test this feature in production without exposing it to end users until it is fully tested.
Implementation: The development team uses feature flags to enable the loan application feature for internal testing only. They also use centralized configuration management to manage the feature's configuration settings, such as the approval process.

Dark launch strategy: The team tests the loan application feature internally in production. Once it is fully tested and approved, they enable it for external users.

Benefits: By using feature flags and centralized configuration management, the team can test the loan application feature in the production environment while minimizing the risk of exposing users to an unstable feature.

User Segmentation

FFDD and CCM enable the delivery of personalized experiences to different user groups based on specific criteria. FFDD allows teams to enable or disable features for different segments of users, ensuring each group receives a personalized experience, while CCM keeps the configurations for each user segment centrally managed. Together, they enable teams to deliver personalized experiences based on user behavior and preferences.

Case Study

Here is a hypothetical case study of how user segmentation can be implemented with feature flags and centralized configuration management:

Scenario: A streaming platform plans to introduce a new subscription plan with exclusive content. The platform wants to test the new plan with a select group of users before making it available to everyone.

Implementation: The development team uses feature flags to enable the new premium plan for a small group of users who meet specific criteria. They also use centralized configuration management to manage the plan's configuration settings, such as pricing and content availability.

User segmentation strategy: The team monitors user engagement to determine the effectiveness of the new plan. Based on the metrics gathered, they roll the plan out to a wider audience.

Benefits: By using feature flags and centralized configuration management, the team can test the new premium plan with a select group of users and roll it out to everyone based on the feedback. This approach helps the platform understand user expectations and, in turn, increase revenue.

Tools for Implementing FFDD and CCM

Centralized Configuration Management

- IBM Cloud App Configuration provides a centralized, unified platform to store, manage, and deploy application configurations across different environments. It offers centralized configuration settings and environment-specific parameters, including secrets, and supports exporting configurations to a version-controlled repository such as Git.
- Azure App Configuration helps centrally manage application settings and feature flags, enabling seamless updates and configuration across distributed applications.
- HashiCorp Consul is a service mesh and networking solution that provides service discovery, configuration, and segmentation capabilities. It allows centralized configuration storage and distribution of configuration to your services.
- Spring Cloud Config provides server- and client-side support for externalized configuration in a distributed system. It allows you to store configuration data in a version-controlled repository (e.g., Git) and distribute it to your applications.
- AWS Systems Manager Parameter Store provides secure, hierarchical storage for configuration data and secrets management.
It integrates seamlessly with other AWS services and provides versioning and encryption capabilities.
- etcd is a distributed key-value store often used for service discovery and configuration management in Kubernetes clusters. It provides a reliable, distributed data store for configuration data.

Feature Flags

- IBM Cloud App Configuration supports feature flag management for creating, managing, and deploying feature flags in your applications. You can easily create flags to control the rollout of new features and experiment with different variations, and you can define different segments of users and target specific groups with different flag configurations, enabling A/B testing and measurement of the impact of changes on user behavior.
- LaunchDarkly is a feature management platform that lets you control feature lifecycles, target specific user segments, and perform A/B testing. It provides SDKs for various programming languages and integrations with popular development tools.
- Split.io is a feature flagging and experimentation platform that lets you roll out features gradually, target specific user segments, and measure the impact of changes on key metrics. It provides SDKs for multiple languages and integrations with CI/CD pipelines.
- Flagsmith is an open-source feature flagging and experimentation platform for managing feature flags, remote configurations, and user segmentation. It provides a user-friendly dashboard and SDKs for multiple languages.
- Optimizely is an experimentation platform for running A/B tests, multivariate tests, and personalization campaigns. It provides a visual editor for creating experiments and integrations with popular analytics tools.
- Rollout.io is a feature flagging and experimentation platform that lets you control feature releases, target specific user segments, and monitor feature performance in real time. It provides SDKs for various platforms and languages.

Conclusion

Centralized configuration management and feature flags provide significant benefits and control for advanced deployment strategies. By implementing them, organizations can achieve flexibility, agility, and control over their deployments.
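To ground the ideas above, here is a minimal, vendor-neutral sketch of how a flag fetched from a central configuration store might gate a percentage rollout. All names (the flag key, config shape, and helper function) are hypothetical; a real deployment would use one of the platforms listed above.

// Vendor-neutral illustration; the flag document, key, and helper are hypothetical.
// In practice the flag definition would be fetched from a CCM or flag service.
import { createHash } from "crypto";

const flagConfig = {
  "wishlist-from-listing": { enabled: true, rolloutPercent: 5, allowSegments: ["beta"] },
};

function isFeatureOn(flagKey, user) {
  const flag = flagConfig[flagKey];
  if (!flag || !flag.enabled) return false;

  // Explicitly targeted segments (e.g., internal testers during a dark launch).
  if (flag.allowSegments.some((s) => user.segments.includes(s))) return true;

  // Deterministic bucketing: the same user always lands in the same bucket,
  // so ramping 5% -> 25% -> 100% only ever adds users, never flip-flops them.
  const hash = createHash("sha256").update(`${flagKey}:${user.id}`).digest();
  return hash.readUInt32BE(0) % 100 < flag.rolloutPercent;
}

// Usage: gate the new code path; editing the central config rolls it back instantly.
const user = { id: "u-42", segments: [] };
if (isFeatureOn("wishlist-from-listing", user)) {
  // new wish-list-from-listing behavior
} else {
  // existing behavior
}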
As a tech leader in the custom software development industry for over a decade, I’ve seen methodologies evolve and change. One that has particularly caught my attention in recent years is Adaptive Software Development (ASD). This approach is designed to help teams thrive in uncertain and rapidly changing environments. In this guide, I’ll walk you through the principles, benefits, and practical steps for implementing ASD in your projects.

Understanding Adaptive Software Development

Adaptive Software Development (ASD) was developed by Jim Highsmith. Unlike traditional methods that rely on strict planning and prediction, ASD recognizes that change is certain and welcomes it. ASD consists of three main phases: speculate, collaborate, and learn.

The Phases of ASD

- Speculate: Instead of detailed upfront planning, ASD encourages teams to create a mission statement and a high-level plan, accepting that requirements will evolve.
- Collaborate: Continuous communication and teamwork are crucial. Developers, testers, and stakeholders work together throughout the process.
- Learn: After each iteration, teams reflect on their work, gather feedback, and adjust their approach.

Why Choose ASD?

Adaptive Software Development offers numerous benefits. As an experienced tech leader, I can confirm that ASD not only allows teams to navigate uncertainty but also drives innovation and excellence. By choosing it, you can ensure that your software development process is agile, responsive, and aligned with your business goals, helping you succeed in an ever-changing market.

Flexibility and Responsiveness

From experience, I can say that flexibility and responsiveness are foundations for success. Traditional methodologies often fall short because they require detailed plans that assume requirements will remain stable. However, we all know that change is inevitable. Whether it's a shift in market trends, new customer demands, or unexpected obstacles, your project needs to adapt quickly to stay relevant. ASD is designed to embrace change rather than resist it. This approach allows you to respond to new information and evolving customer needs without derailing the entire project, ensuring that your software remains aligned with your business goals and market demands and helping you stay ahead of the competition.

Enhanced Collaboration

Another major benefit is that ASD fosters a collaborative environment. It has given my team a space where all team members and stakeholders have a voice. This is not just about getting everyone on the same page, but about applying your team's collective expertise and creativity. In an ASD environment, developers, testers, and stakeholders work together throughout the project lifecycle. Regular meetings and open communication channels facilitate the exchange of ideas and feedback, ensuring that everyone is aligned and working toward the same goals. Involving your team in planning and decision-making creates a sense of ownership and accountability that drives motivation and productivity.

Continuous Improvement

One of the core principles of ASD is the focus on continuous learning and improvement. After each iteration, my team conducts review sessions to gather feedback, analyze what went well, and identify areas for improvement. This process allows teams to refine their approach and enhance their performance over time.
By regularly assessing your work and making adjustments, you can continuously improve the quality of your software and the efficiency of your development process. As leaders, we should build a culture of excellence in our teams, and ASD helps with just that: every team member becomes committed to delivering the best possible results.

Customer-Centric Development

In traditional development methodologies, customers often see the final product only at the end of the project, which can lead to misunderstandings and dissatisfaction. Nobody wants to end up there. Here is another benefit of ASD: customers are involved throughout the development process. Regular iterations and feedback sessions ensure that the product aligns with their needs and expectations. This customer-centric approach builds trust and satisfaction, leading to a better user experience and higher customer retention rates. You will also be able to create software that truly meets your customers' needs and adds value to their lives.

Risk Management

Managing risk is a critical aspect of any software development project. When things go differently than planned, it can lead to significant delays and cost overruns, so I always aim for a predictable and controlled development process. ASD helps mitigate risk by breaking the project into smaller, manageable iterations. Each iteration produces a working piece of software that can be reviewed and tested. This approach allows you to identify and address potential issues early, reducing the risk of costly mistakes and rework.

Better Quality and Higher Productivity

What I commend most is the iterative nature of ASD: it promotes continuous testing and quality assurance. Each iteration involves reviewing and testing the working software, which helps us identify and fix defects early. This leads to higher-quality software and reduces the risk of critical issues in the final product. The focus on collaboration, continuous improvement, and customer feedback also helps my team increase productivity. By leveraging every team member's strengths and expertise, teams can work more efficiently and deliver better results. By choosing ASD, you can ensure higher-quality software and increased productivity, helping you achieve your project goals faster and more effectively.

Adaptability to Market Changes

In a constantly evolving market, the ability to adapt to change is crucial. ASD gives you the ability to adapt to new information and market trends, which has helped us stay relevant and competitive. It has also helped us keep our software development process agile and responsive, supporting business success.

Empowering Teams

Last but not least, Adaptive Software Development empowers teams by engaging them in planning, decision-making, and problem-solving. This empowerment cultivates a sense of ownership and responsibility, which should be the end goal of every leader. When team members feel valued and included, they are more likely to be engaged and steadfast in delivering their best work. It also enables the establishment of a driven, high-performing team dedicated to achieving project objectives.
Implementing ASD in Your Projects

Here is how my team has implemented Adaptive Software Development:

Step 1: Embrace the Mindset. The first step in implementing ASD is to embrace its mindset. This means accepting that change is not only inevitable but also beneficial. Encourage your team to be open to new ideas and ready to adapt their plans as new information emerges.

Step 2: Establish a High-Level Plan. During the speculate phase, create a mission statement and a high-level plan. This plan should outline the project's goals and deliverables but remain flexible enough to accommodate changes. Remember, the goal is to set a direction, not to create a detailed roadmap.

Step 3: Foster Collaboration. Ensure that all team members are involved in planning and decision-making. Regular meetings, open communication channels, and collaborative tools can facilitate this.

Step 4: Iterative Development. Break the project into smaller, manageable iterations. Each iteration should produce a working piece of software that can be reviewed and tested. This approach allows for frequent feedback and adjustments, ensuring that the project stays on track.

Step 5: Continuous Learning. Gather feedback from team members and stakeholders, analyze what went well and what didn't, and make the necessary adjustments. This learning phase is crucial for continuous improvement.

Step 6: Use Adaptive Practices. Incorporate adaptive practices such as pair programming, test-driven development (TDD), and continuous integration (CI) to enhance flexibility and responsiveness. These practices align well with the principles of ASD and can significantly improve the development process.

Overcoming Challenges in ASD Implementation

If this is new to you, you might face some challenges. I have listed the most common ones below:

Resistance to Change. One of the most common challenges in implementing ASD is resistance to change. My team members, who were used to traditional methodologies, were hesitant to embrace a more flexible approach. To overcome this, I provided training and support to help them understand the benefits of ASD. You also need to encourage a culture of experimentation and learning.

Maintaining Momentum. In an adaptive environment, it can be challenging to maintain momentum and keep the project moving forward. I overcame this through clear communication, regular check-ins, and a focus on short-term goals, which helped keep the team aligned and motivated.

Balancing Flexibility and Structure. While flexibility is a core principle of ASD, it's also important to maintain a certain level of structure. I established clear roles and responsibilities and used project management tools to track progress and manage tasks. This balance helped ensure that the project remained organized and focused.

Best Practices for ASD

As we cannot cover everything in one guide, I will quickly go through some best practices that might help you:

Encourage Open Communication. Open communication is essential for effective collaboration. Encourage team members to share their ideas, concerns, and feedback openly. You can also use communication tools like Slack or Microsoft Teams to facilitate discussions and keep everyone informed.

Focus on Customer Feedback. As discussed above, customer feedback is invaluable in an adaptive environment. Involve customers in the development process and gather their feedback regularly.
This feedback will guide your decisions and ensure that the final product meets their needs.

Prioritize Quality. I really emphasize this: quality should never be compromised. Even in a flexible environment, implement best practices such as code reviews, automated testing, and continuous integration. This helps you maintain high standards of quality throughout the development process.

Invest in Training. As with overcoming resistance to change, invest in training and development to help your team adapt to the methodology. Provide resources and support to help them understand the principles and practices of ASD. This will encourage continuous learning and improvement.

Use the Right Tools. I use project management tools like Jira or Trello to manage tasks and track progress. You can also use collaboration tools like Confluence to facilitate knowledge sharing and documentation.

Conclusion

Implementing Adaptive Software Development can be a game-changer for your projects. By embracing flexibility, fostering collaboration, and focusing on continuous improvement, you can deliver high-quality software that meets your customers' evolving needs. As a tech leader with over a decade of experience in custom software development, I've seen firsthand the benefits of ASD. It's a methodology that not only helps teams navigate uncertainty but also drives innovation and excellence. Whether you're developing a retail mobile app, a healthcare management system, or any other type of software, ASD provides a framework that can help you succeed. By following the practical steps and best practices outlined in this guide, you can implement ASD effectively and take your projects to new heights.
Hello, DZone Community! We have several surveys in progress as part of our research for upcoming Trend Reports, and we would love for you to join us by sharing your experiences and insights (anonymously if you choose). Readers just like you drive the content that we cover in our Trend Reports. Check out the details for each research survey below. Over the coming months, we will compile and analyze data from hundreds of respondents; results and observations will be featured in the "Key Research Findings" sections of our Trend Reports.

Security Research

Security is everywhere; you can't live with it, and you certainly can't live without it! We are living in an entirely unprecedented world, one where bad actors are growing more sophisticated and taking full advantage of the rapid advancements in AI. We will be exploring the most pressing security challenges and emerging strategies in this year's survey for our August Enterprise Security Trend Report. Our 10-12-minute Enterprise Security Survey explores:

- Building a security-first organization
- Security architecture and design
- Key security strategies and techniques
- Cloud and software supply chain security

At the end of the survey, you can also enter the prize drawing for a chance to receive one of two $175 (USD) e-gift cards! Join the Security Research.

Data Engineering Research

As a continuation of our annual data-related research, we're consolidating our database, data pipeline, and data and analytics scopes into a single 12-minute survey that will help guide the narratives of our July Database Systems Trend Report and a data engineering report later in the year. Our 2024 Data Engineering Survey explores:

- Database types, languages, and use cases
- Distributed database design and architectures
- Data observability, security, and governance
- Data pipelines, real-time processing, and structured storage
- Vector data and databases, plus other AI-driven data capabilities

Join the Data Engineering Research. You'll also have the chance to enter the $500 raffle at the end of the survey; five random respondents will be drawn and will receive $100 (USD) each!

Cloud and Kubernetes Research

This year, we're combining our annual cloud native and Kubernetes research into one 10-minute survey that dives further into these topics as they relate to one another and to security, observability, AI, and more. DZone's research will inform these Trend Reports:

- May: Cloud Native: Championing Cloud Development Across the SDLC
- September: Kubernetes in the Enterprise

Our 2024 Cloud Native Survey covers:

- Microservices, container orchestration, and tools/solutions
- Kubernetes use cases, pain points, and security measures
- Cloud infrastructure, costs, tech debt, and security threats
- AI for release management and monitoring/observability

Join the Cloud Native Research. Don't forget to enter the $750 raffle at the end of the survey! Five random respondents will be selected to receive $150 (USD) each. Your responses help inform the narrative of our Trend Reports, so we truly cannot do this without you. Stay tuned for each report's launch and see how your insights align with the larger DZone Community. We thank you in advance for your help!

—The DZone Publications team