TL;DR: Transformed and Scrum

Despite criticism of Scrum as a framework for effective product creation from the product community, most notably from Marty Cagan himself, I believe it is worthwhile to compare the principles that help form successful product teams with those of Scrum. Let's delve into an analysis of "Transformed" and how its principles align with Scrum's.

Matching Transformed's Product Success Principles With Scrum Principles

Last week, Paweł Huryn summarized the product success principles from Marty Cagan's book Transformed. (Please find his post on LinkedIn and consider following him.) I took those principles and compared them step by step with the principles derived from the Scrum Guide.

As an agile practitioner, you are familiar with the ongoing debate about the best frameworks and principles that drive success. Marty Cagan's Transformed offers a set of product principles that are widely respected in the product community, but can these principles coexist with Scrum, a framework often critiqued by that same community? Despite the criticism, a closer look reveals that Scrum aligns with and enhances these principles when implemented effectively. Scrum's values and practices support each principle, from empowering teams and focusing on outcomes to fostering innovation and continuous learning. Scrum's emphasis on self-managing teams, iterative progress, and servant leadership creates a culture that prioritizes value, innovation, and adaptability: the key ingredients for building successful products. When applied correctly, Scrum is a robust framework for driving product success.

So, let's delve into how Scrum matches the five key product principles identified by Paweł Huryn in Transformed, demonstrating that Scrum remains a powerful approach for building successful products.

I. Product Team Principles

1. Principle: Empowered With Problems to Solve
Scrum principle match: Self-managing teams
Explanation: In Scrum, teams are self-managing, meaning they have the autonomy to decide how to tackle problems. This aligns with the principle of being empowered with problems to solve: the Scrum Team, particularly the Developers, decides how to approach its work within the Sprint to achieve the Sprint Goal.

2. Principle: Outcomes Over Output
Scrum principle match: Focus on delivering value
Explanation: Scrum emphasizes delivering value over simply completing tasks. The Sprint Goal and the focus on delivering potentially releasable Increments during each Sprint align with the principle of prioritizing outcomes over output. Scrum teams are encouraged to focus on what brings the most value, not just on doing more work.

3. Principle: Sense of Ownership
Scrum principle match: Commitment
Explanation: Scrum instills a strong sense of ownership through the commitment to Sprint and Product Goals and the collective responsibility of the Scrum Team. For example, each team member is committed to achieving the Sprint Goal, which fosters a sense of ownership over their work.

4. Principle: Collaboration
Scrum principle match: Collaboration and cross-functionality
Explanation: Scrum's framework is built on collaboration, particularly within the Scrum Team, which is cross-functional and works together closely to deliver the Sprint Goals. Daily Scrums, Sprint Planning, and Retrospectives all emphasize and facilitate collaboration. Unsurprisingly, no one on a Scrum team has the authority to tell anyone else what to do or how to do it.

Selected Insight
"The most fundamental of all product concepts is the notion of an empowered, cross-functional product team."
Scrum alignment: Scrum's very foundation is built upon cross-functional teams empowered to manage their work.
This directly aligns with Scrum's principle of forming cross-functional teams with all the skills necessary to deliver Product Increments every Sprint.

II. Product Strategy Principles

1. Principle: Focus
Scrum principle match: Sprint Goal
Explanation: In Scrum, the Sprint Goal focuses the entire team during a Sprint. It is a singular objective that the Scrum Team commits to achieving, which aligns perfectly with the principle of focus in product strategy. This focus ensures the team works towards a common, clearly defined goal.

2. Principle: Powered by Insights
Scrum principle match: Empiricism
Explanation: Scrum is built on the pillars of empiricism: transparency, inspection, and adaptation. Inspecting the Increments, gathering feedback, and adapting the Product Backlog accordingly is driven by insights gained through these activities. This ensures that decisions are based on actionable data and information rather than assumptions.

3. Principle: Transparency
Scrum principle match: Transparency
Explanation: Transparency is one of Scrum's core pillars. All aspects of the Scrum process are visible to those responsible for the outcome. This ensures that all team members and stakeholders clearly understand the project's current state, which directly aligns with the principle of transparency in product strategy.

4. Principle: Placing Bets
Scrum principle match: Product Backlog refinement
Explanation: In Scrum, the Product Backlog represents at any moment what the team considers necessary to accomplish the next step. It is continuously refined and prioritized, based on new insights, according to what is most valuable to the customer and the organization. This process involves placing bets on which backlog items (problems or features) should be addressed next, based on their potential impact and alignment with the Product Goal.

Selected Insight
"Product strategy is all about which problems are the most important to solve."
Scrum alignment: Scrum aligns with this principle by defining Product Goals and, consequently, through the ongoing prioritization and refinement of the Product Backlog to reflect the "best" path to accomplish this goal. The Product Owner continuously assesses which items are most critical to achieving the Product Goal, ensuring the Scrum Team is always focused on solving the most important problems.

III. Product Discovery Principles

1. Principle: Minimize Waste
Scrum principle match: Iterative development and incremental delivery
Explanation: Scrum's iterative and incremental approach inherently minimizes waste by focusing on delivering small, valuable Increments that are regularly reviewed and adjusted. This prevents large amounts of work that may not be needed or valued, aligning with the principle of minimizing waste.

2. Principle: Assess Product Risks
Scrum principle match: Inspection and adaptation
Explanation: In Scrum, the regular inspection of Increments and subsequent adaptation of the Product Backlog help to identify and manage risks early. By continuously assessing what has been built and adjusting the plan accordingly, Scrum teams can mitigate product risks effectively.

3. Principle: Embrace Rapid Experimentation
Scrum principle match: Sprint
Explanation: Each Sprint in Scrum can be seen as an experiment in which the team tests ideas and solutions by building and delivering Increments while investing in product discovery activities, such as creating prototypes to test assumptions. This rapid cycle of planning, execution, review, and adaptation aligns with the principle of rapid experimentation in product discovery.

4. Principle: Test Ideas Responsibly
Scrum principle match: Sprint Planning and incremental approach
Explanation: In Scrum, Sprint Planning can be adapted to include the creation of prototypes or experiments specifically for testing assumptions.
These experiments do not need to meet the full Definition of Done but should be designed to gather valuable feedback quickly and responsibly. By breaking down larger ideas into smaller, testable components (often referred to as "spikes" or "experiments"), teams can test assumptions without the full cost of production-level quality. This incremental approach allows teams to validate hypotheses quickly, adjust based on feedback, and responsibly manage risk and resources during the discovery phase.

Selected Insight
"The heart of product discovery is rapidly testing product ideas for what the solution could be […] an experimentation culture not only helps you address risks, but it is absolutely central to innovation."
Scrum alignment: Scrum's framework is built around iterative experimentation. Each Sprint is an opportunity to test and validate ideas quickly, making Scrum a natural fit for an experimentation-driven approach to product discovery.

IV. Product Delivery Principles

1. Principle: Small, Frequent, Uncoupled Releases
Scrum principle match: Incremental delivery
Explanation: Scrum promotes delivering small, valuable Increments frequently. Each Sprint should produce potentially shippable product Increments that can be released to stakeholders or users. This aligns with the principle of small, frequent, and uncoupled releases, ensuring continuous value delivery.

2. Principle: Instrumentation
Scrum principle match: Definition of Done and transparency
Explanation: The Definition of Done in Scrum includes all the criteria necessary to ensure that an Increment is fully functional and of high quality. Instrumentation, such as testing, documentation, and monitoring, is crucial to meet the Definition of Done, ensuring that every Increment is releasable. Transparency in Scrum also ensures that the state of each Increment is always visible and understood by all stakeholders.

3. Principle: Monitoring
Scrum principle match: Empiricism (inspection and adaptation)
Explanation: In Scrum, regularly inspecting and adapting the product is akin to monitoring. By constantly checking the product's progress and quality through reviews and inspections, Scrum teams can identify issues early and make necessary adjustments, aligning with the principle of ongoing monitoring.

4. Principle: Deployment Infrastructure
Scrum principle match: Continuous Integration and Continuous Deployment
Explanation: While Scrum doesn't explicitly prescribe technical practices, it is often complemented by practices like Continuous Integration and Continuous Deployment (CI/CD). These practices ensure that the deployment infrastructure is set up to allow for frequent, reliable releases, which aligns with the principle of having a robust deployment infrastructure.

Selected Insight
"This means, at a minimum, each product team releases their new work no less than every other week (…) For strong product companies, this means releasing several times per day (...)."
Scrum alignment: Scrum's iterative cycles of work (Sprints) encourage regular, frequent releases. While traditional Scrum suggests a Sprint duration of up to one month, many teams operate in environments where releases happen more frequently, demonstrating Scrum's flexibility to accommodate rapid delivery cycles.

V. Product Culture Principles

1. Principle: Principles Over Process
Scrum principle match: Scrum values (Commitment, Courage, Focus, Openness, Respect)
Explanation: Scrum places a strong emphasis on values that guide the behavior of the team rather than rigid adherence to a process. These values create a culture where principles like commitment and openness take precedence, enabling teams to make decisions that align with their goals and values rather than just following a set process.

2. Principle: Trust Over Control
Scrum principle match: Self-managing teams
Explanation: Scrum teams are self-managing, meaning they have the authority and trust to decide how best to achieve their goals. This principle of trusting the team to make decisions rather than imposing strict control aligns perfectly with Scrum's emphasis on self-management and empowerment.

3. Principle: Innovation Over Predictability
Scrum principle match: Empiricism (inspection and adaptation)
Explanation: Scrum's iterative approach, where each Sprint allows for experimentation, learning, and adaptation, fosters a culture of innovation. Instead of prioritizing predictability, Scrum teams are encouraged to inspect and adapt based on their learning, promoting innovation over rigid adherence to plans.

4. Principle: Learning Over Failure
Scrum principle match: Sprint Retrospective
Explanation: The Sprint Retrospective is a key Scrum event focused on continuous improvement by learning from past Sprints. It provides a space for the team to reflect, fostering a culture of learning from experience rather than dwelling on failure, and aligns well with the principle of prioritizing learning over avoiding failure.

Selected Insight
"(...) this means moving from hands-on micromanagement to servant-based leadership with active coaching. It means leading with context rather than control."
Scrum alignment: Scrum advocates for servant leadership, particularly in the role of the Scrum Master, who supports the team by removing impediments and fostering an environment of self-organization. This approach aligns with the shift from micromanagement to leadership that provides context and support, allowing the team to operate with autonomy and trust.

Food for Thought on Transformed and Scrum

Let's finish with some thought-provoking questions and points for further reflection:

1. Is Scrum Truly Flexible Enough?
While Scrum aligns well with the principles in Transformed, some argue that Scrum can become too rigid if not adapted to specific team needs. How can teams maintain the flexibility to innovate while adhering to Scrum's principles?

2. Balancing Empowerment and Accountability
Empowerment is a key principle in both Scrum and Cagan's approach, but how can teams balance it with the need for accountability? What practices ensure empowerment doesn't lead to a lack of direction or responsibility?

3. The Role of Leadership in Scrum
Scrum emphasizes servant leadership, but how can leaders balance being hands-off with providing the necessary guidance and support? How can leadership adapt as teams mature in their Scrum practices?

4. Is There a Point Where Scrum Becomes Counterproductive?
In complex, fast-moving environments, could there be situations where adhering to Scrum might hinder rather than help? How can teams recognize these situations and adjust their approach accordingly? After all, we are not paid to practice Scrum but to solve our customers' problems within the given constraints while contributing to the organization's sustainability.

5. Evolving Product Culture
Both Scrum's and Cagan's principles emphasize the importance of culture. What steps can organizations take to continuously evolve and improve their product culture, ensuring it supports innovation and sustainable delivery?

Transformed: Conclusion

Marty Cagan's Transformed provides a compelling vision for product success, with principles that champion empowerment, focus, rapid experimentation, and a product-first culture. However, there is a growing narrative, often voiced within the product community, that Scrum might fall short of supporting these ambitious goals. But let's set the record straight: Scrum, when applied thoughtfully, is not just a process or a set of rituals. It is a framework designed to empower teams, deliver real outcomes, and foster a culture of continuous improvement.
The principles outlined by Cagan are not at odds with Scrum; in fact, they can be seen as an evolution of the very principles Scrum has always stood for. Consider this: Scrum's focus on self-organizing, cross-functional teams directly aligns with empowering teams with problems to solve. The emphasis on delivering potentially shippable Increments matches the drive for outcomes over output. And the culture of continuous inspection and adaptation through Retrospectives mirrors the need for learning over failure.

Scrum is far from the bureaucratic beast some critics make it out to be. It is a dynamic, adaptable framework that, when used with a clear understanding of its purpose, can be the very vehicle that drives the success Cagan's principles aim for. Yes, Scrum requires a commitment to transparency, collaboration, and the relentless pursuit of value. But when these elements are in place, Scrum becomes a powerful enabler of the product excellence Cagan envisions.

In the end, it's not about Scrum versus Cagan's principles from Transformed. It's about leveraging Scrum as the foundational practice that makes these principles actionable and achievable in the real world. When we stop viewing Scrum as a set of rules and start seeing it as a tool for fostering a culture of self-management, innovation, focus, and continuous delivery, we realize that Scrum is not just helpful; it is essential for driving the kind of product success Cagan talks about.

What is your take on "Transformed" meeting Scrum? Please share it with us in the comments.
As software systems grow in complexity, it becomes increasingly important to write maintainable, extensible, and testable code. The SOLID principles, introduced by Robert C. Martin (Uncle Bob), are a set of guidelines that can help you achieve these goals. These principles are designed to make your code more flexible, modular, and easier to understand and maintain. In this post, we'll explore each of the SOLID principles with code examples to help you understand them better.

1. Single Responsibility Principle (SRP)

The Single Responsibility Principle states that a class should have only one reason to change. In other words, a class should have a single responsibility or job. By adhering to this principle, you can create classes that are easier to understand, maintain, and test.

Example

Imagine we have a UserManager class that handles user registration, password reset, and sending notifications. This class violates the SRP because it has multiple responsibilities.

```java
public class UserManager {
    public void registerUser(String email, String password) {
        // Register user logic
    }

    public void resetPassword(String email) {
        // Reset password logic
    }

    public void sendNotification(String email, String message) {
        // Send notification logic
    }
}
```

To adhere to the SRP, we can split the responsibilities into separate classes:

```java
public class UserRegistration {
    public void registerUser(String email, String password) {
        // Register user logic
    }
}

public class PasswordReset {
    public void resetPassword(String email) {
        // Reset password logic
    }
}

public class NotificationService {
    public void sendNotification(String email, String message) {
        // Send notification logic
    }
}
```

2. Open/Closed Principle (OCP)

The Open/Closed Principle states that software entities (classes, modules, functions, etc.) should be open for extension but closed for modification. In simpler terms, you should be able to add new functionality without modifying the existing code.
Example

Let's consider a ShapeCalculator class that calculates the area of different shapes. Without adhering to the OCP, we might end up with a lot of conditional statements as we add support for new shapes.

```java
public class ShapeCalculator {
    public double calculateArea(Shape shape) {
        if (shape instanceof Circle) {
            Circle circle = (Circle) shape;
            return Math.PI * circle.getRadius() * circle.getRadius();
        } else if (shape instanceof Rectangle) {
            Rectangle rectangle = (Rectangle) shape;
            return rectangle.getWidth() * rectangle.getHeight();
        }
        // More conditional statements for other shapes
        return 0;
    }
}
```

To adhere to the OCP, we can introduce an abstract Shape class and define the calculateArea method in each concrete shape class.

```java
public abstract class Shape {
    public abstract double calculateArea();
}

public class Circle extends Shape {
    private double radius;

    public Circle(double radius) {
        this.radius = radius;
    }

    @Override
    public double calculateArea() {
        return Math.PI * radius * radius;
    }
}

public class Rectangle extends Shape {
    private double width;
    private double height;

    public Rectangle(double width, double height) {
        this.width = width;
        this.height = height;
    }

    @Override
    public double calculateArea() {
        return width * height;
    }
}
```

Now we can add new shapes without modifying the ShapeCalculator class.

3. Liskov Substitution Principle (LSP)

The Liskov Substitution Principle states that subtypes must be substitutable for their base types. In other words, if you have a base class and a derived class, you should be able to use instances of the derived class wherever instances of the base class are expected, without affecting the correctness of the program.

Example

Let's consider a Vehicle class and a Car class that extends Vehicle. According to the LSP, any code that works with Vehicle objects should also work correctly with Car objects.
```java
public class Vehicle {
    public void startEngine() {
        // Start engine logic
    }
}

public class Car extends Vehicle {
    @Override
    public void startEngine() {
        // Additional logic for starting a car engine
        super.startEngine();
    }
}
```

In this example, the Car class overrides the startEngine method and extends the behavior by adding additional logic. However, if we violate the LSP by changing the behavior in an unexpected way, it could lead to issues.

```java
public class Car extends Vehicle {
    @Override
    public void startEngine() {
        // Different behavior, e.g., throwing an exception
        throw new RuntimeException("Engine cannot be started");
    }
}
```

Here, the Car class violates the LSP by throwing an exception instead of starting the engine. This means that code that expects to work with Vehicle objects may break when Car objects are substituted.

4. Interface Segregation Principle (ISP)

The Interface Segregation Principle states that clients should not be forced to depend on interfaces they don't use. In other words, it's better to have many smaller, focused interfaces than one large, monolithic interface.

Example

Imagine we have a Worker interface that defines methods for different types of workers (full-time, part-time, contractor).

```java
public interface Worker {
    void calculateFullTimeSalary();
    void calculatePartTimeSalary();
    void calculateContractorHours();
}

public class FullTimeEmployee implements Worker {
    @Override
    public void calculateFullTimeSalary() {
        // Calculate full-time salary
    }

    @Override
    public void calculatePartTimeSalary() {
        // Not applicable, throw exception or leave empty
    }

    @Override
    public void calculateContractorHours() {
        // Not applicable, throw exception or leave empty
    }
}
```

In this example, the FullTimeEmployee class is forced to implement methods it doesn't need, violating the ISP.
To address this, we can segregate the interface into smaller, focused interfaces:

```java
public interface FullTimeWorker {
    void calculateFullTimeSalary();
}

public interface PartTimeWorker {
    void calculatePartTimeSalary();
}

public interface ContractWorker {
    void calculateContractorHours();
}

public class FullTimeEmployee implements FullTimeWorker {
    @Override
    public void calculateFullTimeSalary() {
        // Calculate full-time salary
    }
}
```

Now each class only implements the methods it needs, adhering to the ISP.

5. Dependency Inversion Principle (DIP)

The Dependency Inversion Principle states that high-level modules should not depend on low-level modules; both should depend on abstractions. Additionally, abstractions should not depend on details; details should depend on abstractions.

Example

Let's consider a UserService class that depends on a DatabaseRepository class for data access.

```java
public class UserService {
    private DatabaseRepository databaseRepository;

    public UserService() {
        databaseRepository = new DatabaseRepository();
    }

    public void createUser(String email, String password) {
        // Use databaseRepository to create a new user
    }
}
```

In this example, the UserService class is tightly coupled to the DatabaseRepository class, violating the DIP. To adhere to the DIP, we can introduce an abstraction (interface) and inject the implementation at runtime.

```java
public interface Repository {
    void createUser(String email, String password);
}

public class DatabaseRepository implements Repository {
    @Override
    public void createUser(String email, String password) {
        // Database logic to create a new user
    }
}

public class UserService {
    private Repository repository;

    public UserService(Repository repository) {
        this.repository = repository;
    }

    public void createUser(String email, String password) {
        repository.createUser(email, password);
    }
}
```

Now the UserService class depends on the Repository abstraction, and the implementation (DatabaseRepository) can be injected at runtime.
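One practical payoff of this inversion is testability: because UserService receives its Repository through the constructor, a test can pass in an in-memory fake instead of a real database. Here is a minimal, self-contained sketch of that idea; the InMemoryRepository class and its count() helper are illustrative assumptions, not part of the original example.

```java
import java.util.ArrayList;
import java.util.List;

// The abstraction from the example above.
interface Repository {
    void createUser(String email, String password);
}

// Hypothetical in-memory fake for tests: records calls instead of
// touching a database. Name and shape are illustrative assumptions.
class InMemoryRepository implements Repository {
    private final List<String> users = new ArrayList<>();

    @Override
    public void createUser(String email, String password) {
        users.add(email); // storing only the email is enough to verify behavior
    }

    public int count() {
        return users.size();
    }
}

// The high-level module depends only on the Repository abstraction.
class UserService {
    private final Repository repository;

    UserService(Repository repository) {
        this.repository = repository;
    }

    void createUser(String email, String password) {
        repository.createUser(email, password);
    }
}
```

Swapping DatabaseRepository for InMemoryRepository requires no change to UserService, which is exactly what the DIP buys you.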
This makes the code more modular, testable, and easier to maintain.

By following the SOLID principles, you can create more maintainable, extensible, and testable code. These principles promote modular design, loose coupling, and separation of concerns, ultimately leading to better software quality and easier maintenance over time.
Quality is the pillar that supports any software product. If a platform works poorly, both the business and its customers lose, as they do not get what they are looking for or satisfy their most immediate needs. That's why, as customer demands and market competitiveness increase, software teams must adapt quickly to deliver high-quality products. In this scenario, Agile practices can make an important difference and are a mainstay for project managers today, since they not only improve efficiency but also promote software quality in a notable way.

Keys to Agile Practices

"Agile is an iterative, introspective, and adaptive project management methodology. In an Agile practice, a project is divided into subprojects. These are usually called sprints. At the end of each sprint, stakeholders and the team review their work, make adjustments for the next sprint, and iterate until complete. The goal of Agile is the constant and incremental delivery of value throughout the project, instead of doing it all at once at the end," as a Forbes article explained about this methodology.

Agile methodologies focus on flexibility, collaboration, and continuous delivery of value. Instead of following a rigid plan, Agile teams take an iterative and incremental approach. This allows them to respond quickly to changes in requirements and market needs.

But How Exactly Do These Practices Contribute to Improving Software Quality?

1. Iterative and Incremental Deliveries

"Iterative delivery means that a team delivers work frequently rather than doing it all at once. Incremental means they deliver it in small packages of end-to-end functionality that is usable. After all, the only thing better than a great product is a great product that improves frequently," as detailed on the portal of Scrum, one of the most widely used Agile methodologies. This allows:

- Continuous feedback: Teams receive early and frequent feedback from end users and other stakeholders.
This makes it easier to identify and fix errors early, before they become costly problems.
- Constant improvement: Each iteration offers an opportunity to improve the product and adjust processes, allowing for a continuous focus on quality.

2. Prioritization of Requirements and Value

One of the key principles of Agile is prioritizing the product backlog based on customer value. "Scrum uses value-based prioritization as one of the core principles that drives the structure and functionality of the entire Scrum framework. It benefits projects through adaptability and iterative development of the product or service. More importantly, Scrum aims to deliver a valuable product or service to the customer early and continuously," as noted in the Scrum Study. Project managers must:

- Collaborate with stakeholders: Work closely with customers and other stakeholders to identify and prioritize the features that provide the most value.
- Adapt team focus: Ensure the team focuses on the most critical tasks that improve product quality and maximize delivered value.

3. Integrated Testing and Automation

Integrating continuous testing into the development cycle is essential in Agile. This is accomplished through:

- Testing in each iteration: Testing is not reserved for the end of the development cycle. In Agile, software is tested during each sprint, allowing for early detection of defects.
- Test automation: Implementing automated testing tools allows for faster and more frequent testing, ensuring that new code does not break existing functionality.

4. Collaboration and Constant Communication

In an Agile environment, open communication and collaboration are essential. "Agile methodologies value people and human interactions over processes and tools. Agile approaches help teams keep the focus on team members by allowing communication to occur fluidly and naturally as the need arises.
And when team members can communicate freely and naturally, they can collaborate more effectively," as detailed in a GitLab article. In this sense, the responsibilities of project managers include:

- Facilitating communication between teams: Promote daily stand-ups and sprint reviews to keep all team members informed and aligned.
- Removing barriers: Act as facilitators to remove obstacles that may slow down the team, ensuring an efficient workflow.

5. Promotion of a Culture of Continuous Improvement

Agile project managers must instill a continuous-improvement mindset within the team, including:

- Regular retrospectives: Hold retrospective meetings at the end of each sprint to reflect on what worked well and what can be improved.
- Adoption of best practices: Promote the adoption of techniques and tools that improve the quality of development, such as code refactoring, test-driven development (TDD), and continuous integration (CI).

"Unlike waterfall project management, which is a sequential approach to project execution, continuous improvement allows you to make constant adjustments to meet changing project demands. These small adjustments and changes you make are part of the continuous improvement process," as explained in an Atlassian article.

6. Team Empowerment

In Agile, there is a strong emphasis on team empowerment. Project managers must:

- Delegate responsibility: Allow teams to have autonomy in making decisions related to software development and quality.
- Foster product ownership: Encourage team members to feel shared responsibility for the quality of the final product.

Adopting and applying Agile practices is not just a matter of following a set of procedures; it is a philosophy that focuses on continuous improvement, collaboration, and value delivery. For project managers, the challenge lies in leading their teams with these practices, ensuring that each development cycle not only meets customer requirements but also constantly improves software quality.
Successful implementation of Agile methodologies can transform the way software is developed, providing more robust products with fewer defects and greater customer satisfaction. As a project manager, embracing Agile can be the way to raise software quality and take your team to the next level of excellence.
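To make the test automation practice above concrete, here is a minimal sketch of an automated regression check that can run on every sprint's build. The PriceCalculator class and its discount rule are illustrative assumptions (plain assertions are used rather than a specific test framework, to keep the sketch self-contained).

```java
// Hypothetical piece of business logic under test; integer cents are
// used to keep the arithmetic exact.
class PriceCalculator {
    // Apply a whole-percentage discount to a price expressed in cents.
    static long discountedCents(long priceCents, int discountPercent) {
        return priceCents * (100 - discountPercent) / 100;
    }
}
```

In a CI pipeline, checks like these run automatically on every commit, so a regression in existing functionality is caught in the same sprint in which it was introduced.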
Never-ending Slack channels. Hours-long all-hands-on-deck calls. Constant alignment and realignment meetings. And after all that, releases still fail too often! Production readiness doesn't need to be this painful for developer teams.

Metric data scorecards are a simple way to view production readiness all in one report. These scorecards provide a concise overview of readiness status, offering a snapshot of the key metrics that gauge the health of systems and applications. Think of a single dashboard with green indicators instead of checking dozens of different channels. Here are practical steps for shifting your organizational culture and practices to scorecards:

Create the Metric Scorecard

Create your centralized dashboard. Include all relevant metrics that relate to your release readiness, such as infrastructure tagging, on-call assignment, and code coverage. Streamline metrics by integrating cloud providers, operations platforms, and developer tools like Jira and GitHub to ingest data and automatically adjust readiness metrics. The scorecard should be easily accessible to teams and properly communicated to ensure the transition to the new system.

Implement Cross-Functional Production Readiness Teams

Create a core team that brings together perspectives from development, operations, quality assurance, security, and other relevant domains. Find a single stakeholder within each team to ensure accountability and clarity of ownership without unnecessary noise. When it comes time to do a release, stakeholders look at the report and, if all indicators are green, give the go-ahead. While most of the work is done automatically by processing data, these point people work with the broader company to fix red indicators.

Establish a Plan for Continuous Monitoring of Standards

Your team's mindset needs to shift from static indicator checking to continuous, always-on monitoring.
By leveraging continuous monitoring tools, teams can detect anomalies, bottlenecks, and performance degradation at pre-release time, before they escalate into critical incidents. When it's time to release, stakeholders can assess readiness with a simple glance at the scorecard report.

Develop Systems for Adaptive Standards and Exception Handling

Establish flexible criteria that can accommodate changing requirements and circumstances. Traditionally, these kinds of changes create friction and delays because communication channels like Slack and conference calls are inefficient for changing processes at scale. With a scorecard, stakeholders change a single rule in one place to reflect a change or exception, and it cascades instantly to every service and every user looking at the scorecard. By incorporating feedback mechanisms, teams can refine these standards iteratively to ensure they remain relevant and effective over time. Consider the scenario of a cloud service provider facing evolving regulatory requirements: by implementing rule exceptions and adaptive standards, the provider navigates compliance challenges without sacrificing operational efficiency or user experience.

Gamify Improvement Across the Scorecard

With more intelligence on performance, teams can continuously set the bar higher for metric goals. Set iterative goals that keep motivating the team to improve. Having worked with teams that use scorecards and teams that don't, I've seen that making the jump results in better performance and less frustration. Over time, organizations can fortify their readiness posture and deliver exceptional experiences to users.
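The scorecard mechanics described above can be sketched in a few lines of code. The following is a minimal, hypothetical example — the metric names, values, and thresholds are illustrative, not taken from any specific tool:

```python
# Minimal production-readiness scorecard sketch (hypothetical metrics).
# Each metric has a current value, a threshold, and a direction; the
# release is "green" only when every metric meets its threshold.

def evaluate_scorecard(metrics):
    """Return (all_green, report) for a dict of
    {name: (value, threshold, higher_is_better)}."""
    report = {}
    for name, (value, threshold, higher_is_better) in metrics.items():
        ok = value >= threshold if higher_is_better else value <= threshold
        report[name] = "green" if ok else "red"
    return all(status == "green" for status in report.values()), report

metrics = {
    "code_coverage_pct":   (87, 80, True),    # want at least 80% coverage
    "oncall_assigned_pct": (100, 100, True),  # every service needs an owner
    "untagged_resources":  (3, 0, False),     # want zero untagged infra
}

ready, report = evaluate_scorecard(metrics)
print(report)  # untagged_resources comes back "red", so ready is False
print(ready)
```

A real scorecard would populate the metric values by ingesting data from the integrated tools, but the go/no-go decision reduces to exactly this kind of all-green check.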
Software development can be tricky. One of the biggest challenges can be choosing the right approach. The traditional "waterfall" method can be frustrating, with its inflexible plans, slow release cycles, and lack of agility. Fortunately, the Agile revolution brought new options that promise faster releases and greater flexibility. However, with so many Agile frameworks to choose from, it can be difficult to know which is the best fit for your team. In this article, I will explore two popular Agile methodologies — Scrum and Kanban — and help you understand their strengths and weaknesses. By the end, you'll have a better idea of which approach is right for your team.

Scrum

Scrum is a structured and intense method that works well for teams that thrive on focused bursts of activity. It involves:

- Timeboxed sprints: Scrum uses fixed-length iterations called "sprints" that usually last two to four weeks. Each sprint focuses on a clearly defined set of goals that are meticulously planned in a "sprint planning" session.
- Defined roles and responsibilities: Scrum introduces specific roles — the Product Owner, the Scrum Master, and the Development Team. This clear separation fosters accountability and keeps everyone focused on the sprint goal.
- Daily scrums: These are short, daily meetings where team members share progress updates, roadblocks, and dependencies.

The Good, the Bad, and the Burndown Chart

When it comes to organizing projects, Scrum can be a good choice if the requirements are clear and there's a defined vision. By breaking the work into sprints, progress can be tracked using a "burndown chart," which can help keep the team motivated. However, Scrum might not be the best fit for projects with constantly changing requirements. The focus on sprints can make it hard to be flexible, and daily stand-ups can be time-consuming for smaller teams. For instance, when we developed a new enterprise application with well-defined features, Scrum was very helpful.
We were able to stay focused and make predictable progress. However, when we started working on a mobile app with lots of user feedback, Scrum's rigid structure became more of a hindrance than a help.

Kanban

Kanban is a flexible development approach that's great for environments where change is inevitable. Here's how it works: You start with a board that has columns representing the different stages of your workflow, such as "To Do," "In Progress," and "Done." You then place tasks on cards and move them between the columns as they progress through the workflow. This creates a visual representation of your workflow that allows you to see how work is flowing at a glance. To prevent bottlenecks and ensure smooth workflow, Kanban emphasizes limiting the number of tasks in each stage; these are called work-in-progress (WIP) limits. Kanban also encourages teams to continuously improve their processes by regularly analyzing their workflow and identifying areas for improvement through retrospectives.

Focus on Flow, Not Sprints

Kanban is a system that works well when requirements change frequently. There are no predetermined sprints, which means it can respond more quickly to new priorities. The Kanban board visualizes the flow of work through the system, and the main focus is to maintain this continuous flow. However, Kanban can feel less structured than Scrum. The lack of sprint goals can sometimes lead to a lack of direction, and unfinished tasks may linger in the "In Progress" stage without strong discipline. When we switched to the mobile app, Kanban provided real-time visibility into our workflow, which helped with collaboration. Limits on work in progress kept everything manageable, and thanks to the continuous focus on improvement, we could adjust our process as user feedback came in.

Scrum vs.
Kanban

Scrum and Kanban are two popular Agile frameworks used for software development. They have different approaches to structure, planning, work management, and change management. Here’s a more concise comparison:

| Feature | Scrum | Kanban |
|---|---|---|
| Structure | Fixed sprints | Continuous flow |
| Roles | Defined roles | No predefined roles |
| Planning | Sprint planning | Ongoing planning |
| Goals | Sprint goals | Flow management |
| Change management | Less adaptable | More adaptable |
| Reviews | Sprint reviews | Regular retrospectives |

Structure

Scrum has a more defined framework, dividing the work into fixed-length iterations called sprints that usually last two to four weeks. These sprints have a set workflow that includes a planning meeting, a daily check-in, and a review meeting. Kanban, on the other hand, is more flexible and adaptable. It follows a continuous flow model and visualizes the work on a board with different stages, such as "To Do," "In Progress," and "Done."

Planning and Work Management

When using Scrum, planning is done at the start of each sprint. The team sets goals and creates a list of tasks called a product backlog, which is then prioritized by the Product Owner. The amount of work taken on is limited to what can be achieved within the sprint timeframe. Kanban has a more ongoing planning approach: tasks are added to the Kanban board as they are needed, and work-in-progress (WIP) limits are established for each stage of the workflow to avoid bottlenecks and ensure smooth flow.

Change Management

Scrum requires the team to focus on completing pre-determined goals within a fixed timeframe, the sprint. Once a sprint has begun, it can be challenging to change these goals. Kanban offers more flexibility: the team can add new tasks to the board as needed, making it easier to adjust their workload and priorities.
The use of work-in-progress (WIP) limits also helps the team manage their workload more efficiently.

Review and Retrospection

Scrum uses sprint reviews and retrospectives to assess progress, identify roadblocks, and improve the following sprint. Kanban uses regular retrospectives to analyze the workflow, identify areas for optimization, and continuously refine the process.

Choosing the Right Framework

Choosing the right framework depends on your project's specific requirements, as well as the needs and capabilities of your team. If your project has clearly defined goals and requirements, and you need to complete the work in short intervals, then Scrum can be a good choice. Scrum emphasizes teamwork, collaboration, and frequent feedback, and is designed to help teams work on complex projects by breaking them down into smaller, more manageable tasks. It is particularly effective for projects that require a high degree of concentration and focus, with regular check-ins and progress updates. On the other hand, if your project has constantly changing requirements and needs to be flexible and adaptable, then Kanban might be a better fit. Kanban emphasizes visualizing work, limiting work in progress, and maximizing flow. It is designed to help teams manage their workflow by focusing on the tasks that are most important at any given time, and it is particularly effective for projects subject to frequent changes, as it allows teams to respond quickly to new requirements or priorities. Ultimately, your choice between Scrum and Kanban (or any other framework) will depend on your specific needs and circumstances.
By carefully evaluating your project requirements and team capabilities, you can make an informed decision that will help you achieve your goals and deliver a successful outcome.

The Hybrid Hero: Scrumban

If you find yourself in a dilemma between Scrum and Kanban, there is an alternative approach you might want to consider: Scrumban. It is a hybrid of both methodologies that gives you the benefits of their most effective features. For instance, you can use the sprint planning technique from Scrum and combine it with Kanban's work-in-progress (WIP) limits and emphasis on continuous flow. This lets you leverage the strengths of both methodologies, resulting in an optimized workflow tailored to your specific needs.

The Final Verdict

It's important to understand your project's needs and your team's strengths when choosing an Agile methodology like Scrum or Kanban. There's no one-size-fits-all solution, but by examining both options, you can make an informed choice that will help your team deliver high-quality software. Remember that collaboration, continuous improvement, and adaptability are key to success. Experiment with both Scrum and Kanban by running pilot projects to see which works best for your team. As your team evolves and your projects change, your Agile framework should change, too, to keep empowering your team. The most important thing is to create a culture of openness and communication where everyone feels empowered to contribute. With the right Agile methodology in place, you can navigate the ever-changing tech landscape with agility and innovation.
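The WIP-limit mechanic that distinguishes Kanban (and Scrumban) from sprint-based planning can be made concrete in code. The following is a minimal, hypothetical sketch — the column names and limits are illustrative, not prescribed by the Kanban method:

```python
# Minimal Kanban board with work-in-progress (WIP) limits (illustrative).

class KanbanBoard:
    def __init__(self, wip_limits):
        # wip_limits maps each column name to the maximum cards it may hold.
        self.wip_limits = wip_limits
        self.columns = {name: [] for name in wip_limits}

    def add(self, column, card):
        """Pull a card into a column, enforcing that column's WIP limit."""
        if len(self.columns[column]) >= self.wip_limits[column]:
            return False  # limit reached: finish work before starting more
        self.columns[column].append(card)
        return True

board = KanbanBoard({"To Do": 10, "In Progress": 2, "Done": 100})
board.add("In Progress", "implement login")
board.add("In Progress", "fix crash on startup")
# A third pull is rejected until something moves out of "In Progress":
print(board.add("In Progress", "refactor billing"))  # False
```

Rejecting the third pull is the whole point: the limit forces the team to finish in-flight work before starting more, which is what keeps flow smooth.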
The majority of program management encompasses two types of post-incident or post-project review: the “pre-mortem” and the “post-mortem.” A pre-mortem typically occurs during the project initiation phase, where potential risks are forecast before the project begins and a plan to mitigate those risks is developed. In contrast, a post-mortem occurs after the closing phase of the project, where an analysis of “what went well” and “lessons learned” is conducted. This enables the identification of areas for improvement and their incorporation into subsequent releases or other projects. These exercises are valuable learning and planning tools for every program. A mid-mortem is just as important as the other "mortems," and it is essential to understand its nature and significance.

What Is a Mid-Mortem?

A mid-mortem is a self-initiated review exercise that can be conducted by anyone involved in the project at any time during its lifecycle. Its primary objective is to evaluate the current status of the project and its challenges in relation to its predetermined goals and milestones.

How Does a Mid-Mortem Work?

This review is a collaborative effort to address the challenges faced by the project. For each challenge, your team should brainstorm potential impacts and formulate recommendations. Assessing each challenge thoroughly will enable the creation of a robust recommendation plan approved by program leads. It is important to approach this activity as a team, as different members may have unique perspectives and suggestions for each challenge. The second part of the mid-mortem is the project review, where you measure the project status against the goals and milestones outlined at the beginning of the project. Once the mid-mortem is complete, the program manager or engineering manager is responsible for ensuring that all approved recommendations are executed promptly.
This may involve allocating additional resources, adjusting the project schedule, or making changes to the project plan.

Challenges to Consider During a Mid-Mortem

Scope Creep
Potential impacts:
- Unrealistic project expectations
- Increased project costs
- Delays in project completion
- Reduced project quality
- Impact on the project timeline
- Escalation from the management team

Resource Constraints
Potential impacts:
- Delays in project completion
- Reduced project quality
- Increased stress on project team members

Technical Complexity
Potential impacts:
- Unanticipated technical challenges
- Delays in project completion
- Increased project costs

Communication and Collaboration
Potential impacts:
- Misunderstandings among team members
- Delays in project completion
- Reduced project quality

Stakeholder Management
Potential impacts:
- Unrealistic expectations from stakeholders
- Delays in project completion
- Reduced project quality

How It Adds Value

A mid-mortem is unique because it presents an opportunity to assess risks and challenges during project execution. Forecasting risks in a complex, multi-year project is difficult: during a project, things are dynamic and may not go according to plan, and this dynamic environment creates the need for a mid-mortem. Note that these activities are not addressed by the pre-mortem framework, which lacks precise forecasting capabilities, nor by the post-mortem framework, which is limited to retrospective analysis after the fact. A mid-mortem adds value in several ways:
- Ensuring timely corrective actions are taken to address project risks
- Providing clear recommendations for each challenge faced
- Fostering transparency and alignment among stakeholders and leadership

Conclusion

The mid-mortem review is an essential tool for ensuring the successful completion of any project.
By taking the time to assess the project's progress, identify challenges, and make necessary adjustments, project managers can help ensure that their projects are delivered on time, within budget, and to the satisfaction of all stakeholders.
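The challenge-to-recommendation flow described above can be captured in a small data structure that the program manager uses to track follow-up. The following is a minimal, hypothetical sketch — the field names and sample findings are illustrative:

```python
# Minimal structure for tracking mid-mortem findings (illustrative).
from dataclasses import dataclass, field

@dataclass
class Challenge:
    name: str
    impacts: list = field(default_factory=list)
    recommendation: str = ""
    approved: bool = False   # approved by program leads
    executed: bool = False   # follow-up completed

findings = [
    Challenge("Scope creep",
              impacts=["Increased project costs", "Delays in completion"],
              recommendation="Re-baseline scope with the product owner",
              approved=True),
    Challenge("Resource constraints",
              impacts=["Increased stress on team members"],
              recommendation="Request one additional engineer"),
]

# The program manager's follow-up list: approved but not yet executed.
pending = [c.name for c in findings if c.approved and not c.executed]
print(pending)  # ['Scope creep']
```

Keeping findings in one place like this makes the "ensure approved recommendations are executed promptly" responsibility a simple query rather than a memory exercise.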
The number of "Ops" practices in IT operations has grown rapidly in recent years, as IT organizations turn toward automation to improve customer delivery. Traditional application development uses DevOps for Continuous Integration (CI) and Continuous Deployment (CD), but the same delivery and deployment process may not suit data-intensive Machine Learning and Artificial Intelligence (AI) applications. This article defines the different "Ops" and explains how each works: DevOps, DataOps, MLOps, and AIOps.

DevOps

This practice automates the collaboration between Development (Dev) and Operations (Ops). The main goal is to deliver software products more rapidly and reliably, with continuous delivery and sustained software quality. DevOps complements the agile software development process and the agile way of working.

DataOps

DataOps is a practice that combines integrated, process-oriented data with automation to improve data quality, collaboration, and analytics. It mainly deals with the cooperation between data scientists, data engineers, and other data professionals.

MLOps

MLOps is a practice for developing and deploying machine learning models reliably and efficiently. It is the set of practices at the intersection of DevOps, ML, and Data Engineering.

AIOps

AIOps applies natural language processing and machine learning models to automate and streamline operations workflows. Machine Learning and Big Data are major aspects of AIOps, because AI needs data from different systems, processed using ML models. AI is driven by machine learning models that are created, deployed, and trained to analyze data and produce accurate results.
As described by IBM Developer, the typical "Ops" practices work together as shown in the figure. (Image source: IBM)

Collective Comparison

The table below compares DevOps, DataOps, MLOps, and AIOps:

| Aspect | DevOps | DataOps | MLOps | AIOps |
|---|---|---|---|---|
| Focus | IT operations and software development with an Agile way of working | Data quality, collaboration, and analytics | Machine learning models | IT operations |
| Key technologies/tools | Jenkins, JIRA, Slack, Ansible, Docker, Git, Kubernetes, and Chef | Apache Airflow, Databricks, DataKitchen, HighByte | Python, TensorFlow, PyTorch, and Jupyter Notebooks | Machine learning, AI algorithms, Big Data, and monitoring tools |
| Key principles | IT process automation; team collaboration and communication; continuous integration and continuous delivery (CI/CD) | Collaboration between data teams; data pipeline automation and optimization; version control for data artifacts | Collaboration between data scientists and operations teams; version control for ML models; continuous monitoring and feedback | Automated analysis and response to IT incidents; proactive issue resolution using analytics; integration with IT management tools; continuous improvement using feedback |
| Primary users | Software and DevOps engineers | Data and DataOps engineers | Data scientists and MLOps engineers | Data scientists, Big Data scientists, and AIOps engineers |
| Use cases | Microservices, containerization, CI/CD, and collaborative development | Ingesting, processing, and transforming data, and extracting it into other platforms | Machine learning (ML) and data science projects for predictive analytics and AI | AI-driven IT operations to enhance networks, systems, and infrastructure |

Summary

In summary, managing a system with a single project team is reaching the end of its life, as business processes become more complex and IT systems change dynamically with new technologies.
The detailed implementation involves a combination of collaborative practices, automation, monitoring, and a focus on continuous improvement across the DevOps, DataOps, MLOps, and AIOps processes. DevOps focuses primarily on IT processes and software development, while the DataOps and MLOps approaches focus on improving IT and business collaboration as well as overall data use in organizations. DataOps leverages DevOps principles to manage data workflows, and MLOps likewise leverages DevOps principles to manage applications built with machine learning.
Innovation and increased productivity play crucial roles in software development. One method to achieve this goal is the Specification-First approach, which structures and manages the development process. This article explores the concept of Specification-First, its significance for development teams, and the advantages it brings in testing and integration. Specification-First is a software development methodology based on the principle that the product requirements specification should be developed and approved before the active coding phase begins. This enables the establishment of clear project goals and parameters from the outset, fostering a more structured and predictable development process. The methodology helps to avoid misunderstandings between clients and developers and minimizes the risk of requirement changes in later stages of development.

Who Is This Article For?

This article is intended for project managers and team leaders in software development. It provides insights into the Specification-First approach, which can enhance the efficiency of the development process and improve software quality. By understanding and implementing this approach, managers can improve team communication, reduce development time, increase client satisfaction, and facilitate parallel work among teams, ultimately leading to faster product development and the desired results.

What Is Specification-First?

Specification-First is an approach to software development in which the specification of an API or service is created and approved before the actual development begins. This means the development team first defines how the application interface should look, which endpoints (methods) should be available, what data should be transmitted, and in what manner.

Why Is the Specification-First Approach Important?
Proactive Development Process Management

Specification-First enables the team to clearly understand what they need to create even before they start coding. This reduces the likelihood of misunderstandings and discrepancies between customer expectations and the actual outcome.

Improved Communication

Creating an API specification encourages developers, clients, and other stakeholders to discuss and refine requirements. This leads to a better understanding of the project and accelerates the development process.

Easy Integration and Testing

One of the main advantages of Specification-First is the ability to start integration and testing even before the code is ready. With an API specification, mock services can be set up and automated tests can be created, speeding up the development process and ensuring higher code quality.

Benefits of Automated Quality Assurance

1. Earlier Test Development

Since the API specification is created before development begins, the AQA department can start writing tests in advance, based on the methods already described in the specification. This significantly reduces the time required to develop the test suite and increases its completeness and accuracy. For example, with a clear specification in hand, the AQA department can begin developing test scenarios during the planning stage, optimizing the testing process and saving time later.

2. Increased Efficiency

Testing against a predefined specification simplifies the process and enhances the AQA department's efficiency. With clear and concise requirements outlined in the specification, testing specialists can focus on verifying specific functional capabilities rather than spending time identifying discrepancies in the interface or ambiguities in the requirements.
For instance, having a detailed specification helps AQA engineers quickly determine which tests to conduct to verify specific functionality, significantly reducing the time spent on test scenario development and execution.

Integration Benefits

Having a specification is crucial for efficient integration with other teams, for several important reasons:

Clarity and Alignment

A specification defines clear project goals and parameters from the outset. This ensures that all teams involved have a unified understanding of what needs to be developed and how different components will interact. A shared specification allows teams to align their efforts more effectively towards common objectives.

Minimizing Misunderstandings

Specifications help to avoid misunderstandings between teams, clients, and stakeholders. By documenting requirements comprehensively upfront, the risk of misinterpretation or miscommunication during the integration phase is significantly reduced, leading to smoother collaboration and integration across teams.

Faster Issue Resolution

When teams work with a well-defined specification, any issues or questions that arise during integration can be addressed more quickly and decisively. The specification serves as a reference point to troubleshoot problems, identify root causes, and implement solutions efficiently.

Accelerated Development Process

With a specification in place, integration tasks can commence even before the entire system is fully developed. Teams can start integrating their components based on the agreed interfaces and behaviors specified in the document. This parallel work streamlines the development process and accelerates overall project timelines.

Enhanced Quality Assurance

Specifications facilitate easier and more comprehensive testing.
Test scenarios can be developed based on the expected behavior defined in the specification, allowing quality assurance teams to validate functionality early on. This leads to higher-quality software with fewer defects and issues.

Improved Stakeholder Satisfaction

A specification-driven approach often produces outcomes that align closely with stakeholder expectations. By adhering to the documented requirements, development teams can deliver products that meet or exceed client needs, leading to higher satisfaction.

Specification

API specifications may be maintained using a range of OAS3 tools, particularly in the context of backend development. These platforms offer efficient methods for creating, managing, and documenting API specifications, ensuring availability to the entire development and Quality Assurance (QA) team.

OAS3

OAS3 refers to the OpenAPI Specification 3, a standard for describing web services in a machine-readable format. It is used to document and define API functionality. An OAS3 specification is presented in JSON or YAML format and details request and response structures, data schemas, parameters, paths, and other API specifics. Key features of the OpenAPI Specification 3 (OAS3) include:

- API description: OAS3 allows for the description of your API's structure, including available endpoints (paths), supported methods (GET, POST, PUT, DELETE, etc.), request parameters, headers, and bodies.
- Data schemas: OAS3 enables the definition of data schemas for API requests and responses, providing a clear specification of the data format used in the API.
- Validation and documentation: The OAS3 specification can be used to automatically validate requests and responses and to generate API documentation that is easily readable by humans and machines.

OAS3 is a powerful tool for standardizing API descriptions and simplifying web service development, testing, and integration.
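Because an OAS3 document is machine-readable, tooling can traverse it directly. As a small illustration of the kind of traversal code generators and test scaffolds perform, the hypothetical snippet below walks a hand-written spec fragment (held as a plain Python dict rather than produced by any real tool) and lists every endpoint and HTTP method:

```python
# Enumerate endpoints and methods from an OAS3 spec held as a dict.
# The spec fragment below is illustrative, not from a real service.

spec = {
    "openapi": "3.0.0",
    "info": {"title": "Sample API", "version": "1.0.0"},
    "paths": {
        "/users": {
            "get": {"summary": "Returns a list of users."},
            "post": {"summary": "Creates a user."},
        },
        "/users/{id}": {
            "get": {"summary": "Returns a single user."},
        },
    },
}

# The HTTP methods an OAS3 path item may define as operations.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "head", "options", "trace"}

def list_endpoints(spec):
    """Yield (method, path, summary) for every operation in the spec."""
    for path, operations in spec.get("paths", {}).items():
        for method, op in operations.items():
            if method in HTTP_METHODS:
                yield method.upper(), path, op.get("summary", "")

for method, path, summary in list_endpoints(spec):
    print(f"{method} {path}: {summary}")
```

The same traversal is the starting point for generating client stubs, mock servers, or a test checklist: one AQA test scenario per (method, path) pair, before any server code exists.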
Let’s illustrate with an example:

```yaml
openapi: 3.0.0
info:
  title: Sample API
  version: 1.0.0
paths:
  /users:
    get:
      summary: Returns a list of users.
      responses:
        '200':
          description: A list of users.
          content:
            application/json:
              schema:
                type: array
                items:
                  type: object
                  properties:
                    id:
                      type: integer
                    name:
                      type: string
```

Apicurio

Apicurio is a tool for creating, modifying, and administering API specifications (definitions of software interfaces). It empowers developers and teams to author new specifications, modify existing ones, manage versions, generate documentation, and integrate with various development tools, all through an intuitive and user-friendly interface. Apicurio streamlines the lifecycle management of API specifications, enhancing their precision and accessibility for stakeholders. For instance:

```json
{
  "openapi": "3.0.0",
  "info": {
    "title": "Sample API",
    "version": "1.0.0"
  },
  "paths": {
    "/users": {
      "get": {
        "summary": "Returns a list of users.",
        "responses": {
          "200": {
            "description": "A list of users.",
            "content": {
              "application/json": {
                "schema": {
                  "type": "array",
                  "items": {
                    "type": "object",
                    "properties": {
                      "id": { "type": "integer" },
                      "name": { "type": "string" }
                    }
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}
```

gRPC

Finally, we can use gRPC as a specification, describing methods, services, and objects. gRPC (gRPC Remote Procedure Call) is a tool for defining, designing, and deploying remote services that use Remote Procedure Call (RPC) protocols. gRPC uses a simple interface for defining services and structured data for their exchange, which can then be used to generate client and server code in various programming languages. A use case for gRPC in a backend team might look as follows: Suppose you have a backend team developing microservices for your application.
You can use gRPC to define the interfaces of these microservices in the form of an RPC protocol, which describes what methods are available, what parameters they take, and what data they return. The interface can be defined using Protobuf (Protocol Buffers), the Interface Definition Language (IDL) that is part of the standard gRPC toolchain. After defining the interface, you can generate code in your team's programming language for the client and server sides of the microservices. This allows the team to quickly create clients and servers that communicate with each other using the generated code. Thus, using gRPC as a specification for the backend team standardizes data exchange between microservices, simplifies development, and helps ensure high application performance. Example:

```protobuf
syntax = "proto3";

service UserService {
  rpc GetUser(UserRequest) returns (UserResponse);
}

message UserRequest {
  string user_id = 1;
}

message UserResponse {
  string name = 1;
  int32 age = 2;
}
```

Auto-Generation

To generate code based on the specification, you can use various tools and libraries designed for this purpose. Here are several popular methods:

1. Using Code Generators

Many tools, such as OpenAPI Generator or the gRPC tooling, provide code generators that automatically create client and server code based on the API specification. Supply your specification in the appropriate format, select the programming language and the type of code you want to generate, and the tool does the rest.

2. Using IDE Plugins

Some integrated development environments (IDEs), such as IntelliJ IDEA, Visual Studio Code, or Eclipse, offer plugins that let you generate code based on the API specification directly from the development environment. This is typically done through the IDE context menu or special commands.

3.
Using Scripts and Command-Line Utilities

You can use scripts and command-line utilities to configure and automate the code generation process more flexibly. The most suitable method depends on the type of API specification, the technologies used, and the tools preferred by your development team.

Conclusion

Implementing the Specification-First principle in development teams is a crucial step toward improving the efficiency of software development processes. This approach fosters a more structured and transparent development process, enhances quality, and accelerates time to market. To successfully transition to Specification-First, consider the following steps:

1. Selecting the Right Tool

The choice of tool for creating and storing API specifications plays a significant role. It affects the ease of working with APIs and the accessibility and clarity of specifications for the entire team.

2. Gradual Integration and Adaptation

It's best to implement the new approach gradually, starting with individual projects or modules. This allows the team to become familiar with the new methodologies and tools, learn best practices, and optimize the process.

3. Consideration of Authentication and Security

API specifications may also include information about authentication methods, authorization, and other security aspects. This ensures the security of the developed applications from the outset and helps avoid issues in the future.

4. Team Training and Preparation

Transitioning to a new approach requires understanding and support from the entire team. Training team members on the fundamentals of Specification-First, its advantages, and implementation methodologies is the first step toward successful adoption. Once the team successfully adopts the Specification-First principle in one project, it can expand the approach to all subsequent projects and teams.
Over time, Specification-First will become part of the corporate culture and a standard approach to software development within the organization. Transitioning to Specification-First optimizes processes within the team and contributes to achieving higher quality standards and customer satisfaction.
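Returning to the gRPC example above, a minimal hand-written Python sketch can show what the generated stubs express: shared, typed request/response messages and a service bound to them. The classes below mirror the .proto definition by hand for illustration only; in a real project they would be produced by the code generator (e.g., `python -m grpc_tools.protoc`) rather than written manually.

```python
from dataclasses import dataclass

# Hand-written mirror of the UserService contract (illustrative only --
# real projects generate these types from the .proto file).

@dataclass
class UserRequest:
    user_id: str

@dataclass
class UserResponse:
    name: str
    age: int

class UserService:
    """Server-side implementation of the GetUser RPC."""

    def __init__(self, users: dict):
        # users maps user_id -> (name, age); a stand-in for a real data store
        self._users = users

    def get_user(self, request: UserRequest) -> UserResponse:
        name, age = self._users[request.user_id]
        return UserResponse(name=name, age=age)

# Client and server now share the same typed contract:
service = UserService({"42": ("Alice", 30)})
response = service.get_user(UserRequest(user_id="42"))
print(response.name, response.age)  # Alice 30
```

The point of Specification-First is visible even in this sketch: the shapes of `UserRequest` and `UserResponse` are fixed by the contract, so client and server teams can develop against them independently.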
Decision-making is a critical aspect of our lives; nothing slows progress like indecision. Well-thought-out decisions help us navigate complexity and achieve the outcomes individuals, teams, and organizations want. In software development, decision-making occurs at many stages of the development lifecycle: finalizing the technology stack, architecture design, feature prioritization, user experience (UX) design, development methodologies and integration, security and performance considerations, testing strategies, and more.

I have used different decision-making models to support critical decisions. In this article, I will discuss SOLVED, a model I have found useful in most scenarios encountered during software development. SOLVED is a six-step decision model that helps not only in software development but also in product and project management. The name is derived from the steps involved in the model:

S - Survey
O - Outline
L - Leverage
V - Validate
E - Engage
D - Document

Let's go through the steps. Each step includes a practical example.

Step 1: Survey

Examine the condition and situation, and understand the problem statement. Collect data related to the problem via research and analysis.

Example: There is a request to improve the performance of a page. As part of this step, start with the following:

Capture the current performance of the page across different scenarios.
Understand user behavior on the page by analyzing usage statistics.
Identify the areas where performance is degraded and could be improved by refactoring the code.

Step 2: Outline

Outline the objective and problem statement clearly. A clear, precise, and well-defined understanding of the objective or problem statement helps ensure the correctness of the decision.
Example: For the performance improvement, outline the problem statement as reducing the page load time from the current 10 seconds to under 2 seconds.

Step 3: Leverage

Use the data points collected in Step 1 to identify a probable decision path. Data-driven decisions increase the probability of a decision's success.

Example: Using the information captured in Step 1, identify the changes that could be implemented to bring the load time down to 2 seconds.

Step 4: Validate

Validate the decision path through small-scale execution, or by discussing it with a subject-matter expert (SME). This step ensures decisions are not based on unfounded beliefs.

Example: Run a spike of the refactoring changes chosen in Step 3 to validate the performance improvement before implementing the actual changes.

Step 5: Engage

Engage all stakeholders to ensure the decision path reflects their preferences and feedback. This step ensures the decision is accepted.

Example: Share the planned improvements for reaching the 2-second load time with all stakeholders and get their buy-in to implement the change.

Step 6: Document

Document the final decision and supporting information for future reference.

Example: Document the approved improvements and the rationale behind them for future reference.
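The Document step is easiest to sustain when the record has a consistent shape. As a sketch only (the field names below are my own, not prescribed by the SOLVED model), a decision record could capture one entry per step:

```python
from dataclasses import dataclass
from typing import List

# A minimal decision-record sketch for the SOLVED model.
# Field names are illustrative, not part of the model itself.
@dataclass
class SolvedRecord:
    survey: List[str]    # data points gathered (Step 1)
    outline: str         # the problem statement (Step 2)
    leverage: str        # chosen decision path (Step 3)
    validate: str        # how the path was validated (Step 4)
    engage: List[str]    # stakeholders who gave buy-in (Step 5)
    decision: str        # the final, documented decision (Step 6)

# The page-performance example from the article, captured as a record:
record = SolvedRecord(
    survey=["current load time: 10s", "most traffic hits the landing page"],
    outline="Reduce page load time from 10s to under 2s",
    leverage="Lazy-load images and cache API responses",
    validate="Spike showed load time dropped to 1.8s",
    engage=["Product owner", "Tech lead"],
    decision="Implement lazy loading and response caching",
)
print(record.decision)
```

Keeping one such record per decision makes the rationale searchable later, which is the whole point of Step 6.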
Additional Practical Examples

Below are some other practical situations where I have used this model successfully:

UX design
Choosing a database (relational vs. non-relational)
Architectural design decisions
Finalizing an automation test strategy
Deciding on a product planning tool
Evaluating a suggestion to add a new feature to a mobile application

Summary

The SOLVED decision-making model is a simple but methodical model that helps you make well-thought-out, effective decisions. It can be used not only in software development but in any decision-making situation.
Before We Start, What’s a Chapter?

A chapter is a craft community (sometimes also referred to as a Practice or CoE) whose primary purpose is to help practitioners of the same craft or discipline (e.g., development, testing, business analysis, scrum mastery, UX, etc.) elevate their mastery of that particular craft or discipline. Chapters are also typically tasked with setting the standards and guidelines for performing that craft in the organization. (Credit: Ashley-Christian Hardy, “Agile Team Organisation: Squads, Chapters, Tribes and Guilds,” 2016)

TL;DR

In Agile organizations, chapters (Chapters, Practices, CoEs, etc.) pursue the systematic development of capability and craft. This pursuit adds a lot of value to the organization (better quality, better knowledge retention, higher employee engagement, etc.). Yet chapters often struggle to pinpoint where they need to improve or how they can add more value to their members and the organization, and many organizations don’t offer chapters (and chapter leads) clear guidelines as to what exactly is expected of them or what good looks like. This article presents a simple diagnostic (click to download the diagnostic) that a chapter can use to identify where to focus its improvement efforts. It defines what ‘good’ looks like in the context of a chapter and provides a tool to help the chapter assess where it stands against that definition (where it is doing well and where it needs to improve). In the second part of this series, I will share several experiments that could be run to optimize each dimension of chapter effectiveness, along with a case study of how this model was implemented at scale in a large organization.

Key Terms

First, let’s define some of the key terms used throughout this article:

Craft refers to the specific discipline, domain, or skill set around which the chapter is formed, e.g., QA is a craft; UX is a craft; business analysis is a craft.
A craftsperson is a practitioner of a craft (developer, QA specialist, marketer, business analyst, etc.). I use the term performing the craft to refer to the day-to-day carrying out of tasks by a craftsperson (chapter member); e.g., a QA specialist performs their craft by carrying out QA tasks. Craft quality (quality of the craft) refers to how well the craft is being performed. I sometimes use craftsmanship, which refers to craft mastery and excellence. Knowledge base refers to a centralized repository or system where craft-related information, best practices, documentation, standards, etc. are stored and shared among chapter members (and others across the organization). Standards (craft standards) refer to the established guidelines and principles that define the expected level of quality within a specific craft. Learning journey refers to the ongoing formal and informal learning efforts (training programs, hands-on application of new knowledge, mentoring, etc.) intended to extend existing skills and build new ones, and how that learning is expected to be acquired over time.

Is It Worth Reading This Article?

Well, if any of the following statements resonate with you, then I strongly suggest you read on. As an organization (or tribe, business unit, etc.):

“We have a high risk of losing critical knowledge if a small number of our people leave”
“We struggle with onboarding and, despite hiring top talent, it’s difficult for them to hit the ground running”
“Despite hiring people with a lot of experience to mentor and grow our junior staff, we feel that knowledge sharing isn’t happening”
“We invest a lot in training our people – sending them to courses, etc. – but we don’t feel that investment has had much impact on the overall quality of our products”
“We have knowledge siloes even within the same discipline — there are islands of expertise that aren’t connected”
“We fear that when the contractors and external consultants leave, our people won’t be able to deliver the same level of high craft quality”
“Team members struggle when moving from one team to another due to a lack of consistency in how things are done”
“Our team members seem to struggle to tap into the collective expertise of the team, leading to a lot of re-inventing the wheel and time wasted”

While these are all difficult problems resulting from complex interactions of causes that affect, and are affected by, every aspect of how the organization works, they are all heavily influenced by how effective we are at developing craftsmanship — that is, how good we are at developing craft mastery and excellence. Given that in Agile organizations craft communities (chapters, practices, CoEs, etc.) are the primary custodians of developing craftsmanship, I propose that by systematically assessing and optimizing how effectively these craft communities work, we can make great strides in addressing these challenges.

So, Why Care About Chapters?

Effective chapters create the conditions for high product quality, high employee satisfaction, and low knowledge-loss risk. This is because effective chapters are good at developing master craftspeople. People who feel mastery of their craft are typically happier and more engaged, their output and the speed at which it is delivered are superior, and, because there are more than a small number of them (and a robust process exists to develop more), the departure of a few won’t be catastrophic for the organization.
Members of an effective chapter (that is, a chapter that’s good at developing the craftsmanship of its members) would typically say things like:

Our chapter follows a systematic approach to building our capability and craft mastery (defining what capability needs to be built, providing members with the mechanisms to plan how to build those capabilities, and providing the resources and support needed to implement that plan)
Our chapter has in place the processes and systems that enable us to leverage and build upon the accumulated formal knowledge the chapter has amassed over time – the standards, playbooks, guidelines, best practices, lessons learned, etc.
Our chapter has nurtured a rich social network that enables us to tap into the collective informal (tacit) knowledge and expertise of the chapter – the knowledge, nuances, and highly contextual experiences that aren’t documented anywhere (most knowledge is tacit)
Our chapter follows a systematic approach to measuring the impact (outcomes) of craftsmanship-building and capability-uplift efforts and leveraging the feedback to guide further craftsmanship-building efforts

If we improve the effectiveness of a chapter (that is, optimize the four factors above, which are key leading indicators of chapter effectiveness), we increase the chapter’s ability to develop its members into craftspeople, which in turn improves problems such as high knowledge-loss risk, siloes, ineffective knowledge sharing, and low product quality.

How Do We Improve Chapter Effectiveness?

The first step is to systematically assess how the chapter performs against the four key dimensions of chapter effectiveness identified above: access to documented (formal) knowledge; systematic capability building; access to tacit (informal) knowledge; and systematic craft-quality measurement and continuous improvement.
In this two-part series, I will introduce a simple diagnostic tool to assess chapter effectiveness (Part 1 – this article) and then delve into how to use the insights from the assessment to identify areas of improvement, set chapter effectiveness goals, and plan, implement, and learn from improvement actions (Part 2).

How Do We Measure Chapter Effectiveness?

In this section, we will first go over the dimensions of chapter effectiveness in some detail and then present the diagnostic tool itself.

Chapter Effectiveness Dimensions

Dimension #1: The comprehensiveness, fitness-for-purpose, and ease of access (and use) of our craft standards and knowledge base. This broad area covers how good we are at leveraging (and creating/documenting) existing knowledge, the ease of access to relevant knowledge, the accuracy and relevance of the knowledge chapter members can draw from (and its alignment with industry best practices), and the effective distillation of ‘lessons learned,’ which represents how outside knowledge is contextualized to fit the unique context of the organization, among other factors.

Dimension #2: The effectiveness of the chapter’s efforts to uplift the capability and craftsmanship of its members. Effective chapters are good at describing what mastery of their craft means (what skills to acquire, what the levels of mastery of each skill look like, etc.), helping chapter members pinpoint where they are on that journey, and then collaborating as a team to envision what the path to mastery looks like for each member. They’re also good at turning those plans into reality: providing not only the resources and mentorship but also the encouragement and peer support, keeping each other accountable, and measuring the outcomes of their elevated levels of craft mastery.
Dimension #3: The effectiveness of tacit (informal) knowledge sharing between chapter members. Effective chapters realize that most knowledge is tacit – that is, not documented anywhere. Tacit knowledge is difficult to extract or express, and therefore difficult to formally document or write down. How do we effectively leverage knowledge that isn’t documented? By nurturing a thriving social network that allows chapter members to feel comfortable reaching out to each other for help, seeking advice, asking questions, sharing interesting insights, etc. Such a network doesn’t just happen – it requires conscious, persistent effort to build. The statements comprising this dimension explore some of the leading indicators of effective knowledge-sharing and advice-seeking social networks.

Dimension #4: The effectiveness of the chapter’s efforts to systematically and continuously improve craft quality. How do we know whether the actions we’re undertaking to uplift the quality of our craft (committing to learning and capability uplift, fostering stronger knowledge-sharing networks, etc.) are delivering value? How do we know whether the investment we’re putting into uplifting our capability in specific tools or frameworks is generating the expected returns? Effective chapters are good at measuring and evaluating the quality of their craft across the organization (quantitatively and/or qualitatively). They’re good at setting SMART craft-improvement goals based on their understanding of how well the craft is being performed and where they need to invest in improvement, at planning how to achieve those goals, and at implementing those plans (and learning from the implementation). This is a significant challenge area for many chapters, as it is often difficult to ‘measure’ the quality of how the craft is being performed.
The Chapter Effectiveness Diagnostic

Introduction

The diagnostic (click here to download the pdf version) comprises a number of statements intended to capture what ‘good’ looks like for each dimension. Chapter members rate how well they believe each statement describes the reality of their chapter on a scale ranging from 'completely disagree' to 'completely agree.' All chapter members (including the chapter lead) should take part in completing the diagnostic. One common option (what many chapters do) is to send it out as a survey first and then discuss the results and insights in one or more follow-up workshops.

The purpose of this diagnostic is to serve as a conversation starter. As with all diagnostic and maturity models, the questions are merely intended to prompt a discussion. The comments, anecdotes, and insights chapter members share as the team moves from one item to another provide a wealth of information, and that information is what will guide the chapter as it attempts to optimize the outcomes it creates. There’s no particular magic to this (or any) assessment model – it simply provides a structure within which good conversations can take place.

What’s in the Pack?

This pack contains the statements comprising the diagnostic. Next to each statement is a brief explanation of why having a conversation about that statement is important and what to look for (and how to dig deeper and uncover insights) if the score against that particular statement is low. In the appendix, you'll find a printable version of the diagnostic (a template with only the statements), which can be distributed as handouts.

Next Steps

If you want to run the diagnostic as a survey, copy the statements into your survey tool. You may set the response options for each statement as completely disagree — disagree — neutral — agree — completely agree.
Alternatively, you might opt for a sliding scale of 1-5, or use whatever rating method you prefer to let your team assess how well each statement describes its reality.

OK, We Ran the Diagnostic – What’s Next?

As mentioned earlier, the conversation that follows this self-assessment is where the real value lies. As a team, the chapter gets together to reflect, explore, and make sense of the results of its chapter effectiveness self-assessment. Members reflect on where they seem to be doing well and where they’re struggling, where they all share the same experience, and where the scores reveal differences in perception. They look for common themes, outliers, and relationships and connections between statements, explore why some statements are not correlated even though they were expected to be (and vice versa), and discuss any other interesting insights that came out of the assessment.

In the second part of this series, we will take a deep dive into how to translate these insights and learnings into experiments and actions and how to measure the impact they create in optimizing chapter effectiveness. We will explore how to prioritize chapter effectiveness interventions, which experiments to run to uplift each chapter effectiveness dimension, and how to establish a robust continuous improvement cycle that consistently and systematically drives higher chapter effectiveness. We will also go through a case study from a large financial organization where this model was applied at scale across many chapters, and share some of the learnings from that experience.
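If you run the diagnostic as a survey, aggregating the responses per dimension before the follow-up workshop is straightforward. Here is a minimal sketch: the dimension names, the sample responses, and the 1-5 numeric coding of the agreement scale are my own assumptions for illustration, not part of the diagnostic itself.

```python
from statistics import mean

# Assumed numeric coding of the agreement scale (1-5).
SCALE = {
    "completely disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "completely agree": 5,
}

# Hypothetical responses: per member, per dimension, one rating per statement.
responses = {
    "member_a": {
        "formal knowledge": ["agree", "neutral"],
        "capability building": ["disagree", "neutral"],
    },
    "member_b": {
        "formal knowledge": ["completely agree", "agree"],
        "capability building": ["neutral", "agree"],
    },
}

def dimension_scores(responses):
    """Average score per dimension across all members and statements."""
    totals = {}
    for member in responses.values():
        for dimension, ratings in member.items():
            totals.setdefault(dimension, []).extend(SCALE[r] for r in ratings)
    return {dim: round(mean(vals), 2) for dim, vals in totals.items()}

print(dimension_scores(responses))
```

The averages only rank where to look; as the article stresses, the conversation around low-scoring statements is where the real insight comes from.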