The Agile methodology is a project management approach that breaks larger projects into several phases. It is a process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.
CodeCraft: Agile Strategies for Crafting Exemplary Software
Agile Testing: Blending Shift-Left, Automation, and Collaborative Testing Strategies
In the realm of software development, Agile methodologies have taken center stage for their ability to enable rapid and iterative progress. But what about continuous data management (CDM)? While often considered separate disciplines, closer examination reveals a symbiotic relationship that can propel Agile projects to new heights. In this article, we'll look at how integrating Agile and CDM can supercharge your development cycle while also enhancing data quality and security.

The Agile Mindset in Software Development

Agile is more than just a buzzword; it's a mindset that emphasizes adaptability, customer collaboration, and iterative development. But what's less discussed is how data management fits into this picture. Data is the lifeblood of any application, and poor data quality can have a ripple effect across your entire project. Ken Collier, author of "Agile Analytics," articulates it best when he says, "Data is at the center of the Agile analytics cycle. If the data isn't right, nothing else matters." By acknowledging the centrality of data, we can begin to imagine a world where Agile and CDM not just coexist but collaborate.

The Role of Continuous Data Management

In traditional data management practices, a series of rigid processes and protocols often guide the handling of data. Continuous data management, on the other hand, aims to make the data management process more fluid and adaptive. This fluidity allows for faster decision-making and higher data quality, with strong governance protocols in place to ensure security and compliance. The principles of CDM resonate strongly with Agile's emphasis on adaptability and rapid iteration. Imagine you're running a Scrum-based project; wouldn't it be advantageous if your data management practices could keep pace with each sprint?

The Convergence Point: A Symbiotic Relationship

The synergistic effect of combining Agile with continuous data management creates a feedback loop that benefits both disciplines. On the one hand, Agile methodologies can gain from the high-quality, well-governed data that CDM provides. On the other, CDM can leverage Agile processes to evolve and adapt, moving away from monolithic, static data models to a more dynamic and modular architecture. In practice, this could mean implementing CDM policies during the planning and execution of sprints, making data governance an intrinsic part of your Agile workflows. By doing so, teams can quickly adapt to new data requirements, ensuring that the data is both accurate and secure at all times. Data guru DJ Patil once said, "Data is the new oil." If that's true, then integrating Agile and CDM is akin to building a state-of-the-art refinery that maximizes the value extracted from that oil.

Wrapping It Up

Integrating Agile and continuous data management is not merely a novel idea; it's a pressing necessity in a world that increasingly relies on data-driven decision-making. For those who would like a deeper exploration of this topic, I invite you to read our original blog post, which tackles it from another angle. By considering both Agile methodologies and continuous data management as essential parts of the same ecosystem, we create an environment that enhances data quality, speeds up delivery, and provides greater value to end users. It's time to stop thinking of these practices as isolated silos and start recognizing the powerful synergy that arises when they work together.
By embracing this integrated approach, you're not just staying ahead of the curve — you're defining it.
Relational Database Management Systems (RDBMS) represent the state of the art, thanks in part to their well-established ecosystem of surrounding technologies, tools, and widespread professional skills. In this era of technological revolution encompassing both Information Technology (IT) and Operational Technology (OT), it is widely recognized that significant performance challenges arise, particularly in specific use cases where NoSQL solutions outperform traditional approaches. Indeed, the market offers many NoSQL DBMS solutions interpreting and exploiting a variety of different data models:

Key-value stores (the simplest storage, where access to persisted data must be instantaneous and retrieval happens by key, as in a hash map or dictionary);
Document-oriented (widely adopted in serverless solutions and lambda-function architectures, where clients need a well-structured DTO directly from the database);
Graph-oriented (useful for knowledge management, the semantic web, or social networks);
Column-oriented (providing highly optimized, "ready-to-use" data projections in query-driven modeling approaches);
Time series (for handling sensors and sample data in Internet of Things scenarios);
Multi-model stores (combining different types of data models for mixed functional purposes).

"Errors using inadequate data are much less than those using no data at all." – Charles Babbage

A less-explored concern is the ability of software architectures relying on relational solutions to flexibly adapt to rapid and frequent changes in the software domain and functional requirements. This challenge is exacerbated by Agile-like software development methodologies that aim to satisfy the customer's continuously emerging, market-driven demands. In particular, RDBMS, by their very nature, may suffer when software requirements change over time: changes ripple quickly through tabular schemas, introducing new association tables (often replacing pre-existing foreign keys) and producing new JOIN clauses in SQL queries, resulting in more complex and less maintainable solutions.

In our enterprise experience, we have successfully implemented and experimented with a graph-oriented DBMS solution based on the Neo4j Graph Database to attenuate the architectural consequences of requirements changes within an operational context typical of a digital social community with different users and roles. In this article, we:

Exemplify how a graph-oriented DBMS is more resilient to changing functional requirements;
Discuss the feasibility of adopting graph-oriented DBMSs in a classic N-tier (layered) architecture, proposing approaches for overcoming the main difficulties;
Highlight advantages, disadvantages, and threats to their adoption in various contexts and use cases.

The Neo4j Graph Database

The idea behind graph-oriented data models is to adopt a native approach for handling entities (i.e., nodes) and the relationships between them (i.e., edges), so that the knowledge base (namely, the knowledge graph) can be queried by navigating relationships between entities. The Neo4j Graph Database works on directed property graphs, where both nodes and edges carry different kinds of property attributes.
We chose it as our DBMS primarily for:

Its "native" implementation, concretely modeled through a graph meta-model whose runtime instance is composed of nodes (containing the domain entities with their attributes) and edges (representing navigable relationships among the interconnected concepts). In this way, relationships are traversed in O(1);
The Cypher query language, adopted as a very powerful and intuitive query system over the knowledge persisted in the graph.

Furthermore, the Neo4j Graph Database also offers Java libraries for Object Graph Mapping (OGM), which help developers automate the process of mapping, persisting, and managing model entities, nodes, and relationships. Practically, OGM plays, for graph-oriented DBMSs, the same role that the Object Relational Mapping (ORM) pattern plays for relational persistence layers. Comparable to the ORM pattern designed for RDBMS, the OGM pattern serves to streamline the implementation of Data Access Objects (DAOs). Its primary function is to enable the semi-automated persistence of domain model entities that are properly configured and annotated within the source code. With respect to the Java Persistence API (JPA)/Hibernate, widely recognized as a leading ORM technology, Neo4j's OGM library operates in a distinctive manner:

Write operations: OGM propagates persistence changes across all relationships of a managed entity (analyzing the whole tree of object relationships starting from the managed object); JPA performs updates table by table, starting from the managed entity and handling relationships based on cascade configurations.
Read operations: OGM retrieves an entire "tree of relationships" to a fixed depth per query, starting from the specified node, which acts as the "root of the tree"; JPA allows each relationship to be configured for either EAGER or LAZY loading.

Solution Benefits of an Exemplary Case Study

To exemplify the meaning of our analysis, we introduce a simple operative scenario: the UML Class Diagram of Fig. 1.1 depicts an entity User which has a 1-to-N relationship with the entity Auth (abbr. of Authorization), which defines permissions and grants inside the application. This Domain Model may be supported in an RDBMS by a schema like that of Tab. 1.1 and Tab. 1.2 or, in a graph-oriented DBMS, by the knowledge graph of Fig. 1.2.

Fig. 1.1: UML Class Diagram of the Domain Model.

users table
id | firstName | lastName
... | ... | ...
Tab. 1.1: Table mapped within the RDBMS schema for the User entity.

auths table
id | name | level | user_fk
... | ... | ... | ...
Tab. 1.2: Table mapped within the RDBMS schema for the Auth entity.

Fig. 1.2: Knowledge graph related to the Domain Model of Fig. 1.1.

Now, imagine that a new requirement emerges during the production lifecycle of the application: the customer, for administrative reasons, needs to bind authorizations to specific time periods (i.e., from and until dates of validity), as in Fig. 2.1, transforming the relationship between User and Auth into an N-to-N. This Domain Model may be supported in an RDBMS by a schema like that of Tab. 2.1, Tab. 2.2, and Tab. 2.3 or, in a graph-oriented DBMS, by the knowledge graph of Fig. 2.2.

Fig. 2.1: UML Class Diagram of the Domain Model after the definition of new requirements.

users table
id | firstName | lastName
... | ... | ...
Tab. 2.1: Table mapped within the RDBMS schema for the User entity.

users_auths table
user_fk | auth_fk | from | until
... | ... | ... | ...
Tab. 2.2: Table mapped within the RDBMS schema for storing associations between the User and Auth entities.
auths table
id | name | level
... | ... | ...
Tab. 2.3: Table mapped within the RDBMS schema for the Auth entity.

Fig. 2.2: Knowledge graph related to the Domain Model of Fig. 2.1.

The advantage is already clear at the schema level: the graph-oriented approach did not change the schema at all but only prescribes the definition of two new properties on the edge (modeling the relationship), while the RDBMS approach required a new association table, users_auths, replacing the foreign key in the auths table that referenced the users table. Proceeding further with a deeper analysis, we can compare a SQL query with the equivalent query written in the Cypher query language under the two approaches. We'd like to identify users with the first name "Paul" having an Auth named "admin" with a level greater than or equal to 3. On the one hand, in SQL, the required queries (the first for retrieving data from Tab. 1.1 and Tab. 1.2, the second for Tab. 2.1, Tab. 2.2, and Tab. 2.3) are:

SQL:
SELECT users.*
FROM users
INNER JOIN auths ON users.id = auths.user_fk
WHERE users.firstName = 'Paul' AND auths.name = 'admin' AND auths.level >= 3

SQL:
SELECT users.*
FROM users
INNER JOIN users_auths ON users.id = users_auths.user_fk
INNER JOIN auths ON auths.id = users_auths.auth_fk
WHERE users.firstName = 'Paul' AND auths.name = 'admin' AND auths.level >= 3

On the other hand, in the Cypher query language, the required query (for both cases) is:

Cypher:
MATCH (u:User)-[:HAS_AUTH]->(auth:Auth)
WHERE u.firstName = 'Paul' AND auth.name = 'admin' AND auth.level >= 3
RETURN u

While the SQL query needs one more JOIN clause after the change, the query written in the Cypher query language not only avoids any additional clause or variation on the MATCH path, it remains completely identical. No changes were necessary on the "query system" of the backend!

Conclusions

Wedge Engineering contributed as the technological partner within an international project where a collaborative social platform was designed as a decoupled web application in a 3-tier architecture composed of:

A backend module, a layered RESTful architecture, leveraging the Jakarta EE framework;
A knowledge graph, the NoSQL store provided by the Neo4j Graph Database;
A frontend module, a single-page app based on HTML, CSS, and JavaScript, exploiting the Angular framework.

The most challenging design choice we had to face was between using a driver that natively exploits the Cypher query language and leveraging the OGM library to simplify DAO implementations. We discovered that building an entire application with custom queries written in the Cypher query language is neither feasible nor scalable, while OGM may not be efficient enough when dealing with large data hierarchies involving a significant number of relationships to referenced external entities. We finally opted for a custom approach that exploits OGM as the reference solution for mapping nodes and edges in an ORM-like perspective and for supporting the implementation of ad hoc DAOs, then selectively optimizing, with custom query methods, the specific operations that did not perform well. In conclusion, we can claim that the adopted software architecture responded well to changes in the knowledge graph schema and completely fulfilled customer needs while easing the efforts of the Wedge Engineering development team.
Nevertheless, some threats have to be considered before adopting this architecture:

SQL expertise is far more common than Cypher query language expertise → it is much easier to find (and thus to include within a development team) experts able to maintain code for an RDBMS rather than for the Neo4j Graph Database;
Neo4j system requirements for on-premise production are significant (for server-based environments, at least 8 GB of RAM is recommended) → this solution may not be the best fit for resource-constrained scenarios or low-cost implementations;
Despite our best efforts, we did not find any "ready and easy to use" open-source editor for navigating the Neo4j Graph Database data structure (the official Neo4j data browser does not allow data modifications through the GUI without custom MERGE/CREATE queries), whereas many exist for RDBMSs → this may be intrinsic to a data model that hinders the realization of tabular views of the data.
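To make the OGM mapping approach described above concrete, here is a minimal sketch, in Java, of how the User/Auth model of Fig. 2.1 might be annotated with the Neo4j OGM library. This is an illustration under assumed OGM 3.x annotations, not the project's actual code; the class and field names are invented, and in practice each class would live in its own file.

Java:
import java.time.LocalDate;
import java.util.HashSet;
import java.util.Set;

import org.neo4j.ogm.annotation.EndNode;
import org.neo4j.ogm.annotation.GeneratedValue;
import org.neo4j.ogm.annotation.Id;
import org.neo4j.ogm.annotation.NodeEntity;
import org.neo4j.ogm.annotation.Relationship;
import org.neo4j.ogm.annotation.RelationshipEntity;
import org.neo4j.ogm.annotation.StartNode;

// Node for the User entity of Fig. 2.1.
@NodeEntity
class User {
    @Id @GeneratedValue Long id;
    String firstName;
    String lastName;

    // Each HAS_AUTH edge is modeled as the relationship entity below.
    @Relationship(type = "HAS_AUTH")
    Set<HasAuth> auths = new HashSet<>();
}

// Node for the Auth entity.
@NodeEntity
class Auth {
    @Id @GeneratedValue Long id;
    String name;
    int level;
}

// The edge itself: the validity period lives on the relationship, so the move
// from 1-to-N to N-to-N needs no new node, only two new edge properties.
@RelationshipEntity(type = "HAS_AUTH")
class HasAuth {
    @Id @GeneratedValue Long id;
    @StartNode User user;
    @EndNode Auth auth;
    LocalDate from;
    LocalDate until;
}

Loading a User through the OGM session then retrieves the HAS_AUTH edges and related Auth nodes to the requested depth, matching the read behavior described earlier.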
Scrum team failure addresses three categories from the Scrum anti-patterns taxonomy that are closely aligned: planning and process breakdown, conflict avoidance and miscommunication, and inattention to quality and commitment, often resulting in a Scrum team performing significantly below its potential. Learn how these Scrum anti-pattern categories manifest themselves and how they affect value creation for customers and the organization's long-term sustainability. This is the third of three articles analyzing the 183 anti-patterns from the upcoming Scrum Anti-Patterns Guide book. The previous two articles address adhering to legacy systems, processes, and practices, and communication and collaboration issues.

Scrum Team Failure in Detail

Let us delve into the three aspects of Scrum team failure: planning and process breakdown, conflict avoidance and miscommunication, and inattention to quality and commitment.

Planning and Process Breakdown at the Scrum Level

This category of Scrum team failure patterns identifies setbacks and breakdowns in planning, process, collaboration, and alignment within the Scrum framework. Such failures may include, for example:

Disregarding essential Scrum practices.
Inadequately investing in planning and training.
Failing at communication and stakeholder management.
Insufficient Product Backlog management.

These issues can lead to chaotic, inefficient work, erode trust, hinder alignment, and undermine the Scrum team's ability to deliver value and uphold the principles of Scrum.

Manifestations

Examples of the effects of this anti-pattern category include:

Excessive control by the Scrum Master over the team's processes.
Overreach by the Scrum Master or Product Owner into direct task control.
The Product Owner's control over the 'What' and 'How' instead of focusing on the 'Why.'
Accepting unrefined Product Backlog items into the Sprint.
Allowing disruptions during a Sprint.
Mishandling Sprint cancellations.
Developers disregarding practices like aligning with Sprint Goals or adhering to the Definition of Done.
Temporary abandonment of Scrum in critical situations.
Varying Sprint lengths, neglecting to plan for new team members, or resorting to a "hardening" Sprint.
Inconsistency in Sprint lengths or other planning aspects, signaling reluctance to implement Scrum fully.
Accepting spillovers without discussion.
Postponing Retrospectives or ignoring action items.
A lack of attention to technical debt.
Stakeholder inclusion: failures in alignment, empathy, trust, robust collaboration mechanisms, and inclusive stakeholder engagement.
Pressure from stakeholders to release undone work, or barriers to understanding the strategic direction.
Avoiding Retrospectives or believing there's no room for improvement, undermining continuous learning and adaptation.
Scrum Masters taking on tasks outside their responsibility, such as organizing meetings or buying office supplies, stifling the team's growth.
Rushed Product Backlog creation, leading to chaotic planning and undermining Scrum principles.
Untimely introduction of tasks during the Sprint, limiting team independence and adherence to Scrum principles.

Conflict Avoidance and Miscommunication at the Scrum Team Level

The "Conflict Avoidance and Miscommunication" category deals with Scrum team failures in communication and the evasion of conflict within Scrum teams. These anti-patterns include shunning collaboration, prioritizing individual achievements, delaying communication, lack of transparency, and misunderstandings within the team.
Such behaviors lead to friction and undermine trust-building, alignment with Agile principles, and continuous improvement. Additionally, they reflect systemic failures, such as a lack of strategic alignment and adherence to core Agile principles, hindering efficiency and collaboration within the Scrum team.

Manifestations

Examples of the effects of this anti-pattern category include:

Ignoring struggling team members.
Failing to create an environment for open conflict resolution.
Shunning collaboration, focusing on individual accomplishments, and delaying communication.
Misusing the Daily Scrum, leading to unresolved issues and hindering transparency.
Failing to share the organizational vision and strategy, creating misunderstandings within the team.
A dogmatic approach to planning, or a lack of shared understanding, creating barriers to efficient working.
Non-inclusive behaviors, for example, a lack of diversity among Sprint Review attendees, disengagement, and blame games.
Systemic failures, such as ignoring core Scrum principles, a lack of focus on goals, abandoning Agile for traditional practices, and mismanagement of planning and goal setting.

Inattention to Quality and Commitment

The "Inattention to Quality and Commitment" category of Scrum team failure patterns focuses on Developers' neglect of quality, professionalism, and adherence to Scrum principles. It stresses the importance of not compromising quality standards and of maintaining a continuous commitment to excellence. Disregarding quality can lead to suboptimal products, misalignment with customer expectations, and the undermining of core Agile values. The category calls for a renewed focus on standards, learning, and adaptation, and for avoiding shortcuts that may expedite delivery but risk long-term quality and sustainability.

Manifestations

Examples of the effects of this anti-pattern category include:

Disregarding quality standards as defined by the Definition of Done.
Attending meetings unprepared, reflecting a lack of commitment.
Assuming knowledge of customer needs without interaction, leading to unsuitable products.
Lack of clear standards for "Sprint-ready" items.
Lack of attention to continuous improvement.
Lack of documentation, and choosing UNSMART actions while ignoring quality controls.
Releasing an Increment that doesn't meet the Definition of Done.
Arbitrary deviations from Sprint Goals.
Any compromise on quality standards, commitments, and the ethos of delivering optimal value.

Conclusion: Scrum Team Failure

The discussed Scrum team failure patterns reveal potential pitfalls that will undermine Scrum's effectiveness. Breakdowns in planning and collaboration can erode trust and veer teams away from core Scrum principles. Conflict avoidance and miscommunication further exacerbate misalignments, pointing to systemic failures in adhering to Agile principles. Finally, compromising quality and commitment jeopardizes the alignment on creating value for customers and fundamental Agile values. In short, the Scrum teams of your organization will perform significantly below their potential, leading to an undesirable outcome. Therefore, recognizing and actively countering these Scrum team failure patterns is crucial for Scrum's successful application in any organization.
In the fast-paced world of software development, projects need agility to respond quickly to market changes, which is only possible when organizations and project management improve efficiency, reduce waste, and deliver value to their customers as fast as possible. A methodology that has become very popular in this digital era is the Agile methodology. Agile strives to reduce effort, yet it delivers high-quality features and value in each build. Within the Agile spectrum, there exists a concept known as "Pure Agile Methodology," often referred to simply as "Pure Agile," which is a refined and uncompromising approach to Agile project management. It adheres strictly to the core values of the Agile Manifesto: favoring individuals and interactions over processes and tools, working solutions over comprehensive documentation, customer collaboration over contract negotiation, and responding to change over following a plan. Though Agile is used worldwide for most software projects, the way it is implemented is not always pure Agile. We can recognize Pure Agile when the implementation is seamless and true to the Manifesto; hence, it is also known as "Agile in its truest form."

Within the Agile framework, Agile testing plays a pivotal role in ensuring that software products are not only developed faster but also meet high quality standards. Agile testing is a new-age approach to software testing designed to keep pace with the agile software development process. It is an iterative and incremental approach that applies the principles of agile software development to the practice of testing. It goes beyond traditional testing methods, becoming a collaborative and continuous effort throughout the project lifecycle. Agile testing is a collaborative, team-oriented process. Unlike traditional software testing, Agile testing tests systems in small increments, often developing tests before the code or feature is written. Below are the ways it differs from traditional testing:

Early involvement: Agile testing applies a "test-first" approach. Testers are involved in the project from the very beginning, i.e., requirements discussions, user story creation, and sprint planning. This ensures that testing considerations are taken into account from the outset.

Integration: In Agile testing, testing activities are performed simultaneously with development rather than in a separate testing phase. The biggest advantage of Agile testing is that defects are detected and addressed at an early stage, which eventually helps reduce cost, time, and effort.

User-centric: Agile testing gives the highest preference and importance to customer feedback, and the testing effort is aligned to the feedback given by the customer.

Feedback-driven: Agile testing relies on continuous feedback. This enduring feedback and communication ensure that everyone is aligned on project goals and quality standards.

TDD: Test-driven development is a common practice in Agile, where tests are prepared before the code is written to ensure that the code meets the acceptance criteria. This promotes a "test-first" mindset among developers.

Regression testing: As the product evolves with each iteration, regression testing becomes critical. New functionality or feature changes shouldn't introduce regressions that break existing functionality.
Minimal documentation: Agile testing often relies on lightweight documentation, focusing more on working software than on extensive test plans and reports. Test cases may be captured as code or in simple, accessible formats.

Collaboration: All Agile teams are cross-functional, with all the groups of people and skills needed to deliver value across traditional organizational silos, largely eliminating handoffs and delays.

The term "Agile testing quadrants" refers to a concept introduced by Brian Marick, a software testing expert, Extreme Programming (XP) proponent, Agile Manifesto co-author, and pioneer of agile testing, to help teams and testers think systematically about the different types of testing they need to perform within an Agile development environment. At scale, many types of tests are required to ensure quality: tests for code, interfaces, security, stories, larger workflows, and so on. The quadrants form a matrix, four quadrants defined across two axes, that guides the reasoning behind these tests.

Agile Testing: Quadrants

Q1: Contains unit and component tests. These tests use Test-Driven Development (TDD); see the sketch after this list.
Q2: Contains feature-level and capability-level acceptance tests that confirm the aggregate behavior of user stories. The team automates these tests using BDD techniques.
Q3: Contains exploratory tests, user acceptance tests, scenario-based tests, and final usability tests. These tests are often manual.
Q4: Verifies that the system meets its non-functional requirements (NFRs), such as load and performance testing.
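As a small illustration of Q1, here is a test-first sketch in Java with JUnit 5. The PasswordValidator class and its contract are hypothetical, invented for this example; in genuine TDD, the tests would be written first and the implementation added only after watching them fail.

Java:
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import org.junit.jupiter.api.Test;

// Q1-style unit tests, written before the production code exists (TDD).
// The tests define the contract of the hypothetical PasswordValidator first.
class PasswordValidatorTest {

    @Test
    void rejectsPasswordsShorterThanEightCharacters() {
        assertFalse(new PasswordValidator().isValid("short"));
    }

    @Test
    void acceptsPasswordsWithDigitsAndMinimumLength() {
        assertTrue(new PasswordValidator().isValid("s3curePass"));
    }
}

// Minimal implementation added after the tests fail (taking them from red to green).
class PasswordValidator {
    boolean isValid(String candidate) {
        return candidate != null
                && candidate.length() >= 8
                && candidate.chars().anyMatch(Character::isDigit);
    }
}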
At the core of agile are a better ability to respond to change (agility), less rigidly defined roles and less top-down control (decentralized decision-making), and increased visibility and trust (collaboration). Agile methodology has proved its value in software development by reducing the risk of product failure and delivering value in the quickest possible time, resulting in minimized losses and maximized productivity, which is exactly what workforce management tries to achieve. Agile projects have a 64% success rate, almost 1.5x more successful than waterfall projects. And 71% of U.S. companies are now using Agile practices to manage various job functions, including non-IT ones. Thus, the same agile practices, principles, and values can be adopted in workforce management to reap similar benefits. The purpose of workforce management is to maximize productivity and minimize loss by having the right resources in the right places at the right times.

In this article, I will shed light on how you can implement agile practices for workforce management, along with the benefits, challenges, and best practices. I will also talk about popular agile scaling frameworks like SAFe, LeSS, DA, Spotify, and Scrum@Scale (S@S). By the end of this article, you will have an understanding of the impact of agile scaling on workforce management.

Understanding Agile Scaling Frameworks

Before you understand the implementation of agile in workforce management, it is very important to first understand agile scaling and agile scaling frameworks. Agile scaling is the process of applying agile values, principles, and practices to people and processes organization-wide. The structured approach to scaling agile at the enterprise level is called an agile scaling framework. The concept of agile scaling originally came into existence from the need to scale the software development team to meet growing product demands. When there is more work to be done than your agile team can handle in a given period of time, it's time to scale. Agile scaling frameworks enable multiple teams to work together while maintaining agility. Seeing the possibilities of agile scaling, organizations started implementing agile practices in other, non-IT processes to become better able to adapt to change. Thus, agile scaling became synonymous with the cultural transformation of an organization to agile.

Popular Agile Scaling Frameworks

One cannot simply replicate team-level agile practices at the enterprise level to scale agile. You need a framework to scale agile because various factors play a role in it, such as team size, culture shift, and industry requirements. Here is a brief overview of the top five agile scaling frameworks:

Scaled Agile Framework (SAFe)

SAFe is the most trusted and popular agile scaling framework. It has 10 foundational principles to align the right people, deliver high-quality solutions, and respond to change.

Large-Scale Scrum (LeSS)

Large-Scale Scrum (LeSS) is a lightweight agile scaling framework that is essentially regular Scrum applied to large-scale development. It focuses on a customer-centric approach to development.

Scrum@Scale (S@S)

Scrum@Scale is an extension of the Scrum methodology. It revolves around the concept of the Scrum of Scrums (SoS), in which each team chooses an individual to represent them in the SoS meetings. The aim of each day's SoS meeting is to improve coordination and communication among multiple teams.
Disciplined Agile (DA)

DA is a hybrid of different agile frameworks, such as Kanban and Scrum. It is easier than other frameworks to adopt due to its flexibility in adapting agile strategies.

Spotify

Spotify is not so much a framework as a model for scaling agile. It focuses on the importance of culture and networks in managing multiple teams and emphasizes building autonomous, cross-functional teams for work alignment.

How Agile Practices Can Be Adopted for Workforce Management

To understand the implementation of agile practices for workforce management, it is first important to understand the common challenges faced in workforce management (WFM). Traditional WFM carries a certain degree of challenges because of the times it was designed for. For example, workforce plans are usually designed to be executed on an annual basis, and budgets are fixed for the year. This was fine when the market was not dynamic. But in today's market, you cannot create a workforce plan, forecast resource needs, and fix a budget for the entire year. You need to be agile in workforce planning to better adapt to changing market needs and achieve the objectives of your organization. Agile principles can help you do that through agile workforce planning.

Sprint Planning

Agile focuses on breaking large work into small, time-boxed, iterative sprints to better respond to changing requirements. You can create sprints for shorter workforce planning cycles. This helps you more accurately forecast resource needs, allocate budgets, find resources aligned to project needs, and respond to changing market needs. You can take market conditions into account and make changes in the next cycle of workforce planning to fulfill your objectives, rather than adhering to a fixed plan created at the beginning of the year.

Collaboration

The agile value of collaboration can be applied at different layers of the organization to better address skills gaps. For example, HR managers can take input from line managers, subject matter experts, and key stakeholders to learn about the exact skills requirements for a resource. This helps them find the right resources rather than making decisions in silos.

Feedback Loops

The other core agile values of incremental delivery, collecting feedback, and making improvements can be adopted for better workforce planning and employee scheduling. You can review your workforce needs in shorter cycles, collect feedback, and make improvements as needed to meet changing requirements, rather than following a rigid workforce plan throughout the year. There is no one-size-fits-all approach to adopting agile for workforce management; it is more of an ongoing practice of executing, driving value, and making improvements.

Benefits of Agile Workforce Management

Agile workforce management provides the agility and adaptability required to meet changing requirements. Have a look at the key benefits:

Better resource forecasting: Agile workforce management encourages iterative planning. This helps you better forecast resource needs and adjust staffing based on changing demands, additionally helping you have the right resources in the right place at the right time.

Improved employee scheduling: Agile management uses techniques like daily stand-up meetings to coordinate and schedule work. It helps ensure the alignment of work with team capacity and employee availability.
This helps you improve payroll efficiency, avoid understaffing or overstaffing issues, and increase productivity.

Reduced risks: Agile encourages feedback loops, so you can understand the workforce's needs better and respond to them quickly. You can use iterations to test new ideas and make sure they're effective before moving on to the next phase of planning.

Collaborative decision-making: Agile focuses on collaboration for decision-making rather than decisions made in silos or top-down workforce planning. All stakeholders are involved in decision-making, and requirements are appropriately communicated.

Better work-life balance: Employee burnout is one of the major challenges faced by the workforce. Agile encourages self-organizing teams. It empowers employees to make decisions and take ownership of their work, which helps reduce employee burnout, increase employee engagement, and enhance job satisfaction.

Challenges in Agile Scaling for Workforce Management

Agile is more of a mindset than a principle. To scale agile, you have to shift from the old way of working to new means, which can pose a series of challenges. Here are the three major challenges you may face:

Culture Shift

Most organizations favor a command-and-control management style over open leadership, fixed milestones and budgets over continuous improvement, and extensive planning over failing fast and learning. Thus, to scale agile, you need a change of mindset, first at the leadership level and then at the employee level.

Lack of Proper Understanding of the Agile Framework

Agile is complex to understand, and it differs from traditional management in many ways. For example, in an agile development team, there is no project manager; teams are self-organizing. It is hard for someone coming from a traditional work management style to adapt to agile. Thus, it requires training and learning to make your team skilled in agile.

Technology Requirements

Without the right technology, you cannot scale agile. For example, managing a cross-functional agile team and making multiple agile teams work together means you need a technology stack that creates visibility, transparency, and information flow. Thus, you have to adopt technological solutions that help you scale agile.

Case Studies and Real-Life Examples of Agile Scaling

Many organizations have successfully scaled agile at the enterprise level to become better able to adapt to change. Here are three popular case studies:

Spotify: Spotify, the popular music streaming service, uses agile scaling for workforce management. It uses a model built on Squads, Tribes, and Chapters to scale agile across the organization. This has helped Spotify become one of the most successful music streaming services in the world by making the organization customer-centric.

Siemens: Siemens, a multinational technology company, has used agile scaling for workforce planning. The company used agile frameworks such as Kanban to allocate resources to match project demands. This helps Siemens allocate resources based on project priorities, leading to improved workforce management and project outcomes.

Philips: Philips, a multinational electronics company, has used the Scaled Agile Framework (SAFe) to scale agile, helping it improve its product development process and customer satisfaction.

Best Agile Scaling Practices for Workforce Management

There is no one right way to scale agile, but some practices can help.
Here are some of the best practices:

Define the goals you want to achieve, establish roles, and make changes to the organizational structure.
Involve leadership in decision-making and communicate regularly.
Choose the right Agile framework.
Run a pilot program at a small scale.
Train your employees in agile practices.
Use the right tools and technology.
Allow time for the change.

Conclusion

In today's world, almost every business function can benefit from agile. Agile values, principles, and practices provide the foundation for making any business process better able to adapt to change. Workforce management can also benefit from agile practices. Agile workforce management makes the process iterative, enhances collaboration, and incorporates feedback, providing a better ability to respond to change and counter the challenges.
If you work in software development, you likely encounter technical debt all the time. It accumulates over time as we prioritize delivering new features over maintaining a healthy codebase. Managing technical debt, or code debt, can be a challenge. Approaching it the right way in the context of Scrum won't just help you manage your tech debt; it can allow you to leverage it to strategically ship faster and gain a very real competitive advantage. In this article, I'll cover:

The basics of technical debt and why it matters
How tech debt impacts Scrum teams
How to track tech debt
How to prioritize tech debt and fix it
Why continuous improvement matters in tech debt

Thinking About Tech Debt in Scrum: The Basics

Scrum is an Agile framework that helps teams deliver high-quality software in a collaborative and iterative way. By leveraging strategies like refactoring, incremental improvement, and automated testing, Scrum teams can tackle technical debt head-on. But it all starts with good issue tracking. Whether you're a Scrum Master, Product Owner, or developer, I'm going to share some practical insights and strategies for managing tech debt.

The Impact of Technical Debt on Scrum Teams

Ignoring technical debt can lead to higher costs, slower delivery times, and reduced productivity. Tech debt makes it harder to implement new features or updates because it creates excessive complexity. Product quality suffers in turn. Then maintenance costs rise, customer issues multiply, and customers become frustrated. Unmanaged technical debt has the potential to touch every part of the business. Technical debt also brings the team down. It's a serial destroyer of morale. Ignoring tech debt or postponing it is often frustrating and demotivating. It can also exacerbate communication problems and create silos, hindering project goals. Good management of tech debt, then, is absolutely essential for the modern Scrum team.

How to Track Tech Debt

Agile teams that are successful at managing their tech debt identify it early and often. Technical debt should be identified:

During the act of writing code. Scrum teams should feel confident accruing prudent tech debt to ship faster, so long as they track that debt immediately and understand how it could be paid off.
During backlog refinement. This is an opportunity to discuss and prioritize the product backlog and have nuanced conversations about tech debt in the codebase.
During sprint planning. How technical debt impacts the current sprint should always be a topic of conversation during sprint planning. Allocate resources to paying back tech debt consistently.
During retrospectives. An opportunity to identify tech debt that has been accrued or that needs to be considered or prioritized.

Use an in-editor issue tracker, which enables your engineers to track issues directly linked to code. This is a weakness of common issue-tracking software like Jira, which often undermines the process entirely.

Prioritizing Technical Debt in Scrum

There are many ways to choose what to prioritize. I suggest choosing a theme for each sprint and allocating 15-20% of your resources to fixing a specific subset of technical debt issues. For example, you might choose to prioritize issues based on:

Their impact on a particular part of the codebase needed to ship new features
Their impact on critical system functionality, security, or performance
Their impact on team morale, employee retention, or developer experience

The headaches around issue resolution often stem from poor issue tracking.
Once your Scrum team members have nailed an effective issue-tracking system that feels seamless for engineers, solving tech debt becomes much easier.

The Importance of Good Issue Tracking in Managing Technical Debt in Scrum

Good issue tracking is the foundation of any effective technical debt management strategy. Scrum teams must be able to track technical debt issues systematically to prioritize and address them effectively. Using the right tools can make or break a tech debt management strategy. Modern engineering teams need issue-tracking tools that:

Link issues directly to code.
Make issues visible in the code editor.
Enable engineers to visualize tech debt in the codebase.

Create issues from the code editor in Stepsize.

Continuous Improvement in Scrum

Identify tech debt early and consistently, and address and fix it continuously. Use Scrum sessions such as retrospectives as an opportunity to reflect on how the team can improve its process for managing technical debt. Consider: Where does tech debt tend to accumulate? Is everybody following a good issue-tracking process? Are issues high-quality?

Regularly review and update the team's "Definition of Done" (DoD), which outlines the criteria that must be met for a user story to be considered complete. Refining the DoD increases the likelihood of shipping high-quality code that is less likely to result in technical debt down the line. Behavioral change is most likely when teams openly collaborate, supported by the right tools. I suggest encouraging everybody to reflect on their processes and actively search for opportunities to improve.

Wrapping Up

Managing technical debt properly needs to be a natural habit for modern Scrum teams. Doing so protects the long-term performance of the team and product. Properly tracking technical debt is the foundation of any effective technical debt management strategy. By leveraging the right issue-tracking tools and prioritizing technical debt in the right way, Scrum teams can strategically ship faster, promote better product quality, and maintain team morale and collaboration. Remember, technical debt is an unavoidable part of software development, but with the right approach and tools, it's possible to drive behavioral change and safeguard the long-term success of your team.
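As a rough sketch of the prioritization approach above, the following hypothetical Java example scores tracked debt issues by impact per unit of effort and reserves a slice of sprint capacity for them. The record, the 1-5 scales, and all the numbers are invented for illustration, not taken from any real tool.

Java:
import java.util.Comparator;
import java.util.List;

// Hypothetical representation of a tracked tech-debt issue,
// scored on 1-5 scales for impact and effort.
record DebtIssue(String title, int impact, int effort) {}

public class DebtSprintPlanner {
    public static void main(String[] args) {
        List<DebtIssue> debtBacklog = List.of(
                new DebtIssue("Untested payment module", 5, 3),
                new DebtIssue("Duplicated validation logic", 3, 1),
                new DebtIssue("Legacy logging wrapper", 2, 2));

        // Reserve roughly 20% of sprint capacity for debt, per the 15-20% guideline above.
        int sprintCapacityPoints = 20;
        long debtBudget = Math.round(sprintCapacityPoints * 0.20);

        // Rank by impact-per-effort so the highest-leverage fixes come first.
        debtBacklog.stream()
                .sorted(Comparator.comparingDouble(
                        (DebtIssue i) -> (double) i.impact() / i.effort()).reversed())
                .forEach(i -> System.out.printf("%s (impact %d, effort %d)%n",
                        i.title(), i.impact(), i.effort()));

        System.out.println("Debt budget this sprint: " + debtBudget + " points");
    }
}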
When someone mentions lead times in software delivery, it's often unclear whether they mean the definition of lead times from Lean Software Development, the one from DevOps, or something else entirely. In this post, I look at why there are so many definitions of lead time and how you can put them to use.

Lead Time Definitions

The DevOps definition of lead time for changes is the time between a developer committing code into version control and someone deploying that change to the production environment. This definition covers a smaller part of the software delivery process than the Lean definition. Mary and Tom Poppendieck created Lean Software Development based on the lean manufacturing movement, and they measured lead time from when you discover a requirement to when someone fulfills that requirement. The Lean movement, based on the Toyota Production System, defines lead time as the time between a customer placing an order and receiving their car.

Lead Time Is a Customer Measurement

All these lead times represent a customer measurement. But they differ because the customer is different. Toyota measured the system from the perspective of a car buyer. The Poppendiecks measured the software development system as the users see it. DevOps measures the deployment pipeline from the perspective of the developer as the customer.

Lead time | Customer | Start | End
Toyota Production System | Car buyer | Order | Delivery
Lean Software Development | User | Requirement | Working software
DevOps | Developer | Code commit | Production deployment

The key to successful lead time measurement is representing how the customer views the elapsed time. If you run a coffee shop, you might measure the time between a customer placing an order and handing them their coffee. You might consider a two-minute lead time to be good, as your competitors take three minutes between the order and its fulfillment. However, your competitor is using a whole-system lead time, which starts when the customer joins the queue. They added another barista and reduced the queue from 15 minutes to seven. Their customers get coffee in ten minutes, but your customers have to wait 17 minutes (and you're losing customers who leave when they see the queue). Unless your lead time represents the customer's complete view of the system, you will likely optimize the wrong things.

Cycle Times

When you measure a part of the system, you're collecting a cycle time. In the car industry, it's useful to track how long it takes for a car to move along the production line. In software delivery, it's common to collect the cycle time from when a work item starts to when it's closed. This indicates the performance of software delivery without the varying wait times that can occur before work begins. As the coffee shop example shows, your customer doesn't care about cycle times. While you can use cycle times to measure different parts of the system and identify bottlenecks constraining the flow of work, you should always keep the complete system in mind. In software delivery, it's common to find that a large proportion of elapsed time is due to work waiting in a queue. For example, a requirement that would take a few days to deliver might sit in a backlog for months, or a pull request may wait for approval for hours or even days. You can identify these delays by subdividing your system and measuring each part. Lead times measure the real output of a system, but cycle times help you find the system's constraint.
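To make the distinction concrete, here is a small Java sketch that computes the DevOps lead time (commit to production deployment) alongside a development-stage cycle time from event timestamps. The timestamps are invented for the example; in practice they would come from your version control system and deployment pipeline.

Java:
import java.time.Duration;
import java.time.Instant;

// Measures the DevOps lead time for changes and a narrower cycle time.
public class LeadTimeExample {
    public static void main(String[] args) {
        Instant workStarted = Instant.parse("2023-05-02T09:00:00Z");
        Instant codeCommitted = Instant.parse("2023-05-03T16:30:00Z");
        Instant deployedToProduction = Instant.parse("2023-05-03T17:05:00Z");

        // DevOps definition: commit to production deployment.
        Duration devOpsLeadTime = Duration.between(codeCommitted, deployedToProduction);

        // Cycle time for the development stage only: work started to commit.
        Duration devCycleTime = Duration.between(workStarted, codeCommitted);

        System.out.println("DevOps lead time: " + devOpsLeadTime.toMinutes() + " minutes");
        System.out.println("Dev cycle time: " + devCycleTime.toHours() + " hours");
    }
}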
All Measurements Are Useful

Lead time is valuable because it represents the customer's perception. Identifying your customer and tracking lead times as they see them ensures any improvements you make impact their experience. If you make an improvement that doesn't reduce the lead time, you've optimized the wrong part of your system. In some cases, reducing the time for the wrong part of the system can even increase the overall lead time if it adds additional stress at the constraint. A constraint is a bottleneck that limits the speed of flow for the whole system. Resolving a constraint causes the bottleneck to move, so the process of identifying and resolving constraints is continuous. Software delivery represents a constraint to most organizations, as technology is such a key competitive advantage. However, this isn't a granular enough identification to make improvements. You need to look at your software delivery value stream and make improvements where they increase the flow of work in the system. The Theory of Constraints, created by Eli Goldratt, tells us there's always at least one constraint in a system. Optimizing anywhere other than the constraint will fail to improve the performance of the whole system. Cycle times and other part-system timers help you work out where optimization is likely to reduce the overall lead time, so you can use cycle times and lead times together to assess the improvement.

Common Software Delivery Constraints

There are some common constraints in software delivery:

Working in large batches.
Pull request approval queues.
Having too many branches, or branches that have existed for too long.
Manual testing.
Policy constraints, such as unnecessary approvals.
Hand-offs between functional silos (such as development, testing, and operations).

Some of these constraints are reflected in the Continuous Delivery commit cycle, which has the following recommended timings:

Commits every 15 minutes.
Initial build and test feedback in five minutes.
Any failures fixed or the change reverted after ten minutes.

Conclusion

The different definitions of lead time reflect various customer perceptions of parts of the same process. You can use as many measurements of lead and cycle times as you need to find and resolve constraints in your system. You can track the lead times over the long term and use cycle times temporarily as part of a specific improvement exercise. When you improve or optimize, lead time can help you understand if you're positively impacting the whole system. Happy deployments!
Estimating work is hard as it is. Using dates rather than story points as a deciding factor can add even more complications, as dates rarely account for the work you need to do outside of actual work, like emails, meetings, and additional research. Dates are also harder to measure in terms of velocity, making it harder to estimate how much effort a body of work takes even if you have previous experience. Story points, on the other hand, can bring more certainty and simplify planning in the long run, if you know how to use them.

What Are Story Points in Scrum?

Story points are units of measurement that you use to define the complexity of a user story. In simpler words, you'll be using a gradation of points from simplest (smallest) to hardest (largest) to rank how long you think it would take to complete a certain body of work. Think of them as rough time estimates of tasks in an agile project. Agile teams typically assign story points based on three major factors:

The complexity of the work;
The amount of work that needs to be done;
And the uncertainty in how one could tackle a task. The less you know about how to complete something, the more time it will take to learn.

How to Estimate a User Story With Story Points

Ok, let's take a good look at the elephant in the room: there's no one cut-and-dried way of estimating story points. The way we do it in our team is probably different from your estimation method. That's why I will be talking about estimation on a more conceptual level, making sure anyone who's new to the subject can understand the process as a whole and then fine-tune it to their needs.

T-shirt size | Story points | Time to deliver work
XS | 1 | Minutes to 1-2 hours
S | 2 | Half a day
M | 3 | 1-2 days
L | 5 | Half a week
XL | 8 | Around 1 week
XXL | 13 | More than 1 week
XXXL | 21 | Full Sprint

Story point vs. T-shirt size

Story Points of 1 and 2

Estimations that seem the simplest can sometimes be the trickiest. For example, if you've done something a lot of times and know that this one action shouldn't take longer than 10-15 minutes, then you have a pretty clear one-pointer. That being said, the complexity of a task isn't the only thing you need to consider. Let's take fixing a typo on a WordPress-powered website as an example. All you need to do is log into the interface, find the right page, fix the typo, and click publish. Sounds simple enough. But what if you need to do this multiple times on multiple pages? The task is still simple, but it takes a significantly longer amount of time to complete. The same can be said about data entry and other seemingly trivial tasks that can take a while simply due to the number of actions you'll need to perform and the screens you'll need to load.

Story Point Estimation in Complex User Stories

While seemingly simple stories can be tricky, the much more complex ones are probably even trickier. Think about it: if your engineers estimate that they'll need half a week to a week to complete one story, there's probably a lot they're still uncertain about regarding implementation, meaning a story like that could take much longer. Then there's the psychological factor: the team will probably go for the low-hanging fruit first and use the first half of the week to knock down the one-, two-, and three-pointers. This raises the risk of the five- and eight-pointers not being completed during the Sprint. One thing you can do is ask yourself whether the story really needs to be as complex as it is now. Perhaps it would be wiser to break it down.
You can find the answer to whether you should break a story down using the KISS principle. KISS stands for "Keep It Simple, Stupid" and makes you question whether something needs to be as complex as it is. Applying KISS is pretty easy, too: just ask a couple of simple questions, such as what the value of this story is and whether the same value can be achieved in a more convenient way.

"Simplicity is the ultimate sophistication." – Leonardo da Vinci

How to Use Story Points in Atlassian's Jira

A nice trick I like is to give the team the ability to assign story points to epics. Adding the story points field is nothing too in-depth or sophisticated, as a project manager needs the ability to easily assign points when creating epics. The rule of thumb here is to indicate whether your development team is experienced and well-equipped to deliver the epic or whether they would need additional resources and time to research. An example of a simpler epic could be the development of a landing page, while a more complex one would be the integration of ChatGPT into a product. The T-shirt approach works like a charm here. While Jira doesn't have the functionality to add story points to epics by default, you can easily add a custom field to do the trick. Please note that you'll need admin permissions to add and configure custom fields in Jira.

Assigning story points to user stories is a bit trickier, as, ideally, you'd like to take everyone's experience and expertise into consideration. Why? A project manager can decide the complexity of an epic based on what the team has delivered earlier. Individual stories are more nuanced, as engineers will usually have a more precise idea of how they'll deliver this or that piece of functionality, which tools they'll use, and how long it'll take. In my experience, T-shirt sizes don't fit here as well as the Fibonacci sequence does. This sequence exhibits a recurring pattern in which each number is obtained by adding the previous two numbers in the series. It begins with 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, and 89, and the pattern continues indefinitely. The Fibonacci sequence is utilized as a scoring scale in Fibonacci agile estimation, where it aids in estimating the effort required for agile development tasks. This approach proves highly valuable, as it simplifies the process by restricting the number of values in the sequence, eliminating the need for extensive deliberation on complexity nuances. This simplicity is significant because determining complexity from a finite set of points is much easier: ultimately, you select either 55 or 89, rather than having to consider the entire range between 55 and 89.

As for the collaborative aspect of estimating and assigning story points to user stories, there's a handy tool called Planning Poker that helps the team collaborate on assigning story points to their issues. Here's the trick: each team member anonymously assigns a value to an issue, keeping their choice incognito. Then, when the cards are revealed, it's fascinating to see whether the team has reached a consensus on the complexity of the task. If different opinions emerge, it's actually a great opportunity for engaging in discussions and sharing perspectives. The best part is that this tool seamlessly integrates with Jira, making it a breeze to incorporate into your existing process. It's all about making teamwork smoother and more efficient!

How does the process of assigning story points work?
How does the process of assigning story points work?

Before the Sprint kicks off, during the Sprint planning session, the Scrum team engages in thorough discussion of the tasks at hand. All the stories are carefully reviewed, and story points are assigned to gauge their complexity. Once the team commits to a Sprint, we have a clear understanding of the stories we'll be tackling and their respective point values.

As the Sprint progresses, the team works on burning down the stories; those that meet the Definition of Done by the Sprint's conclusion are marked as finished. Unfinished stories are returned to the backlog for further refinement and potential re-estimation, and the team has the option to bring them back into the current Sprint if deemed appropriate.

When this practice is consistently followed for each Sprint, the team begins to understand its velocity over time: a measure of the number of story points it typically completes within a Sprint. It becomes a valuable learning process that aids in product management, planning, and forecasting future workloads.

What Do You Do With Story Points?

As briefly mentioned above, you burn them throughout the Sprint. While story points are good practice for estimating the amount of work you put into a Sprint, Jira makes them better with Sprint analytics that show the number of points you've actually burned during the Sprint and compare it to the estimate. These metrics will help you improve your planning in the long run.

Burndown chart: This report tracks the remaining story points in Jira and predicts the likelihood of completing the Sprint goal.
Burnup chart: This report works as the opposite of the Burndown chart. It tracks scope independently from the work done and helps agile teams understand the effects of scope change.
Sprint report: This report analyzes the work done during a Sprint. It is used to point out either overcommitment or scope creep in a Jira project.
Velocity chart: This is a bird's-eye view report that shows historical data of work completed from Sprint to Sprint. This chart is a nice tool for predicting how much work your team can reliably deliver based on previously burned Jira story points.
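As a rough illustration of how velocity feeds forecasting, here is a small Python sketch. The Sprint history, window size, and function names are invented for the example; they are not Jira outputs.

```python
from math import ceil
from statistics import mean

# Hypothetical history: story points burned in the last five Sprints.
completed_points = [21, 18, 24, 20, 23]

def velocity(history, window=3):
    """Rolling average of points completed over the last `window` Sprints."""
    return mean(history[-window:])

def sprints_to_clear(backlog_points, history):
    """Rough forecast: how many Sprints a backlog would take at current velocity."""
    return ceil(backlog_points / velocity(history))

print(f"Velocity: {velocity(completed_points):.1f} points/Sprint")                  # ~22.3
print(f"An 80-point backlog needs ~{sprints_to_clear(80, completed_points)} Sprints")  # 4
```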
Add Even More Clarity to Your Stories With a Checklist

With a Jira checklist, you can create practical checklists and checklist templates. They come in handy when you want to ensure accountability and consistency, and they prove particularly valuable when crafting and refining your stories or other tasks and subtasks. A checklist lets you incorporate explicit, visible checklists for the Definition of Done and Acceptance Criteria into your issues, giving you greater clarity and structure. It's ultimately a useful tool for staying organized and streamlining your workflow with automation. Standardization isn't about the process. It's about helping people follow it.

Agile estimation plays a pivotal role in Agile project management, enabling teams to gauge the effort, time, and resources necessary to accomplish their tasks. Precise estimations empower teams to efficiently plan their work, manage expectations, and make well-informed decisions throughout the project's duration. In this article, we delve into various Agile estimation techniques and best practices that enhance the accuracy of your predictions and pave the way for your team's success.

The Essence of Agile Estimation

Agile estimation is an ongoing, iterative process that takes place at different levels of detail, ranging from high-level release planning to meticulous sprint planning. The primary objective of Agile estimation is to provide just enough information for teams to make informed decisions without expending excessive time on analysis and documentation. Designed to be lightweight, collaborative, and adaptable, Agile estimation techniques enable teams to rapidly adjust their plans as new information emerges or priorities shift.

Prominent Agile Estimation Techniques

1. Planning Poker
Planning Poker is a consensus-driven estimation technique that employs a set of cards with pre-defined numerical values, often based on the Fibonacci sequence (1, 2, 3, 5, 8, 13, etc.). Each team member selects a card representing their estimate for a specific task, and all cards are revealed simultaneously. If there is a significant discrepancy in estimates, team members discuss their reasoning and repeat the process until a consensus is achieved.

2. T-Shirt Sizing
T-shirt sizing is a relative estimation technique that classifies tasks into different "sizes" according to their perceived complexity or effort, such as XS, S, M, L, and XL. This method allows teams to swiftly compare tasks and prioritize them based on their relative size. Once tasks are categorized, more precise estimation techniques can be employed if needed.

3. User Story Points
User story points serve as a unit of measurement to estimate the relative effort required to complete a user story. This technique entails assigning a point value to each user story based on its complexity, risk, and effort, taking into account factors such as workload, uncertainty, and potential dependencies. Teams can then use these point values to predict the number of user stories they can finish within a given timeframe.

4. Affinity Estimation
Affinity Estimation involves grouping tasks or user stories based on their similarities in terms of effort, complexity, and size. This method helps teams quickly identify patterns and relationships among tasks, enabling them to estimate more efficiently. Once tasks are grouped, they can be assigned a relative point value or size category.

5. Wideband Delphi
The Wideband Delphi method is a consensus-based estimation technique that involves multiple rounds of anonymous estimation and feedback. Team members individually provide estimates for each task, and the estimates are then shared anonymously with the entire team. Team members discuss the range of estimates and any discrepancies before submitting revised estimates in subsequent rounds. This process continues until a consensus is reached.
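A toy Python sketch of the Wideband Delphi loop just described: only the anonymous range is shared between rounds, and rounds repeat until estimates converge. The convergence rule and the sample rounds are assumptions made for illustration.

```python
def estimate_range(estimates):
    """Share only the anonymous low/high back to the team."""
    return min(estimates), max(estimates)

def converged(estimates, tolerance=2):
    """Assumed stopping rule: spread no wider than `tolerance` points."""
    low, high = estimate_range(estimates)
    return high - low <= tolerance

# Hypothetical anonymous estimates per round (story points).
rounds = [
    [3, 8, 13, 5],  # round 1: wide spread, discuss the outliers
    [5, 8, 8, 5],   # round 2: narrowing after discussion
    [8, 8, 8, 8],   # round 3: consensus
]

for number, estimates in enumerate(rounds, start=1):
    low, high = estimate_range(estimates)
    print(f"Round {number}: range {low}-{high}, converged={converged(estimates)}")
```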
Risk Management in Agile Estimation

Identify and Assess Risks
Incorporate risk identification and assessment into your Agile estimation process. Encourage team members to consider potential risks associated with each task or user story, such as technical challenges, dependencies, or resource constraints. By identifying and assessing risks early on, your team can develop strategies to mitigate them, leading to more accurate estimates and smoother project execution.

Assign Risk Factors
Assign risk factors to tasks or user stories based on their level of uncertainty or potential impact on the project. These risk factors can be numerical values or qualitative categories (e.g., low, medium, high) that help your team prioritize tasks and allocate resources effectively. Incorporating risk factors into your estimates can provide a more comprehensive understanding of the work involved and help your team make better-informed decisions.

Risk-Based Buffering
Include risk-based buffering in your Agile estimation process by adding contingency buffers to account for uncertainties and potential risks. These buffers can be expressed as additional time, resources, or user story points, and they serve as a safety net to ensure that your team can adapt to unforeseen challenges without jeopardizing the project's success. (A small sketch of this idea follows below.)

Monitor and Control Risks
Continuously monitor and control risks throughout the project lifecycle by regularly reviewing your risk assessments and updating them as new information becomes available. This proactive approach allows your team to identify emerging risks and adjust plans accordingly, ensuring that your estimates remain accurate and relevant.

Learn From Risks
Encourage your team to learn from the risks encountered during the project and use this knowledge to improve their estimation and risk management practices. Conduct retrospective sessions to discuss the risks faced, their impact on the project, and the effectiveness of the mitigation strategies employed. By learning from past experiences, your team can refine its risk management approach and enhance the accuracy of future estimates.

By incorporating risk management into your Agile estimation process, you can help your team better anticipate and address potential challenges, leading to more accurate estimates and a higher likelihood of project success. This approach also fosters a culture of proactive risk management and continuous learning within your team, further enhancing its overall effectiveness and adaptability.
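Here is a minimal Python sketch of the risk-based buffering described above, assuming each story's point estimate is padded by a contingency multiplier tied to its risk category. The multipliers and story data are illustrative, not a standard.

```python
# Assumed contingency multipliers per risk category (illustrative only).
RISK_BUFFER = {"low": 1.0, "medium": 1.25, "high": 1.5}

def buffered(points, risk):
    """Pad a story-point estimate with a contingency buffer for its risk level."""
    return points * RISK_BUFFER[risk]

stories = [
    ("checkout flow rewrite", 8, "high"),
    ("copy change", 1, "low"),
    ("new usage report", 5, "medium"),
]

for name, points, risk in stories:
    print(f"{name}: {points} pts ({risk} risk) -> plan for {buffered(points, risk):.1f}")

total = sum(buffered(points, risk) for _, points, risk in stories)
print(f"Commitment with buffers: {total:.2f} points")
```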
Best Practices for Agile Estimation

Foster Team Collaboration
Efficient Agile estimation necessitates input from all team members, as each individual contributes unique insights and perspectives. Promote open communication and collaboration during estimation sessions to ensure everyone's opinions are considered and to cultivate a shared understanding of the tasks at hand.

Utilize Historical Data
Draw upon historical data from previous projects or sprints to inform your estimations. Examining past performance can help teams identify trends, patterns, and areas for improvement, ultimately leading to more accurate predictions in the future.

Velocity and Capacity Planning
Incorporate team velocity and capacity planning into your Agile estimation process. Velocity is a measure of the amount of work a team can complete within a given sprint or iteration, while capacity refers to the maximum amount of work a team can handle. By considering these factors, you can ensure that your estimates align with your team's capabilities and avoid overcommitting to work.

Break Down Large Tasks
Large tasks or user stories can be challenging to estimate accurately. Breaking them down into smaller, more manageable components can make the estimation process more precise and efficient. Additionally, this approach helps teams better understand the scope and complexity of the work involved, leading to more realistic expectations and improved planning.

Revisit Estimates Regularly
Agile estimation is a continuous process, and teams should be prepared to revise their estimates as new information becomes available or circumstances change. Periodically review and update your estimates to ensure they remain accurate and pertinent throughout the project lifecycle.

Acknowledge Uncertainty
Agile estimation recognizes the inherent uncertainty in software development. Instead of striving for flawless predictions, focus on providing just enough information to make informed decisions, and be prepared to adapt as necessary.

Establish a Baseline
Create a baseline for your estimates by selecting a well-understood task or user story as a reference point. This baseline can help teams calibrate their estimates and ensure consistency across different tasks and projects.

Pursue Continuous Improvement
Consider Agile estimation as an opportunity for ongoing improvement. Reflect on your team's estimation accuracy and pinpoint areas for growth. Experiment with different techniques and practices to discover what works best for your team, and refine your approach over time.

Conclusion

Agile estimation is a vital component of successful Agile project management. By employing the appropriate techniques and adhering to best practices, teams can enhance their ability to predict project scope, effort, and duration, resulting in more effective planning and decision-making. Keep in mind that Agile estimation is an iterative process, and teams should continuously strive to learn from their experiences and refine their approach for even greater precision in the future.
Beyond Unit Testing

Test-driven development (TDD) is a well-regarded technique for an improved development process, whether developing new code or fixing bugs. First, write a test that fails, then get it to work minimally, then get it to work well; rinse and repeat. The process keeps the focus on value-added work and leverages the test process as a challenge to improving the design being tested, rather than only verifying its behavior. This, in turn, also improves the quality of your tests, which become a more valued part of the overall process rather than a grudgingly necessary afterthought.

The common discourse on TDD revolves around testing relatively small, in-process units, often just a single class. That works great, but what about the larger 'deliverable' units? When writing a microservice, it's the services that are of primary concern, while the various smaller implementation constructs are simply enablers for that goal. Testing of services is often thought of as outside the scope of a developer working within a single codebase. Such tests are often managed separately, perhaps by a separate team, using different tools and languages. This often makes such tests opaque and of lower quality, and adds inefficiencies by requiring a commit/deploy as well as coordination with a separate team.

This article explores how to minimize those drawbacks by applying TDD principles at the service level. It addresses the corollary that such tests would naturally overlap with other API-level tests, such as integration tests, by progressively leveraging the same set of tests for multiple purposes. This can also be framed as a practical guide to shift-left testing from a design as well as an implementation perspective.

Service Contract Tests

A Service Contract Test (SCT) is a functional test against a service API (black box) rather than against the internal implementation mechanisms behind it (white box). In their purest form, SCTs do not include subversive mechanisms such as peeking into a database to verify results or rote comparisons against hard-coded JSON blobs. Even when run wholly within the same process, SCTs can loop back to localhost against an embedded HTTP server such as that available in Spring Boot. By limiting access through APIs in this manner, SCTs are agnostic as to whether the mechanisms behind the APIs are contained in the same or different process(es), while all aspects of serialization/deserialization can be tested even in the simplest test configuration.

The general structure of an SCT (sketched in code below) is:

1. Establish a starting state (preferring to keep tests self-contained)
2. One or more service calls (e.g., testing stateful transitions of updates followed by reads)
3. Deep verification of the structural consistency and expected behavior of the results from each call and across multiple calls
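As a concrete illustration of that three-step shape, here is a minimal pytest-style sketch written against a hypothetical orders API on localhost. The endpoint paths, payload fields, and base URL are invented for the example.

```python
import requests

BASE = "http://localhost:8080"  # assumed embedded server for the service under test

def test_create_then_read_order():
    # 1. Establish a starting state: create the order this test depends on.
    created = requests.post(f"{BASE}/orders", json={"sku": "ABC-1", "quantity": 2})
    assert created.status_code == 201
    order_id = created.json()["id"]

    # 2. Service call(s): read back through the public API, not the database.
    fetched = requests.get(f"{BASE}/orders/{order_id}")
    assert fetched.status_code == 200

    # 3. Deep verification: structural consistency across both calls.
    body = fetched.json()
    assert body["id"] == order_id
    assert body["sku"] == "ABC-1"
    assert body["quantity"] == 2
```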
Because of the level at which they operate, SCTs may appear to be more like traditional integration tests (inter-process, involving coordination across external dependencies) than unit tests (intra-process, operating wholly within a process space), but there are important differences. Traditional integration test codebases might be separated physically (separate repositories), by ownership (different teams), by implementation (different languages and frameworks), by granularity (service vs. method focus), and by level of abstraction. These aspects can lead to costly communication overhead, and the lack of observability between such codebases can lead to redundancies, gaps, or problems tracking how those separately versioned artifacts relate to each other. With the approach described herein, SCTs can operate at both levels: inter-process for integration-test comprehensiveness, as well as intra-process as part of the fast edit-compile-test cycle during development.

By implication, SCTs operating at both levels:

- Co-exist in the development codebase, which ensures that committed code and tests are always in lockstep
- Are defined using a uniform language and framework(s), which lowers the barriers to shared understanding and reduces communication overhead
- Reduce redundancy by enabling each test to serve multiple purposes
- Enable testers and developers to leverage each other's work, or even (depending on your process) remove the need for the dev/tester role distinction in the first place

Faking Real Challenges

The distinguishing challenge of testing at the service level is scope. A single service invocation can wind through many code paths across many classes and include interactions with external services and databases. While mocks are often used in unit tests to isolate the unit under test from its collaborators, they have downsides that become more pronounced when testing services. The collaborators at the service-testing level are the external services and databases, which, while fewer in number than internal collaboration points, are often more complex. Mocks do not possess the attributes of good programming abstractions that drive modern language design: there is no abstraction, no encapsulation, and no cohesiveness. They simply exist in the context of a test as an assemblage of specific replies to specific method invocations. When testing services, those external collaboration points also tend to be called repeatedly across different tests. Because mocks require a precise understanding and replication of collaborator requests/responses that are not even in your control, it is cumbersome to replicate and manage that malleable know-how across all your tests.

A more suitable service-level alternative to mocks is fakes, another form of test double. A fake object provides a working, stateful implementation of its interface with implementation shortcuts, making it unsuitable for production. A fake, for example, may lack actual persistence while otherwise providing a fully (or mostly, as deemed necessary for testing purposes) functionally consistent representation of its 'real' counterpart. While mocks are told how to respond (when you see exactly this, do exactly that), fakes know themselves how to behave (according to their interface contract). Since we can make use of the full range of available programming constructs, such as classes, when building fakes, it is more natural to share them across tests: they encapsulate the complexities of external integration points that need not then be copied and pasted throughout your tests. While the unconstrained versatility of mocks does, at times, have its advantages, the inherent coherence and shareability of fakes make them appealing as the primary implementation vehicle for the complexity behind SCTs.
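To make the mock/fake distinction concrete, here is a small Python sketch of a fake for a hypothetical payment-service client: a working, stateful, in-memory implementation of the same interface. All names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class FakePaymentClient:
    """Fake test double: behaves per the interface contract, stores state in memory."""
    balances: dict = field(default_factory=dict)

    def deposit(self, account: str, amount: int) -> None:
        self.balances[account] = self.balances.get(account, 0) + amount

    def charge(self, account: str, amount: int) -> bool:
        # Same observable rules as the real service, minus network and persistence.
        if self.balances.get(account, 0) < amount:
            return False
        self.balances[account] -= amount
        return True

# Unlike a mock, the fake isn't scripted per test; any test can exercise
# any sequence of calls and still get contract-consistent behavior.
client = FakePaymentClient()
client.deposit("acct-1", 100)
assert client.charge("acct-1", 30)        # succeeds: funds available
assert not client.charge("acct-1", 500)   # fails: insufficient funds
assert client.balances["acct-1"] == 70
```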
Alternately Configured Tests (ACTs)

Restricted to an appropriately high level of API abstraction, SCTs can be agnostic about whether fake or real integrations are running underneath, and the same set of service contract tests can be run with either set. If the integrated entities, here referred to as task objects (because they often can be run in parallel), are written without assuming particular implementations of other task objects (in accordance with the "L" and "D" principles in SOLID), then different combinations of task implementations can be applied for any purpose. One configuration can run all fakes, another fakes mixed with real implementations, and another all real.

These Alternately Configured Tests (ACTs) suggest a process, starting with all fakes and moving to all real, possibly with intermediate points of mixing and matching. TDD begins in a walled-off garden with the 'all fakes' configuration, where there is no dependence on external data configurations and which runs fast because it operates in process. Once all SCTs pass in that test configuration, subsequent configurations are run, each further verifying functionality while having only to focus on the elements changed with respect to the previous working test configuration. The last step is to configure as many 'real' task implementations as required to match the intended level of integration testing.

ACTs exist when there are at least two test configurations. Two is often all that is needed, but at times it can be useful to provide a more incremental sequence from the simplest to the most complex configuration. Intermediate test configurations might mix fake and real task implementations, or use semi-real task implementations that hit in-memory or containerized versions of external integration points.
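The configuration idea might look like the following Python sketch, where a pytest fixture assembles the task set named by an environment variable. The task names, registry, and variable are assumptions for illustration, not a prescribed mechanism.

```python
import os
import pytest

# Hypothetical task registries: same interfaces, different implementations.
CONFIGURATIONS = {
    "all_fakes": {"payments": "FakePaymentClient", "orders": "FakeOrderStore"},
    "mixed":     {"payments": "RealPaymentClient", "orders": "FakeOrderStore"},
    "all_real":  {"payments": "RealPaymentClient", "orders": "RealOrderStore"},
}

@pytest.fixture
def tasks():
    """Build the task set for the configuration chosen at run time."""
    name = os.environ.get("TEST_CONFIG", "all_fakes")  # default: fast, in-process
    return CONFIGURATIONS[name]

def test_configuration_is_complete(tasks):
    # Every SCT runs against whichever task set the configuration provides.
    assert {"payments", "orders"} <= tasks.keys()
```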
Balancing SCTs and Unit Testing

Relying on unit tests alone for test coverage of classes with multiple collaborators can be difficult because you're operating several levels removed from the end result. Coverage tools tell you where there are untried code paths, but are those code paths important, do they have any real impact, and are they even executed at all? High test coverage does not necessarily equal confidence-engendering test coverage, which is the real goal. SCTs, in contrast, are by definition always relevant to and important for the purpose of writing services.

Unit tests focus on the correctness of classes, while SCTs focus on the correctness of your API. This focus necessarily drives deep thinking about the semantics of your API, which in turn can drive deep thinking about the purpose of your class structure and how the individual parts contribute to the overall result. This has a big impact on the ability to evolve and change: tests against implementation artifacts must change when the implementation changes, while tests against services must change only when there is a functional, service-level change. While there are change scenarios that favor either case, refactoring freedom is often regarded as paramount from an agile perspective. Tests encourage refactoring when you have confidence that they will catch errors introduced by refactoring, but tests can also discourage refactoring to the extent that refactoring results in excessive test rework. Testing at the highest possible level of abstraction makes tests more stable under refactoring.

Written at the appropriate level of abstraction, SCTs also become accessible to a wider community (quality engineers, API consumers). The best way to understand a system is often through its tests; since those tests are expressed in the same API used by its consumers, they can not only read them but also possibly contribute to them in the spirit of Consumer Driven Contracts. Unit tests, on the other hand, are accessible only to those with deep familiarity with the implementation.

Despite these differences, it is not a question of SCTs vs. unit tests, one excluding the other. Each has its purpose; there is a balance between them. SCTs, even in a test configuration with all fakes, can often achieve most of the required code coverage, while unit testing fills in the gaps. SCTs also do not preclude the benefits of unit testing with TDD for classes with minimal collaborators and well-defined contracts, and they can significantly reduce the volume of unit tests needed for classes without those characteristics. The combination is synergistic.

SCT Data Setup

To fulfill its purpose, every test must work against a known state. This can be a more challenging problem for service tests than for unit tests, since the external integration points are outside of the codebase. Traditional integration tests sometimes handle data setup through an out-of-band process, such as database seeding with automated or manual scripts. This makes tests difficult to understand without hunting down that external state or those external processes, and it is subject to breaking at any time through circumstances outside your control. If updates are involved, care must be taken to reset or restore the state at test start or end. If multiple users happen to run the tests at the same time, care must be taken to avoid update conflicts.

A better approach is tests that independently set up (and possibly tear down) their own non-conflicting (with other users) target state. For example, an SCT that tests the filtered retrieval of orders would first create an order with a unique ID and with field values set to the test's expectations before attempting to filter on it. Self-contained tests avoid the pitfalls of shared, separately controlled states and are much easier to read as well.
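A sketch of that self-contained setup, in the same pytest style as before, using a unique value so that concurrent test runs cannot conflict. The endpoint paths and fields remain invented.

```python
import uuid
import requests

BASE = "http://localhost:8080"  # assumed server under test, as before

def test_filter_orders_by_customer():
    # Self-contained setup: a customer ID unique to this test run.
    customer = f"cust-{uuid.uuid4()}"
    requests.post(f"{BASE}/orders", json={"customer": customer, "sku": "ABC-1"})

    # Exercise the behavior under test: filtered retrieval.
    result = requests.get(f"{BASE}/orders", params={"customer": customer})

    orders = result.json()
    # Only the order created above can match, regardless of who else is testing.
    assert len(orders) == 1
    assert orders[0]["customer"] == customer
```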
Of course, direct data setup is not always possible, since a given external service might not provide the mutator operations needed for your test setup. There are several ways to handle this:

- Add testing-only mutator operations. These might even go to a completely different service that isn't otherwise required for production execution.
- Provide a mixed fake/real test configuration using fakes for the update-constrained external service(s), then employ a mechanism to skip such tests in test configurations where those fake tasks are not active. This at least tests the real versions of the other tasks.
- Externally pre-populated data can still be employed with SCTs and can still be run with fakes, provided those fakes expose equivalent results. For tests whose purpose is not actually validating updates (i.e., updates are only needed for test setup), this at least avoids conflicts between multiple simultaneous test executions.

Providing Early Working Services

A test-filtering mechanism can be employed to run tests only against select test configurations (a sketch follows below). For example, a given SCT may initially work only against fakes but not against other test configurations. That restricted SCT can be checked into your code repository, even though it is not yet working across all test configurations. This orients toward smaller commits and can be useful for handing off work between team members, who would then make that test work under more complex configurations. Done right, the follow-on work need only focus on implementing the real task without breaking the already-working SCTs.
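One hypothetical way to express such filtering with pytest markers; the marker name and configuration check are assumptions, not a prescribed mechanism.

```python
import os
import pytest

# Assumed convention: tests marked fakes_only run solely in the all-fakes configuration.
fakes_only = pytest.mark.skipif(
    os.environ.get("TEST_CONFIG", "all_fakes") != "all_fakes",
    reason="not yet passing against real task implementations",
)

@fakes_only
def test_new_discount_rules():
    # Checked in early: works against fakes, real-task support to follow.
    ...
```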
This benefit can be extended to API consumers. Fakes can serve to provide early, functionally rich implementations of services without those consumers having to wait for a complete solution. Real-task implementations can then be introduced incrementally with little or no change to consumer code.

Running Remote

Because SCTs are embedded in the same executable space as the service code under test, all of it can run in the same process. This is beneficial for the initial design phases, including TDD, and running on the same machine provides a simple way to execute tests, even at the integration level. Beyond that, it can sometimes be useful to run client and server on different machines. This might be done, for example, to bring up a test client against a fully integrated running system in staging or production, perhaps also for load/stress testing. An additional use case is testing backward compatibility: a test client with a previous version of the SCTs can be brought up separately from, and run against, the newer versioned server to verify that the older tests still pass. Within an automated build/test pipeline, several versions can be managed this way.

Summary

Service Contract Tests (SCTs) are tests against services. Alternately Configured Tests (ACTs) define multiple test configurations that each provide a different task implementation set, and a single set of SCTs can be run against any test configuration. Even though SCTs can be run with a test configuration that is entirely in process, the flexibility offered by ACTs distinguishes them from traditional unit/component tests. SCTs and unit tests complement one another.

With this approach, test-driven development can be applied to service development. It begins by creating SCTs against the simplest possible in-process test configuration, which is usually also the fastest to run. Once those tests pass, they can be run against more complex configurations and ultimately against a test configuration of fully 'real' task implementations to achieve the traditional goals of integration or end-to-end testing. Leveraging the same set of SCTs across all configurations supports an incremental development process and yields great economies of scale.