In our Culture and Methodologies category, dive into Agile, career development, team management, and methodologies such as Waterfall, Lean, and Kanban. Whether you're looking for tips on how to integrate Scrum theory into your team's Agile practices or you need help prepping for your next interview, our resources can help set you up for success.
The Agile methodology is a project management approach that breaks larger projects into several phases. It is a process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.
There are several paths to starting a career in software development, including non-traditional routes that are now more accessible than ever. Whether you're interested in front-end, back-end, or full-stack development, we offer more than 10,000 resources that can help you grow your current career or develop a new one.
Agile, Waterfall, and Lean are just a few of the project-centric methodologies for software development that you'll find in this Zone. Whether your team is focused on goals like achieving greater speed, having well-defined project scopes, or using fewer resources, the approach you adopt will offer clear guidelines to help structure your team's work. In this Zone, you'll find resources on user stories, implementation examples, and more to help you decide which methodology is the best fit and apply it in your development practices.
Development team management involves a combination of technical leadership, project management, and the ability to grow and nurture a team. These skills have never been more important, especially with the rise of remote work both across industries and around the world. The ability to delegate decision-making is key to team engagement. Review our inventory of tutorials, interviews, and first-hand accounts of improving the team dynamic.
Kubernetes in the Enterprise
In 2022, Kubernetes became a central component for containerized applications, and it is nowhere near its peak. In fact, based on our research, 94 percent of survey respondents believe that Kubernetes will be a bigger part of their system design over the next two to three years. With the expectation of Kubernetes becoming more entrenched in systems, what do adoption and deployment methods look like compared to previous years? DZone's Kubernetes in the Enterprise Trend Report provides insights into how developers are leveraging Kubernetes in their organizations. It focuses on the evolution of Kubernetes beyond container orchestration, advancements in Kubernetes observability, Kubernetes in AI and ML, and more. Our goal for this Trend Report is to help inspire developers to leverage Kubernetes in their own organizations.
Software Verification and Validation With Simple Examples
Hands-On Agile #52: Jim Highsmith and the Agile Manifesto. On August 17, 2023, we had the opportunity to interview Jim Highsmith about his path to Agile product development: from the Wild West to co-authoring the Agile Manifesto. Jim has penned numerous books on the subject and was honored with the International Stevens Award in 2005. He is a prominent figure in the Agile community, co-authoring the Agile Manifesto and the Declaration of Interdependence for Project Leaders. Jim was pivotal in establishing the Agile Alliance and co-founding the Agile Leadership Network, serving as its inaugural president. Connect with Jim Highsmith on LinkedIn.
If you work in software development, you likely encounter technical debt all the time. It accumulates as we prioritize delivering new features over maintaining a healthy codebase. Managing technical debt, or code debt, can be a challenge. Approaching it the right way in the context of Scrum won't just help you manage your tech debt; it can allow you to leverage it to strategically ship faster and gain a very real competitive advantage. In this article, I'll cover:

- The basics of technical debt and why it matters
- How tech debt impacts Scrum teams
- How to track tech debt
- How to prioritize tech debt and fix it
- Why continuous improvement matters in tech debt

Thinking About Tech Debt in Scrum: The Basics

Scrum is an Agile framework that helps teams deliver high-quality software in a collaborative and iterative way. By leveraging strategies like refactoring, incremental improvement, and automated testing, Scrum teams can tackle technical debt head-on. But it all starts with good issue tracking. Whether you're a Scrum master, product owner, or developer, I'm going to share some practical insights and strategies for managing tech debt.

The Impact of Technical Debt on Scrum Teams

Ignoring technical debt can lead to higher costs, slower delivery times, and reduced productivity. Tech debt makes it harder to implement new features or updates because it creates excessive complexity. Product quality suffers in turn. Then maintenance costs rise, customer issues multiply, and customers become frustrated. Unmanaged technical debt has the potential to touch every part of the business.

Technical debt also brings the team down. It's a serial destroyer of morale. Ignoring tech debt or postponing it is often frustrating and demotivating. It can also exacerbate communication problems and create silos, hindering project goals. Good management of tech debt, then, is absolutely essential for the modern Scrum team.

How to Track Tech Debt

Agile teams that are successful at managing their tech debt identify it early and often. Technical debt should be identified:

- During the act of writing code. Scrum teams should feel confident accruing prudent tech debt to ship faster, so long as they track that debt immediately and understand how it could be paid off.
- During backlog refinement. This is an opportunity to discuss and prioritize the product backlog and have nuanced conversations about tech debt in the codebase.
- During sprint planning. How technical debt impacts the current sprint should always be a topic of conversation during sprint planning. Allocate resources to paying back tech debt consistently.
- During retrospectives. An opportunity to identify tech debt that has been accrued or that needs to be considered or prioritized.

Use an in-editor issue tracker, which enables your engineers to track issues directly linked to code. This is a weakness of common issue-tracking software like Jira, and it often undermines the process entirely.

Prioritizing Technical Debt in Scrum

There are many ways to choose what to prioritize. I suggest choosing a theme for each sprint and allocating 15-20% of your resources to fixing a specific subset of technical debt issues. For example, you might choose to prioritize issues based on:

- Their impact on a particular part of the codebase needed to ship new features
- Their impact on critical system functionality, security, or performance
- Their impact on team morale, employee retention, or developer experience
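To make the sprint-theme idea concrete, here is a minimal sketch of how a themed, capacity-capped subset of tracked debt issues could be selected. The TechDebtIssue shape, the impact scale, and the helper name are hypothetical, invented for illustration; they are not any specific tool's API.

Python
# Hypothetical sketch: selecting a themed subset of tech-debt issues
# for one sprint, keeping payback work within ~15-20% of capacity.
from dataclasses import dataclass

@dataclass
class TechDebtIssue:
    title: str
    file_path: str      # links the issue directly to code
    theme: str          # e.g., "checkout-module", "security", "dev-experience"
    impact: int         # 1 (minor) to 5 (blocks features or hurts morale)
    effort_points: int  # estimated story points to fix

def plan_debt_payback(issues, theme, sprint_capacity, debt_share=0.20):
    # Pick the highest-impact issues for one theme, capped at a share of capacity.
    budget = sprint_capacity * debt_share
    candidates = sorted(
        (i for i in issues if i.theme == theme),
        key=lambda i: i.impact,
        reverse=True,
    )
    selected, spent = [], 0
    for issue in candidates:
        if spent + issue.effort_points <= budget:
            selected.append(issue)
            spent += issue.effort_points
    return selected

backlog = [
    TechDebtIssue("Untangle pricing logic", "billing/pricing.py", "checkout-module", 5, 3),
    TechDebtIssue("Remove legacy auth flow", "auth/legacy.py", "security", 4, 5),
    TechDebtIssue("Split order-handler God class", "orders/handler.py", "checkout-module", 4, 2),
]

# With a 40-point sprint, roughly 8 points go to the "checkout-module" theme.
print(plan_debt_payback(backlog, "checkout-module", sprint_capacity=40))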
The headaches around issue resolution often stem from poor issue tracking. Once your Scrum team members have nailed an effective issue-tracking system that feels seamless for engineers, solving tech debt becomes much easier.

The Importance of Good Issue Tracking in Managing Technical Debt in Scrum

Good issue tracking is the foundation of any effective technical debt management strategy. Scrum teams must be able to track technical debt issues systematically to prioritize and address them effectively. Using the right tools can make or break a tech debt management strategy. Modern engineering teams need issue-tracking tools that:

- Link issues directly to code
- Make issues visible in the code editor
- Enable engineers to visualize tech debt in the codebase
- Create issues from the code editor (in Stepsize, for example)

Continuous Improvement in Scrum

Identify tech debt early and consistently, and address and fix it continuously. Use Scrum sessions such as retrospectives as an opportunity to reflect on how the team can improve its process for managing technical debt. Consider:

- Where does tech debt tend to accumulate?
- Is everybody following a good issue-tracking process?
- Are issues high-quality?

Regularly review and update the team's "Definition of Done" (DoD), which outlines the criteria that must be met for a user story to be considered complete. Refining the DoD increases the likelihood of shipping high-quality code that is less likely to result in technical debt down the line. Behavioral change is most likely when teams openly collaborate, supported by the right tools. I suggest encouraging everybody to reflect on their processes and actively search for opportunities to improve.

Wrapping Up

Managing technical debt properly needs to be a natural habit for modern Scrum teams. Doing so protects the long-term performance of the team and the product. Properly tracking technical debt is the foundation of any effective technical debt management strategy. By leveraging the right issue-tracking tools and prioritizing technical debt in the right way, Scrum teams can strategically ship faster, while also promoting better product quality and maintaining team morale and collaboration. Remember: technical debt is an unavoidable part of software development, but with the right approach and tools, it's possible to drive behavioral change and safeguard the long-term success of your team.
Your approach to DevOps is likely to be influenced by the methods and practices that came before it. For organizations that gave teams autonomy to adapt their process, DevOps would have been a natural progression. Where an organization has been more prescriptive in the past, people will look for familiar tools to run a DevOps implementation, such as maturity models. In this article, I explain why a maturity model isn't appropriate and what you should use instead.

What Is a Maturity Model?

A maturity model arranges groups of characteristics, like processes or activities, into a sequence of maturity levels. By following the groups from the easiest to the most advanced, an organization can implement all the required elements of the model. The process is a journey from adoption through to maturity. Maturity models:

- Provide a roadmap for adopting characteristics
- Make it easier to get started by suggesting a smaller initial set of characteristics
- Can be assessed to provide the organization with a maturity score

For example, a maturity model for riding a bike might have five levels of maturity:

1. Walk upright on two legs
2. Ride a balance bike with a walking motion
3. Ride a balance bike with both feet off the ground
4. Ride a pedal bike from a starting point facing downhill
5. Ride a pedal bike from a starting point facing uphill

The sequence of maturity levels is a useful roadmap to follow, and you may already be able to achieve the lower levels. Each maturity level is easier to reach from the level below, as the earlier levels provide a basis for increasing your skills and progressing to the next stage. You can also assess someone by asking them to demonstrate their ability at each level.

You can create a maturity model by designing the levels first and expanding each with characteristics, or you can collect all the characteristics together before arranging them into levels. You'll find maturity models commonly used as part of standards and their certification processes. Most process certifications require you to demonstrate that:

- You have documented your process
- People follow the documented process
- You regularly review and improve the process

When you plan to achieve a certification, your roadmap is clear; until you document the process, you can't tell whether people are following it.

Limitations of Maturity Models

You can use a maturity model to assess whether a set of activities is taking place, but not whether those activities impact your key outcomes. Maturity models are rigid and require you to adopt all characteristics to achieve maturity levels. You have to trust that following the model will bring you the same benefits experienced by the model's authors. The sequence of maturity levels might not work for everyone; it could slow down your progress or even have counter-productive outcomes.

A maturity model doesn't take into account the unique challenges facing your business; it may not even solve the kind of problems you're facing. It also defines an end point that may not be good enough. Maturity models are most commonly used in due-diligence frameworks to ensure suppliers meet a minimum standard for process or security. If you were cynical, you might argue they're used to ensure an organization can't be blamed when one of its suppliers makes a mistake.

In DevOps, the context and challenges faced by organizations and teams are so important that a maturity model is not an appropriate tool.
If you want to apply a maturity model to DevOps, you may need to adjust your mindset and approach, as there is no fixed end state to DevOps, and its capabilities shouldn't be adopted in a pre-determined order. Maturity models are not appropriate for DevOps because they:

- Assume there is a known answer to your current context
- Focus on arriving at a fixed end state
- Encourage standardization, not innovation and experimentation
- Have a linear progression
- Are activity-based

For DevOps, you need a different kind of model.

Capability Models

A capability model describes characteristics in terms of their relationship to an outcome. Rather than arranging sets of characteristics into levels, it connects them to the effect they have on a wider system outcome. Going back to riding a bike, a capability model would show that balance affects riding stability and steering, whereas walking has some bearing on the ability to pedal to power the bicycle. Instead of following the roadmap for learning to ride a bike, you would identify areas that could be improved based on your current attempts to ride.

If you were using a capability model, you wouldn't stop once you proved you could ride uphill. Capability models encourage you to continue your improvement efforts, just like Ineos Grenadiers (formerly Sky Professional Racing/Team Sky), who achieved seven Tour de France wins in their first ten years using their approach to continuous improvement, which they called marginal gains. A capability model:

- Focuses on continuous improvement
- Is multi-dimensional, dynamic, and customizable
- Understands that the landscape is always changing
- Is outcome-based

When you use a capability model, you accept that high performance today won't be sufficient in the future. Business, technology, and competition are always on the move, and you need a mindset that can keep pace.

Maturity vs. Capability Models

A maturity model tends to measure activities, such as whether a certain tool or process has been implemented. In contrast, capability models are outcome-based, which means you need to use measurements of key outcomes to confirm that changes result in improvements. For example, the DevOps capability model is aligned with the DORA metrics; using throughput and stability metrics helps you assess the effectiveness of improvements.

While maturity models tend to focus on a fixed, standardized list of activities, capability models are dynamic and contextual. A capability model expects you to select capabilities that you believe will improve your performance given your current goals, industry, organization, team, and the scenario you face at this point in time. You level up in a maturity model based on proficiency against the activities. In a capability model, you constantly add gains as you continuously improve your skills and techniques. These differences are summarized below:

Maturity model | Capability model
Activity-based | Outcome-based
Fixed          | Dynamic
Standardized   | Contextual
Proficiency    | Impact

The DevOps Capability Model

The DevOps capability model is the structural equation model (SEM), sometimes referred to as the big friendly diagram (BFD). It arranges the capabilities into groups and maps the relationships they have to outcomes. Each of the arrows describes a predictive relationship. You can use this map to work out which items will help you solve the problems you're facing. For example, Continuous Delivery depends on several technical capabilities, like version control and trunk-based development, and leads to increased software delivery performance and reduced burnout (among other benefits). If you find this version of the model overwhelming, the 2022 version offers a simpler view, with many of the groups collapsed. Using simplified views of the model can help you navigate it before you drill into the more detailed lists of capabilities.

How to Use the DevOps Model

Depending on which version you look at, the model can seem overwhelming. However, the purpose of the model isn't to provide a list of all the techniques and practices you must adopt. Instead, you can use the model as part of your continuous improvement process to identify which capabilities may help you make your next change. As the capability model is outcome-based, your first task is finding a way to measure the outcomes for your team and organization. Any improvement you make should eventually move the needle on these outcomes, although a single capability on its own may not make a detectable difference. The DORA metrics are a good place to start, as they use throughput and stability metrics to create a balanced picture of successful software delivery.
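As a rough illustration of what measuring throughput and stability can mean in practice, here is a minimal sketch that derives DORA-style numbers from deployment records. The Deployment shape, field names, and 30-day window are assumptions made for the example, not the official DORA definitions, and it assumes at least one deployment in the window.

Python
# Simplified sketch of DORA-style throughput and stability metrics.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class Deployment:
    committed_at: datetime                   # when the change was committed
    deployed_at: datetime                    # when it reached production
    caused_failure: bool = False             # did it degrade service?
    restored_at: Optional[datetime] = None   # when service was restored, if it failed

def dora_metrics(deployments, window_days=30):
    # Throughput: deployment frequency and lead time for changes.
    # Stability: change failure rate and mean time to restore.
    n = len(deployments)
    lead_hours = [
        (d.deployed_at - d.committed_at).total_seconds() / 3600 for d in deployments
    ]
    failures = [d for d in deployments if d.caused_failure]
    restore_hours = [
        (d.restored_at - d.deployed_at).total_seconds() / 3600
        for d in failures if d.restored_at
    ]
    return {
        "deployments_per_day": n / window_days,
        "mean_lead_time_hours": sum(lead_hours) / n,
        "change_failure_rate": len(failures) / n,
        "mean_time_to_restore_hours":
            sum(restore_hours) / len(restore_hours) if restore_hours else None,
    }

Tracked over successive sprints, these four numbers give an outcome-based signal of whether a newly adopted capability is actually helping, which is exactly what a maturity checklist can't tell you.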
In the longer term, it's best to connect your measurements to business outcomes. Whatever you measure, everyone involved in software delivery and operations needs to share the same goals.

Once you can measure the impact of changes, you can review the capability model and select something you believe will bring the biggest benefit to your specific scenario. The highest performers use this process of continuous improvement to make gains every year; they are never done and persistently seek new opportunities to build performance. This is why the high performance of today won't be enough to remain competitive in the future.

Conclusion

DevOps shouldn't be assessed against a maturity model, and you should be wary of anyone who tries to introduce one. Instead, use the structural equation model from Accelerate and the State of DevOps reports as part of your continuous improvement efforts. The DevOps capability model supports the need for constant incremental gains and encourages teams to experiment with their tools and processes. Happy deployments!
In the dynamic world of business, Agile methodologies have become increasingly popular as organizations seek to deliver high-quality products and services more efficiently. As Agile practices gain traction, it is crucial to measure the progress, quality, and performance of Agile projects to ensure their success. This article will delve into various Agile metrics and key performance indicators (KPIs), with real-world examples that can help organizations track and evaluate their Agile projects' effectiveness.

Understanding Agile Metrics and KPIs With Examples

Agile metrics and KPIs are quantifiable measures that offer insights into an Agile project's progress, performance, and quality. They assist teams in identifying areas for improvement, tracking progress toward goals, and ensuring the project remains on track. By gathering and analyzing these metrics, organizations can make data-driven decisions, optimize their processes, and ultimately achieve better results.

Key Agile Metrics and KPIs With Examples

Velocity: This metric measures the amount of work a team completes during a sprint or iteration. It is calculated by adding up the story points or effort estimates for all completed user stories. For example, if a team completes five user stories worth 3, 5, 8, 2, and 13 story points, their velocity for that sprint is 31. Velocity helps teams understand their capacity and predict how much work they can complete in future sprints.

Burn-Up and Burn-Down Charts: These charts visualize the progress of a sprint or project by showing the amount of work completed (burn-up) and the work remaining (burn-down). For instance, if a team has a sprint backlog of 50 story points and completes ten story points per day, the burn-down chart will show a decreasing slope as the team progresses through the sprint. These charts help teams monitor their progress toward completing the sprint backlog and provide an early warning if the project is off track.

Cycle Time: This metric measures the time it takes for a user story to move from the start of development to completion. Suppose a team begins working on a user story on Monday and completes it on Thursday; the cycle time for that story is four days. A shorter cycle time indicates that the team is delivering value to customers more quickly and is a sign of efficient processes.

Lead Time: This metric measures the time it takes for a user story to move from the initial request to completion. It includes both the time spent waiting in the backlog and the actual development time. For example, if a user story is added to the backlog on January 1st and completed on January 15th, the lead time is 15 days. Reducing lead time can help improve customer satisfaction and reduce the risk of scope changes.

Cumulative Flow Diagram (CFD): A CFD is a visual representation of the flow of work through a team's process. It shows the amount of work in each stage of the process, such as "To Do," "In Progress," and "Done." By analyzing a CFD, teams can identify bottlenecks, inefficiencies, and areas for improvement. For example, if the "In Progress" stage consistently holds a large number of items, it may indicate that the team is struggling with capacity or that work is not moving smoothly through the process.

Defect Density: This metric measures the number of defects found in a product relative to its size (e.g., lines of code or story points). Suppose a team delivers a feature with 1,000 lines of code and discovers ten defects; the defect density is 0.01 defects per line of code. A lower defect density indicates higher-quality software and can help teams identify areas where their quality practices need improvement.
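Since several of these metrics are simple arithmetic, they are easy to script. Here is a minimal sketch reproducing the worked examples above; the function names are illustrative choices, not a standard library.

Python
# Recomputing the worked examples above.

def velocity(completed_story_points):
    # Sum of story points for all user stories completed in the sprint.
    return sum(completed_story_points)

def defect_density(defects_found, lines_of_code):
    # Defects found relative to product size.
    return defects_found / lines_of_code

print(velocity([3, 5, 8, 2, 13]))  # 31, as in the velocity example
print(defect_density(10, 1000))    # 0.01 defects per line of code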
Escaped Defects: This metric tracks the number of defects discovered after a product has been released to customers. For example, if a team releases a new mobile app and users report 15 bugs within the first week, the team has 15 escaped defects. A high number of escaped defects may indicate inadequate testing or quality assurance processes.

Team Satisfaction: Measuring team satisfaction through regular surveys helps gauge team morale and identify potential issues that could impact productivity or project success. For example, a team might be asked to rate their satisfaction with factors such as communication, workload, and work-life balance on a scale of 1 to 5, with 5 being the highest satisfaction level.

Customer Satisfaction: Collecting customer feedback on delivered features and overall product quality is crucial for ensuring that the project meets customer needs and expectations. For instance, a company might send out surveys asking customers to rate their experience with a new software feature on a scale from 1 (very dissatisfied) to 5 (very satisfied).

Business Value Delivered: This metric measures the tangible benefits a project delivers to the organization, such as increased revenue, cost savings, or improved customer satisfaction. For example, an Agile project might deliver a new e-commerce feature that results in a 10% increase in online sales, representing clear business value.

Using Agile Metrics and KPIs Effectively

To maximize the benefits of Agile metrics and KPIs, organizations should:

- Choose the right metrics: Select metrics relevant to the project's goals and objectives, focusing on those that drive improvement and provide actionable insights.
- Establish baselines and targets: Identify current performance levels and set targets for improvement to track progress over time.
- Monitor and analyze data: Regularly review metric data to identify trends, patterns, and areas for improvement.
- Make data-driven decisions: Use metric data to inform decision-making and prioritize actions with the most significant impact on project success.
- Foster a culture of continuous improvement: Encourage teams to use metrics as a tool for learning and improvement rather than as a means of punishment or control.

Conclusion

Agile metrics and KPIs play a critical role in ensuring the success of Agile projects by providing valuable insights into progress, performance, and quality. By selecting the right metrics, monitoring them regularly, and using the data to drive continuous improvement, organizations can optimize their Agile processes and achieve better results. Real-world examples help illustrate the practical applications of these metrics, making it easier for teams to understand their importance and implement them effectively.
Working more than 15 years in IT, I have rarely met programmers who enjoy writing tests, and only a few people who use something like TDD. Is this really such an uninteresting part of the software development process? In this article, I'd like to share my experience of using TDD.

In most of the teams I worked with, programmers wrote code. Often, they didn't write tests at all or added them symbolically. The mere mention of the TDD abbreviation made programmers panic. The main reason is that many people misunderstand the meaning of TDD and try to avoid writing tests. It is generally assumed that TDD means the usual tests, just written before the implementation. But this is not quite true. TDD is a culture of writing code. This approach implies a certain order of solving a task and a specific way of thinking. TDD solves a task in loops or iterations. Formally, the cycle consists of three phases:

1. Writing a test that gives something to the input and checks the output. At this point, the test doesn't pass.
2. Writing the simplest implementation with which the test passes.
3. Refactoring: changing the code without changing the test.

The cycle repeats until the problem is solved.

The TDD Cycle

I use a slightly different algorithm. In my approach, refactoring is most often part of the cycle itself. That is, I write the test, and then I write the code. Next, I adjust the test again and write more code, because refactoring often requires edits to the test as well (various mocks, generation of instances, links to existing modules, etc.), but not always.

I won't describe the general algorithm of what we will do the way textbooks do. I'll just show you an example of how it works for me.

Example

Imagine we got a task. No matter how it is described, we can clarify it ourselves, coordinate it with the customer, and solve it. Let's suppose the task is described something like this: "Add an endpoint that returns the current time and user information (id, first name, last name, phone number). Also, it is necessary to sign this information based on a secret key."

I will not complicate the task for this demonstration. But in real life, you may need to make a full-fledged digital signature, supplement it with encryption, and add this endpoint to an existing project. For academic purposes, we will create it from scratch, using FastAPI.

Most programmers just start working on such a task without detailed study. They keep everything they can in their head. After all, such a task does not need to be divided into subtasks, since it is quite simple and quickly implemented. While working on it, they clarify the requirements of stakeholders and ask questions. And at the end, they write tests anxiously. But we will do it differently. It may seem unexpected, but let's take something from the Agile methodology. Firstly, this task can be divided into logically complete subtasks, and all the requirements can be clarified immediately. Secondly, it can be done iteratively, having something working at each step (even if incorrect) that can be demonstrated.

Planning

Let's start with the following partition.

The First Subtask

Make an empty FastAPI app work with one method.

Acceptance Criteria: There is a FastAPI app, and it can be launched. The GET request "/my-info" returns a response with code 200 and body {} (empty JSON).

The Second Subtask

Add a model/User object. At this stage, it will just be a pydantic schema for the response.
You will have to agree with the business on the names of the fields and whether it is necessary to somehow convert the values (filter, clean, or something else).

Acceptance Criteria: The GET request "/my-info" returns a response with code 200 and body {"user":{"id":1,"firstname":"John","lastname":"Smith","phone":"+995000000000"}}.

The Third Subtask

Add the current time to the response. Again, we need to agree on the time format and the name of the field.

Acceptance Criteria: The GET request "/my-info" returns a response with code 200 and body {"user":{added earlier},"timestamp":1691377675}.

The Fourth Subtask

Add a signature. Immediately, some questions for the business appear: Where to add it? How to form it? Where to get the key? Where to store it? Who has access? And so on. As a result, we use a simple algorithm:

1. We get base64 from the JSON response body.
2. We concatenate it with the private key. At first, we use an empty string as the key.
3. We take md5 of the resulting string.
4. We add the result to the X-Signature header.

Acceptance Criteria: The GET request "/my-info" returns the response described earlier without changes, but with an additional header: "X-Signature":"638e4c9e30b157cc56fadc9296af813a". For this step, the X-Signature is calculated manually. Base64 = eyJ1c2VyIjp7ImlkIjoxLCJmaXJzdG5hbWUiOiJKb2huIiwibGFzdG5hbWUiOiJTbWl0aCIsInBob25lIjoiKzk5NTAwMDAwMDAwMCJ9LCJ0aW1lc3RhbXAiOjE2OTEzNzc2NzV9. Note that the endpoint returns hard-coded values.
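As a side note, the manually calculated signature can be sanity-checked with a few lines of Python before any application code exists. This is just a scratch script following the algorithm above, assuming the response body is serialized compactly (no spaces), with an empty private key for this step.

Python
# Scratch check of the manually calculated X-Signature value.
import base64
import hashlib

# Compact JSON body from the acceptance criteria.
body = b'{"user":{"id":1,"firstname":"John","lastname":"Smith","phone":"+995000000000"},"timestamp":1691377675}'
b64data = base64.b64encode(body)                 # should match the Base64 string above
digest = hashlib.md5(b64data + b"").hexdigest()  # empty key for now
print(digest)  # expected: 638e4c9e30b157cc56fadc9296af813a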
Python @app.get("/my-info", response_model=MyInfoResponse) async def my_info(): my_info_response = MyInfoResponse( user=User( id=1, firstname="John", lastname="Smith", phone="+995000000000", ), ) return my_info_response ✅ Now, the test passes. And we got a working code again. The Third Subtask We add "timestamp": 1691377675 to the test. Python result = { "user": { "id": 1, "firstname": "John", "lastname": "Smith", "phone": "+995000000000", }, "timestamp": 1691377675, } ❌ The test fails again. We change the code so that the test passes. To do this, we add timestamp to the scheme. Python class MyInfoResponse(BaseModel): user: User timestamp: int We add its initialization to the main file. Python my_info_response = MyInfoResponse( user=User( id=1, firstname="John", lastname="Smith", phone="+995000000000", ), timestamp=1691377675, ) ✅ The test passes again. The Fourth Subtask We add the "X-Signature" header verification to the test: "54977504fbe6c7aec318722d9fbcaec8". Python assert response.headers.get("X-Signature") == "638e4c9e30b157cc56fadc9296af813a" ❌ The test fails again. We add this header to the application's response. To do this, we add middleware. After all, we will most likely need a signature for other endpoints of the application. But this is just our choice, which in reality can be different so as not to complicate the code. Let's do it to understand this. We add import Request. Python from fastapi import FastAPI, Request And the middleware function. Python @app.middleware("http") async def add_signature_header(request: Request, call_next): response = await call_next(request) response.headers["X-Signature"] = "638e4c9e30b157cc56fadc9296af813a" return response ✅ The test passes again. At this stage, we have received a ready-made working test for the endpoint. Next, we will change the application, converting it from a stub into a fully working code while checking it with just one ready-made test. This step can already be considered as refactoring. But we will do it in exactly the same small steps. The Fifth Subtask Implement signature calculation. The algorithm is described above, as well as the acceptance criteria, but the signature should change depending on the user's data and timestamp. Let's implement it. ✅ The test passes, and we don't do anything to it at this step. That is, we do a full-fledged refactoring. We add the signature.py file. It contains the following code: Python import base64 import hashlib def generate_signature(data: bytes) -> str: m = hashlib.md5() b64data = base64.b64encode(data) m.update(b64data + b"") return m.hexdigest() We change main.py. We add import. Python from fastapi import FastAPI, Request, Response from .signature import generate_signature We change middleware. Python @app.middleware("http") async def add_signature_header(request: Request, call_next): response = await call_next(request) body = b"" async for chunk in response.body_iterator: body += chunk response.headers["X-Signature"] = generate_signature(body) return Response( content=body, status_code=response.status_code, headers=dict(response.headers), media_type=response.media_type, ) Here is the result of our complication, which wasn’t necessary for us to do. We did not get the best solution since we have to calculate the entire body of the response and form our own Response. But it is quite suitable for our purposes. ✅ The test still passes. The Sixth Subtask Replace timestamp with the actual value of the current time. 
Acceptance Criteria: The timestamp in the response returns the actual current time value, and the signature is generated correctly. To generate the time, we will use int(time.time()).

First, we edit the test. Now, we have to freeze the current time. Imports:

Python
from datetime import datetime

from freezegun import freeze_time

We make the test look like the one below. Since freezegun accepts either an object or a string with a date, but not a unix timestamp, it has to be converted.

Python
def test_my_info_success():
    initial_datetime = 1691377675
    with freeze_time(datetime.utcfromtimestamp(initial_datetime)):
        response = client.get("/my-info")
        assert response.status_code == 200
        result = {
            "user": {
                "id": 1,
                "firstname": "John",
                "lastname": "Smith",
                "phone": "+995000000000",
            },
            "timestamp": initial_datetime,
        }
        assert response.json() == result
        assert response.headers.get("X-Signature") == "638e4c9e30b157cc56fadc9296af813a"

Nothing has changed. ✅ That's why the test still passes. So, we continue refactoring. Changes to the main.py code. Import:

Python
import time

In the response, we replace the hard-coded time with a method call.

Python
        timestamp=int(time.time()),

✅ We launch the test; it works.

In tests, one often tries to dynamically generate input data and write duplicate functions to calculate the expected results. I don't share this approach, as it can potentially contain errors and requires testing as well. The simplest and most reliable way is to use input and output data prepared in advance. The only things that can be used alongside are configuration data, settings, and some proven fixtures. Now, we will add the settings.

The Seventh Subtask

Add a private key. We will take it from the environment variables via settings.

Acceptance Criteria: There is a private key (not an empty string). It is part of the signature generation process according to the algorithm described above. The application gets it from the environment variables. For the test, we use the private key: 6hsjkJnsd)s-_=2$%723. As a result, our signature changes to: 479bb02f0f5f1249760573846de2dbc1.

We replace the signature verification in the test:

Python
    assert response.headers.get("X-Signature") == "479bb02f0f5f1249760573846de2dbc1"

❌ Now, the test fails. We add the settings.py file to get the settings from environment variables.

Python
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    security_key: bytes

settings = Settings()

We add the code for using this key to signature.py. Import:

Python
from .settings import settings

And we replace the concatenation line with:

Python
    m.update(b64data + settings.security_key)

❌ The test still fails. Before running the tests, we need to set the environment variable with the correct key. This can be done right before the call, for example, like this: export security_key='6hsjkJnsd)s-_=2$%723'

✅ Now, the test passes.

I would not recommend setting a default value in the settings.py file. The variable must be defined. Setting a default incorrect value can hide an error in production if the variable is not set during deployment: the application will start without errors, but it will give incorrect results. However, in some cases, a working application with incorrect functionality is better than a 503 error. It's up to you as a developer to decide. The next steps may be replacing the stub of the User object with real values from the database and writing additional validation tests and negative scenarios.
In any case, you will have to add more acceptance tests at the end. The most important thing here is dividing the task into micro-tasks, writing a simple test for each subtask, then writing the application code, and after that, refactoring if necessary. This order of development really helps you:

- Focus on the problem
- See the result of the subtask clearly
- Quickly verify the written code
- Reduce negativity when writing tests
- Always have at least one test per task

As a result, there are fewer situations where a programmer "overdoes it" and spends much more time solving a problem than they could with a structured approach. Thus, the development time of a feature is reduced, and the quality of the code is improved. In the long term, changes, refactoring, and package version updates are easily controlled and implemented with minimal losses.

And here is what's important: TDD should improve development, make it faster, and strengthen it. This is what the word "Driven" in the abbreviation means. Therefore, it is not necessary to try to write a complete test or an acceptance test for the entire task before the start of development. An iterative approach is needed. Tests are only needed to verify the next small step in development. TDD helps answer the question: how do I know that I have achieved my goal (that is, that the code fragment I wrote works)?

The examples can be found here.
I've been working in IT in various roles for more than 12 years, and I have witnessed and experienced how release cycles became faster and faster over time. Seeing recent (and not so recent) trends in competitiveness, the degradation of attention spans, the advertisement of short-term satisfaction, etc., I wonder where this increase in speed will lead in terms of software quality, end-user experience and satisfaction, and engineer burnout, and whether it is possible to slow down a little.

What Do We Want? Anything! When Do We Want It? Yesterday!

Two things come to my mind regarding this topic: shortened attention spans and the desire for short-term perceived satisfaction. I hear people complain about shortened attention spans, which are due to many factors like social media (see products like TikTok, YouTube Shorts, and many others), and I feel as if more and more products pop up that promote and worsen this. At the same time, I also see people wanting and promoting short-term or immediate satisfaction, be it anything, really. With online purchases, same-day shipping, online trading, 30-40 year mortgages to get that house now, and more, people nowadays want everything, and they want it yesterday. Long-term planning and long-term satisfaction are becoming less of a thing, and that is, to a degree, mirrored by the IT sector as well.

Drawing a parallel with software releases: nowadays, continuous delivery and deployment are very prominent, along with containerization, and with all that automation, production deployments can happen every hour, every 30 minutes, or even more often. Although this is definitely a good thing for deploying bug fixes or security fixes, I wonder how much benefit it really has when it comes to deploying features and feature improvements. Does the newly deployed code actually benefit your end-users (whether it's for competing with another company or not), or benefit the company's engineers by improving or laying the foundation for further development (I see no problem with this), or is it deployed just because of a perceived or false urgency coming from higher-ups or from the general state of the current digital market, to "compete" with others?

Does One Actually Know What They Want?

In childhood, we are (ideally) taught that we don't necessarily get everything we want, and not necessarily when we want it; we have to work hard for many things, and the results and success will materialize sometime in the future, and as periodic little successes along the way. I separate wants into two categories: whether your customers know what they want, and whether you or your company know what your customers want.

As for the former: when your customers are satisfied with your product, that is awesome. But when you always give them everything they want, and/or immediately, you might end up with cases like this: it is a problem if a certain video game is not released on time, but it is also a problem if it is released on time, under pressure, with questionable quality. So, customers don't even necessarily know what they want. (If you are a Hungarian reader, you might be familiar with the song called Az a baj from Bëlga.)

As for the latter, at least in the world of outsourcing, there is a distinction: the client pays you for delivering what they want to their customers, but your children don't. Clients might leave like an offended child (and might never come back) if they don't get what they want when they want it, even if that thing is unrealistic or, even more, not quite legal.
But that is something that can be shaped by great managers, Product Owners, Business Analysts, etc.

On the Topic of Analytics

It is one thing that end-users, like children in a candy store, don't necessarily know what they want. But do companies themselves actually know what their customers want, or do they just throw things at the wall, hoping that something will stick? Companies can get their hands on so much analytics data that it is no easy feat to process it, organize it, and actually do something with it. If companies don't have the (right) people, or those people burn out and leave, they will have no proper clue of what their customers want and in what way. This isn't helped by the fact that, in some cases, there may be a large amount of inconclusive or contradictory information on how certain features are received by customers.

Take, for example, one of my previous projects: there were several hundred active multi-variant tests running on the site, in many cases with 5, 6, or more variants at the same time, along with tens of active SiteSpect tests in the same area. How you draw conclusive results from them regarding customer conversion rates and what works for your users is magic to me. All of this can result in potentially premature decision-making and can lead to uncertainty.

What About Slowing Down?

It may be a good idea to consider taking a step back, getting some perspective on how your project operates, and slowing down the release cadence a little. Whether it is for better care of your customers and your project members, or you just want the project to get more money from users, releasing less frequently may be a good idea... if you can reallocate time, money, and effort properly. I know slowing down the release cadence is not applicable to all domains and projects, and it might suit small, big, open-source, greenfield, startup, commercial, etc. projects differently. But the option is still there, and it doesn't hurt to think about its potential benefits.

A Personal Example

I personally develop plugins for JetBrains IDEs, and I like to bulk-release new features and improvements. It gives me more time and headspace to come up with ideas to further polish features, so users can get a better version of them right away instead of getting them in chunks. This also produces less technical and feature debt for me and less paperwork, e.g., for tracking and maintaining issues on GitHub. It also saves me time because I go through the release process less frequently, and I produce fewer release-related artifacts over time compared to, for example, releasing each new feature individually. It also means fewer plugin update cycles for IDE users. However, when it comes to important or critical bug fixes, I do release them quickly.

Question Time

I know it can work quite differently for companies with hundreds of engineers finishing features and code changes each day and wanting to release those features into the wild soon after they are finished. But even in that case, it might be a good idea to slow down a bit. It reminds me of publishing music: you can either drop a hit single now and then and later wrap them up and call it an "album," or you can work on a bigger theme for a while and publish it all at once as a real album. I also realize that in regulated domains, frequent and fast releases are required to keep up with regulations and legal matters. (I still remember when the cookie policy and GDPR dropped.)
Now, let me pose a few questions you might want to explore deeper:

- What if you released less frequently? Do you need a release every minute/hour/day? Would a one-week/two-week/one-month/... cadence be better suited for your project? Would it make sense for your domain and project? Do you actually have the option to do so?
- What advantages and disadvantages would it have for your engineering and testing teams' mental health? Would they feel more ownership of the features they develop?
- If you have manual testing for each release, would less frequent releases have a positive effect on your testing team? Would they have more time to focus on less tedious tasks? Would they have more time to migrate manual test cases to automated ones?
- What effect would it have on your end-users'/customers' satisfaction? On your customers' conversion rate? On your infrastructure and code/feature quality? On your business in terms of customer retention, income, and growth?
- If you are on a cloud infrastructure, would less frequent deployments lower the costs of operation?

Let me also go a little deeper into two specific items from this list that I have had experience with before.

Feeling Ownership of the Product

I'm sure you've been in a situation, or at least heard someone, after finishing a bigger feature, say something like, "We could also do this and this!", "We should also add this!", and their variations. When you don't have the time to enjoy what you've accomplished, and you are instantly thinking about the next steps, it's difficult to feel ownership of the feature you've just finished. If engineers had the option to work longer on a feature and release it in a better state in fewer releases, I think that feeling of ownership and accomplishment could become stronger. (Unless you cannot relate to the feature or the whole domain at all for any reason.) On a related note, I recommend watching the YouTube video called Running a Marathon, one mile every hour from Beau Miles, from which my favorite quote is, "Been meaning to do this for two years. 10-minute job."

Manual Release Regression

Years ago, I was on a project in the hotel accommodation domain, where the website we developed was also used as a platform, so other companies could pay our project's company to create branded websites for them. The creation and configuration of these sites were handled by a dedicated team, but testing affected many other teams. Almost every two weeks, we received an email from that team asking us if we would be kind enough to "do a slight regression" (automated and manual) in our area on those (up to ~20) new branded sites and/or points of sale. I know that site creation was an on-demand process, and it was necessary to serve those customers quickly. That is fine. However, many people would have benefited if those manual regressions had occurred, e.g., only once a month instead of bi-weekly. Even with a bigger scope and workload, it would have required less effort from our team (context switching, reorganizing sprint activities, communication overhead, etc.), especially since there wasn't always a heads-up about when we would have to do it. Thus, not much planning could happen beforehand.

Closing Thoughts

Many more aspects could have been mentioned and taken into account, but covering every aspect of this topic was not the intention of this article. Hopefully, you leave with some of these questions and ideas, thinking about how to answer them or apply them to your projects.
This article came to life with the help of Zoltán Limpek. I appreciate your feedback and help.
If you're considering working in DevOps, you're likely aware that it can be a challenging and rewarding field. The good news is that there are several key steps you can take to launch a successful career. Starting a successful career in DevOps requires more than just technical skills. It's equally important to acquire industry-specific knowledge, tools, and methodologies to maximize the impact of your technical skills. One of the most important things you can do is dive in with determination and adaptability.

1. Understand the Fundamentals of DevOps

DevOps is a term that has been gaining popularity in the world of tech. It is a combination of the cultural and technical aspects of the software development process. In simple terms, DevOps is a way of deploying and managing technology. It focuses on collaboration, communication, automation, monitoring, and fast feedback. DevOps brings together the development and operations teams to work on the same projects with the same goals in mind. The main idea behind DevOps is to streamline the software development process, reduce inefficiencies, and improve the speed of delivery. It is a new way of thinking about software development that has transformed many organizations, especially those that leverage distributed systems and cloud computing.

2. Acquire the Necessary Technical Skills

Any aspiring DevOps professional must learn essential DevOps tools such as Jenkins, Docker, Kubernetes, and Ansible or Terraform. Each of these tools provides a unique feature set that allows for better processes and automation in software development. It is important to understand how these components interact with each other to create an efficient system. Infrastructure as Code and CI/CD pipelines are the backbone of the DevOps culture and provide the foundation for streamlined and efficient software development processes. Infrastructure as Code makes the entire infrastructure programmable, making it easier to manage and reproduce. Continuous integration allows frequent code changes and testing, ensuring that the code is always functional and improving. Continuous deployment is the heart of DevOps, as it allows developers to release new features and updates in a timely and efficient manner. Without a deep understanding of these core concepts, aspiring DevOps engineers will struggle to succeed in their roles and keep up with the demands of the industry.

3. Gain Practical Experience

When it comes to starting a successful career in any technical field, gaining practical experience is essential. Learning theory is only useful to a certain extent. To truly understand the ins and outs of the day-to-day responsibilities, it is necessary to have hands-on experience. One way to gain practical experience is by joining open-source projects or contributing to community initiatives. This offers an opportunity to learn from others while also building a portfolio of work. Pursuing internships or entry-level positions is another great way to gain practical experience. This can help you understand how to work on real projects with an experienced team, further developing your skills. Finally, creating personal projects can serve as a way to practice and demonstrate skills to potential employers. Having a well-maintained GitHub repository also helps you build your online presence.

4. Attend DevOps Events

Networking has always been an important aspect of any industry.
It allows professionals to broaden their horizons, generate new ideas, and expand their knowledge. As a DevOps professional, you can benefit greatly from attending popular events, webinars, and meetups dedicated to discussing and showcasing breakthrough approaches and practices. Follow a DevOps conference list to make sure you don't miss anything happening near you, and add the most interesting events to your calendar. These events can provide an opportunity to connect with people who share your interests and aspirations while also honing your skills through discussions, workshops, and training sessions. You may also discover new technologies, methodologies, and solutions that could help you become more competitive and successful in your work.

5. Continuously Update Your Skills

Continuous learning is no longer just an option; it's necessary to succeed personally and professionally. This is where platforms like Udemy, Coursera, and Pluralsight come in handy, offering advanced DevOps courses that help professionals stay up to date with the latest industry developments. With the help of these courses, you can continuously update your skills without having to sacrifice your other commitments. This point also ties in nicely with gaining more practical experience. Consider challenging yourself with new problems to solve to learn even more while building your technical portfolio.

6. Understand the Broader Tech Ecosystem

Having an interdisciplinary knowledge base is essential for top performance, not just from a technical standpoint but also from an organizational one. This means that you should have at least a basic understanding of different aspects of technology, like networks, databases, coding languages, and the tools used in development environments, so you can identify potential issues before they become real problems. Moreover, great communication between colleagues who use different technologies is key to successful collaboration within teams; interdisciplinary knowledge is crucial here as well. Understanding the broader tech ecosystem will help you come up with creative solutions to complex problems and develop innovative approaches that address customer needs better than competitors do. In practice, this means keeping active track of new trends, such as AI/ML applications or serverless computing, so that you can adjust strategies accordingly when these new mechanisms are introduced into your organization's systems and craft smarter solutions faster than ever before.

Wrapping Up

DevOps can be a challenging yet rewarding path, with endless opportunities to grow and learn. The key to success lies in your determination and adaptability. The world of DevOps is constantly evolving, and the only way to stay ahead of the game is to continuously learn and improve your skills. Launching your career requires taking key steps, such as learning essential coding languages, earning relevant certifications, and gaining practical experience through internships or personal projects. So don't hesitate. Jump in with both feet and embrace the ever-changing landscape of DevOps.
Since technology's rapid advancement outpaces conventional professional growth, teams must seek additional learning opportunities to succeed. Lead software developers give their teams the tools to succeed when they invest in them.

Why Should You Promote Continuous Learning?

You should promote continuous learning opportunities because they are essential for industry relevance and professional advancement. Software development techniques and skills are in constant flux, so your employees should be, too. Stagnating in a technical role guarantees falling behind the rapid pace of technological change. If a software development team is going to maintain up-to-date skills and the highest possible productivity, it needs to absorb new industry information continuously. While developers are technically capable of doing so independently, only you can ensure purposeful, quantifiable growth. As a result, it's your responsibility to foster a learning culture.

How Is a Continuous Learning Environment Beneficial?

A continuous learning environment provides software developers with the latest skills, techniques, and knowledge, and it accomplishes many things individual efforts cannot. Promoting an ongoing learning environment is beneficial because it:

- Future-proofs your employees: By 2025, over 50% of all workers will need to upskill to match the pace of technological advancements.
- Enhances your team's collaboration: Continuous learning keeps everyone on the same page and incentivizes them to overcome obstacles together.
- Increases job satisfaction: Investing in your workers' development heightens their job satisfaction; since they remain more content in their roles, retention also increases.
- Increases your team's productivity: People who are happy with their positions are drastically more productive and engaged than those who aren't.
- Develops professional knowledge: Software developers must constantly absorb new information to avoid stagnating in their roles, and they must be ready for when businesses overhaul their tech stacks and replace legacy technology with modern alternatives.

Lead developers who foster a collaborative, adaptive learning culture can ensure their department's success while helping their colleagues grow professionally and reach their highest potential.

Tips for Fostering a Continuous Learning Environment

Adequately promoting and supporting a continuous learning environment takes dedication and reliable access to relevant resources. You must also understand which material and format will motivate your employees.

Lead by Example

As a lead software developer, it's up to you to set an example. Showing your commitment to others' professional development builds crucial leadership skills and deepens your own understanding. It's also essential to stay current with emerging technology so you know where to direct the other developers' learning. Alternatively, you can become an in-house mentor and guide your team members personally: it gives them an accessible, trustworthy teacher and lets you shape their learning path. If a senior software developer fits the role better, you can delegate the responsibility to them.

Share Knowledge

Collaboration is crucial because building a learning environment requires group effort.
Ensure the other developers have a way to share knowledge, whether through meetups, pair programming sessions, or code reviews. An open line of discussion can help them move past mental obstacles and identify potential growth areas.

Provide Resources

Consider providing educational resources to make learning accessible; a wide variety of free and paid options are available. Whether you want your employees to practice coding or try new tools, there's something for everyone. Passive resources are an excellent fit for developers with high workloads since they require little effort: listening to a podcast on the latest trends and emerging technologies takes no action, yet still teaches something new. Active resources, on the other hand, require direct engagement: for example, you might use open-source software to train the other developers on a new programming language. While that demands a bigger time commitment, hands-on learning transfers more directly to job duties.

Seek Challenging Experiences

You can only progress by pushing out of your comfort zone toward unfamiliar techniques and tools. Seeking challenging experiences prepares you and your developers to adapt to real-world industry changes. It can also help your department with day-to-day tasks: employees who are happy with their work are nearly 20% more accurate in their duties. Volunteering for coding challenges or hackathons is a great way to expand your horizons, test your skills, and get a quantifiable metric to use when setting goals, and collaborative experiences enhance your department's teamwork.

Attend Growth Events

Growth events give team leaders a straightforward way to develop new skills and techniques. Lunch-and-learns, online courses, and conferences are all great places to upskill, whether you want everyone to learn how to leverage artificial intelligence or pick up a new programming language. You can get your employees to attend even if their schedules differ: digital courses and online boot camps provide an asynchronous learning environment where everyone moves at their own pace. You could then assign a monthly task and schedule time to get together and discuss the lesson.

Invest in a Learning Culture

When you invest in a workplace culture of continuous learning, you help your employees gain critical insight into their positions and industry while maintaining a productive, upskilled team. Since constant professional growth is necessary for developers, take the lead in setting expectations and tracking progress.
Agile methodologies have genuinely transformed the landscape of service delivery and tech companies, ushering in an era of adaptability and flexibility that matches the demands of today's fast-paced business world. Their significance runs deep: they not only streamline processes but also foster a culture of ongoing improvement and collaboration.

Within service delivery, Agile methodologies introduce a dynamic approach that empowers teams to address evolving client needs swiftly and effectively. Unlike conventional linear models, Agile encourages iterative development and constant feedback loops. This iterative nature ensures that services are refined in real time, allowing companies to quickly adjust their strategies based on market trends and customer preferences. In the tech sector, characterized by innovation and rapid technological advancement, Agile methodologies play a pivotal role in keeping companies on the cutting edge. By promoting incremental updates, short development cycles, and a customer-focused mindset, Agile enables tech companies to incorporate new technologies or features into their products and services quickly, positioning them as frontrunners in a highly competitive industry. Ultimately, Agile methodologies offer a structured yet flexible approach to project management and service delivery, enabling companies to handle complexity more effectively and adapt quickly to market changes.

Understanding Agile Principles and Implementation

Agile methodologies include Scrum, Kanban, Extreme Programming (XP), Feature-Driven Development (FDD), Dynamic Systems Development Method (DSDM), Crystal, Adaptive Software Development (ASD), and Lean Development. Whichever methodology you choose, each contributes to efficiency and effectiveness across the software development journey. Agile methodologies are underpinned by core principles that set them apart from traditional project management approaches, notably:

- Close client interaction throughout development, ensuring alignment and avoiding miscommunication.
- Responsive adaptation to change, given the ever-evolving nature of markets, requirements, and user feedback.
- Effective, timely team communication, which is pivotal for success.
- Embracing deviations from the plan as opportunities to improve the product and the collaboration around it.

Agile's key distinction from rigidly sequential work lies in its combination of speed, flexibility, quality, adaptability, and continuous improvement of results. Importantly, implementation varies across organizations: each can tailor its approach to its specific requirements, culture, and project nature, and the approach itself can evolve as market dynamics change during the work. The primary challenge of adopting Agile is initiating the process from scratch and conveying the benefits of an alternative approach to stakeholders. The most significant reward is a progressively improving outcome: better team communication, client trust, reduced risk impact, increased transparency, and openness.

Fostering Collaboration and Communication

Effective communication is the backbone of any successful project. It's imperative to stay constantly synchronized and to know whom to approach when challenges arise that aren't easily resolved.
Numerous practices facilitate this, including daily meetings, planning sessions, and task grooming (involving all stakeholders in the tasks). Retrospectives also play a pivotal role, providing a platform to discuss the sprint's positives, address the challenges that arose, and find solutions collaboratively. Every company can select the artifacts that fit its needs. Maintaining communication with the client is critical, as the team must be aware of plans and the overall business trajectory. Agile practices foster transparency and real-time feedback, resulting in adaptive, client-centric service delivery:

- Iterative development keeps the client informed about each sprint's outcomes.
- Demos of completed work give the client a gauge of project progress and alignment with expectations.
- Close interaction and feedback loops with the client are central during development.
- Agile artifacts, such as daily planning, retrospectives, and grooming, facilitate efficient coordination.
- Continuous integration and testing keep the product stable amid regular code changes.

Adapting To Change and Continuous Improvement

Change is an undeniable reality in today's ever-evolving business landscape. Agile equips your team with the flexibility needed to accommodate evolving requirements and shifting client needs in service delivery. Our operational approach at Innovecs involves working in short iterations, or sprints, consistently delivering incremental value within short timeframes. This empowers teams to respond promptly to changing requirements and adjust priorities based on valuable customer input. Agile not only enables the rapid assimilation of new customer requirements and preferences but also nurtures an adaptive, collaborative service delivery approach. Continuous feedback, iterative development, and a culture centered on learning and improvement keep Agile teams nimble, delivering impactful solutions tailored to a dynamic business landscape. A cornerstone of Agile is continuous improvement: as an organization, we cultivate an environment of learning and iteration, where experimenting with new techniques and tools becomes an engaging challenge for the team, and the satisfaction of successful results further fuels that pursuit.

Measuring Success and Delivering Value

Agile places a central focus on delivering substantial value to customers, so measuring the success of service delivery in terms of customer satisfaction and business outcomes is paramount. This assessment can take several forms:

- Feedback loops and responsiveness: Surveys and feedback mechanisms foster transparency and prompt responses; above all, a successful product amplifies customer satisfaction.
- Metrics analysis: Evaluating customer satisfaction and business metrics empowers organizations to make informed choices, recalibrate strategies, and continually improve their services to retain a competitive edge in the market.

We encountered a specific scenario where Agile methodologies yielded remarkable service delivery improvements and tangible benefits for our clients: my suggestion to introduce two artifacts, task refinement and demos, produced transformative outcomes.
The refinement step bolstered planning efficiency and led to on-time sprint deliveries, while the demos kept clients consistently abreast of project progress. In a market characterized by rapid, unceasing change, preparedness for any scenario is key. Flexibility and unwavering communication are vital to navigating uncertainty; being adaptable and maintaining open lines of dialogue are bedrock principles for achieving exceptional outcomes. With clients, transparency is paramount: delivering work that exceeds expectations, and always aiming to go a step further than anticipated, reinforces our commitment to client satisfaction.
As a Product Owner (PO), your role is crucial in steering an agile project toward success. However, it's equally important to be aware of the pitfalls that can lead to failure. Keep the GIGO (Garbage In, Garbage Out) effect in mind: no good product can come from bad design.

On Agile and Business Design Skills

Lack of Design Methodology Awareness

One of the surest steps toward failure is disregarding design methodologies such as Story Mapping, Event Storming, Impact Mapping, or Behavior-Driven Development. Treating these methodologies as trivial, or underestimating their complexity and power, can hinder your project's progress. Instead, take the time to learn, practice, and seek coaching in these techniques to create well-defined business requirements. For example, I once worked on a project where the PO practiced Story Mapping without even involving the end-users.

Ignoring Domain Knowledge

Neglecting to understand your business domain can be detrimental. Don't skip internal training sessions, Massive Open Online Courses (MOOCs), or field observation workshops. Read domain reference books and, more generally, embrace domain knowledge to make informed decisions that resonate with both end-users and stakeholders. To continue the previous example: the PO, who was new to the project's domain (though having basic knowledge), missed an entire use case with serious architectural implications due to this lack of skills, requiring significant software changes after only a few months.

Disregarding End-User Feedback

Overestimating your own understanding while undervaluing end-user feedback invites the Dunning-Kruger effect. Embrace humility and actively involve end-users in decision-making to create solutions that truly meet their needs. Failing to consider real-world user constraints and work processes leads to impractical designs, so analyze actual operational user experience, collect feedback, and adjust your approach accordingly. Don't imagine users' requirements and issues; ask the actual users who deal with real-world complexity every day. For instance, a PO I worked with ignored or postponed many obvious GUI issues raised by end-users, rendering the application nearly unusable. These UX issues included the absence of basic filters on screens, making it impossible for users to find their ongoing tasks, yet they were relatively simple to fix. Conversely, this PO pushed unrequested features, and even features rejected by most end-users, such as complex GUI locking options. Furthermore, any attempt to set up tools to collect end-user feedback was dismissed.

Team Dynamics

Centralized Decision-Making

Concentrating decision-making authority in your own hands without consulting IT or other designers stifles creativity and collaboration. Instead, foster open communication and involve team members in shaping the project's direction. The three pillars of agility, as defined in Scrum, are Transparency, Inspection, and Adaptation. The essence of an agile team is continuous improvement, which becomes challenging when a lack of trust hinders the identification of real issues. Some POs unfortunately adopt a "divide and rule" approach, keeping knowledge and power in their sole hands. I have observed POs withhold information, or even release incorrect information, to both end-users and developers, and actively prevent any exchange between them.
Geographical Disconnection

Geographically separating end-users, designers, testers, the PO, and developers hinders communication. Leverage modern collaboration tools, but don't rely solely on them: balance digital tools with face-to-face interactions to maintain strong team connections and enable osmotic communication, which has proven highly efficient at keeping everyone informed and involved. The worst case I had to deal with was a project where developers were centralized in the same building as the end-users, while the PO and design team were located in another city, with most workshops run remotely between the two. The design results were very poor. They improved drastically once some designers were finally collocated with the end-users (and developers) and could conduct formal and informal workshops in situ.

Planning and Execution

Over-Optimism and Lack of Contingency Plans

Hope should not be your strategy. Don't oversell features to end-users: being overly optimistic and neglecting backup plans leads to missed deadlines and unexpected challenges. Develop robust contingency plans (a Plan B) to navigate uncertainty effectively, and avoid promising unsustainable schedules to stakeholders; after two or three delays, they may lose trust in the project. I worked on a project where the PO announced the main release to stakeholders every two months over a 1.5-year period without consulting the development team. As you can imagine, the effect on the project's image was devastating.

Inadequate Stakeholder Engagement

Excluding business stakeholders from demos and delaying critical communications leads to misunderstandings and misaligned expectations. Engage stakeholders regularly to maintain transparency and gather valuable feedback. In a previous project, we conducted regular sprint demos but failed to invite end-users to most sessions; consequently, significant ergonomic issues went unnoticed, resulting in a substantial loss of time. Within the same project, the PO organized meetings with end-users mainly to present solutions as fully completed mockups, rather than facilitating discussions to precisely identify operational requirements, which inhibited the users.

Embracing Waterfall Practices

Thinking in waterfall terms rather than embracing iterative development hinders progress, especially on a project meant to be managed with agile methodologies. Minimize misunderstandings by providing regular updates to stakeholders. Break features into increments, leverage Proofs of Concept (POCs), and prioritize Minimum Viable Products (MVPs) to validate assumptions and ensure steady progress. As an example, I recently had a meeting with end-users where I explained that a one-year coding tunnel had produced a first application version that was almost unusable, and worse than the 20-year-old application we were supposed to rewrite. With communication re-established and end-users involved, this was fixed in a few months.
Producing Too Much Waste

As a designer, avoid building a large stock of user stories (US) that won't be implemented for months or years. Doing so works against the Lean principle of fighting the overproduction muda (waste): you produce many specifications at the worst possible moment (when you know the least about the actual business requirements), and that work has every chance of being thrown away. I saw a PO and their design team write user stories up to a year before they were actually coded, then leave them almost unmaintained. As expected, most of the work was thrown away or, even worse, caused flaws and misunderstandings in the development team when finally planned into a sprint; most backlog refinements and explanations had to be redone. User stories should be refined to a detailed state only one or two sprints before being coded, though it's good practice to fill the backlog sandbox with broadly outlined features. The rule of thumb is straightforward: detail user stories as close to the coding stage as possible. When they are fully detailed, they are ready for coding; detail them too early, and you are likely to waste time and resources.

Volatile Objectives

Try to set consistent objectives for each sprint. Avoid context switching among developers, which leads them to start many features but finish none. In one project, where the PO interacted with multiple partners, priorities changed every two or three sprints, mainly for political reasons: often to appease the most frustrated partners, who were awaiting features promised with unrealistic deadlines.

Lack of Planning Flexibility

Use the DevOps toolkit, including feature flags, dark deployments, and canary testing, to make planning and deployment more flexible. As an architect, I once had a tough time convincing a PO to use a canary-testing deployment strategy to learn fast and release early while greatly limiting risk. After a resounding failure when the application was opened to the entire population, we finally used canary testing and discovered performance and other critical issues on a limited set of volunteer end-users. It is now a central part of the project management toolkit we use extensively; a minimal sketch of the mechanism follows.
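By way of illustration only, here is a sketch of a percentage-based canary behind a feature flag, assuming stable string user IDs and an in-memory flag table; the feature name and percentage are invented, and a production setup would use a dedicated flag service plus real traffic metrics.

```python
"""Sketch of a percentage-based canary rollout behind a feature flag."""
import hashlib

# Hypothetical flag configuration: feature name -> % of users who see it.
FLAGS = {"new_checkout": 5}  # start the canary with 5% of users

def bucket(user_id: str) -> int:
    """Deterministically map a user to a bucket in [0, 100)."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return int(digest, 16) % 100

def is_enabled(feature: str, user_id: str) -> bool:
    """Stable hashing keeps each user in or out of the canary across requests."""
    return bucket(user_id) < FLAGS.get(feature, 0)

# Usage: route a small, stable slice of traffic to the new code path,
# then raise the percentage in FLAGS as confidence grows.
for uid in ("alice", "bob", "carol"):
    path = "new checkout" if is_enabled("new_checkout", uid) else "old checkout"
    print(f"{uid} -> {path}")
```

The planning benefit is that release and deployment are decoupled: the code ships dark, and exposure is a configuration change you can dial up, or roll back, without a redeploy.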
Extended Delays Between Deployments

Even when a product is built incrementally in two- or three-week iterations, many large projects (including all those I've been part of) wait several iterations before deploying to production. This is a problem, because each iteration should ideally deliver some value to end-users, however small. The approach aligns with the mantra famously advocated by Linus Torvalds: "Release early, release often." Some POs hesitate to push iterations into production, often for misguided reasons: fear of introducing bugs (indicating a lack of automated and acceptance testing), incomplete iterations (pointing to problems with user story estimation or team velocity), a desire to hand end-users a larger batch of features at once in the belief they'll appreciate it, or an attempt to flatten the user learning curve (revealing potential UX shortcomings). In my experience, this hesitation leads to an accumulation of issues such as bugs and performance problems.

Design Considerations

Solution-First Mentality

Prioritizing solutions over understanding the business need leads to misguided decisions. Focus on the "why" before diving into the "how" to create solutions that truly address user requirements. As a bad practice, I've seen user stories embedding technical content (such as SQL queries) or presenting detailed technical operations or screens as business rules.

Oversized User Stories

Designing large, complex user stories instead of breaking them into manageable increments leads to confusion and delays. Embrace smaller, more focused user stories to facilitate smoother development, more predictable planning, and easier testing. Inexperienced POs often find it challenging to break features down into small, manageable user stories. It is something of an art, and there are numerous ways to do it depending on the context, but each story should always deliver value to end-users. In a previous project, the PO struggled to divide stories effectively, or split them along purely technical lines, such as one user story for the frontend and another for the backend of a substantial feature. Half the time, this resulted in incomplete user stories that had to be rescheduled for the next sprint.

Neglecting Expertise

Failing to consult experts such as UX designers, accessibility specialists, and legal advisors results in suboptimal solutions; leverage their insights to create more effective and user-friendly designs. I've observed multiple projects where the lack of proper UX work led to poorly designed GUIs that were costly to fix later. Some projects demanded legal expertise, particularly around data privacy: in one case, a PO failed to involve legal specialists, and the final product omitted crucial legal notices and even required significant architectural revisions.

Ignoring Performance Considerations

Neglecting performance constraints, such as displaying excessive data on screens without filters, degrades the user experience; prioritize efficient design to ensure optimal system performance. I once worked on a large project where the PO requested the computation of a Gantt chart spanning tens of thousands of tasks over five years when, in 99.9% of cases, a single week was sufficient. This needlessly intricate requirement significantly complicated the design and made the product nearly unusable due to its slowness.

Using the Wrong Words

Failing to establish a shared business language and glossary creates confusion between technical and business teams. Embrace the Ubiquitous Language (UL) principle from Domain-Driven Design to enhance communication and clarity. I once worked on a project where the PO and designers set up no glossary of business terms, used custom vocabulary instead of the business's own, and used fuzzy or interchangeable synonyms even for terms they had coined themselves. This caused many issues, confusion within the team and among end-users, and even duplicated work.

Postponing Legal and Regulatory Considerations

Late discovery of legal, accessibility, or regulatory requirements leads to costly revisions; incorporate these considerations early to avoid setbacks during development. I observed one large project where the Social Security number had to be eliminated late in the game, requiring additional transformation tools because the constraint had not been taken into account from the beginning.

Code Considerations

Interferences

Refine business requirements, but don't interfere with code organization, which has constraints of its own. For instance, asking the development team to always enforce the reuse (DRY) principle through very generic interfaces comes from a good intention but may greatly overcomplicate the code, violating the KISS principle. In a recent project, a PO with a development background frequently complicated the design by explicitly instructing developers to extend existing endpoints or SQL queries instead of creating new, simpler ones. Many developers followed the instructions in the user stories without fully grasping the implementation drawbacks, occasionally producing convoluted code and wasted time rather than efficiency gains. The contrast is sketched below.
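As an illustrative sketch (the data, field names, and functions are all invented), compare a "reusable" generic query helper with the simple, intention-revealing function a screen actually needs; both answer the same question, but only one stays easy to read, test, and change.

```python
"""Contrast: an over-generic 'reusable' helper vs. a simple specific one."""

TASKS = [
    {"id": 1, "owner": "alice", "status": "open"},
    {"id": 2, "owner": "alice", "status": "done"},
    {"id": 3, "owner": "bob", "status": "open"},
]

# DRY taken too far: one generic function with ever-growing options.
# Each new requirement adds a parameter, and every caller inherits the complexity.
def query(records, filters=None, sort_key=None, limit=None, include_done=True):
    rows = [r for r in records
            if all(r.get(k) == v for k, v in (filters or {}).items())
            and (include_done or r["status"] != "done")]
    if sort_key:
        rows.sort(key=lambda r: r[sort_key])
    return rows[:limit] if limit else rows

# KISS: a small function for the one question this screen actually asks.
def open_tasks_for(owner):
    return [t for t in TASKS if t["owner"] == owner and t["status"] == "open"]

print(query(TASKS, filters={"owner": "alice"}, include_done=False))  # generic route
print(open_tasks_for("alice"))                                       # simple route
```

The point is not that generic code is always wrong, but that mandating reuse from the requirements side forces this trade-off on developers who are better placed to judge it.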
Acceptance Testing

Neglecting Alternate Paths

Focusing solely on nominal cases ("happy paths") while ignoring real-world scenarios results in very incomplete testing. Ensure all possible paths, including corner cases, are thoroughly tested to deliver a robust solution. In a prior project, a multitude of bugs and crashes surfaced only in production because testing had been limited to nominal scenarios. This disorganized the team, as urgent hotfixes had to be written immediately, tarnishing the project's reputation and incurring substantial costs.

Missing Acceptance Criteria

Leverage the Three Amigos practice to involve cross-functional team members in creating comprehensive acceptance criteria, and incorporate examples into user stories to clarify expectations and ensure a shared understanding. Example mapping is a great workshop for achieving this. Writing examples down ensures several things: first, that you have at least one realistic case for the requirement and that it is not imaginary; second, listing different cases is a powerful way to surface the alternate paths exhaustively (see the previous point); and last, examples are among the best shared-understanding material you can give developers. When designers on one of my projects began documenting real-life scenarios as Behavior-Driven Development (BDD) executable specifications, numerous alternate paths emerged naturally, which reduced production issues (as discussed in the previous section) and gradually slowed their occurrence. The sketch after this section shows what such an executable example can look like.
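As a minimal illustration, here is what an executable example might look like using pytest; the task-filtering domain and every function name are invented for the sketch, and Gherkin-based tools work just as well. The idea is one test per agreed example, with the Given/When/Then structure kept visible.

```python
"""Executable specification sketch: Given/When/Then examples as pytest tests."""

def filter_tasks(tasks, status=None):
    """System under test: the behavior the user story's examples describe."""
    return [t for t in tasks if status is None or t["status"] == status]

def test_user_sees_only_ongoing_tasks():
    # Given a backlog containing ongoing and completed tasks
    tasks = [{"name": "Fix login", "status": "ongoing"},
             {"name": "Ship v1", "status": "done"}]
    # When the user filters the screen on "ongoing"
    visible = filter_tasks(tasks, status="ongoing")
    # Then only ongoing tasks are displayed
    assert visible == [{"name": "Fix login", "status": "ongoing"}]

def test_empty_result_is_an_alternate_path_not_a_crash():
    # Given a backlog with no matching tasks (an alternate path worth writing down)
    tasks = [{"name": "Ship v1", "status": "done"}]
    # When the user filters on "ongoing"
    visible = filter_tasks(tasks, status="ongoing")
    # Then the screen shows an empty list rather than failing
    assert visible == []
```

Because each example becomes a test, the acceptance criteria stay honest: when the requirement changes, a red test tells you which agreed example no longer holds.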
Lack of Professional Testing Expertise

Incorporating professional testers and testing tools enhances defect detection and overall quality; invest in thorough testing to identify issues early and ensure a smoother user experience. Not using proper tools also makes it harder for external stakeholders to see what has actually been tested. Rigorous testing is a genuine skill. In a previous project, I watched testers use basic spreadsheets to record and track testing scenarios, which made it difficult to determine what had and hadn't been tested; the PO had to validate releases without a clear picture of testing coverage. Tools like the open-source Squash TM are excellent for specifying test requirements and monitoring acceptance test coverage. Furthermore, the testers were not testing professionals but designers, which frequently made it hard to obtain detailed bug reports: the reports lacked precision, omitting crucial information such as exact times, logs, scenarios, and the datasets needed to reproduce issues effectively.

Take-Away Summary

Symptom: A solution that is not aligned with end-users' needs.
Possible causes and solutions:
- Ineffective workshops with end-users: if workshops are conducted remotely, consider organizing them onsite; make sure you are familiar with agile design methods like Story Mapping.
- Insufficient attention to end-users' needs: understand the genuine needs and concerns of end-users rather than relying on personal intuition or managerial opinion; gather end-user feedback early and often; use appropriate domain-specific terminology (Ubiquitous Language).

Symptom: Limited trust from end-users and/or the development team.
Possible causes and solutions:
- Centralized decision-making: foster open communication and involve team members in shaping the project's direction; increase transparency through communication and information sharing.
- Unrealistic timelines: remember that "hope is not a strategy" and avoid excessive optimism; aim for consistent objectives in each sprint and establish a clear trajectory; employ tools that add schedule flexibility and secure production releases, such as canary testing.

Symptom: Design overhead.
Possible causes and solutions:
- User story overproduction: minimize muda (waste) and refine user stories only when necessary, just before they are coded.
- Poor designer-developer communication: encourage regular physical presence of the design and development teams in the same location, ideally several days a week, to enable direct and osmotic communication; describe the "why" rather than the "how" and leave technical specifications to the development team. For instance, when designing a database model, you might create the Conceptual Data Model, but make sure the team knows it is not the Physical Data Model.

Symptom: Discovery of numerous production bugs.
Possible causes and solutions:
- Incomplete acceptance testing: develop acceptance tests alongside the user stories, in collaboration with the future testers; conduct tests in a professional, traceable manner, with trained testers using appropriate tools; test not only the happy paths but as many alternate paths as possible.
- Lack of automation: implement automated tests, especially unit tests and, just as importantly, executable specifications (Behavior-Driven Development) derived from the acceptance tests outlined in the user stories. Explore tools like Spock.

Conclusion

By avoiding these common pitfalls, you can significantly increase the chances of a successful agile project. Remember, effective collaboration, clear communication, and a user-centric mindset are key to delivering valuable outcomes. Being a Product Owner (PO) is a role, not merely a job: it requires training, support, and a readiness to continuously challenge one's assumptions. A project can still fail even with good design when blueprints and good coding practices are not followed, but that is an entirely different topic. Due to the GIGO effect, however, no good product can ever be released from a bad design phase.