In the Oxford Dictionary, the word agility is defined as "the ability to move quickly and easily." It is, therefore, understandable that many people relate agility to speed, which I think is unfortunate. I much prefer the description from Sheppard and Young, two academics in the field of sports science, who proposed a new definition of agility within the sports science community: "a rapid whole-body movement with a change of velocity or direction in response to a stimulus" [1].

The term "agility" is often used interchangeably with "change-of-direction speed." However, there is a distinct difference between the two. Agility involves the ability to react in unpredictable environments. Change-of-direction speed, on the other hand, focuses purely on maintaining speed as the direction of travel is changed, which is usually only possible when each change of direction is known in advance.

Using a sports analogy, in soccer, a defender's reaction to an attacker's sudden movement is an agility-based movement: the defender has to react based on what the attacker is doing. Compare this to an athlete running in a zig-zag through a course of pre-positioned cones, where the reactive component is missing. There are no impulsive or unpredictable events; the athlete is simply trying to maintain speed while changing direction.

Often, when leaders of organizations want to adopt Agile, they do so for reasons such as "to deliver faster." In this case, they are thinking of agile as change-of-direction speed, like the athlete running the cones, and not in the sense of agility needed by the soccer defender facing the attacker. This may explain why agile ways of working do not always live up to expectations, even as more and more companies adopt them.
Sticking with the sports analogy, the athlete running through the cones tries to reach each one as quickly as possible and then runs in the direction of the next until the end of the course. This works as a metaphor for defining the scope of a project and having teams work in short iterations in which they deliver each planned feature as quickly as possible and then move on to the next. This may be fine in a predictable environment, where the plan does not need to change, requirements stay fixed, the market stays the same, and customer behaviors are well understood and set in stone. In many environments, however, change is a constant: customer expectations and behavior, market trends, the actions of competitors, and more. These are the VUCA environments (volatile, uncertain, complex, and ambiguous) where there is a need to react or, to put it another way, where agility is needed.

Frameworks such as Scrum are meant to support agility. Sprints are short planning horizons, and the artifacts and events in Scrum are there to provide greater transparency and opportunities to inspect and adapt based on early feedback, changes in conditions, and new information. They give teams an opportunity to pivot and react in a timely manner. However, Scrum is unfortunately often misunderstood as a mechanism to deliver with speed. Focusing only on speed and delivery, without investing in the practices that enable true agility, is likely to slow things down in the long run. When the focus is only on speed, it becomes harder to maintain that speed, let alone increase it, and any semblance of agility is a fantasy.

Let me describe a pattern that I see again and again. Company A has a goal to build an e-commerce site through which it can sell its goods. The first slice of functionality is delivered in a one-month Sprint and consists of a static catalog page to which the company can upload its products with a short description.
The first delivery is received by happy and excited stakeholders who are hungry for more. The team keeps on building, and the product keeps growing. Stakeholders make more requests, and the team works harder to keep up and deliver at the pace that stakeholders have come to expect. The team does not have time to invest in improving its practices. Manual regression testing becomes a bigger and bigger burden and is ever more challenging to complete within a Sprint. The codebase becomes more complex and brittle. The more bloated the product becomes, the more the team struggles to deliver at the same pace.

To try to meet expectations, the team begins to cut corners. It ends up carrying testing work over from one Sprint to the next. And there is no time to validate whether what is being delivered actually produces value. This is just as well, as no analytics have been set up anyway. In the meantime, with the team so busy building new features and carrying out manual integration and regression testing, there is no time either to improve the original build pipeline or to build in any automation. A release to production involves several hours of downtime, so it has to be done overnight and manually.

To make matters worse, the market has been changing. The sales team has made deals with new suppliers, which means further customizations to the site are needed for their products. Finally, the company has pushed for the platform to be available in different time zones, so the downtime for a release is a big problem; releases must be minimized and are only allowed to happen once every six months. Progress comes to a standstill. The product is riddled with technical debt. The team has lost the ability to get early feedback and the ability to react to what its attackers are doing: customers' changing needs, competitors' actions, changing market conditions, and so on.
Just implementing the mechanics of a framework like Scrum does not ensure agility and does not automatically lead to a more beneficial way of working. The Agile Manifesto includes principles such as "continuous delivery of valuable software," "continuous attention to technical excellence," and "at regular intervals, the team reflects on how to become more effective." Following these principles is a greater enabler of agility than following the Scrum framework alone. One effective way of enabling greater agility is to complement something like Scrum with agile engineering practices to get the benefits that organizations are looking for with agile.

Over the years, I have encountered many Agile adoptions at companies where a lot of passion, energy, focus, and budget went into training and coaching people in implementing frameworks such as Scrum. What I do not encounter so much are companies investing the same passion, energy, focus, and budget in good Agile engineering practices such as Pair Programming, Test-Driven Development, Continuous Integration, and Continuous Delivery. When challenged on this, responses are typically something like "We don't have time for that now" or "Let's just deliver this next release, and we'll look at doing it later when we have time." And, of course, that time usually never arrives.

Of course, something like Scrum can be used without any Agile engineering practices. After all, Scrum is just a framework. However, without good engineering practices and without striving for technical excellence, a Scrum team developing software will only get so far. Agile engineering practices are essential to achieving agility because they shorten validation cycles and provide early feedback. For example, pairing gives real-time validation and feedback on quality, as does having proper Continuous Integration in place.
Many of the Agile engineering practices are viewed as daunting and expensive to implement, as though adopting them will get in the way of delivery. However, I would argue that investing in engineering practices that help to build in quality or allow tasks to be automated enables sustainable development in the long run. While investing in Agile engineering practices may seem to slow things down in the short term, the aim is to maintain or even increase speed into the future while retaining the ability to pivot. To me, it is an obvious choice to invest in Agile engineering practices, but surprisingly, many companies do not. Instead, they sacrifice long-term sustainability and quality for short-term speed.

Creating a shared understanding of the challenges that teams face, and of the trade-off between short-term speed and the problems that arise without good engineering practices in place, can help to start a conversation. It is important that everyone, including developers and a team's stakeholders, understands the importance of investing in good Agile engineering practices and the impact on agility of not doing so. These investments can also be thought of as experiments: trying out or investigating a certain practice for a few iterations to see how it might help can be a way to get started and make it less daunting. Either way, it is questionable whether an Agile process without the underlying agile practices can be agile at all. For sustainability, robustness, quality, customer service, fitness for purpose, and true agility in software development teams, continuous investment in agile engineering practices is essential.

[1] J. M. Sheppard & W. B. Young (2006). Agility literature review: Classifications, training and testing. Journal of Sports Sciences, 24:9, 919-932. DOI: 10.1080/02640410500457109
Test automation is essential for ensuring the quality of software products. However, test automation can be challenging to maintain, especially as software applications evolve over time. Self-healing test automation is an emerging concept in software testing that uses artificial intelligence and machine learning techniques to enable automated tests to detect and self-correct issues. This makes test automation more reliable and cost-effective and reduces the time and resources required to maintain test scripts. In this article, we will discuss the benefits of self-healing test automation, how it works, and how to implement it in your organization.

What Is Self-Healing Test Automation?

Self-healing test automation is a new approach to test automation that uses artificial intelligence (AI) and machine learning (ML) to make test scripts more robust and adaptable. With self-healing test automation, test scripts can automatically detect and repair themselves when changes are made to the application under test, including shifting layouts and broken selectors. This makes it possible to automate tests for complex applications with frequently changing user interfaces without having to constantly maintain and update the test scripts.

Why Is Self-Healing Test Automation Necessary?

Test automation scripts can easily break when changes are made to the user interface. This is because test automation scripts are typically designed to interact with specific elements on the screen, such as buttons, text fields, and labels. When these elements change, the script may no longer be able to find them or interact with them correctly. This can lead to test failures and false positives, which can be time-consuming and frustrating to resolve. Also, user interfaces are constantly evolving, with new features and bug fixes being added frequently. This means that test automation scripts need to be updated regularly to adapt to these changes.
However, updating test automation scripts can be a manual and time-consuming process, making it challenging to keep up with the pace of change. Self-healing test automation addresses this fragility of traditional test automation scripts by adapting to changes in the user interface automatically. Self-healing test scripts can detect and repair themselves when changes are made to the application under test, which helps to reduce test maintenance costs, improve test quality, and increase test coverage.

How Does the Self-Healing Mechanism Work?

Step 1: The self-healing mechanism is triggered whenever a "NoSuchElement" or similar error occurs for an element referenced in the automation script.
Step 2: The algorithm analyzes the test script to identify the root cause of the error.
Step 3: The algorithm uses AI-powered data analytics to identify the exact object in the test script that has changed. An object can be any interface item, such as a webpage, navigation button, or text box.
Step 4: The algorithm updates the test script with the new identification parameters for the affected object(s).
Step 5: The updated test case is re-executed to verify that the remediation has been successful.

How Self-Healing Test Automation Adds Value to Your Software Delivery Process

Leveraging self-healing capabilities allows test automation to adapt to changes, improving test coverage, reducing maintenance efforts, and enabling faster feedback.

Saves Time and Effort

Self-healing test automation can save organizations a significant amount of time and effort in software testing. Traditional test automation approaches require manual intervention to fix errors or failures that occur during test execution. This can be a time-consuming and error-prone process, especially when dealing with large and complex test suites.
Self-healing test automation eliminates the need for manual intervention, allowing tests to recover automatically from failures or errors.

Improves Test Coverage

Self-healing test automation can help to improve test coverage by allowing testers to focus on writing new tests rather than maintaining existing ones. Because self-healing tests automatically adapt to changes in the software under test, testers do not need to spend time updating their tests every time the software changes. As a result, testers can focus on writing new tests to cover new features and functionality. Self-healing automation can improve test coverage by 5-10% by eliminating unnecessary code, resulting in shorter delivery times and higher returns on investment.

Prevents Object Flakiness

Object flakiness is a common problem in test automation, especially for GUI testing. It occurs when a test fails because it is unable to locate an object on the page. This can happen for a variety of reasons, such as changes to the UI, changes to the underlying code, or network latency. Self-healing test automation can detect and prevent object flakiness by analyzing test results and identifying patterns that indicate flaky tests. By preventing object flakiness, teams can reduce the number of false positives and negatives, improving the overall accuracy and reliability of test results.

Faster Feedback Loop

Self-healing test automation also enables a faster feedback loop. With traditional test automation approaches, tests are often run manually or through a continuous integration pipeline. With self-healing test automation, tests can run continuously, providing immediate feedback on the quality of the application under test. This enables teams to identify and fix issues faster, improving the overall quality and reliability of the application.

Conclusion

In Agile methodology, applications are continuously developed and tested in short cycles.
This can make it difficult to maintain test cases, as the application is constantly changing. Self-healing test automation can help to overcome this challenge by automatically updating test cases when the application under test changes.
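To make the five-step healing mechanism concrete, here is a minimal, illustrative sketch. It is not the API of any real self-healing tool: elements are plain dictionaries of attributes standing in for DOM nodes, and the `find_element` helper and its fallback scoring are hypothetical simplifications of the AI-driven matching that commercial tools perform.

```python
# Illustrative model of a self-healing locator (not a real framework API).
# A "DOM" is a list of attribute dicts; a locator remembers the attributes
# an element had when the test was recorded.

def find_element(dom, locator):
    """Try the primary locator; on failure, heal it via secondary attributes."""
    # Steps 1-2: primary lookup by id (the attribute the script relies on)
    for el in dom:
        if el.get("id") == locator["id"]:
            return el, locator  # element found, no healing needed

    # Step 3: primary lookup failed ("NoSuchElement"); score each candidate
    # by how many remembered secondary attributes still match
    def score(el):
        return sum(1 for k, v in locator.items()
                   if k != "id" and el.get(k) == v)

    best = max(dom, key=score)
    if score(best) == 0:
        raise LookupError("NoSuchElement: no plausible replacement found")

    # Step 4: "heal" the script by storing the element's new id
    healed = dict(locator, id=best.get("id"))
    # Step 5 (re-execution with the healed locator) is left to the test runner
    return best, healed

# Example: the button's id changed from "btn-buy" to "btn-buy-now" in a release
dom = [{"id": "btn-buy-now", "tag": "button", "text": "Buy"},
       {"id": "search-box", "tag": "input", "text": ""}]
old_locator = {"id": "btn-buy", "tag": "button", "text": "Buy"}
element, healed = find_element(dom, old_locator)  # healed["id"] == "btn-buy-now"
```

The design choice mirrored here is the essential one: keep redundant identification data for every element so that when the primary selector breaks, the closest match by the remaining attributes can be adopted and the script updated in place.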
A sprint retrospective is one of the four ceremonies of Scrum. At the end of every sprint, the product owner, the scrum master, and the development team sit together and talk about what worked, what didn't, and what to improve. The basics of a sprint retrospective meeting are clear to everyone, but its implementation is subjective. Some think the purpose of a sprint retrospective meeting is to evaluate work outcomes. However, as per the Agile Manifesto, it is more about evaluating processes and interactions: "At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly." Many scrum teams are not making the most of sprint retrospective meetings due to a lack of understanding. In this post, we will look at what to avoid in a sprint retrospective meeting and what to do to run one effectively.

What To Avoid in a Sprint Retrospective Meeting

A sprint retrospective meeting is an opportunity for the scrum team to come together and discuss the previous sprint with the purpose of improving processes and interactions. But scrum teams often end up turning the sprint retrospective into a negative session of beating one another up, and interest fades when the outcomes of the meetings are never implemented. Here are a few things to avoid in a sprint retrospective meeting:

1. Focusing on the Outcomes

The end goal of a sprint retrospective is undoubtedly to increase the sprint velocity of the team, but the way to do so is not to talk about the outcome of the sprint. The focus is on finding areas of improvement in processes and people so that it is easier and more efficient for the scrum team to work together.

2. Not Involving All Team Members' Voices

The output of a scrum team is evaluated as a team, not as individuals. Therefore, it is important that each member of the Scrum team is heard.
Thus, equal participation of the team members is required in retros. If someone has issues and is not raising them, it is going to impact the sprint output, as the members of the sprint team are highly dependent on each other to achieve sprint goals.

3. Talking Only About What Went Wrong

The purpose of a sprint retrospective is to make improvements, but that does not mean you do not talk about good things. We are all human beings and need appreciation. If you talk only about what did not work, a sprint retrospective will become more of a tool for blaming and beating each other up than an instrument of improvement. Above all, it is important to talk about what went well so that you can replicate good things in the next sprint.

4. Not Taking Action on Retro Outcomes

The worst thing that can happen to a sprint retrospective is not acting on the items derived from it. This leads to a loss of interest and trust in sprint retrospectives, as it sends a message to the team that their feedback is not valuable.

What To Do To Run an Effective Sprint Retrospective Meeting

There are some basics you can follow to run an effective sprint retrospective meeting.

1. Create a Psychologically Safe Space for Everyone To Speak

To make a sprint retrospective successful, it is the responsibility of the product owner and the scrum master to create a psychologically safe environment for everyone to speak up in the meeting. If you are asking questions like what went well during the last sprint, what didn't go well, and what should we do differently next time, everyone should feel safe to share their views without any repercussions.

2. Use a Framework

The best way to conduct an effective sprint retrospective meeting is to follow a template. Experts have created various frameworks for conducting effective sprint retrospective meetings.
The top frameworks include:

- Mad, Sad, Glad
- Speed Car Retrospective
- 4 L's Retrospective
- Continue, Stop, and Start-Improve
- What went well? What didn't go well? What is to improve?

These frameworks help ensure that you are talking about processes, not people. For example, the Mad, Sad, Glad framework asks what made the team mad, sad, and glad during the sprint and how items can be moved from the mad and sad columns to the glad column. Use a framework that works for your scrum team.

3. Have a Facilitator-in-Chief

Like any other meeting, a sprint retrospective meeting needs to have a goal, a summary, and a facilitator. Have a facilitator-in-chief to make sprint retrospectives valuable. Usually, the role falls to the scrum master, whose responsibilities in the sprint retrospective are to:

- Set the agenda and goals of the sprint retrospective.
- Collect feedback from all the team members on the action items to talk about in the retro.
- Define the length of the meeting.
- Follow up on action items implemented in the last sprints.
- Summarize the key action items for the next sprint.

4. Implement the Action Items

The responsibility of the scrum master does not end with the sprint retrospective. A scrum master needs to make sure that action items identified in the sprint retrospective are implemented in the upcoming sprint. Daily stand-up meetings are a great tool for the scrum master to ensure that the team is implementing what was agreed upon and discussed and is making improvements. You can also see the results of sprint retrospectives in tangible terms with metrics like sprint velocity.

5. Positivity, Respect, and Gratitude for Everyone

Lack of engagement is the biggest long-term challenge of sprint retrospectives. It occurs when action items are not worked on, people are not heard, and the focus is on negatives. Cultivate positivity and have respect and gratitude for everyone. Talk about what can be improved rather than blaming individuals.
Listen to others to show respect, and express gratitude to acknowledge everyone's contributions. Paired with the implementation of action items, this ensures that your scrum team sees sprint retrospectives as an opportunity to improve.

Conclusion

The sprint retrospective is a great opportunity to look back at what worked well, what went wrong, and what can be done to improve. It is a great instrument for a business to improve efficiency, keep its workforce happy, and build products that both clients and end customers love. The only challenge is that you need to use it appropriately. With the insights shared in this post, there is a good chance you will be able to run effective sprint retrospective meetings and bring actual value to the table.
In software testing, we have traditionally grouped requirements into two main areas: functional and non-functional. Commonly, people define functional requirements as how the system should work. In contrast, non-functional requirements refer to how well the system should behave or perform. But let's discuss the big elephant in the room: the name "non-functional requirements" is terrible. It has been a source of confusion and does not accurately convey the essence of what these requirements encompass. When working on software projects, teams put less emphasis on non-functional requirements, which is a grave mistake. If you quickly browse articles online about non-functional requirements, some authors even claim that non-functional requirements are optional. Non-functional requirements become "non-important requirements," but I would argue that they are just as important, if not more important, than functional requirements, since they can impact the overall quality of your application.

Neglecting the prioritization of non-functional requirements has tangible repercussions, especially on the user experience. This is exemplified in real-world scenarios such as:

- Amazon encountered system issues during Prime Day in 2018, struggling to handle a substantial surge in traffic, resulting in a compromised user experience.
- Nationwide, a major bank in the UK, experienced a payday payment outage in 2022. The incident left thousands of customers unable to access their accounts and carry out essential financial transactions, underscoring the critical importance of robust non-functional considerations.
- Pokemon Go, a popular game released in 2016, still gets reports from gamers of the application crashing.

How can we ensure that non-functional requirements get the spotlight they deserve?
In this article, I'll talk about the naming nuance of non-functional requirements and explore whether there is a better terminology for the dreaded term "non-functional." This terminology shift is not merely about semantics; it aims to better reflect the comprehensive nature of these requirements and their impact across various system dimensions.

Naming Nuance

It is no secret that the tech industry has had its share of horrible naming conventions, one example being the slave and master terminology used widely for decades in technical contexts. The naming of non-functional requirements is another example. The Cambridge Dictionary defines non-functional as "not working, or not working in the correct or usual way." If non-functional refers to something that is not working, why did we adopt non-functional requirements as an industry-wide term in the first place? Most users will deem the functional requirements unusable without the so-called non-functional requirements. Suppose your application has a login feature that works well from a functionality perspective but is not keyboard-accessible or responsive when incoming traffic is higher than usual. In that case, the login functionality will be unusable to most users. We work on projects where performance, scalability, availability, accessibility, and security, qualities that fall under the umbrella of non-functional requirements, are essential. This naming nuance does more harm than good, especially when educating people about the topic, since it leads them to believe these qualities are less important: functional requirements are important, while non-functional requirements are seen as less important or optional. Non-functional requirements remain a widely accepted and recognized term in the industry, and most teams do understand the consequences of neglecting these requirements.
But, as an industry, we should frame non-functional requirements better, even if we do not intend to treat them as less important.

Naming Alternatives

Non-functional requirements often encompass the whole system rather than a specific behavior. For example, backend performance refers to how well your backend application can process user requests under a given workload. The scalability attribute refers to how well your system can expand its capacity to handle growing demands. The resiliency attribute, in contrast, refers to how well your system reacts to failures. These requirements, like all non-functional requirements, look at the system as a whole.

Cross-Functional Requirements

A better naming alternative is to call them "cross-functional" requirements, a term Sarah Taraporewalla coined in her blog post "I don't believe in NFRs" and which has been widely adopted within Thoughtworks. Adopting the term cross-functional requirements brings a deliberate shift in perspective. Cross-functional emphasizes that these requirements are not secondary or less important but are integral to the entire functioning of the system. The name implies that these requirements cut across various functions and aspects of the system, ensuring it performs well as a cohesive unit. In addition, the name cross-functional communicates a holistic approach to system development, encouraging teams to prioritize and integrate these considerations from the outset, ultimately leading to a more robust and reliable end product.

Human Qualities

Apart from cross-functional, Jeff Nyman proposed calling non-functional requirements "human qualities" in his insightful article on reframing non-functional as human qualities. The term represents the different feelings we have when using a system and how it affects us negatively when so-called non-functional requirements are not prioritized.
Similar to how a person's qualities collectively shape their character, non-functional requirements shape the character of a system as a whole. Naming non-functional requirements human qualities emphasizes their impact on the overall user experience and on system quality from a user's perspective. In addition, the term human qualities can make these technical aspects more relatable and understandable to a broader audience, including non-technical stakeholders.

Quality Aspects

Beren Van Daele refers to all requirements as "quality aspects" as part of the RiskStorming workshop, to force testers to think about all the aspects that can impact the quality of an application. Each non-functional requirement represents a specific aspect that, when fulfilled, enhances the system's overall quality. (An illustration by software tester Jędrzej Kubala accompanies the original article.) Quality aspects represent the requirements important to your projects, including the common attributes grouped under non-functional requirements.

Beyond the Name

Personally, I prefer the term cross-functional requirements, since it makes apparent that these requirements cross multiple areas of your application. Whichever naming alternative you use, teams should still make every effort to ensure that these requirements are prioritized and treated the same way we treat functional requirements. From a performance perspective, for example, one way to do this is a continuous testing approach, where various performance activities are embedded throughout the software development lifecycle. Framing non-functional requirements as cross-functional requirements, human qualities, or quality aspects is a great way to remove the stigma that they are unimportant. It may only be a name, but using the right term can frame our mindset differently.
TL;DR: Getting Hired as a Scrum Master or Agile Coach

Are you considering a new Scrum Master or Agile Coach job but not sure that it is the right organization? Don't worry: there are four steps of proactive research to identify suitable employers or clients and avoid disappointment later. I have used these four steps for years to identify organizations I would like to work with, and they have never failed me. Read on and learn how to employ search engines, LinkedIn's people search, outreach to peers in the agile community, and analysis of the event market in the quest for your next Scrum Master job.

The Scrum Master Job Market Is Challenging

We have all heard the news that organizations are questioning the usefulness of employing Scrum Masters and are cutting back on job opportunities. In some cases, they have even laid off all Scrum Masters and Agile Coaches. Times are challenging, and many peers must make ends meet, showing more "tolerance" regarding job opportunities. While I understand the approach, I advocate preparing yourself properly in advance to avoid future disappointment with new clients or employers. Therefore, if you are looking for a new Scrum Master job, consider two questions:

- Do I want to work for a developing agile organization (of the late majority), where my work will likely be met with resistance at multiple levels?
- Alternatively, how do I identify an organization that has established agile practices compatible with my mindset?

The two questions are relevant both to applying to available positions and to identifying suitable employers or clients for a proactive application.

How To Get an Idea of an Organization's Maturity Regarding Scrum or "Agile"

While it is impossible to assess an organization's "agile maturity" (if there is such a thing) solely from the outside, it is possible to acquire enough of an understanding of its agile practices this way.
That understanding will allow you to ask the right questions at a later stage; for example, during an initial job interview. Or you may conclude after your research — thus early in the assessment process (see below) — that the organization is not compatible with your expectations of a future employer or client. (Consider the popular saying: "There is no job interesting enough that you just couldn’t walk away from it.") The good news is that organizations that genuinely embrace agile practices usually talk openly about their journeys (unless they need to honor compliance rules), are transparent, and actively support the agile community. The reason for this support is simple: being transparent and supportive is the best way to pitch the organization (and its agile culture) to prospective new team members. The war for talent is even more pressing among agile practitioners. The need for this critical information is the basis for all research activity during the three distinct phases of your assessment process prior to getting hired as a Scrum Master: proactive research, the job advertisement, and the job interview. Getting Hired as a Scrum Master, Phase 1: Proactive Research The proactive research comprises four elements: search engines (Google, Bing, YouTube, etc.), LinkedIn’s people search, reaching out to peers in the industry or communities, and analyzing the event markets. Source 1: An Opportunistic Search via Google, Bing, or YouTube As a first step, always search the organization’s name in combination with a variety of agile-related keywords, such as: Agile, Lean, Scrum, Scrum Master, ScrumMaster, Product Owner, Kanban, SAFe, LeSS, Nexus, DevOps, continuous integration, continuous delivery, design thinking, and lean startup. Tip: Use additional search parameters to narrow down the search results. 
For example, the query "scrum master" site:age-of-product.com will return all articles on Age-of-Product.com that include the term “scrum master.” (Learn more about advanced search on Google.) The purpose of this exercise before getting hired as a Scrum Master is to discover an organization’s use of agile practices and the associated fluency level by shedding some light on questions such as: Scrum, Kanban, XP, Lean UX, Design Thinking — what are they practicing? Are there currently Scrum Masters or Agile Coaches working at the organization? How many engineers or engineering teams are working for the organization? What is the size ratio between the product management and engineering teams? Is the organization practicing continuous product discovery? Is the organization practicing DevOps? The initial search results will provide a first impression, directing further searches toward blog posts, videos of conferences or local meetups, slide decks, podcasts, or threads in communities. A truly agile organization will leave traces across a large variety of content. The mere quantity of results, though, does not signal that the organization in question has already passed the test, so to speak. There is no way to avoid checking the content itself. Here’s an example: InfoQ — a community news site facilitating the spread of knowledge and innovation in professional software development — has a rigorous editorial process and focuses on delivering quality content to its audience. By contrast, quite a few articles on Medium.com, for example, would not withstand similar scrutiny. A good rule of thumb when scanning search results is to note the diversity of sources. If you find content only on the company blog, and it has barely been shared or commented upon, it may hint that the content is not relevant enough to be of interest within the agile community. 
Advanced Tips Search for the title of a particular content piece on X (formerly Twitter) and have a look at the search results: who from the agile community is sharing this content? Use sites like BuzzSumo for content research. While BuzzSumo is a paid service, it offers a generous 30-day free trial period. Source 2: LinkedIn’s People Search Another good source for research on the target organization before getting hired as a Scrum Master is LinkedIn’s people search. You can list results by search term and then filter them, for example, by company name and location. (Here is an example of Scrum Masters working for Accenture in North America, which currently lists about 2,800 results.) And while you’re at it, why not reach out to someone listed in the search results who is in your LinkedIn network? Or ask someone from your network to introduce you to a person from the target organization whom you would like to interview about their agile mindset. Please note, though, that internal job titles may differ from your vocabulary and impact the accuracy of the search results. Source 3: Ask Peers for Help via Reddit, Hacker News, the Hands-on Agile Slack Community, and LinkedIn Groups It is also beneficial to extend the initial Google search to online communities such as Reddit or Hacker News (HN), to name a couple. Both communities allow for posting articles as well as questions. The archive of HN is of particular interest, not just because of the sheer number of available articles and threads but also because of the partly heated discussions going on in the comments. Be aware, though, that "Scrum" as a concept is challenged by many outspoken community members (namely, independent developers) on both Reddit and HN. Beyond passively scanning the archives, posting a direct question to peers is an alternative. HN is likely a waste of time for this purpose; if using Reddit, choose the subreddits r/agile and r/scrum for a possibly better outcome. 
Note: Don’t forget – haters will hate, and trolls just want to play. Do not take it personally if your search on Reddit or HN does not take the direction you desire. You can probably expect more support by asking the 19,000 members of the "Hands-on Agile" Slack community for help on getting hired as a Scrum Master. This is a worldwide community of Scrum Masters, agile coaches, and Product Owners that has proven to be very supportive. There are also LinkedIn groups available that focus on Scrum and agile practices — some with more than 100,000 members. After joining them, post your question(s), remembering to comply with the group rules. Expect your first posts to be moderated, though. Some recommended LinkedIn groups — in no particular order — for getting hired as a Scrum Master are: Agile Clinic Scrum Practitioners Agile World Group Scrum.org Agile Agile Project Management Agile Coaching Scrum Practitioners, Scrum Masters If posting a question to a LinkedIn group, expect to monitor it carefully and interact with answering members in a timely manner: not interacting with responding group members may be considered rude and possibly lead to being banned from posting in the group again. (Read more: Etiquette in technology (Netiquette).) Also, try Quora, directing a question to Quora members active in the agile realm as to whether the organization of interest has an agile mindset. (Note: In doing so, avoid asking anonymous questions, which tend to have a significantly lower answer rate.) Lastly, the two main Scrum certification bodies — Scrum.org and Scrum Alliance — provide access to directories of certificate holders. In both cases, you need the certificate holder’s email address to access a public profile via the search function. A faster way to access a known individual’s public profile is often the advanced Google search; see above. 
Source 4: Is the Organization Sponsoring or Organizing Meetups, Barcamps, or Conferences? In my eyes, supporting public or virtual events is the highest form of contribution an organization can make to the agile community. There are four different levels of engagement, whether the event is virtual or in person: organizing conferences (or Barcamps), sponsoring conferences, providing speakers at conferences, and sponsoring local Meetups and Barcamps by providing a venue. If an organization provides this level of support to the agile community, talk about it will undoubtedly appear on the company blog, on an engineering or product-management-related blog, or in a press release in its public relations section. In the unlikely case that no reference can be found, just contact the public relations department, which will provide the required information. A. Browsing Conference Sites for Sponsors Conference sites are fertile ground for identifying prospective organizations when considering applying for a Scrum Master position. Check carefully for two things: sponsors and speakers. Search for sponsors that practice agile in their daily operations. Usually, a larger sponsor package will also include a speaking slot at the conference. Attending such a session — whether in person or virtually — will provide direct access to the speaker and thus a first contact in the inner circle of that organization’s agile practitioners. This tends to be valuable: people departments often rely on the private networks of the organization’s agile practitioners to identify suitable candidates for Scrum Master job openings. (Accordingly, attending local Meetups can also be a worthwhile investment for job seekers.) B. Browsing Conference Sites for Speakers Personally, I find it more promising to search for non-professional speakers affiliated with an organization that is not sponsoring the conference. 
These speakers may point to a suitable prospective employer or client, as they have already gone through the selection process for speaking proposals, which vets contributions for originality. The same approach can apply to contributions at Barcamps, although a disadvantage is that the critical information is only available during the event. While the speaker list of a conference is available in advance to stimulate ticket sales, it is the nature of a Barcamp that the schedule, and hence the speaker list, becomes available only on the day of the Barcamp. If you are already planning to attend a Barcamp, this may just be an inconvenience rather than a concern. Timing is crucial, though, so please keep in mind that tickets for Barcamps are often sold out within minutes. (For example, the 600-plus tickets for UXCamp Europe were regularly gone in a few minutes until the organizers switched to a lottery.) There are numerous conferences regarding agile practices; here are just a few of the listings: Agile Alliance, Agile Testing Days, Agile on the Beach, and QCon New York. For an additional listing of conferences, check the Top 10 Agile conferences to attend in 2024. Lastly, the big conferences are often considered must-attend events — useful, for example, to gain or improve professional visibility within the agile community. However, smaller conferences often prove more effective at providing information that helps identify a suitable, prospective agile organization. The larger the conference, the greater the chance that noise camouflages that information, complicating getting hired as a Scrum Master. C. Browsing Meetup.com Meetup.com is a great site to discover which agile community events are happening locally and who is organizing them. There are thousands of meetups worldwide covering agile frameworks and practices, software engineering, and product development in general. 
Since the pandemic, many Meetup groups have switched to virtual events, attracting more members from outside their original reach. For example, the Hands-on Agile Meetup community has grown from about 1,500 members in March 2020 to more than 6,500 members worldwide in February 2024. The new members from all over the globe — from Vietnam to Brazil to the United States — have added tremendous expertise and diversity, making the events more inclusive and a much better experience for everyone. Therefore, Meetup.com is an excellent place to look for answers and peer support. Conclusion: Getting Hired as a Scrum Master If you are looking for a Scrum Master job, it is possible to understand the agile mindset of an organization in advance by applying the research approaches sketched above. Investing a few hours up front may save you from later disappointment should your Scrum Master job turn out to be vastly different from what was pitched or promised to you.
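The keyword-plus-site-operator searches described in Phase 1 are easy to script if you want to repeat them for many target organizations. Below is a minimal Python sketch that builds Google search URLs combining agile keywords with a site: restriction; the keyword subset and the example domain are illustrative.

```python
from urllib.parse import quote_plus

# Illustrative subset of the agile-related keywords from Source 1.
AGILE_KEYWORDS = ["Scrum Master", "Kanban", "DevOps", "continuous delivery"]

def build_search_urls(company_site, keywords=AGILE_KEYWORDS):
    """Build one Google search URL per keyword, each restricted to
    pages on company_site via the site: operator."""
    urls = []
    for keyword in keywords:
        # Quoting the keyword forces an exact-phrase match.
        query = f'"{keyword}" site:{company_site}'
        urls.append("https://www.google.com/search?q=" + quote_plus(query))
    return urls

for url in build_search_urls("age-of-product.com"):
    print(url)
```

In practice you would swap in the target organization’s domain (or drop the site: operator and add the company name as a keyword) and open the resulting URLs one by one.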
Hello DZone Community! 2023 was certainly an exciting year for us here at DZone, and I hope it was filled with lots of love, laughter, and learning for you as well! One of the coolest things we did during the year was our latest DZone Community Survey! At DZone, our community is the heart and soul of who we are and what we do. We literally would not exist without each and every one of you, and the strength of our community is what sets us apart and makes us the go-to resource for developers around the world. And as with any relationship, the best way to grow and improve is to learn more about each other. That was our goal with the 2023 DZone Community Survey: to learn more about you, our community, so we can better serve you content that is relevant, helpful, and engaging for you while continuing to build the best platform on the planet for software developers to grow, connect, and share knowledge. We learned quite a lot about our community from the survey, and I wanted to share some of the highlights with you: Java is still the most dominant language, with Python being a close second. AI and applications for automating code processes are the topics of greatest interest. 90% of respondents prefer to learn through online communities like DZone and StackOverflow. This is just a small preview of what we learned, but what this means for you is that Java and AI content is continuing to see a lot of engagement on DZone, and AI specifically will be a hot topic to discuss on the site this year. (Read: If you’re looking for a topic to write about, Java and AI would be a great place to start.) It also reiterates why DZone is such a great place for developers to gather and share knowledge. We also saw some interesting changes from our last survey in 2020, such as nearly double the number of respondents working at the C-suite level and that 60% of you have a significant impact on the technology your company purchases and implements. 
We love that the DZone community is filled with so many expert, experienced developers. The level of knowledge here is unmatched, so when you add your voice to the conversation, you know you’re in strong company. In conclusion, we’re really excited about the results of our 2023 Community Survey, mainly because what we learned will help us continue to improve the content and experience we provide on DZone. We can’t tell you how much we appreciate everyone who took the time to respond to our survey, and we look forward to the 2024 DZone Community Survey! Thank you and Happy New Year! -The DZone Team
In the fast-evolving landscape of the Internet of Things (IoT), edge computing has emerged as a critical component. By processing data closer to where it's generated, edge computing offers enhanced speed and reduced latency, making it indispensable for IoT applications. However, developing and deploying IoT solutions that leverage edge computing can be complex and challenging. Agile methodologies, known for their flexibility and efficiency, can play a pivotal role in streamlining this process. This article explores how Agile practices can be adapted for IoT projects utilizing edge computing in conjunction with cloud computing, focusing on optimizing the rapid development and deployment cycle. Agile in IoT Agile methodologies, with their iterative and incremental approach, are well-suited for the dynamic nature of IoT projects. They allow for continuous adaptation to changing requirements and rapid problem-solving, which is crucial in the IoT landscape where technologies and user needs evolve quickly. Key Agile Practices for IoT and Edge Computing In the realm of IoT and edge computing, the dynamic and often unpredictable nature of projects necessitates an approach that is both flexible and robust. Agile methodologies stand out as a beacon in this landscape, offering a framework that can adapt to rapid changes and technological advancements. By embracing key Agile practices, developers and project managers can navigate the complexities of IoT and edge computing with greater ease and precision. These practices, ranging from adaptive planning and evolutionary development to early delivery and continuous improvement, are tailored to meet the unique demands of IoT projects. They facilitate efficient handling of high volumes of data, security concerns, and the integration of new technologies at the edge of networks. 
In this context, the right tools and techniques become invaluable allies, empowering teams to deliver high-quality, innovative solutions in a timely and cost-effective manner. Scrum Framework with IoT-Specific Modifications Tools: JIRA, Asana, Microsoft Azure DevOps JIRA: Customizable Scrum boards to track IoT project sprints, with features to link user stories to specific IoT edge development tasks. Asana: Task management with timelines that align with sprint goals, particularly useful for tracking the progress of edge device development. Microsoft Azure DevOps: Integrated with Azure IoT tools, it supports backlog management and sprint planning, crucial for IoT projects interfacing with Azure IoT Edge. Kanban for Continuous Flow in Edge Computing Tools: Trello, Kanbanize, LeanKit Trello: Visual boards to manage workflow of IoT edge computing tasks, with power-ups for automation and integration with development tools. Kanbanize: Advanced analytics and flow metrics to monitor the progress of IoT tasks, particularly useful for continuous delivery in edge computing. LeanKit: Provides a holistic view of work items and allows for easy identification of bottlenecks in the development process of IoT systems. Continuous Integration/Continuous Deployment (CI/CD) for IoT Edge Applications Tools: Jenkins, GitLab CI/CD, CircleCI Jenkins With IoT Plugins: Automate building, testing, and deploying for IoT applications. Plugins can be used for specific IoT protocols and edge devices. GitLab CI/CD: Provides a comprehensive DevOps solution with built-in CI/CD, perfect for managing source code, testing, and deployment of IoT applications. CircleCI: Efficient for automating CI/CD pipelines in cloud environments, which can be integrated with edge computing services. Test-Driven Development (TDD) for Edge Device Software Tools: Selenium, Cucumber, JUnit Selenium: Automated testing for web interfaces of IoT applications. 
Useful for testing user interfaces on management dashboards of edge devices. Cucumber: Supports behavior-driven development (BDD), beneficial for defining test cases in plain language for IoT applications. JUnit: Essential for unit testing in Java-based IoT applications, ensuring that individual components work as expected. Agile Release Planning with Emphasis on Edge Constraints Tools: Aha!, ProductPlan, Roadmunk Aha!: Roadmapping tool that aligns release plans with strategic goals, especially useful for long-term IoT edge computing projects. ProductPlan: For visually mapping out release timelines and dependencies, critical for synchronizing edge computing components with cloud infrastructure. Roadmunk: Helps visualize and communicate the roadmap of IoT product development, including milestones for edge technology integration. Leveraging Tools and Technologies Development and Testing Tools Docker and Kubernetes: These tools are essential for containerization and orchestration, enabling consistent deployment across various environments, which is crucial for edge computing applications. Example - In the manufacturing sector, Docker and Kubernetes are pivotal in deploying and managing containerized applications across the factory floor. For instance, a car manufacturer can use these tools for deploying real-time analytics applications on the assembly line, ensuring consistent performance across various environments. GitLab CI/CD: Offers a single application for the entire DevOps lifecycle, streamlining the CI/CD pipeline for IoT projects. Example - Retailers use GitLab CI/CD to automate the testing and deployment of IoT applications in stores. This automation is crucial for applications like inventory tracking systems, where real-time data is essential for maintaining stock levels efficiently. JIRA and Trello: For Agile project management, providing transparency and efficient tracking of progress. 
Example - Smart city initiatives utilize JIRA and Trello to manage complex IoT projects like traffic management systems and public safety networks. These tools aid in tracking progress and coordinating tasks across multiple teams. Edge-Specific Technologies Azure IoT Edge: This service allows cloud intelligence to be deployed locally on IoT devices. It’s instrumental in running AI, analytics, and custom logic on edge devices. Example - Healthcare providers use Azure IoT Edge for deploying AI and analytics close to patient monitoring devices. This approach enables real-time health data analysis, crucial for critical care units where immediate data processing can save lives. AWS Greengrass: Seamlessly extends AWS to edge devices, allowing them to act locally on the data they generate while still using the cloud for management, analytics, and storage. Example - In agriculture, AWS Greengrass facilitates edge computing in remote locations. Farmers deploy IoT sensors for soil and crop monitoring. These sensors, using AWS Greengrass, can process data locally, making immediate decisions about irrigation and fertilization, even with limited internet connectivity. FogHorn Lightning™ Edge AI Platform: A powerful tool for edge intelligence, it enables complex processing and AI capabilities on IoT devices. Example - The energy sector, particularly renewable energy, uses FogHorn’s Lightning™ Edge AI Platform for real-time analytics on wind turbines and solar panels. The platform processes data directly on the devices, optimizing energy output based on immediate environmental conditions. Challenges and Solutions Managing Security: Edge computing introduces new security challenges. Agile teams must incorporate security practices into every phase of the development cycle. Tools like Fortify and SonarQube can be integrated into the CI/CD pipeline for continuous security testing. Ensuring Scalability: IoT applications must be scalable. 
Leveraging microservices architecture can address this. Tools like Docker Swarm and Kubernetes aid in managing microservices efficiently. Data Management and Analytics: Efficient data management is critical. Apache Kafka and RabbitMQ are excellent for data streaming and message queuing. For analytics, Elasticsearch and Kibana provide real-time insights. Conclusion The application and adoption of Agile methodologies in edge computing for IoT projects represent both a technological shift and a strategic imperative across various industries. This fusion is not just beneficial but increasingly necessary, as it facilitates rapid development, deployment, and the realization of robust, scalable, and secure IoT solutions. Spanning sectors from manufacturing to healthcare, retail, and smart cities, the convergence of Agile practices with edge computing is paving the way for more responsive, efficient, and intelligent solutions. This integration, augmented by cutting-edge tools and technologies, is enabling organizations to maintain a competitive edge in the IoT landscape. As the IoT sector continues to expand, the amalgamation of Agile methodologies, edge computing, and IoT is set to drive innovation and efficiency to new heights, redefining the boundaries of digital transformation and shaping the future of technological advancement.
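To ground the data-streaming discussion above, here is a minimal Python sketch of the producer/consumer pattern that brokers such as Apache Kafka and RabbitMQ implement at scale, using only the standard library as a stand-in broker. The sensor names, payloads, and the 30 °C threshold are illustrative assumptions.

```python
import queue
import threading

# Stdlib stand-in for a message broker such as Apache Kafka or RabbitMQ.
broker = queue.Queue()
SENTINEL = None  # marks the end of the stream

def edge_producer(readings):
    """Edge device publishing telemetry messages to the broker."""
    for reading in readings:
        broker.put(reading)
    broker.put(SENTINEL)

def analytics_consumer(results):
    """Analytics service consuming messages and flagging hot readings."""
    while True:
        message = broker.get()
        if message is SENTINEL:
            break
        results.append(message["temp_c"] > 30)

readings = [{"sensor": "s1", "temp_c": 22}, {"sensor": "s2", "temp_c": 35}]
results = []
consumer = threading.Thread(target=analytics_consumer, args=(results,))
consumer.start()
edge_producer(readings)
consumer.join()
print(results)  # → [False, True]
```

The decoupling shown here — the producer never waits for the consumer — is what lets edge devices keep publishing even when downstream analytics lag, which real brokers extend with persistence, partitioning, and delivery guarantees.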
Software architecture is a critical aspect of software development. It involves the high-level structuring of software systems to meet technical and business requirements. Software architects play a pivotal role in this process by making design choices, dictating technical standards, and leading implementation efforts. This paper proposes a description of different architecture types. However, as this has been done many times before, I want to add the perspective of the C4 model to help understand who will intervene at each level and with whom they will have to interact. There are various types of software architects, each specializing in different aspects of software systems. This article analyzes the standard types of software architects, highlighting their roles, responsibilities, and impact on software development processes. Types of Software Architects Enterprise Architect The Enterprise Architect ensures that the organization’s technological infrastructure aligns with its business strategy. This role integrates the IT strategy with business goals and governs compliance with company policies and regulations. It involves overseeing the integration of various IT components to ensure they function cohesively in support of organizational objectives. Broad Vision: Focuses on aligning IT strategy with business goals. Governance: Ensures that all aspects of the technological environment adhere to the company’s policies and regulations. Integration: Oversees the integration of various IT aspects to ensure they work harmoniously towards the organizational objectives. Key Differences with the CTO: I have been asked many times what the difference is between the Enterprise Architect and the CTO. The key differences between these roles lie in their organizational hierarchy, focus, responsibilities, and approaches. 
As a top-level executive, the CTO directs the broader business strategy and technological vision, engaging in strategic decision-making, innovation, and external advocacy for technology. In contrast, the Enterprise Architect, typically occupying a senior-level management position within the IT department, concentrates more internally on the design, governance, and optimization of IT infrastructure, ensuring its alignment with business processes. While the CTO adopts a strategic view, focusing on technology in the context of business growth, the Enterprise Architect takes a more tactical stance, dealing with the specifics of IT infrastructure and its operational effectiveness. Both roles are vital for an organization’s technological success, with the CTO shaping the overarching technology direction and the Enterprise Architect focusing on the practical design and efficiency of IT systems in support of the company’s strategy. The Enterprise Architect could be seen as the technical right hand of the CTO. Solution Architect The Solution Architect acts as a link between business challenges and technological solutions. They design and lead the implementation of solution architecture across projects or programs, providing essential technical guidance and coaching to developers and engineers. Additionally, they manage project scope, ensuring that solutions align precisely with specific business requirements. Solution Development: Designs and leads the implementation of a solution architecture across a project or program. Technical Guidance: Provides technical guidance and coaching to developers and engineers. Project Scope Management: Ensures that the solutions meet the specific business needs within the defined scope. 
Technical Architects Technical Architects are experts who possess in-depth knowledge of a specific domain and specialize in particular areas of expertise, often functioning within the broader framework of the Enterprise Architecture Team or contributing to various delivery projects. The term “domain” in this context refers to a niche area of knowledge, encompassing a range of specialized skill sets. These architects play critical roles in different aspects of software architecture, each focusing on a unique domain: Application Architect: Concentrates on the design and structure of individual applications, ensuring they meet both technical and business requirements. Technical Architect: Deals with the technical infrastructure and hardware aspects, ensuring that the technology infrastructure supports the specific requirements of the domain. Data Architect: Responsible for managing, strategizing, and structuring the organization’s data architecture, ensuring data accuracy and integrity. Security Architect: Focuses on designing robust security structures, ensuring that the domain’s architecture is safeguarded against potential threats and vulnerabilities. Business Architect: Focuses on aligning business strategy with technological solutions, ensuring that business processes are optimally supported by technology. Data Architect: Plays a crucial role in ensuring effective leverage of data assets to support the organization’s decision-making processes. They develop and manage the data strategy, policies, standards, and practices, design data models and structures to support business operations, and ensure data accuracy and integrity across systems. Cloud Architects: They are key in facilitating an organization’s transition to cloud computing, optimizing cloud solutions for performance, cost, and scalability. 
This role involves designing cloud architecture strategies, developing cloud solutions, overseeing the migration of systems to cloud platforms, and managing relationships with cloud service providers. Summary In essence, the field of software architecture is defined by three primary types of architects: Enterprise Architects, Solution Architects, and a diverse group of Technical Architects. Each type focuses on different aspects of software development, with some emphasizing strategy and others delving into technological details. Enterprise Architects align the organization’s technology with business goals, whereas Solution Architects bridge business needs with technical solutions. Technical Architects, encompassing roles like Application, Data, and Security Architects, specialize in various domains, providing depth in specific technical areas. Together, these architects create a comprehensive approach to software architecture, ensuring both strategic alignment and technical excellence across multiple fields. Focus by Architect Type The Business Architect is not focused on technology but rather on the business domain; they are a particular case. Parallel with the C4 Model The C4 model for software architecture provides a framework for visualizing and documenting the software architecture of a system at different levels of abstraction. It consists of four hierarchical levels: Context, Containers, Components, and Code. Each level targets a specific set of concerns, and different types of architects can play key roles at each of these levels. You can read my paper on the C4 model here if you are not familiar with it: Architecture Patterns: C4 Model. Let’s draw a parallel with the roles of various architects and see where they might intervene. Context Focus: The highest level shows how the system in focus interacts with users and other systems. Relevant Architects: Chief Technology Officer (CTO) and Enterprise Architect. 
Role: The CTO ensures that the system aligns with the broader business and technological strategy. The Enterprise Architect focuses on how the system fits within the wider IT landscape and on its external interactions. Containers Focus: Zooms into the system to illustrate the high-level technology choices, showing how responsibilities are distributed across it. Relevant Architects: Enterprise Architect and Solution Architect. Role: The Enterprise Architect designs the high-level structure and identifies integration points with other systems. The Solution Architect defines the technology stack and oversees the architectural decisions for each container (e.g., web applications, mobile apps, databases). Components Focus: Delves deeper into the containers to reveal the internal components and their interactions. Relevant Architects: Solution Architect and Technical Architect. Role: The Solution Architect structures the components within a container, ensuring they align with the solution’s goals. The Technical Architect details the specific technologies and patterns used, focusing on aspects like scalability, reliability, and performance. Code Focus: The lowest level, detailing the implementation of individual components. Relevant Architects: Application Architect and Technical Architect. Role: The Application Architect is involved in defining the code structure, frameworks, and coding standards. The Technical Architect ensures that code-level decisions align with the technology strategy and standards. Summary: C4 Perspective of Architect Types CTO: Involved at the Context level, aligning the system with business strategy and technological innovation. Enterprise Architect: Active from the Context to the Containers level, focusing on system integration and alignment with enterprise IT strategy. Solution Architect: Engages primarily at the Containers and Components levels, designing the solution architecture within the system. 
Application Architect: Primarily active at the Code level, ensuring best practices in software development and implementation.
Business Architect: Although they may be grouped with the "Technical Architects," their domain skills enable them to engage at the Context level and assist in decision-making for their specific business domain. They may also appear at other levels under particular conditions.

Each architect plays a particular role at different levels of the C4 model, ensuring that the architecture is robust, scalable, and aligned with both the technical and business objectives. This layered approach allows for a clear separation of concerns, making complex software systems easier to understand, communicate, and manage.

Conclusion

This analysis of the different types of software architects, enriched by the perspective of the C4 Model, not only clarifies their roles and areas of expertise but also opens up a broader understanding of how these roles interconnect and contribute to the overall success of software development. It highlights the essential nature of collaboration and the importance of having diverse architectural expertise within a project.

Despite the varied roles and specializations of different types of software architects, there are core attributes and approaches common to all. Primarily, they must embody pragmatism, steering clear of dogmatism in their methodologies. This pragmatism is key in balancing ideal architectural models with the practical constraints and realities of business and technology environments. Furthermore, they are seekers of solutions, which inherently means they are adept describers of problems. Their ability to accurately identify and articulate challenges is as crucial as their skill in devising effective solutions.
This dual role of problem identifier and solution provider underpins their effectiveness in driving technological advancements and ensuring the alignment of IT strategies with business objectives. Their expertise not only lies in creating robust architectures but also in foreseeing potential pitfalls and proactively addressing them, thereby ensuring sustainable and adaptable software systems. These common traits form the backbone of their roles, making them indispensable in the ever-evolving landscape of software architecture.
What if you could eliminate productivity obstacles and accelerate delivery from code to production through automated Azure pipelines? The promise of cloud-powered DevOps inspires, yet its complexities often introduce new speed bumps that hamper release velocity. How can development teams transcend these hurdles to achieve continuous delivery nirvana? This guide will illuminate the path to mastery by unraveling the mysteries of Azure Pipelines. You'll discover best practices for optimizing build and release workflows that minimize friction and downstream delays, unlocking your team's true agile potential.

Simplifying Continuous Integration/Continuous Delivery (CI/CD)

Teams once struggled to manage software projects at scale with ad hoc scripts and brittle, hand-rolled integration and delivery machinery. Azure Pipelines delivers turnkey CI/CD, automating releases reliably through workflows, portably abstracting away complexity, and saving hundreds of hours better spent building products customers love.

Intuitive pipelines configure triggers that set code commits or completion milestones into motion, executing sequential jobs such as builds, tests, approvals, and deployments according to codified specifications that are adaptable across environments, standardizing the engineering rhythm. Integration tasks compile libraries, run quality checks, bundle executables, and publish artifacts consumed downstream. Deploy jobs then release securely to on-premises, multi-cloud, or Kubernetes infrastructure worldwide.

Configurable settings give granular control, balancing agility with oversight throughout the software lifecycle. Syntax options toggle manual versus automatic approvals at stage gates, while failure policies customize rollback, retry, or continue logic, matching risk appetite to safeguard continuity. Runtime parameters facilitate dynamic bindings to frequently changing variables.
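To make the trigger-stage-job flow above concrete, here is a minimal azure-pipelines.yml sketch: a commit to main triggers a build stage, and a deployment stage then runs gated by whatever approvals are attached to the target environment. The build script name and the "production" environment are assumptions for illustration; approvals and checks themselves are configured on the environment in the Azure DevOps portal, not in YAML.

```yaml
# Minimal CI/CD sketch (illustrative names throughout).
trigger:
  branches:
    include:
      - main

pool:
  vmImage: 'ubuntu-latest'

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          # Hypothetical build script; substitute your real build command.
          - script: ./build.sh
            displayName: 'Compile, test, and package'
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'drop'

  - stage: Deploy
    dependsOn: Build
    condition: succeeded()
    jobs:
      - deployment: DeployProduction
        # Approvals and checks attached to this environment act as the stage gate.
        environment: 'production'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "Releasing build $(Build.BuildId)"
```

Keeping the pipeline definition in the repository alongside the code is what makes the "codified specifications" portable across environments and branches.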
Azure Pipelines lifts engineering out of deployment darkness into a new era of frictionless productivity!

Embedding Quality via Automated Testing Processes

Teams focused on chasing innovation often delay the hardening of software that is vital to preventing downstream heartaches once systems reach customers. Azure Test Plans embeds robust quality processes directly within developer workflows to catch issues preemptively, while automated testing maintains consistent protection, guarding against regressions as enhancements compound over time.

Test plans manage test cases, codifying the requirements, validation criteria, setup needs, and scripts that developers author collaboratively while sprinting on new features. Execution workflow automation links code check-ins to intelligently run related test suites across browser matrices on hosted lab infrastructure without occupying local computing capacity. Tests also integrate within pipelines at the various test/staging environments, ensuring capabilities function end-to-end before reaching production.

Rich analytics dashboards detail test pass/fail results, visualize coverage and historical trends, and prioritize yet-to-be-mitigated defects. Integrations with partner solutions facilitate specialized test types like user interface flows, load testing, and penetration testing, rounding out the assessment angles. Shipping stable and secure software demands discipline; Azure Test Plans turns necessity into a habit-forming competitive advantage!

Monitoring App Health and Usage With Insights

Monitoring application health and usage is critical for delivering great customer experiences and optimizing app performance. Azure Monitor provides invaluable visibility when leveraged effectively through the following approaches:

- Configure app health checks: Easily set up tests that probe the availability and response times of application components. Catch issues before customers do.
- Instrument comprehensive telemetry: Trace transactions end-to-end across complex microservices ecosystems to pinpoint frictions impacting user workflows.
- Aggregate logs centrally: Pull together operational signals from networks, web servers, databases, etc., into intuitive Power BI dashboards tracking business-priority metrics from marketing clicks to sales conversions.
- Analyze usage patterns: Reveal how customers navigate applications and uncover adoption barriers early through engagement telemetry and in-app surveys.

Tying app experiences to downstream business outcomes allows data-driven development that directly responds to real customer needs through continuous improvement.

Collaborating on Code With Azure Repos

Before teams can scale to delivering innovation consistently, the foundational practices for optimizing productivity and reducing risk start with version-controlling code in robust repositories that facilitate collaboration, back up assets, and enable reproducible builds. Azure Repos delivers Git-based repositories that secure centralized assets while supporting agile branch-management workflows, letting distributed teams work on projects in parallel without colliding changes.

Flexible repository models host public open source or private business IP with granular permission isolation. Developer clones facilitate local experimentation, with sandbox branches merged upon review. Advanced file lifecycle management automates asset cleanup, while retention policies persist historical snapshots cost-efficiently.

Powerful pull requests enforce peer reviews, ensuring changes meet architectural guidelines and performance standards before being accepted into upstream branches. Contextual discussions thread through code reviews, iterating on fixes and resolving threads before merging, preventing redundant issues from escaping downstream. Dependency management automatically triggers downstream builds, updating executables ready for staging deployment once a merge is published.
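The "downstream builds triggered on merge" pattern above can be expressed with a pipeline resource trigger: a downstream pipeline runs automatically when an upstream build completes, consuming its published artifacts. This is a sketch under assumptions; the pipeline name 'LibraryBuild', the alias, and the artifact name 'drop' are all placeholders.

```yaml
# Downstream pipeline that runs automatically when the upstream
# build pipeline completes on main (all names are illustrative).
resources:
  pipelines:
    - pipeline: upstreamBuild        # local alias used in download steps
      source: 'LibraryBuild'         # assumed name of the upstream pipeline
      trigger:
        branches:
          include:
            - main

pool:
  vmImage: 'ubuntu-latest'

steps:
  # Fetch the artifact the upstream pipeline published.
  - download: upstreamBuild
    artifact: 'drop'
  - script: ls $(Pipeline.Workspace)/upstreamBuild/drop
    displayName: 'Inspect downstream inputs'
```

Chaining pipelines this way keeps each repository's build definition small while still propagating fresh executables toward staging after every merge.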
Share code confidently with Azure Repos!

Securing Continuous Delivery With Azure Policies

As code progresses through staged environments and ultimately updates production, consistent rollout verification checks and access oversight prevent dangerous misconfigurations from going uncaught until post-deployment, when customers face disruptions. Azure Pipelines relies on Azure Policy to extend guardrails, portably securing pipeline environments using regularization rules, compliance enforcement, and deviation alerts scoped across management hierarchies.

Implementing robust security and compliance policies across Azure DevOps pipelines prevents dangerous misconfigurations from reaching production. Azure Policy is invaluable for:

- Enforcing pipeline governance consistently across environments
- Automating cloud security best practices
- Monitoring configuration states against compliance baselines

Codify Pipeline Safeguards With Azure Policy

Specifically, leverage Azure Policy capabilities for:

- Restricting pipeline access to only authorized admins
- Mandating tags for operational consistency
- Limiting deployment regions and resource SKUs

Policies also scan configurations, alerting when controls drift from the desired state due to errors. Automated remediation then programmatically brings resources back to a compliant posture.

Carefully Orchestrate Production Upgrades

Smoothly rolling out updates to mission-critical, global-scale applications requires intricate staging that maintains continuity, manages risk tightly, and fails safely if issues emerge post-deployment:

- Implement canary testing on small pre-production user cohorts to validate upgrades and limit the blast radius if regressions appear.
- Utilize deployment slots to hot-swap upgraded instances only after health signals confirm readiness, achieving zero downtime.
- Incorporate automated rollback tasks that immediately revert to the last known good version at the first sign of problems.
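The slot-based zero-downtime pattern above can be sketched with the standard AzureWebApp and AzureAppServiceManage pipeline tasks: deploy to a staging slot, verify health, then swap the slot into production. The service connection, app, resource group, slot names, and the /health probe below are placeholder assumptions, not a definitive implementation.

```yaml
# Deploy to a staging slot, verify health, then hot-swap into production.
# Service connection, app, resource group, and slot names are placeholders.
steps:
  - task: AzureWebApp@1
    displayName: 'Deploy to staging slot'
    inputs:
      azureSubscription: 'my-service-connection'
      appName: 'my-web-app'
      deployToSlotOrASE: true
      resourceGroupName: 'my-resource-group'
      slotName: 'staging'
      package: '$(Pipeline.Workspace)/drop/*.zip'

  # Placeholder health probe; replace with a real smoke test before swapping.
  - script: curl --fail https://my-web-app-staging.azurewebsites.net/health
    displayName: 'Verify staging slot health'

  - task: AzureAppServiceManage@0
    displayName: 'Swap staging into production'
    inputs:
      azureSubscription: 'my-service-connection'
      WebAppName: 'my-web-app'
      ResourceGroupName: 'my-resource-group'
      SourceSlot: 'staging'
      Action: 'Swap Slots'
```

Because the swap exchanges already-warmed instances, a regression discovered after the swap can be reverted by simply swapping back, which is what makes this a fail-safe rollout step.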
Telemetry-Driven Deployment Analysis

Azure Monitor plays a powerful role in ensuring controlled rollout success by providing the following:

- Granular instrumentation across services and dependencies
- Holistic dashboards benchmarking key app and business health metrics pre- and post-deployment
- Advanced analytics detecting anomalous signals indicative of emerging user-impacting incidents

Together, these capabilities provide empirical confidence for innovation at scale while preventing disruptions. Monitoring proves most crucial when purpose-built into DevOps pipelines from the initial design stage.

Pro tip: Build health signals that assess key user journeys end-to-end, combining app load tests, dependency uptime verification, and failover validations across infrastructure tiers, detecting deterioration points before they manifest to users.

Actionable Analytics Powering Decision-Making

Actionable analytics empower the data-driven decisions that optimize software delivery by:

- Translating signals into insightful recommendations that focus priorities
- Answering key questions on whether programs are trending on time or risk falling behind
- Visualizing intuitive Power BI dashboards aggregated from 80+ DevOps tools, tracking burndown rates, queue depths, and past-due defects
- Allowing interactive slicing by team, priority, and system to spotlight constraints and target interventions that accelerate outcomes
- Infusing predictive intelligence via Azure ML that forecasts delivery confidence and risk, helping leaders assess tradeoffs
- Gathering real-time pulse survey feedback that reveals the challenges teams self-report, orienting culture priorities

Driving Data-Informed Leadership Decisions

Robust analytics delivers DevOps success by:

- Quantifying productivity health and assessing program timelines against strategic commitments
- Identifying delivery bottlenecks for surgical interventions, removing impediments
- Forecasting team capacity, shaping staffing strategies and risk mitigations
- Monitoring culture signals to ensure priorities align with participant feedback

Sustaining Analytics Value

To sustain analytics value over time:

- Measure how directly analytics are used in decisions to assess their utility.
- Iterate on dashboards, incorporating leaders' feedback to enhance relevance.
- Maintain consistency in longitudinal tracking, avoiding frequent churn in metric definitions.
- Nurture data fluency, building the competencies to adopt insights with trust.
- Let data drive responsive leadership, balancing sacrifice and celebration.

Outcomes accelerate when data transforms decisions! Onwards, creative coders and script shakers! What journey awaits, propelled by the power of Azure Pipelines? Dare mighty expeditions to materialize ambitious visions once improbable before cloud DevOps elevated delivery to wondrous new heights. How will your team or company innovate by leveraging Azure's trusty wisdom and harnessing the winds of change? Let us know in the comments below.
In this digital world, where all companies want their products to have a cutting edge over others and want a faster go-to-market, most companies want their teams to follow the Agile Scrum methodology; however, we have observed that most teams follow the Scrum ceremonies in name only. Among all Scrum ceremonies, the sprint retrospective is the most important and most talked-about ceremony, yet the one paid the least attention. Many times, Scrum Masters keep running the same canned, single routine format of a retrospective: What went well? What didn't go well? What is to improve? Let us analyze the problems the team faces, their impact, and recommendations to overcome them.

Problems and Impact of a Routine-Format Sprint Retrospective

- Running a single routine format makes teams uninterested, and they start losing interest: team members stop attending the ceremony, keep silent, or don't participate.
- Action items that come out of retrospectives are often not followed up on during the sprint.
- The status of action items is not discussed in the next sprint retrospective.
- The team starts losing faith in the ceremony when they see that action items from previous sprints still exist and keep accumulating.
- This leads to missing key feedback and actions sprint after sprint and hampers the team's improvement. Even after 20-30 sprints, teams keep making the same mistakes again and again. Ultimately, the team never matures.

Recommendations for an Efficient Sprint Retrospective

We think visually. Try the following fun-filled visual retrospective techniques:

1. Speed car retrospective
2. Speed boat retrospective
3. Build and reflect
4. Mad, Sad, Glad
5. 4 Ls retrospective
6. One-word retrospective
7. Horizontal line retrospective
8. Continue, stop, and start-improve
9. What went well? What didn't go well? What is to improve?

Always record, publish, and track action items. Ensure leadership does not join sprint retrospectives, as their presence makes the team uncomfortable sharing honest feedback.
Every sprint retrospective should first discuss the status of action items from the previous sprint; this gives the team confidence that their feedback is being heard and addressed. Now let us discuss these visual, fun-filled sprint retrospective techniques in detail:

1. Speed Car Retrospective

The car represents the team; the engine depicts the team's strengths; the parachute represents the impediments that slow down the car; the abyss shows the danger the team foresees ahead; and the bridge indicates the team's suggestions on how to cross this abyss without falling into it.

2. Speed Boat Retrospective

The boat represents the team, and the anchors represent the problems that stop the boat from moving or slow it down. The team then turns these anchors into gusts of wind: the suggestions the team thinks will help the boat move forward.

3. Build and Reflect

Bring Lego sets and divide the team into multiple small groups. Ask each group to build two structures: one representing how the sprint went and one representing how it should have gone. Then, ask each group to talk about their structures and their suggestions for the sprint.

4. Mad, Sad, Glad

This technique discusses what made the team mad, sad, and glad during the sprint and how items can be moved from the mad and sad columns to the glad column.

5. Four Ls: Liked, Learned, Lacked, and Longed For

This technique covers what the team "liked," "learned," "lacked," and "longed for" during the sprint, discussing each item with the team.

6. One-Word Retrospective

Sometimes, to keep the retrospective very simple, ask the team to describe the sprint experience in one word, then ask why they describe the sprint with that particular word and what can be improved.

7. Horizontal Line Retrospective

Another simple technique is to draw a horizontal line. Above the line, put the items the team feels were "winning items"; below the line, put the items the team feels were "failures" during the sprint.

8. Continue, Stop, Start-Improve

This technique captures feedback in three categories: "continue" covers what the team feels went great and should continue; "stop" covers activities the team wants to stop; and "start-improve" covers activities the team suggests starting or improving.

9. What Went Well? What Didn't Go Well? And What Is To Improve?

This is the well-known and most practiced retrospective technique: noting down points in the three categories mentioned.

Keep reshuffling these retrospective techniques to keep the team enthusiastic about participating and sharing feedback in a fun and constructive environment. Remember, feedback is a gift and should always be taken constructively to improve the overall team's performance. Go, Agile team!
Jasper Sprengers
Senior Developer,
Team Rockstars IT
Alireza Chegini
DevOps Architect / Azure Specialist,
Coding As Creating
Stelios Manioudakis
Lead Engineer,
Technical University of Crete
Stefan Wolpers
Agile Coach,
Berlin Product People GmbH