DZone

Methodologies

Agile, Waterfall, and Lean are just a few of the project-centric methodologies for software development that you'll find in this Zone. Whether your team is focused on goals like achieving greater speed, having well-defined project scopes, or using fewer resources, the approach you adopt will offer clear guidelines to help structure your team's work. In this Zone, you'll find resources on user stories, implementation examples, and more to help you decide which methodology is the best fit and apply it in your development practices.

Latest Refcards and Trend Reports

Trend Report: Performance and Site Reliability
Refcard #050: Scrum
Refcard #376: Cloud-Based Automated Testing Essentials

DZone's Featured Methodologies Resources

Balancing Shift Left and Shift Right in Your DevOps Strategy

By Ruchita Varma
Shift Left and Shift Right are two terms commonly used in the DevOps world to describe approaches for improving software quality and delivery. Both are based on the idea of identifying defects and issues as early as possible so that teams can address them quickly and efficiently, allowing the software to meet user expectations. Shift Left focuses on early testing and defect prevention, while Shift Right emphasizes testing and monitoring in production environments. In this blog, we will discuss the differences between these two approaches.

The Shift-Left Approach

Shift Left, in DevOps, refers to the practice of moving testing and quality assurance activities earlier in the software development lifecycle. Testing is performed as early as possible in the development process; ideally, it starts during the requirements-gathering phase. Shifting left allows teams to identify and fix defects earlier in the process, which reduces the cost and time required to fix them later in the development cycle. The goal of Shift Left is to ensure that software is delivered with higher quality and at a faster pace.

Here are the key aspects of the Shift-Left approach in DevOps:

Early Involvement: Testing and quality assurance teams are involved early in the development process. Testers and developers work together from the beginning rather than waiting until the end.
Automated Testing: Automation plays a key role in the Shift-Left approach. Test automation tools are used to automate the testing process and ensure that defects are detected early.
Collaboration: Developers and testers work together to ensure that quality is built into the product from the beginning.
Continuous Feedback: Defects are identified and fixed as soon as they are discovered, rather than waiting until the end of the SDLC.
Continuous Improvement: By identifying defects early, the development team can improve the quality of the software and reduce the risk of defects later in the SDLC.

Here are some examples of Shift-Left practices in DevOps:

Test-Driven Development (TDD): Writing automated tests before writing code to identify defects early in the development process.
Code Reviews: Conducting peer reviews of code changes to identify and address defects and improve code quality.
Continuous Integration (CI): Automating the build and testing of code changes to catch bugs early and ensure that the software is always in a deployable state.
Static Code Analysis: Using automated tools to analyze code for potential defects, vulnerabilities, and performance issues.

The Shift-Right Approach

Shift Right, on the other hand, refers to the practice of monitoring and testing software in production environments, using feedback from production to improve the software development process. By monitoring the behavior of the software in production, teams can identify and resolve issues quickly and gain insights into how the software is used by end users. The goal of Shift Right is to ensure that software is reliable, scalable, and provides a good user experience. This approach involves monitoring production systems, collecting feedback from users, and using that feedback to identify areas for improvement.

Here are the key aspects of the Shift-Right approach in DevOps:

Continuous Monitoring: Continuous monitoring of the production environment helps to identify issues in real time. This includes monitoring system performance, resource utilization, and user behavior.
Real-World Feedback: Feedback from real users is critical to identifying issues that may not have been detected during development and testing. It can be collected through user surveys, social media, and other channels.
Root Cause Analysis: When issues are identified, root cause analysis is performed to determine the underlying cause by analyzing logs, system metrics, and other data.
Continuous Improvement: Once the root cause has been identified, the DevOps team can work to improve the system by deploying patches or updates, modifying configurations, or making other changes.

Here are some examples of the Shift-Right approach:

Monitoring and Alerting: Setting up monitoring tools to collect data on the performance and behavior of the software in production, and configuring alerts to notify the team when issues arise.
A/B Testing: Deploying multiple versions of the software and testing them with a subset of users to determine which version performs better in terms of user engagement or other metrics.
Production Testing: Testing the software in production environments to identify defects that may only occur in real-world conditions.
Chaos Engineering: Introducing controlled failures or disruptions to the production environment to test the resilience of the software.

Both the Shift-Left and Shift-Right approaches are important in DevOps, and they are often used together to create a continuous feedback loop that allows teams to improve software delivery. The key is to find the right balance between the two, which can be achieved by using the right DevOps platform and analyzing business needs.
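The Shift-Left practices listed earlier, such as test-driven development, can be sketched with a small example. The `slugify` function and its tests below are hypothetical illustrations, not code from the article: the point is only that the tests were written first and the implementation then made them pass.

```python
# A minimal test-driven development (TDD) sketch: the tests below were
# written first, then slugify() was implemented to make them pass.
# The function and its test cases are illustrative, not from the article.

import re

def slugify(title: str) -> str:
    """Turn an article title into a URL-friendly slug."""
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)  # collapse non-alphanumerics
    return slug.strip("-")

def test_slugify_basic():
    assert slugify("Shift Left vs. Shift Right") == "shift-left-vs-shift-right"

def test_slugify_trims_separators():
    assert slugify("  DevOps!  ") == "devops"

if __name__ == "__main__":
    test_slugify_basic()
    test_slugify_trims_separators()
    print("all tests passed")
```

In a real pipeline, tests like these would run automatically on every change as part of continuous integration, which is what keeps the software in a deployable state.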
Understanding the Differences Between Shift Left and Shift Right

Shift Left and Shift Right are two different approaches in DevOps that focus on different stages of the software development and deployment lifecycle. Here are some of the key differences:

Focus: Shift Left focuses on testing and quality assurance activities performed early in the software development lifecycle, while Shift Right focuses on monitoring and testing activities in production environments.
Goals: The goal of Shift Left is to identify and fix defects early in the development process, helping to ensure that software is delivered with higher quality and at a faster pace. The goal of Shift Right is to ensure that software is secure, reliable, scalable, and provides a good user experience.
Activities: Shift-Left activities include unit testing, integration testing, and functional testing, as well as automated testing and continuous integration. Shift-Right activities include monitoring, logging, incident response, and user feedback analysis.
Timing: Shift-Left activities typically occur before the software is deployed, while Shift-Right activities occur after deployment.
Risks: The risks associated with Shift Left relate to missing defects that may only be discovered in production environments. The risks associated with Shift Right relate to introducing changes that may cause production incidents or disrupt the user experience.

Conclusion

Both the Shift-Left and Shift-Right approaches are critical for the success of microservices. I hope that after reading this article, you have a clear idea of what shifting left and shifting right mean. By using both, developers can ensure that their microservices are reliable, scalable, and efficient, and that they are adopted with security and compliance in mind.
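The A/B testing practice from the Shift-Right examples can be sketched as deterministic user bucketing, so that a user sees the same variant on every visit. The hashing scheme, variant names, and 50/50 split below are illustrative assumptions, not details from the article.

```python
# A minimal A/B bucketing sketch for shift-right experimentation:
# each user is deterministically assigned to a variant, so repeated
# visits see the same version. The variant names and even split are
# illustrative assumptions.

import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically map a user to an experiment variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # stable bucket per user
    return variants[bucket]

# The assignment is stable across calls for the same user and experiment:
assert assign_variant("user-42", "new-checkout") == assign_variant("user-42", "new-checkout")
```

Keying the hash on both the experiment name and the user ID means a given user can land in different buckets for different experiments, which avoids correlated cohorts across tests.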
Jira Anti-Patterns

By Stefan Wolpers
If you ask people to name popular attributes of "Agile" or "agility," Scrum and Jira will likely be among the top ten mentioned. Moreover, in any discussion of the topic, someone will mention that using Scrum running on top of Jira does not make an organization Agile. More importantly, this notion is often only a tiny step from identifying Jira as a potential impediment, or from outright vilifying it. So, in March 2023, I embarked on a non-representative research exercise to learn how organizations misuse Jira from a team perspective, as I wanted to understand Jira anti-patterns. Read on to learn how a project management tool that is reasonably usable out of the box, without any modifications, turns into a bureaucratic nightmare, what the reasons for this might be, and what we can do about it.

The Organizational Rationale Behind Regulating Jira

Organizations might use Jira in restrictive ways for various reasons, although these reasons rarely align with the agile mindset. Some of them include the following:

Control and Oversight: Management might want to maintain control and supervision over a Scrum team's work, ensuring that the team follows established processes and guidelines. A desire for predictability and standardization across the organization can drive this.
Risk Aversion: Organizations may be risk-averse and believe tighter controls will help minimize risks and prevent project failures. This approach might stem from previous negative experiences or an insufficient understanding of agile principles.
Compliance and Governance: In some industries, organizations must adhere to strict regulatory and governance requirements. This can lead to a more controlled environment, with less flexibility to fully adopt agile practices.
Hierarchical Culture: Organizations with a traditional, hierarchical structure may have a top-down approach to decision-making. This culture can make it challenging to embrace agile principles, which emphasize team autonomy and self-organization.
Inadequate Understanding of Agile Frameworks Such as Scrum: Some organizations may not fully understand them or misconstrue them as lacking discipline or structure. This misunderstanding can result in excessive control to compensate for the perceived lack of process.
Metrics-Driven Management: Management might focus on measurable outputs, such as story points or velocity, to assess a Scrum team's performance. This emphasis on metrics can lead to prioritizing numbers over the actual value delivered to customers.
Resistance to Change: Organizations that have successfully used traditional project management methods may resist adopting agile practices. This resistance can manifest as imposing strict controls to maintain the status quo. After all, one purpose of any organization is to exercise resilience in the face of change.

While these reasons might explain why organizations use Jira in restrictive ways, curtailing the agile mindset and a Scrum team's autonomy or self-management will have negative consequences. For example, restrictive practices can reduce a team's ability to adapt to change, hinder collaboration, decrease morale, and diminish the customer value created. Agile practices, by contrast, promote flexibility, autonomy, and continuous improvement, which organizations undermine when imposing excessive control, for example, by mandating the use of Jira in a particular way.

Gathering Qualitative Data on Jira Anti-Patterns

I did not run a representative survey to gather qualitative data for this article. Instead, I addressed the issue in a LinkedIn post on March 16, 2023, that received almost 100 comments. I also ran a short, non-representative survey on Google Forms for about two weeks, which resulted in 21 contributions, using the following prompt:

"Jira has always been a divisive issue, particularly if you have to use Jira due to company policy. In my experience, Jira out of the box, without any modification or customization, is a proper tool. If everyone can do anything, Jira is okay despite its origin as a ticket accounting app. The problems appear once you start submitting Jira to customization, when roles are assigned and become subject to permissions. Then, everything starts going south. I want to aggregate these Jira anti-patterns and make them available to provide teams with a data-backed starting point for a fruitful discussion. Then, they could improve their use of the ticketing tool. Or abandon it for a better choice?"

Finally, I aggregated the answers to identify the most prevalent Jira anti-patterns among those who participated in the LinkedIn thread or the survey.

Categories of Jira Anti-Patterns

Aggregated, the effects of a mandated rigid Jira regime fall into four main categories:

Loss of Autonomy: Imposing strict controls on the Jira process can reduce a team's autonomy and hinder its ability to self-manage, a fundamental principle of agile development.
Reduced Adaptability: Strict controls may prevent the team from adapting its processes based on feedback or changing requirements, resulting in diminished value creation.
Bureaucracy: Increased oversight and control can introduce unnecessary bureaucracy, slowing the team down by creating unnecessary work or queues.
Misalignment With Agile Principles: Imposing external controls can create misalignment between the organization's goals and agile principles, potentially hindering teams from reaching their true potential and undermining the return on investment of an agile transformation.
Jira Anti-Patterns in Practice

The most critical Jira anti-patterns mentioned by the participants are as follows:

Overemphasis on Hierarchy: Using Jira to enforce a hierarchical structure, thus stifling collaboration, self-management, and innovation. For example, roles and permissions prevent some team members from moving tickets. Consequently, teams start serving the tool; the tool no longer supports the teams.
Rigid Workflows: Creating inflexible and over-complicated workflows that limit a Scrum team's ability to inspect and adapt. For example, every team has to adhere to the same global standard workflow, whether it fits or not.
Administration Permissions: Stripping teams of admin rights and outsourcing all Jira configuration changes to a nearshore contractor.
Micromanagement: Excessive oversight that prevents team members from self-managing, for example, by adding dates and timestamps to everything for reporting purposes.
Over-Customization: Customizing Jira to the point where it becomes confusing and difficult to use, for example, using unclear issue types or useless dashboards.
Over-Reliance on Tools: Relying on Jira to manage all aspects of the project and enforcing communication through Jira, thus neglecting the importance of face-to-face communication.
Siloed Teams: Using Jira to create barriers between teams, hindering collaboration and communication.
Turning Teams Into Groups of Individuals: Dividing Product Backlog items into individual tasks and sub-tasks defies the idea of teamwork, mainly because multiple team members cannot own tasks collectively.
Lack of Visibility I: Hiding project information or limiting access to essential details, reducing transparency.
Lack of Visibility II: Fostering intransparent communication, resulting from a need to bypass Jira to work effectively.
Fostering Scope Creep: Allowing the project scope to grow unchecked, as Jira is excellent at administering tasks of all kinds.
Prioritizing Velocity Over Quality: Emphasizing speed of delivery over the quality of the work produced. For example, there is no elegant way to integrate a team's Definition of Done.
Focus on Metrics Over Value: Emphasizing progress tracking and reporting instead of delivering customer value, for example, using prefabricated Jira reports instead of identifying usable metrics at the team level.
Inflexible Estimation: Forcing team members to provide overly precise task time estimates while lacking capabilities for probabilistic forecasting.

Some Memorable Quotes From Participants

There were some memorable quotes from the survey participants; all of them agreed to publication:

"Jira is a great mirror of the whole organization itself. It is a great tool (like many others) when given to teams, and it is a nightmare full of obstacles if given to old-fashioned management as an additional means of controlling and putting pressure on the team."

"The biggest but most generalized one is the attempt to standardize Jira across an org and force teams to adhere to processes that make management's life easier (but the teams' life more difficult). It usually results in the team serving Jira rather than Jira serving the team and prevents the team from finding a way of working or using the tool to serve their individual needs. This manifests in several ways: forcing teams to use company-managed projects (over team-managed ones), mandating specific transitions or workflows, requiring fields across the org, etc."

"Stripping project admins of rights, forcing every change to a field to be done by someone in a different timezone."

"The biggest anti-patterns I have seen in Jira involve over-complicating things for the sake of having workflows match how organizations currently (dys)function vs. organizations challenging themselves to simplify their processes. The other biggest anti-pattern is using Jira as a 'communication' device. People add notes, tag each other, etc., instead of having actual conversations with one another. Entering notes on a ticket to create a log of what work was completed, decisions made, etc., is incredibly appropriate, but the documentation of these items should be used to memorialize information from conversations. I can trace so many problems back to people saying things like, 'Everyone should know what to do; I put a note on the Jira ticket.'"

"Breaking stories up into individual tasks and sub-tasks destroys the idea of the team moving the ball down the court to the basket together."

"Developer: 'Hey, I've wanted to ask you some questions about the PBI I'm working on.' Stakeholder: 'I've already written everything in the task in Jira.'"

"Another anti-pattern is people avoiding Jira and coming directly to the team with requests, which makes the request 'covert' or 'black ops' work. Jira is seen as 'overhead' or 'paperwork.' If you think 'paperwork' is a waste of time, just skip the 'paperwork' the next time you go to the bathroom!"

"Implementing the tool without any data management policies in place, turning into hundreds of fields of all types (drop-down, free text, etc.). As an example, there are 40 different priority options alone. Make sure to have a business analyst create some data policies BEFORE implementing Jira."

"'A million fields': having hundreds of custom fields in tickets, sometimes with similar names, some with required values. I have seen tickets of type 'Task' with more than 300 custom fields."

"'Complex board filters with business rules': backlog items are removed from boards based on weird logic, for example, a checkbox 'selected for refinement.'"

How to Overcome Jira Anti-Patterns

When looking at the long list of Jira anti-patterns, the first thought that comes to mind is: what can we do to counter them?
Principally, there are two categories of measures:

Measures at the organizational level, which require the Scrum teams to join a common cause and work with middle managers and the leadership level.
Measures at the Scrum team level, which the team members can take autonomously without asking for permission or a budget.

Here are some suggestions on what to do about Jira anti-patterns in your organization:

Countermeasures at the Organizational Level

The following countermeasures at the organizational level require Scrum teams to join a common cause and work with middle managers and the leadership level:

Establish a Community of Practice and Promote Cross-Team Collaboration: Create a cross-functional community of practice (CoP) to share knowledge, experiences, and best practices related to Jira and agile practices.
Revisit Governance Policies: Work with management to review and adapt governance policies to better support agile practices such as Scrum and reduce unnecessary bureaucracy.
Train and Educate: Support middle managers and other stakeholders by providing training and educational resources to increase their understanding and adoption of agile principles.
Encourage Management Buy-In: Advocate for the benefits of "Agile" and demonstrate its value to secure management buy-in and reduce resistance to change.
Share Success Stories: Promote successes and improvements from agile practices and how Jira helped achieve them to inspire and motivate other teams and departments.
Foster a Culture of Trust: Work with leadership to promote a culture of trust, empowering Scrum teams to make decisions and self-manage.
Review Metrics and KPIs: Collaborate with management to review and adjust the metrics and KPIs used to evaluate team performance, prioritizing outcome-oriented customer value over output-based measures.
Customize Jira Thoughtfully: Engage with management and other Scrum teams to develop a shared understanding of how to customize Jira to support agile practices without causing confusion or adding complexity, while delivering value to customers and contributing to the organization's sustainability.
Address Risk Aversion: Work with leadership to develop a more balanced approach to risk management, embracing the agile mindset of learning and adapting through experimentation.

Countermeasures at the Scrum Team Level

Even if a Scrum team cannot customize Jira independently due to an organizational policy, there are some measures the team can embrace to minimize the impact of this impediment:

Improve Communication: Encourage open communication within the team and use face-to-face or video calls when possible to discuss work, reducing the reliance on Jira for all communications.
Adapt to Constraints: Find creative ways to work within the limitations of the Jira setup, such as using labels or comments to convey additional information or priorities, and share these techniques within the team.
Limit Work in Progress: Encourage team members to work on a limited number of tasks to balance workload and avoid task hoarding, even if the team cannot enforce WIP limits within Jira.
Emphasize Collaboration: Encourage a collaborative mindset within the team, promoting shared ownership of tasks and issues, although Jira does not technically support co-ownership.
Adopt a Team Agreement: Develop an agreement for using Jira effectively and consistently within the team. This Jira working agreement can help establish a shared understanding of best practices and expectations.

Conclusion

To use a metaphor, Jira reminds me of concrete: it depends on what you make out of it. Jira is reasonably usable when you use it out of the box without any modifications: no processes are customized, no rights and roles are established, and everyone can apply changes. On the other hand, there might be good reasons for streamlining the use of Jira throughout an organization. However, I wonder whether mandating a strict regime is the best way to accomplish this. Very often, this approach leads to the Jira anti-patterns mentioned above. So, when discussing how to use Jira organization-wide, why not consider an approach similar to the Definition of Done? Define the minimum set of standard Jira practices, get buy-in from the agile community to help promote this smallest common denominator, and leave the rest to the teams.

How are you using Jira in your organization? Please share your experience with us in the comments.
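The "a million fields" anti-pattern quoted above can be checked mechanically. The sketch below assumes field metadata in the shape returned by Jira's REST field listing (a list of objects with a `custom` boolean); the warning threshold is an arbitrary illustration, not a recommendation from the article.

```python
# A sketch of auditing a Jira instance for the "a million fields"
# anti-pattern. It assumes field metadata shaped like Jira's REST
# field list (each entry has a "custom" boolean); the threshold is
# an arbitrary illustration, not a value from the article.

def audit_custom_fields(fields, threshold=50):
    """Summarize custom fields and flag when they exceed the threshold."""
    custom = [f["name"] for f in fields if f.get("custom")]
    return {
        "custom_field_count": len(custom),
        "custom_fields": sorted(custom),
        "exceeds_threshold": len(custom) > threshold,
    }

# Example with a hypothetical field export:
sample = [
    {"name": "Summary", "custom": False},
    {"name": "Sprint", "custom": True},
    {"name": "Team Priority (old)", "custom": True},
]
report = audit_custom_fields(sample, threshold=1)
```

A report like this gives a team a data-backed starting point for the kind of discussion the survey prompt calls for, rather than arguing about the tool in the abstract.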
Agile vs. Scrum
By Deepali chadokar

Testing Level Dynamics: Achieving Confidence From Testing
By Stelios Manioudakis

Accelerating Enterprise Software Delivery Through Automated Release Processes in Scaled Agile Framework (SAFe)
By Praveen Kumar Mannam
Tracking Software Architecture Decisions

Maybe this sounds familiar: you join a new software engineering company, or move from your current team to a different one, and are asked to keep evolving an existing product. You realize that the solution uses an architectural pattern that is uncommon in your organization; let's say it applies event sourcing for persisting the state of the domain aggregates. Even if you like event sourcing, given the specific nature of the product, it most likely would not have been your first choice. As a software architect, you try to find the rationale behind that decision: you look for documentation without success and ask the software engineers, who do not have the answer you were looking for.

This situation can have a significant negative impact. Software architectural decisions are key and drive the overall design of the solution, impacting maintainability, performance, security, and many other "-ilities." There is no perfect software architecture decision: designing architectures is all about trade-offs, understanding their implications, sharing their impacts with the stakeholders, and having mitigations in place to live with them. Therefore, a well-established process for tracking those kinds of decisions is key to the success and proper evolution of a complex software product, even more so if the product is created in a highly regulated environment. However, today's software is designed and developed following agile practices, and frameworks like SAFe try to scale them for large solutions and large organizations. It is key to maintain a good balance between the decision-making process and agility, ensuring the former does not become an impediment to the latter.

Architecture Decision Records (ADRs)

My organization uses Architecture Decision Records (ADRs) to register and track architectural decisions. ADRs are a well-known tool with many different writing styles and templates, such as MADR.
The question now is how to ensure the ADRs are in place with the right level of governance. As we will see below, ADRs are written in Markdown and managed in a Git repository, where everybody can contribute, and a consensus shall be reached to accept them and move forward with the architecture decision. For that, we have created the following process:

First Swimlane

The first swimlane includes the team interactions that trigger architecture concerns requiring a supportive architecture decision. Those concerns come mainly from:

Product definition and refinement: At any level (e.g., epic, capability, feature, story), architecture concerns are identified. Those concerns shall be captured by the corresponding software architect.
Feedback from agile teams: Architecture decision-sharing sessions and inspect-and-adapt sessions (e.g., system demos, iteration reviews, and retrospectives) are moments where architecture concerns can be identified. It is key to understand whether agile teams are facing problems with the architecture decisions made so far; if the teams do not believe in the architecture, it will not be materialized.

Second Swimlane

The second swimlane involves mainly the architecture team and, optionally, technical leads from agile teams. This team meets regularly in an architecture sync meeting where the following steps are taken:

Step 1. Architecture backlog management: Concerns are registered in the architecture backlog as enablers, prioritized, and assigned. The first task associated with an enabler is creating the Architecture Decision Record.

Step 2. Gather inputs and perform the analysis: The architect assigned to the ADR gathers additional inputs from stakeholders, colleagues, and agile team members, working on spikes with the team to go deeper into the analysis when needed. During this state, the architect collaborates closely with the agile teams to perform the required analysis and evaluation of alternatives across the backlog enablers and spikes that might be needed.

Step 3. Work on the ADR: The outcome of the previous state is used to write the ADR, condensing the decision to be taken, the context around it, the alternatives assessed, the final decision, and the consequences, both positive and trade-offs. The ADR is created in the source control system, in our case GitHub, which has a main branch for accepted ADRs and a feature branch for each ongoing ADR.

Step 4. Publish the ADR: Once the ADR is ready for decision, a pull request is created, assigning all the relevant stakeholders as reviewers. Revision is not limited to architects but is open to a wider audience, such as software engineers from agile teams, product owners, and product managers.

Third Swimlane

The third swimlane's goal is agreeing on the decision under discussion. In this context, the ADR is reviewed by the architecture team during their regular meetings (e.g., architecture alignment/architecture board). Ideally, the decision shall be reached by consensus, but if an agreement is not reached in the expected timelines, the designated software architect (depending on the decision level, an enterprise architect, solution architect, or system architect) makes the final decision.

Step 5. Review the ADR in architecture alignment: The ADR owner gives a brief presentation of the ADR to their peers, who provide feedback until the next alignment.

Step 6. Collect and review comments: ADR reviewers add comments to the pull request, providing feedback to the ADR owner, who replies to the comments and applies the corresponding adjustments. This approach ensures all the concerns raised during the ADR definition are tracked and available for review at any time in the future by simply accessing the ADR's pull request.
Step 7 The designated software architect makes the final decision: This state is only needed if, for any reason, there is no agreement between architects and engineers. At some point there must be accountability for the decision, and this accountability resides with the corresponding software architect. Ideally, this state will not be needed, but it is also true that a decision cannot be delayed forever. Step 8 Involve stakeholders: It is bad news if you reach this state, which exists as a safeguard in case the decision taken by the architect is clearly wrong. Stakeholders are involved in the decision process to reevaluate the ADR and reach a final agreement. Step 9 Sign ADR: Once the ADR is accepted by the majority of reviewers, it is merged to main. From this point, the ADR becomes official, and the corresponding decision shall be realized by the engineering teams, leveraging the analysis and spikes performed in step 2. ADRs are immutable from this point on. Step 10 Supersede former decision: If the new decision replaces a previously accepted ADR, the old one can be modified to change its status to “Superseded,” indicating by which ADR it is replaced. Conclusion This process might look a bit cumbersome, but it should not take more than a few days to decide once the analysis phase (step 2) is completed. The pros of such a process outweigh the cons: a clear architecture decision history, easy to track in well-known tools (e.g., GitHub, GitLab), providing the highest value for a long-lasting solution. It is also important to note that this is a collaborative process that should help balance intentional architecture with emergent design through the involvement of agile team members in identifying architecture concerns, in the decision analysis phase, and in feedback sharing. I hope this can help you improve how architecture decisions are made and evolve. I am happy to hear from you in the comments!

By David Cano
What Will Come After Agile?

I think that probably most development teams describe themselves as being “agile,” and probably most development teams have standups and meetings called retrospectives. There is also a lot of discussion about “agile,” much written about “agile,” and many presentations about “agile.” A question that is often asked is: what comes after “agile”? Many testers work in “agile” teams, so this question matters to us. Before we can consider what comes after agile, we need to consider what agile is: an iterative, incremental development methodology. Agile teams develop software in iterations, and each iteration makes an increment toward the team’s goal. An agile team can decide, after an iteration or two, that the goal they are working towards should be changed and start to work on a new goal. Working iteratively makes the team agile, as it can change direction quickly and easily. There are several agile methodologies, and one of the most widely used is scrum. What Is Agile? When thinking about how to define agile, we tend to be drawn to the Agile Manifesto, which was created in 2001, but there were agile ways of working before the “Agile Manifesto.” The earliest iterative and incremental development I found was at Bell Telephone Labs in the 1930s. Walter Shewhart was an engineer at Bell Telephone Labs. In his lectures in the 1930s, he introduced the concept of a straight-line, three-step scientific process of specification, production, and inspection. He went on to revise this idea into a cycle. The creation of this cycle has been described as part of the evolution of the scientific method, and it became known as the Shewhart Cycle. The cycle is shown in the diagram below: The cycle is sometimes known as the Plan-Do-Study-Act cycle. A team using the Shewhart Cycle will Plan a change or test. The team will then Do, which means to carry out the change or test.
Then the team will Study the results of the change or test to consider what they have learned, before Acting on those learnings. The team will then repeat the cycle and move onward. W. Edwards Deming said that the cycle is a helpful procedure to follow for the improvement of anything in the production stage. He also said that at the end of a cycle the team might tear up the work that they had done previously and start again with fresh ideas, and that doing this was “a sign of advancement.” Deming said the reason to study was to “try to learn how to improve tomorrow’s product.” Sometimes the Deming Cycle is referred to as the Plan-Do-Check-Act cycle. Deming did not like replacing the word study with the word check, as studying is an important part of the cycle. He felt that the word check was inaccurate because it meant to “hold back.” The Shewhart Cycle was included by Deming in his lectures to senior management in Japan in 1950, and the cycle went into use in Japan as the Deming Cycle. What Is the Deming Cycle? The Deming Cycle has also been described as the Deming Wheel, as it just rolls on without a beginning or an end. All four parts of the Deming Cycle can be drawn inside a circle, which means that the four parts of the cycle are related to one another and that there is no hierarchy, as can be seen in the diagram below: Scrum is one of the most widely used agile methodologies, and Jeff Sutherland, one of the co-creators of scrum, has written that the Deming Cycle is how scrum development is done. He also says that retrospectives are the “check” part of the “Plan-Do-Check-Act cycle,” and says that it is important to get the team to change and improve their process by moving on to the act part of the cycle. It is useful for software testers that retrospectives were designed to be used in this way, as we want to help the teams we work with to improve quality. Testers can use retrospectives to raise issues that help to improve quality.
Sutherland says that he trains people to use Scrum by asking them to use the Deming Cycle to build paper airplanes, and that by the third iteration they are making much better paper airplanes. The Deming Cycle is the heart of agile, as it is a cycle that enables teams to change and improve quickly. The cycle enables change to be made at each iteration. However, is this how agile is understood? Do we sometimes work in teams that describe themselves as “agile” but do not use the Deming Cycle? Is “agile” sometimes described through its ceremonies rather than through using the cycle? Are teams using “agile” for continual improvement as Deming and Sutherland recommended? New ideas, such as Jobs to be Done, continue to be influenced by the Deming Cycle. Alan Klement describes the system of progress in Jobs to be Done as a cycle and says that his cycle is not an original idea, as it comes from the Deming Cycle. Lean has also been influenced by the Deming Cycle. Lean is an American description of Japanese production systems and comes from a study by MIT, in which the Toyota Production System was of special interest. Deming worked in Japan after World War Two, where he helped to rebuild the Japanese economy. Jeffrey K. Liker says that “the Deming cycle embodies the learning cycle in the Toyota Production System.“ Teams, and testers, can develop their understanding of the cycle by reading the books in the references below, by using the resources of the Deming Institute, and by using the Deming Cycle. Teams can learn to use the cycle by planning an initiative, then carrying out the planned work or test, then studying the result of their work, and then acting on what they learned before repeating the cycle. Testers can help their teams to gain an understanding of the Deming Cycle by using plan-do-study-act for testing.
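That plan-do-study-act testing loop can also be sketched in code. The following is an illustrative toy model, not a real testing framework; the class, the charter text, and the "test" being run are all invented for the example:

```python
from dataclasses import dataclass, field


@dataclass
class PDSACycle:
    """A toy model of one Plan-Do-Study-Act iteration for a test charter."""
    charter: str                                   # Plan: what we intend to test
    learnings: list = field(default_factory=list)

    def do(self, test):
        """Do: carry out the planned test and return the raw result."""
        return test()

    def study(self, result, expected):
        """Study: compare what happened with what we expected, and record it."""
        observation = "as expected" if result == expected else f"surprise: got {result!r}"
        self.learnings.append(observation)
        return observation

    def act(self):
        """Act: decide the next step based on what was learned."""
        return "refine charter" if any("surprise" in l for l in self.learnings) else "move on"


# One iteration of the cycle for a trivial "test":
cycle = PDSACycle(charter="Check that the discount rounds to 2 decimals")
result = cycle.do(lambda: round(19.999, 2))   # Do
cycle.study(result, expected=20.0)            # Study
print(cycle.act())                            # Act
```

The point is the shape of the loop, not the code: each iteration ends by acting on what was studied, and the next iteration's plan starts from there.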
When we test, we plan the test (for example, by writing a testing charter), then perform the software testing, then study the result of the test, and then act on the result, as shown in the diagram below: Teams should not be put off by the new structure the Deming Cycle creates: a team using the Deming Cycle must plan first, then do the work or test that they have planned, then study the effect of the work or test, and then act on what the team has learned. Using the Deming Cycle can sound demanding, as it places a new structure on the team. However, all teams have structures that place constraints on them. If a team always has its planning meeting on a certain day of the week, this practice places a constraint on the team. How often a team releases its work also puts a constraint on the team. If a team releases once a month, then that monthly release will force the team to work towards that release. If a team releases many times a day with continuous delivery, then that will create a different constraint for the team. All teams want to improve how they work and improve their product, and they will find that using the Deming Cycle will help them to improve their processes and product. Undoubtedly, there will be something after “agile.” It will have a new name, and I guess it will have to have new “ceremonies.” However, will the Deming Cycle be replaced by what replaces agile? The Deming Cycle is a profound philosophical insight that has been used by engineering teams to improve quality for nearly one hundred years and is continuing to influence new ideas. It seems unlikely that the Deming Cycle will be replaced by what comes after agile, because it was so innovative, so useful, and is still being used after so many years.
It would be great if the new way of working that comes after agile created a deeper understanding of the Deming Cycle, as this would help teams to learn, improve how they work, and improve the products they make.

By Mike Harris
How Agile Architecture Spikes Are Used in Shift-Left BDD

In agile methodologies, an architecture spike is a software development technique that originates in the Extreme Programming offshoot of agile. It boils down to determining how much effort is needed to solve a particular problem or discover a workaround for an existing software issue. So, let us explore the benefits and see how these spikes can help improve quality and make testing easier by shifting our attention to the left: challenging the specification at a very early phase, asking questions, and getting the ground ready for sensible software architecture, which will, in turn, improve the testability of our application under test. More Details About Spikes There are many benefits of spiking: getting to know the unknown unknowns, discovering risks, reducing complexity, and providing evidence to prove or disprove a theory. Taking a deep dive into the idea behind a solution can help us better understand the potential architectural approaches and how likely they are to work. A spike is not there to provide a finished working product or even an MVP. Its purpose is mainly to test a theory, so even though this concept is used (in the long run) to produce working software, the code written for spikes is often discarded after it has served its purpose. Spiking is usually done by ignoring architecture styles (which might seem odd at first, as spiking can help discover the right architectural approaches for the system we are building), coding styles, design patterns, and general clean coding practices in favor of speed. Even though the spike may not directly produce software that will be delivered to the customer, in the long run, it still helps us ship better code. Spiking is a good tool for handling risks by discovering unknown risks, and it provides a great way to learn and reduce complexity. A very common approach is to come up with spikes around a theory and follow the code with a small number of tests.
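As an illustration of "spike plus a few tests," a throwaway spike checking the theory "we can do basic CRUD against a local database from application code" might be no bigger than the sketch below. Python's built-in sqlite3 stands in for whatever database and stack the real spike would target; the table and data are invented:

```python
import sqlite3

# Throwaway spike: prove basic CRUD works end to end against a local DB.
# Deliberately quick and dirty -- no error handling, no patterns, no cleanup.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE profiles (user TEXT PRIMARY KEY, photo BLOB)")

db.execute("INSERT INTO profiles VALUES (?, ?)", ("alice", b"\x89PNG-bytes"))  # Create
photo = db.execute(
    "SELECT photo FROM profiles WHERE user = ?", ("alice",)
).fetchone()[0]                                                                # Read
db.execute("UPDATE profiles SET photo = ? WHERE user = ?", (b"new", "alice"))  # Update
db.execute("DELETE FROM profiles WHERE user = ?", ("alice",))                  # Delete

# The small number of tests that ride along with the spike:
assert photo == b"\x89PNG-bytes"
assert db.execute("SELECT COUNT(*) FROM profiles").fetchone()[0] == 0
```

The code is discardable by design; what survives is the answer to the question the spike asked.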
Even though spikes are seen as discardable code, we don’t just throw them aside. While they don’t end up in the actual code that gets delivered, they provide insights and can serve as documentation to show how a solution was reached. A Simple Example Let us assume we need to develop a new feature that allows users to save a photo to their profile. To do that, a developer can create a spike where the following could be done: Have the JavaScript on the frontend communicate with the database: Set up a database server locally. Set up Node.js (or another server). Use the ODBC (Open Database Connectivity) API to connect to the DB. Test the spike: Run a few sample queries. Test the CRUD functionality. What is mentioned in this simple example is all we need for a spike; it does not require any detailed documentation. The developer working on a spike will need to do some Googling, run a few commands from the terminal, and write a few lines of code for each theory. The spike provides a possible direction for solving the challenge at hand; it can also include links to the resources used, install scripts, and the produced code to be used as a blueprint. Trying things out is far more beneficial than simply theorizing about things. The team was able to reduce the risk related to this feature (in this example, especially on the technical integration side) and even discovered new risks, such as accessing the DB using local JS! How Does This Impact Testing? Exploring with spikes helps us identify the unknown unknowns, so, in a sense, spikes are a tool for early testing (used often when shifting testing to the left). By getting answers about what works and what will not work, we avoid many potential issues and delays by probing the requirements to distill them further. In turn, there are fewer bugs to report, fix, verify, and keep track of. Also, the earlier the testing is done, the faster and more economical it will be. Can QA Use Spikes?
There is no real reason why not. I have seen testers use spikes to try out and experiment with different approaches to automating some parts of the system under test to determine the best approach. An architecture spike can help us try out different testing tools, such as new frameworks and libraries, and give us first-hand experience of how a tool would behave with a system (when we try to automate some business rule, for example). Spikes are generally regarded as technical tasks (as opposed to user stories), usually under an epic that is in the early development stages. Conclusion Spikes in agile are one of the tools that allow us to do what agile is intended to do in the first place: short, quick feedback cycles give us answers early in the development process. We focus on doing and trying instead of long, overly detailed planning. That is not to say that code architecture does not matter in agile (as we know, in waterfall, architecture is very important and is usually done in the design phase); in agile, we use a different approach. Agile practices, such as spikes, allow us to get an idea about the architectural solutions that may work and information about the ones that may not. Software produced in the above-mentioned manner helps us reduce risk in our user stories and enables the team to discover the right solutions using collaboration, constructive discussion, frequent experimentation, and compromise. In an informal sense, a lot of people happen to be using spikes without even realizing it! As long as you are trying to identify the unknown unknowns, have short feedback cycles, and try to determine technical and functional risks, you are doing agile. Spikes will help us in situations where we are not certain about the requirements and where there are a lot of unknowns and questions that need answers.

By Mirza Sisic
Observability-Driven Development vs Test-Driven Development

The concept of observability involves understanding a system’s internal states through the examination of logs, metrics, and traces. This approach provides a comprehensive system view, allowing for a thorough investigation and analysis. While incorporating observability into a system may seem daunting, the benefits are significant. One well-known example is PhonePe, which experienced 2000% growth in its data infrastructure and a 65% reduction in data management costs after implementing a data observability solution. This helped mitigate performance issues and minimize downtime. The impact of Observability-Driven Development (ODD) is not limited to just PhonePe. Numerous organizations have experienced the benefits of ODD, with a 2.1 times higher likelihood of issue detection and a 69% improvement in the mean time to resolution. What Is ODD? Observability-Driven Development (ODD) is an approach that shifts observability left, to the earliest stage of the software development life cycle. It uses trace-based testing as a core part of the development process. In ODD, developers write code while declaring the desired outputs and the specifications needed to view the system’s internal state and processes. This applies both at the component level and to the system as a whole. ODD also serves to standardize instrumentation across programming languages, frameworks, SDKs, and APIs. What Is TDD? Test-Driven Development (TDD) is a widely adopted software development methodology that emphasizes writing automated tests prior to coding. The TDD process involves defining the desired behavior of software through the creation of a test case, running the test to confirm its failure, writing the minimum necessary code to make the test pass, and refining the code through refactoring. This cycle is repeated for each new feature or requirement, and the resulting tests serve as a safeguard against potential future regressions.
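One red-green cycle of that kind can be made concrete. A minimal, hypothetical illustration using Python's built-in unittest; the function `add_vat` and its rates are invented for the example:

```python
import unittest

# Red: this test class was written first and failed, because add_vat did not exist.
# Green: the minimum code below makes the tests pass.
def add_vat(net: float, rate: float = 0.20) -> float:
    """Return the gross price after applying VAT at the given rate."""
    return round(net * (1 + rate), 2)

class TestAddVat(unittest.TestCase):
    def test_default_rate(self):
        self.assertEqual(add_vat(100.0), 120.0)

    def test_zero_rate(self):
        self.assertEqual(add_vat(100.0, rate=0.0), 100.0)

if __name__ == "__main__":
    # Refactor: with the tests green, the code can be reshaped and re-verified.
    unittest.main(exit=False)
```

Each new requirement repeats the loop, and the accumulated tests guard against regressions.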
The philosophy behind TDD is that writing tests compels developers to consider the problem at hand and produce focused, well-structured code. Adherence to TDD improves software quality and requirement compliance and facilitates the early detection and correction of bugs. TDD is recognized as an effective method for enhancing the quality, reliability, and maintainability of software systems. Comparison of Observability-Driven and Test-Driven Development Similarities Observability-Driven Development (ODD) and Test-Driven Development (TDD) both strive to enhance the quality and reliability of software systems. Both methodologies aim to ensure that software operates as intended, minimizing downtime and user-facing issues while promoting a commitment to continuous improvement and monitoring. Differences Focus: The focus of ODD is to continuously monitor the behavior of software systems and their components in real time to identify potential issues and understand system behavior under different conditions. TDD, on the other hand, prioritizes detecting and correcting bugs before they cause harm to the system or users and verifies software functionality to meet requirements. Time and resource allocation: Implementing ODD requires a substantial investment of time and resources for setting up monitoring and logging tools and infrastructure. TDD, in contrast, demands a significant investment of time and resources during the development phase for writing and executing tests. Impact on software quality: ODD can significantly impact software quality by providing real-time visibility into system behavior, enabling teams to detect and resolve issues before they escalate. TDD also has the potential to significantly impact software quality by detecting and fixing bugs before they reach production. However, if tests are not comprehensive, bugs may still evade detection, potentially affecting software quality.
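To make the difference in focus tangible, here is a hedged sketch of ODD-style instrumentation: the code declares the internal states it wants observable as structured events, alongside the logic itself. Plain JSON logging stands in for a real telemetry SDK such as OpenTelemetry; all function, event, and field names are invented:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("checkout")

def emit(event: str, **fields):
    """Emit one structured, machine-parseable event (stand-in for real tracing)."""
    log.info(json.dumps({"event": event, "ts": time.time(), **fields}))

def charge(order_id: str, amount_cents: int) -> bool:
    # The desired observable states are declared up front, next to the logic:
    emit("charge.started", order_id=order_id, amount_cents=amount_cents)
    ok = amount_cents > 0              # placeholder for a real payment call
    emit("charge.finished", order_id=order_id, ok=ok)
    return ok

charge("ord-42", 1999)
```

Where a TDD test asserts behavior before release, these events let the team ask the same questions of the running system afterwards.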
Moving From TDD to ODD in Production Moving from a Test-Driven Development (TDD) methodology to an Observability-Driven Development (ODD) approach in software development is a significant change. For several years, TDD has been the established method for testing software before its release to production. While TDD provides consistency and accuracy through repeated tests, it cannot provide insight into the performance of the entire application or the customer experience in a real-world scenario. The tests conducted through TDD are isolated and do not guarantee the absence of errors in the live application. Furthermore, TDD relies on a consistent production environment for conducting automated tests, which is not representative of real-world scenarios. Observability, on the other hand, is an evolved version of TDD that offers full-stack visibility into the infrastructure, application, and production environment. It identifies the root cause of issues affecting the user experience and product release through telemetry data such as logs, traces, and metrics. This continuous monitoring and tracking help predict the end user’s perception of the application. Additionally, with observability, it is possible to write and ship better code before it reaches source control, as observability becomes part of the team's tools, processes, and culture. Best Practices for Implementing ODD Here are some best practices for implementing Observability-Driven Development (ODD): Prioritize observability from the outset: Start incorporating observability considerations in the development process right from the beginning. This will help you identify potential issues early and make necessary changes in real time. Embrace an end-to-end approach: Ensure observability covers all aspects of the system, including the infrastructure, application, and end-user experience.
Monitor and log everything: Gather data from all sources, including logs, traces, and metrics, to get a complete picture of the system’s behavior. Use automated tools: Utilize automated observability tools to monitor the system in real time and alert you to any anomalies. Collaborate with other teams: Work with teams such as DevOps, QA, and production to ensure observability is integrated into the development process. Continuously monitor and improve: Regularly monitor the system, analyze data, and make improvements as needed to ensure optimal performance. Embrace a culture of continuous improvement: Encourage the development team to continuously monitor and improve the system. Conclusion Both Observability-Driven Development (ODD) and Test-Driven Development (TDD) play an important role in ensuring the quality and reliability of software systems. TDD focuses on detecting and fixing bugs before they can harm the system or its users, while ODD focuses on monitoring the behavior of the software system in real time to identify potential problems and understand its behavior in different scenarios. Did I miss any important information? Let me know in the comments section below.

By Hiren Dhaduk
How Does SAFe Differ From LeSS?

During a practice meeting at my organization, a team member mentioned taking a class on LeSS (Large-Scale Scrum). Many questions were asked as to how LeSS differed from SAFe. I volunteered to present a comparison in a later meeting. The larger Agile community might benefit from this information as well. The article below attempts to answer the following questions: What are the differences? Why do companies choose one over the other? How do the roles differ? How do the events differ? How do the certifications differ? What percentage of organizations use SAFe vs. LeSS? Does the organizational structure differ? What are the pros and cons of implementation? What is the average cost and time of education? What is the average time to fully implement? When was SAFe vs. LeSS published? Geographically, where is SAFe vs. LeSS being adopted? What Are the Differences Between SAFe and LeSS Frameworks? SAFe (Scaled Agile Framework) and LeSS (Large-Scale Scrum) are both frameworks used for scaling Agile practices to large organizations, but they have different approaches and principles. SAFe emphasizes a more prescriptive approach, providing detailed guidance and structure for implementing Agile at scale. For example, SAFe defines three levels of planning and execution: portfolio, program, and team, and offers specific roles, artifacts, and ceremonies for each level. It also includes Lean-Agile principles, such as Lean systems thinking, Agile development, and Lean portfolio management. On the other hand, LeSS emphasizes simplicity and adapting to each organization's unique context. It promotes a single-team mindset, emphasizing that all teams should work towards a shared goal and collaborate closely. LeSS defines two frameworks: basic LeSS, for up to eight teams, and LeSS Huge, which can support up to thousands of team members. Why Do Companies Choose One Over the Other?
The choice between SAFe and LeSS depends on several factors, such as the organization's size, culture, and goals. For example, companies with a more traditional management culture that want a more prescriptive approach may prefer SAFe. In contrast, those with a more Agile mindset and a desire for more flexibility may prefer LeSS. SAFe is generally better suited for larger organizations, while LeSS may be more appropriate for smaller or mid-sized organizations. Ultimately, the decision between SAFe and LeSS should be based on the organization's specific needs and goals and involve carefully considering and evaluating both frameworks.

How Do the Roles Differ From SAFe to LeSS?

SAFe roles, by level:

Portfolio level:
- Portfolio Manager: Responsible for setting the strategic direction of the organization.
- Enterprise Architect: Responsible for defining the technical direction of the organization.
- Epic Owner: Responsible for defining the business value and prioritization of epics.

Program level:
- Release Train Engineer (RTE): Responsible for coordinating and facilitating the Agile Release Train (ART).
- Product Owner (PO): Responsible for defining the product vision and priorities.
- Scrum Master (SM): Responsible for coaching the team and facilitating the Scrum process.
- Agile Team: The cross-functional team responsible for delivering value.

Team level:
- Product Owner (PO): Responsible for defining and prioritizing user stories.
- Scrum Master (SM): Responsible for coaching the team and facilitating the Scrum process.
- Development Team: The cross-functional team responsible for delivering user stories.

LeSS roles:

Key roles:
- Product Owner: Responsible for maximizing the value of the product and managing the product backlog.
- Scrum Master: Responsible for facilitating the Scrum process and removing impediments.
- Development Team: The cross-functional team responsible for delivering the product.

Other roles:
- Area Product Owner: Responsible for managing the product backlog for a specific area of the product.
- Chief Product Owner: Responsible for coordinating the work of multiple Product Owners across the organization.

How Do the Events Differ From SAFe to LeSS?

In SAFe, there are three levels of planning and execution (Portfolio, Program, and Team), and each level has its own set of events.

Portfolio level:
- Portfolio Kanban: Visualize and manage the flow of epics and features across the organization.
- Portfolio Sync: Regular meetings to align the portfolio backlog with the organization's strategy.
- Portfolio Review: Meeting to review progress and adjust the portfolio backlog.

Program level:
- Program Increment (PI) Planning: Two-day planning event where teams plan the work for the next Program Increment.
- Daily Stand-up: Daily meeting where teams synchronize their work and identify any obstacles.
- Iteration Review: Meeting to review progress and demonstrate the working software.
- Iteration Retrospective: Meeting to reflect on the previous iteration and identify areas for improvement.

Team level:
- Sprint Planning: Meeting where the team plans the work for the upcoming Sprint.
- Daily Stand-up: Daily meeting where team members synchronize their work and identify any obstacles.
- Sprint Review: Meeting to review progress and demonstrate the working software.
- Sprint Retrospective: Meeting to reflect on the previous Sprint and identify areas for improvement.

In LeSS, the key events are:
- Sprint Planning: A meeting where the team plans the work for the upcoming Sprint.
- Daily Scrum: A daily meeting where team members synchronize their work and identify any obstacles.
- Sprint Review: A meeting to review progress and demonstrate the working product.
- Sprint Retrospective: A meeting to reflect on the previous Sprint and identify areas for improvement.
- Overall Retrospective: A meeting to reflect on the overall progress of the organization.
- Sprint Review (Whole Group): A meeting where multiple teams come together to review progress and demonstrate their work.
- Sprint Planning (Whole Group): A meeting where multiple teams come together to plan their work for the upcoming Sprint.

SAFe and LeSS have similar events, such as Sprint Planning, Daily Stand-up, Sprint Review, and Sprint Retrospective. However, SAFe also includes additional events such as Portfolio Kanban, Portfolio Sync, and PI Planning, while LeSS includes events such as the Overall Retrospective and the whole-group Sprint Review. The choice of events will depend on the specific needs of the organization and the scale of the Agile implementation.

How Do the Certifications Differ From SAFe to LeSS?

Both frameworks offer certifications to help practitioners develop their skills and knowledge. Here are some key differences between the certifications offered by SAFe and LeSS:

SAFe:
- Certification levels: Agilist, Practitioner, Program Consultant, Product Owner/Product Manager, Scrum Master, Advanced Scrum Master, Lean Portfolio Manager, Release Train Engineer, DevOps Practitioner, Architect, Agile Product Manager, Government Practitioner, and Agile Software Engineer.
- Focus: Implementing agile practices in large organizations using a framework that integrates several agile methodologies.
- Approach: A top-down approach to implementing agile practices at scale, with a prescribed framework and set of practices.
- Requirements: Candidates complete a two-day training course and pass an online exam.
- Community: A large and active community of practitioners and trainers, with numerous resources available for certification candidates.

LeSS:
- Certification levels: LeSS Practitioner (CLP), LeSS for Executives (CLFE), and LeSS Basics (CLB).
- Focus: Applying Scrum practices exclusively to large-scale projects.
- Approach: A more flexible approach, emphasizing the need to adapt Scrum practices to the specific needs of the organization.
- Requirements: Candidates attend a three-day training course, pass an online exam, and demonstrate practical experience applying LeSS practices.
- Community: A smaller community of practitioners and trainers, but one that is growing quickly and offers a supportive and engaged network.

Overall, the certifications offered by SAFe and LeSS differ in their focus, approach, and requirements. However, both frameworks offer valuable tools and practices for implementing agile at scale, and certification can help practitioners develop their skills and knowledge in this area. What Percentage of Organizations Use SAFe vs. LeSS? There is no definitive answer to what percentage of organizations use SAFe vs. LeSS, as there is no publicly available data on this topic. It can vary depending on factors such as industry, size of the organization, and geographical location. However, according to some surveys and reports, SAFe is more widely adopted than LeSS. For example, the 14th Annual State of Agile Report by VersionOne found that SAFe was the most popular scaling framework, used by 30% of respondents, while LeSS was used by 6%. Similarly, a survey by Agile Alliance found that SAFe was the most used scaling framework, used by 29% of respondents, while LeSS was used by 6%. It's worth noting that both SAFe and LeSS have their proponents and critics, and the choice of scaling framework depends on various factors, including the organization's goals, culture, and context. Therefore, it's essential to evaluate each framework's strengths and weaknesses and choose the one that best fits the organization's needs. Is the Organizational Structure Different Between SAFe and LeSS? Yes, the organizational structure in SAFe and LeSS can differ in some ways. However, both frameworks are designed to help large organizations scale Agile principles and practices.
In SAFe, the framework is designed around three levels of organizational structure:

• Team Level: Cross-functional Agile teams work together to deliver value, following the principles of Scrum or Kanban.
• Program Level: Agile teams work together to deliver larger initiatives, called Agile Release Trains (ARTs), aligned with the organization's strategic goals.
• Portfolio Level: Strategic planning and governance align the organization's initiatives and investments with its long-term objectives.

LeSS, by contrast, is:

• Design: Built around the principles of Scrum, with a focus on simplicity and minimizing unnecessary bureaucracy
• The Framework: Encourages organizations to adopt a flat, decentralized organizational structure where all teams work together as part of a single product development effort
• Organization: Organized around a product, rather than a functional or departmental structure, to foster collaboration and focus on delivering value to customers

Overall, while both SAFe and LeSS are designed to help organizations scale Agile practices, they take different approaches to organizational structure, with SAFe being more hierarchical and LeSS emphasizing a flatter, decentralized structure. How Does the Organizational Structure Between SAFe and LeSS Differ? While both SAFe and LeSS are designed to help organizations scale Agile practices, they have different approaches to organizational structure, and how they address organizational change can differ. SAFe: Emphasizes a virtual reporting structure, where Agile teams are organized into Agile Release Trains (ARTs), virtual teams that work together to deliver value. The ARTs are aligned with the organization's strategic goals and have clear accountability for the value they deliver. SAFe encourages organizations to keep the existing reporting structure in place but to establish new roles and responsibilities that support Agile practices.
LeSS: Emphasizes a physical organizational change, where organizations restructure themselves around products or product lines rather than functional or departmental silos. It recommends that organizations adopt a flat, decentralized structure, with all teams working as part of a single product development effort. LeSS holds that this physical reorganization is essential to break down barriers and silos between teams and to foster collaboration and innovation. While both SAFe and LeSS can require some organizational change, they address it differently: SAFe emphasizes a virtual reporting structure, while LeSS emphasizes a physical organizational change to break down silos and foster collaboration. What Are the Pros and Cons of Implementing SAFe vs. LeSS? Each framework has key advantages and disadvantages:

SAFe pros:
• Provides a structured approach to scaling Agile practices to larger organizations
• Offers a comprehensive framework with multiple layers of management and control, which can help manage complexity and align the organization's initiatives with its strategic goals
• Provides a standardized vocabulary and set of practices, which can facilitate communication and collaboration between teams

SAFe cons:
• Implementation can be complex and challenging, particularly for organizations that have not yet adopted Agile practices.
• Some Agile practitioners may perceive it as too hierarchical and bureaucratic.
• Implementation can be expensive, particularly if the organization needs to train many people.
LeSS pros:
• Emphasizes simplicity and decentralized decision-making, which can foster collaboration, innovation, and continuous improvement
• Encourages a flat, cross-functional organizational structure, which can help break down silos and improve communication and collaboration between teams
• Offers a flexible framework that can be adapted to the organization's specific needs and context

LeSS cons:
• May require significant organizational change, which can be difficult and time-consuming
• Organizations that prefer more standardized practices may perceive it as too loose and unstructured.
• May require a higher level of maturity and expertise in Agile practices to implement effectively

The choice between SAFe and LeSS depends on the organization's specific needs, context, and goals. SAFe may be a better fit for organizations that need a more structured approach to scaling Agile practices, while LeSS may be a better fit for organizations that prioritize flexibility, collaboration, and continuous improvement. What Is the Average Cost and Time of Education for SAFe vs. LeSS? The cost and time of education for SAFe vs. LeSS can vary depending on several factors, such as the level of certification or training, the location, and the training provider. However, here are some general estimates based on the most common training programs:

SAFe:
• Agilist: $995 to $1,295, 2-3 days
• Program Consultant (SPC): $3,995 to $4,995, 4-5 days
• Product Owner/Product Manager (POPM): $995 to $1,295, 2 days

LeSS:
• LeSS Practitioner (CLP): $1,500 to $3,500, 3 days
• LeSS for Executives (CLFE): $500 to $1,500, 1 day
• LeSS Basics (CLB): $500 to $1,500, 1 day

It's important to note that these estimates are only general guidelines, and the actual cost and time of education can vary depending on several factors.
Organizations may also incur additional costs for implementing SAFe or LeSS, such as hiring consultants or trainers, purchasing tools or software, and investing in infrastructure and resources to support Agile practices. What Is the Average Time to Fully Implement SAFe vs. LeSS? The time to fully implement SAFe or LeSS can vary depending on several factors, such as the size and complexity of the organization, the level of experience with Agile practices, and the level of commitment from leadership and teams. However, here are some general estimates based on the most common implementation programs:

• SAFe Implementation Roadmap (12-24 months): Provides a step-by-step guide for implementing SAFe in an organization. The roadmap includes several milestones, such as setting up Agile teams, establishing a portfolio management process, and aligning the organization's strategy with its Agile initiatives.
• LeSS Implementation Guide (6-12 months): Provides guidance on implementing LeSS in an organization. The guide includes several steps, such as forming cross-functional teams, creating a shared product backlog, and establishing a continuous improvement process.

It's important to note that these estimates are only general guidelines, and the actual time to fully implement SAFe or LeSS can vary depending on several factors. Additionally, organizations may implement these frameworks in phases, starting with a pilot project or a specific business unit and gradually expanding to other parts of the organization. This phased approach can help manage the complexity and risk of implementing Agile practices at scale. When Was SAFe vs. LeSS Published?
• SAFe: Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise (2011), by Dean Leffingwell
• LeSS: Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum (2010), by Craig Larman and Bas Vodde

Since their initial publication, SAFe and LeSS have evolved and expanded to incorporate new ideas, best practices, and feedback from the Agile community. Today, both frameworks have a significant following and are widely used by organizations worldwide. Geographically, Where Is SAFe vs. LeSS Being Adopted?

• SAFe: Strong presence in the United States; also deployed in Europe, Asia, and Australia
• LeSS: Strong presence in Europe; also deployed in the United States, Asia, and Australia

Both frameworks have been translated into multiple languages, and active communities of users and practitioners exist worldwide. However, adoption of either framework may depend on factors such as the local business culture, regulatory environment, and availability of trained professionals. References

• "15 Bureaucratic Leadership Style Advantages and Disadvantages"
• Leffingwell, Dean. Agile Software Requirements: Lean Requirements Practices for Teams, Programs, and the Enterprise.
• Petrini, Stefano, and Jorge Muniz. "Scrum Management Approach Applied in Aerospace Sector." IIE Annual Conference. Proceedings, Institute of Industrial and Systems Engineers (IISE), Jan. 2014, p. 434.
• Larman, Craig, and Bas Vodde. Scaling Lean & Agile Development: Thinking and Organizational Tools for Large-Scale Scrum.
• "Scrum Fundamentals Certified Exam Answers." Everything Trending.

By Dr. Thomas Baxter
Key Elements of Site Reliability Engineering (SRE)

Site Reliability Engineering (SRE) is a systematic and data-driven approach to improving the reliability, scalability, and efficiency of systems. It combines principles of software engineering, operations, and quality assurance to ensure that systems meet performance goals and business objectives. This article discusses the key elements of SRE, including reliability goals and objectives, reliability testing, workload modeling, chaos engineering, and infrastructure readiness testing. The importance of SRE in improving user experience, system efficiency, scalability, and reliability, and in achieving better business outcomes, is also discussed. Site Reliability Engineering (SRE) is an emerging field that seeks to address the challenge of delivering high-quality, highly available systems. SRE is a proactive and systematic approach to reliability optimization characterized by the use of data-driven models, continuous monitoring, and a focus on continuous improvement. SRE blends software engineering and IT operations, applying the principles of DevOps with a focus on reliability. The goal of SRE is to automate repetitive tasks and to prioritize availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. The benefits of adopting SRE include increased reliability, faster resolution of incidents, reduced mean time to recovery, improved efficiency through automation, and increased collaboration between development and operations teams. In addition, organizations that adopt SRE principles can improve their overall system performance, increase the speed of innovation, and better meet the needs of their customers. SRE 5 Whys 1. Why Is SRE Important for Organizations?
SRE is important for organizations because it ensures high availability, performance, and scalability of complex systems, leading to improved user experience and better business outcomes. 2. Why Is SRE Necessary in Today's Technology Landscape? SRE is necessary in today's technology landscape because systems and infrastructure have become increasingly complex and prone to failures, and organizations need a reliable and efficient approach to manage these systems. 3. Why Does SRE Involve Combining Software Engineering and Systems Administration? SRE involves combining software engineering and systems administration because both disciplines bring unique skills and expertise to the table. Software engineers have a deep understanding of how to design and build scalable and reliable systems, while systems administrators have a deep understanding of how to operate and manage these systems in production. 4. Why Is Infrastructure Readiness Testing a Critical Component of SRE? Infrastructure readiness testing is a critical component of SRE because it ensures that the infrastructure is prepared to support the desired system reliability goals. By testing the capacity and resilience of infrastructure before it is put into production, organizations can avoid critical failures and improve overall system performance. 5. Why Is Chaos Engineering an Important Aspect of SRE? Chaos engineering is an important aspect of SRE because it tests the system's ability to handle and recover from failures in real-world conditions. By proactively identifying and fixing weaknesses, organizations can improve the resilience and reliability of their systems, reducing downtime and increasing confidence in their ability to respond to failures. Key Elements of SRE

• Reliability Metrics, Goals, and Objectives: Defining the desired reliability characteristics of the system and setting reliability targets.
• Reliability Testing: Using reliability testing techniques to measure and evaluate system reliability, including disaster recovery testing, availability testing, and fault tolerance testing.
• Workload Modeling: Creating mathematical models to represent system reliability, including Little's Law and capacity planning.
• Chaos Engineering: Intentionally introducing controlled failures and disruptions into production systems to test their ability to recover and maintain reliability.
• Infrastructure Readiness Testing: Evaluating the readiness of an infrastructure to support the desired reliability goals of a system.

Reliability Metrics In SRE Reliability metrics are used in SRE to measure the quality and stability of systems and to guide continuous improvement efforts.

• Availability: The proportion of time a system is available and functioning correctly, often expressed as a percentage and calculated as total uptime divided by the total time the system is expected to be running.
• Response Time: The time it takes for the infrastructure to respond to a user request.
• Throughput: The number of requests that can be processed in a given time period.
• Resource Utilization: The utilization of the infrastructure's resources, such as CPU, memory, network, heap, caching, and storage.
• Error Rate: The number of errors or failures that occur during the testing process.
• Mean Time to Recovery (MTTR): The average time it takes to recover from a system failure or disruption, which provides insight into how quickly the system can be restored after a failure occurs.
• Mean Time Between Failures (MTBF): The average time between failures for a system. MTBF helps organizations understand how reliable a system is over time and can inform decisions about when to perform maintenance or upgrades.
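As a concrete illustration of the availability, MTTR, and MTBF definitions above, the following sketch derives all three from a small incident log. The timestamps, observation window, and variable names are invented for the example:

```python
# Sketch: deriving availability, MTTR, and MTBF from a hypothetical
# incident log. Times are in hours; the data is invented for the example.

incidents = [(10.0, 10.5), (40.0, 41.5), (90.0, 90.5)]  # (failed_at, recovered_at)
window_hours = 100.0  # total time the system was expected to be running

downtime = sum(recovered - failed for failed, recovered in incidents)
uptime = window_hours - downtime

availability = uptime / window_hours      # fraction of time up and serving
mttr = downtime / len(incidents)          # mean time to recovery
mtbf = uptime / len(incidents)            # mean operating time between failures

print(f"Availability: {availability:.2%}")        # Availability: 97.50%
print(f"MTTR: {mttr:.2f} h, MTBF: {mtbf:.2f} h")  # MTTR: 0.83 h, MTBF: 32.50 h
```

In practice these numbers come from monitoring and incident-management systems rather than a hand-written list, but the arithmetic is the same.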
Reliability Testing In SRE

• Performance Testing: Evaluating the response time, processing time, and resource utilization of the infrastructure to identify any performance issues under a business-as-usual (BAU) scenario (1X load).
• Load Testing: Simulating real-world user traffic and measuring the performance of the infrastructure under heavy load (2X load).
• Stress Testing: Applying more load than the expected maximum to test the infrastructure's ability to handle unexpected traffic spikes (3X load).
• Chaos or Resilience Testing: Simulating different types of failures (e.g., network outages, hardware failures) to evaluate the infrastructure's ability to recover and continue operating.
• Security Testing: Evaluating the infrastructure's security posture and identifying any potential vulnerabilities or risks.
• Capacity Planning: Evaluating the current and future hardware, network, and storage requirements of the infrastructure to ensure it has the capacity to meet growing demand.

Workload Modeling In SRE Workload modeling is a crucial part of SRE, which involves creating mathematical models to represent the expected behavior of systems. Little's Law is a key principle in this area: the average number of items in a system (W) equals the average arrival rate (λ) multiplied by the average time each item spends in the system (T), i.e., W = λ * T. This formula can be used to determine the expected number of requests concurrently in a system under different conditions. Example: Consider a system that receives an average of 200 requests per minute, with an average response time of 2 seconds. Converting to consistent units, λ = 200 requests/minute ≈ 3.33 requests/second and T = 2 seconds, so: W = λ * T ≈ 3.33 requests/second * 2 seconds ≈ 6.7 requests. This result indicates that, on average, about 7 requests are in the system at any moment; if the system cannot process that many requests concurrently, queues build up and reliability degrades.
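The Little's Law calculation is easy to script; the key detail is that λ and T must use consistent time units (here, requests per minute are converted to requests per second). A minimal sketch, with an invented helper name:

```python
# Little's Law: W = lambda * T, where lambda (arrival rate) and
# T (time in system) must be expressed in consistent time units.

def avg_requests_in_system(arrivals_per_minute: float, response_time_seconds: float) -> float:
    """Average number of requests concurrently in the system."""
    arrivals_per_second = arrivals_per_minute / 60.0  # unit conversion
    return arrivals_per_second * response_time_seconds

# 200 requests/minute with a 2-second average response time:
w = avg_requests_in_system(200, 2)
print(f"{w:.1f} concurrent requests on average")  # 6.7 concurrent requests on average
```

Multiplying 200 by 2 directly would mix minutes and seconds and overstate the concurrency by a factor of 60.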
With the right workload models, organizations can determine the maximum workload their systems can handle, take proactive steps to scale their infrastructure, and identify potential issues and design solutions before they become real problems. Tools and techniques used for modeling and simulation:

• Performance Profiling: Monitoring the performance of an existing system under normal and peak loads to identify bottlenecks and determine the system's capacity limits.
• Load Testing: Simulating real-world user traffic to test the performance and stability of an IT system. Load testing helps organizations identify performance issues and ensure that the system can handle expected workloads.
• Traffic Modeling: Creating a mathematical model of the expected traffic patterns on a system, which can be used to predict resource utilization and system behavior under different workload scenarios.
• Resource Utilization Modeling: Creating a mathematical model of the expected resource utilization of a system under different workload scenarios.
• Capacity Planning Tools: Various tools automate the process of capacity planning, including spreadsheet tools, predictive analytics tools, and cloud-based tools.

Chaos Engineering and Infrastructure Readiness in SRE Chaos engineering and infrastructure readiness testing are important components of a successful SRE strategy. Both involve intentionally inducing failures and stress in systems to assess their strength and identify weaknesses. Infrastructure readiness testing verifies the system's ability to handle failure scenarios, while chaos engineering tests the system's recovery and reliability under adverse conditions.
The benefits of chaos engineering include improved system reliability, reduced downtime, and increased confidence in the system's ability to handle real-world failures. By proactively identifying and fixing weaknesses, organizations can avoid costly downtime, improve customer experience, and reduce the risk of data loss or security breaches. Integrating chaos engineering into DevOps practices (CI/CD) ensures that systems are thoroughly tested and validated before deployment. Methods of chaos engineering typically involve running experiments or simulations on a system to stress-test its various components, identify any weaknesses or bottlenecks, and assess its overall reliability. This is done by introducing controlled failures, such as network partitions, simulated resource exhaustion, or random process crashes, and observing the system's behavior and response. Example Scenarios for Chaos Testing

• Random Instance Termination: Selecting and terminating an instance from a cluster to test the system's response to the failure.
• Network Partition: Partitioning the network between instances to simulate a network failure and assess the system's ability to recover.
• Increased Load: Increasing the load on the system to test its response to stress and observing any performance degradation or resource exhaustion.
• Configuration Change: Altering a configuration parameter to observe the system's response, including any unexpected behavior or errors.
• Database Failure: Shutting down the database and observing the system's reaction, including any errors or unexpected behavior.

By conducting both chaos experiments and infrastructure readiness testing, organizations can deepen their understanding of system behavior and improve their resilience and reliability. Conclusion In conclusion, SRE is a critical discipline for organizations that want to deliver highly reliable, highly available systems.
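As an illustration of the Random Instance Termination scenario, a chaos experiment can be modeled as a loop that kills a random instance and then checks whether the service still meets its quorum. The Cluster class below is a toy in-process simulation invented for this sketch, not a real chaos-engineering tool:

```python
import random

class Cluster:
    """Toy model of a replicated service: it serves requests
    while at least `quorum` instances remain alive."""

    def __init__(self, instances: int, quorum: int):
        self.alive = set(range(instances))
        self.quorum = quorum

    def terminate_random_instance(self) -> int:
        victim = random.choice(sorted(self.alive))  # pick any live instance
        self.alive.remove(victim)
        return victim

    def serves_requests(self) -> bool:
        return len(self.alive) >= self.quorum

def run_experiment(instances: int = 5, quorum: int = 3) -> int:
    """Kill random instances one at a time and return how many
    failures the cluster tolerated before losing quorum."""
    cluster = Cluster(instances, quorum)
    tolerated = 0
    while True:
        cluster.terminate_random_instance()
        if not cluster.serves_requests():
            return tolerated  # the termination that broke quorum is not counted
        tolerated += 1

# A 5-instance cluster that needs a quorum of 3 tolerates 2 terminations:
print(run_experiment(5, 3))  # 2
```

A real experiment would replace the in-process model with actual instance terminations (and health checks against the live service), but the structure, inject a failure, verify the steady state, repeat, is the same.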
By adopting SRE principles and practices, organizations can improve system reliability, reduce downtime, and improve the overall user experience.

By Srikarthick Vijayakumar
Application Development: Types, Stages, and Trends

Application development has become an integral part of modern business operations. With the rapid growth of technology and the widespread use of mobile devices, the demand for software applications has increased manifold. From mobile apps to web applications, businesses require custom solutions that can cater to their specific needs and provide a seamless user experience. In this article, we will discuss the various types of application development, the stages involved in the development process, and the latest trends in the industry. What Is Application Development? Application development is the process of designing, building, and deploying software applications for various platforms such as web, mobile, desktop, and cloud. It involves several stages: requirements gathering, design, development, testing, deployment, and maintenance. Application development aims to provide software solutions that meet the needs and requirements of businesses and users. It requires a team of developers, designers, testers, project managers, and other professionals working collaboratively to ensure the application meets the required quality standards. Application development is a complex process requiring technical skills, creativity, and project management expertise. However, a well-designed and well-developed application can provide significant benefits to users and the business, including increased productivity, improved efficiency, and an enhanced customer experience. Let's look at the types of application development. Types of Application Development There are primarily two types of application development: mobile and web. Mobile applications are designed specifically for mobile devices, whereas web applications are accessible through a web browser. Mobile Application Development Mobile app development involves the creation of software applications specifically for mobile devices such as smartphones and tablets.
These apps can be built for various platforms, such as Android, iOS, and Windows. Mobile apps can be native, hybrid, or web-based. The future of mobile applications is bright, as the number of mobile users grows daily. Developers build native apps for a particular platform using the language specific to that platform. For instance, Java or Kotlin is used to develop Android apps, whereas Swift or Objective-C is used to create iOS apps. Native apps provide better performance, speed, and security than other types of apps. Hybrid apps, on the other hand, are a combination of native and web apps. They are built using web technologies such as HTML, CSS, and JavaScript and are wrapped in a native app container. Hybrid apps provide a better user experience than web apps but may not be as fast as native apps. Web apps are accessed through a web browser and do not require installation on the device. They are written in web technologies such as HTML, CSS, and JavaScript. Web apps are accessible from any device with an internet connection and are platform-independent. However, they may not provide the same level of functionality as native or hybrid apps. Web Application Development Web application development involves the creation of software applications that are accessible through a web browser. Developers build applications that can run on various devices, such as desktops, laptops, and mobile devices. They use various technologies, such as HTML, CSS, and JavaScript, and server-side scripting languages like PHP, Ruby on Rails, and Node.js to build web applications. In some cases, developers also use ready-to-use admin templates. An admin template is a collection of web pages created with HTML, CSS, and JavaScript (or any JavaScript library) that forms the user interface of a web application's backend. It can save much of the time and money that web app development otherwise requires.
In addition, you can build progressive web apps and single-page applications (SPAs) this way. There are basically two types of web applications: static and dynamic. Static web applications are basic websites that provide information to the user. They do not require any interaction with the server and are primarily used for informational purposes. On the other hand, dynamic web applications provide a more interactive experience to the user. They require interaction with the server and can provide various functionalities such as user authentication, data storage, and retrieval. Dynamic web applications can be built using various JavaScript frameworks such as AngularJS, ReactJS, and Vue.js. Methodologies of Application Development Successful projects are well-managed. To effectively manage a project, the manager or development team must select the software development methodology best suited to the project at hand. Every methodology has different strengths and weaknesses and exists for different reasons. Here's an overview of the most commonly used software development methodologies and why they exist. Agile Development Methodology The Agile development methodology is an iterative and flexible approach to software development that emphasizes collaboration, customer satisfaction, and rapid delivery. The methodology involves breaking the project down into smaller, manageable pieces called sprints, which typically last between one and four weeks. At the end of each sprint, the team reviews and adjusts its progress and priorities based on feedback and changing requirements. There are several benefits to the Agile methodology. It emphasizes communication and collaboration between developers, customers, and stakeholders, promoting flexibility and adaptability in response to changing business needs. The result is a more customer-focused approach that delivers high-quality software in a shorter timeframe.
DevOps Deployment Methodology DevOps deployment methodology is a software development approach that focuses on collaboration and automation between development and operations teams to improve software delivery speed, quality, and reliability. The methodology involves using Continuous Integration and Continuous Deployment tools to automate the build, test, and deployment process, ensuring code changes are thoroughly validated before they are deployed to production. DevOps deployment methodology enables teams to reduce the time and effort required to release new features or updates, allowing them to respond quickly to changing customer needs and market demands. Waterfall Development Method The Waterfall model is a traditional linear software development methodology that flows sequentially through conception, initiation, planning, design, construction, testing, deployment, and maintenance phases. The software development team defines requirements upfront and completes each phase before moving on to the next. The Waterfall methodology can become inflexible when there are changes to requirements or a need for iterative development. Additionally, the methodology may not catch errors until later in the project cycle, making it more difficult and costly to address them. Rapid Application Development Rapid Application Development (RAD) is a software development methodology that emphasizes speed and flexibility in the development process. The methodology involves breaking down the project into small modules and using iterative development to quickly build, test, and refine the software. RAD typically involves the use of visual tools, prototyping tools, and user feedback to speed up the development process. RAD aims to deliver a working prototype of the software to customers early in the development cycle, allowing for early feedback and adjustments. 
This approach enables developers to quickly respond to changing customer needs and deliver a high-quality product in a short amount of time. Now, let's head toward the stages of application development. Stages of Application Development Application development involves several stages, each of which is essential for the success of the project. The following are the various stages of application development: Planning The planning stage is crucial to the success of the project as it sets the foundation for the rest of the development process. During this stage, the development team works with the client to define the project objectives, target audience, and the features and functionalities that the application should have. The team also determines the project scope, budget, and timelines. The outcome of this stage is a comprehensive project plan that outlines the requirements, scope, timelines, and budget. The plan serves as a roadmap for the development team and ensures that everyone is on the same page before proceeding to the design stage. Design The design stage involves creating wireframes and prototypes of the application. This stage covers the user interface (UI) and user experience (UX) of the application. The design team works with the development team to ensure that the design is technically feasible and aligns with the project requirements. The design team uses different tools, such as UI kits, prototyping tools (like Adobe and Figma), and wireframes, to design an appealing app. The outcome of this stage is a visual representation of the application, including the layout, color scheme, typography, and other design elements. Usually, the client reviews the design for approval before proceeding to the development stage. A well-designed application is critical to its success as it directly impacts user engagement and retention. Development The development stage is where the actual coding takes place.
Here, the development team follows the design guidelines and uses the required technologies and tools to build the application. The development team usually works in sprints, with each sprint delivering a set of features or functionalities. During the development stage, the team adheres to best practices for coding, documentation, and version control. The team also optimizes the code for better performance, security, and scalability. Regular communication and collaboration between the development team, the design team, and the client are essential to ensure that the development aligns with the project requirements. The outcome of this stage is a working application that meets the project requirements. To make sure the application is bug-free, the development team rigorously tests it with various testing methods and fixes any issues or bugs before proceeding to the deployment stage. A well-developed application is critical to its success as it directly impacts user experience and satisfaction. Testing The testing stage involves validating the functionality, performance, and usability of the application. The testing team uses various testing methods, including unit testing, integration testing, system testing, usability testing, and user acceptance testing (UAT), to ensure that the application works as expected. During the testing stage, the team identifies and documents any issues or bugs and communicates them to the development team for fixing. The team also ensures that the application is accessible, responsive, and user-friendly. The software testing stage is critical to the success of the project as it ensures that the application is ready for deployment. The outcome of this stage is a tested and validated application that meets the project requirements. The testing team provides a report of the testing results, including any issues or bugs, to the development team for fixing.
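As a tiny illustration of the unit-testing method mentioned above, the sketch below tests a single function with Python's built-in unittest framework. The apply_discount function and its rules are hypothetical, invented purely for the example:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical business rule under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_regular_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount_keeps_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    # exit=False so the test run does not terminate the interpreter
    unittest.main(argv=["example"], exit=False)
```

Unit tests like these cover individual functions in isolation; the integration, system, and acceptance tests mentioned above then exercise progressively larger slices of the application.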
Once all the issues are resolved, the application is ready for deployment.

Deployment

The deployment stage involves releasing the application to the production environment. The deployment team follows a deployment plan that outlines the required steps and ensures that the release happens without downtime or disruption to the users. This stage is critical to the success of the project, as it makes the application available to the users. The deployment team also ensures that the application is secure and meets the required standards and regulations, and monitors it after release to confirm that it performs optimally. Any issues or challenges that arise during deployment are documented and communicated to the development team for future reference. A successful deployment ensures that the application is accessible to the users and meets their expectations.

Maintenance

The maintenance stage covers the ongoing support and maintenance of the application. The development team monitors the application for any issues or bugs and fixes them promptly, keeps the application updated with the latest technologies and security patches, and adds new features and functionalities as required by the client. The maintenance stage is critical to the success of the project, as it ensures that the application continues to meet the users' requirements and expectations. The outcome of this stage is a well-maintained application that continues to function optimally.
In addition, a successful maintenance stage ensures that the application remains relevant and continues to provide value to the users. Now, let's look at the app development trends you should know.

App Development Trends in 2023

Many changes are expected in the app development world in 2023. The following application development trends are likely to shape it.

Adoption of Cloud Technology

The future of cloud engineering is evolving, and cloud technology is a game changer in application development. It enables businesses to easily scale their IT infrastructure, reduce costs, and increase agility. Adoption is increasing daily because the cloud provides on-demand access to resources and services, allowing businesses to focus on their core competencies. Application developers can use cloud technology to build and deploy applications in a distributed environment, which allows users to access them from any location using any device with an internet connection. As more businesses recognize these advantages and move their IT operations to the cloud, this trend will continue.

Usage of AI and Machine Learning Technologies

AI and machine learning technologies are transforming the way we interact with applications. From personalized recommendations to intelligent chatbots, they are revolutionizing the user experience; ChatGPT is the latest example. These technologies enable applications to learn from user behavior and preferences, providing a more personalized experience. Developers also use them to improve application performance, optimize resource utilization, and reduce maintenance costs. As more data becomes available, AI and machine learning algorithms become more accurate and effective. This trend will keep evolving as these technologies become more accessible to developers and businesses alike.
Metaverse-Like Experiences

Metaverse-like experiences are a new trend in application development. These experiences are immersive and interactive, providing users with a virtual environment to explore and interact with. This trend is likely to persist in the coming years with the increasing popularity of virtual and augmented reality technologies. The metaverse could become a major part of the digital landscape, giving users a new way to engage with applications and with each other. Developers are exploring ways to incorporate metaverse-like experiences into their applications, creating new opportunities for businesses to engage with their customers.

Integration of Mobile Apps With Other Devices and Platforms

Integrating mobile apps with other devices and platforms is another trend in application development. The proliferation of mobile devices has led to increasing demand for applications that can be accessed from multiple devices and platforms. As a result, developers use technologies such as APIs and SDKs to enable seamless integration between mobile apps and other devices and platforms. The core reason behind this trend is the need to offer users a consistent experience regardless of device or platform. This trend can be expected to persist as more devices become connected and create new opportunities.

Improved Native Cybersecurity

Improved native cybersecurity is a critical trend in application development, as privacy and security have become major concerns for businesses and users alike. With the increasing number of cyber threats, it is important for applications to be secure and resilient. Developers are incorporating security features such as encryption, authentication, and authorization into their applications from the ground up, making security an integral part of the development process.
As cyber threats continue to evolve, developers can be expected to keep improving native cybersecurity, ensuring that applications remain secure and resilient.

Low Code/No Code Is the Future

As per one report, the low-code development market will generate $187 billion by 2030. Low-code/no-code platforms are becoming increasingly popular among businesses and app developers. These platforms allow developers to create applications using visual interfaces and drag-and-drop components, without requiring extensive programming knowledge. This trend will continue in the coming years as more businesses and developers embrace the benefits of low-code/no-code platforms, such as faster development times and reduced costs.

Conclusion

Here we briefly discussed application development, including its various types, stages, and trends. The intention is to provide you with a comprehensive overview of the application development process, what it requires, and which trends will be vital in 2023. I hope you find this article helpful. If you have any input, you can share it with me through the comment section.

By Saanvi Sen
What's DevOps, SRE, Shift Left, and Shift Right?

I had the opportunity to catch up with Andi Grabner, DevOps Activist at Dynatrace, during day two of Dynatrace Perform. I've known Andi for seven years, and he's one of the people who has helped me understand DevOps since I began writing for DZone. We covered several topics that I'll share in a series of posts.

How Do DevOps and SRE Work Together?

The term SRE itself comes from Google. But essentially, DevOps is the automation put into deployment to ship changes faster by automating the pipeline, while SRE is about automating the operational aspects of the software. And automating the operational aspects of software means that what an SRE does today, maybe five years ago, was just called ITOps. Now, it's called SRE, or Site Reliability Engineering. I think both DevOps and SRE have evolved to use automation and code to automate something in a smart way and also in a codified way. Code is important because you can source control it; you can keep a history of all of your pipelines. The same is true for SRE, which applies the same code-driven automation to the operational aspects of your software. Therefore, SRE and DevOps work really nicely in tandem. I have a slide where DevOps and SRE are holding hands. They're holding hands because, in the end, it's all about automating delivery through automation. SRE really focuses more on automating the resiliency of the stuff that comes out of DevOps.

How About Shift Left Versus Shift Right? Is That an "And," or Is It "And/Or?"

It's an "and." Shift left is really about thinking about all of these constraints earlier: how we deal with observability, and encouraging the developers to think about what type of data they need to figure out if the system is healthy. Traces, logs, and starting testing earlier are the classical shifting left. Shifting right is about knowing how my system is performing. It's like knowing the heart rate of my system, such as my response time.
In development, shifting right means telling the SRE team that is responsible for running my software: this is how you run it, this is what I want to see from an observability perspective, and these are my thresholds. If these are not met, then I want you to execute these actions from a performance, availability, and reliability perspective. I think we always had the classical Dev and Ops divide. Development would build something and throw it over the wall. Then Operations had to figure out how to run it properly, how to scale it, and how to do capacity control. Now, we're saying we need to look at all of these aspects much earlier. We need to figure out upfront how we do observability in development, not just in operations. That's why we define observability in development, to test it out. We take all of these ingredients and identify what we are going to observe. Let's also observe it in production. We know what the thresholds are. We know what makes our system healthy. Let's make sure we are also validating this in production. We know, if something is failing in testing, what we do to bring the system back to an ideal state. Let's codify this in production too, to bring the system back in an automated way. That's my definition of shifting right.
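The idea of codifying thresholds and remediation actions can be sketched in a few lines. This is only a toy illustration of the concept; the metric names, limits, and rollback action below are all hypothetical and not tied to Dynatrace or any specific SRE tooling:

```python
# Toy sketch of "shifting right": health thresholds and remediation expressed
# as code, so the response to a violation is automated and source-controlled.
THRESHOLDS = {
    "response_time_ms": 500,   # hypothetical p95 response time considered healthy
    "error_rate": 0.01,        # hypothetical max acceptable fraction of failed requests
}

def violated(metrics, thresholds=THRESHOLDS):
    """Return the names of metrics that exceed their thresholds."""
    return [name for name, limit in thresholds.items()
            if metrics.get(name, 0) > limit]

def remediate(metrics):
    """Decide on an automated action instead of paging a human first."""
    broken = violated(metrics)
    if not broken:
        return "healthy"
    # In a real setup this branch would call your deployment tooling to
    # roll back or scale out; here it only reports the decision.
    return "rollback triggered by: " + ", ".join(sorted(broken))
```

The point is the shape, not the numbers: the same thresholds defined and tested during development are evaluated again in production, and the recovery action is codified rather than improvised.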

By Tom Smith CORE
From Data Stack to Data Stuck: The Risks of Not Asking the Right Data Questions

Companies are in continuous motion: new requirements, new data streams, and new technologies are popping up every day. When designing new data platforms to support the needs of your company, failing to perform a complete assessment of the options available can have disastrous effects on a company's capability to innovate and to make sure its data assets are usable and reusable in the long term. Having a standard assessment methodology is an absolute must to avoid personal bias and properly evaluate the various solutions across all the needed axes. The SOFT Methodology provides a comprehensive guide to all the evaluation points needed to define robust and future-proof data solutions. However, the original blog doesn't discuss a couple of important factors: why is applying a methodology like SOFT important? And, even more, what risks can we encounter if we don't? This blog aims to cover both aspects.

The Why

Data platforms are here to stay: the recent history of technology has taught us that data decisions made now have a long-lasting effect. We commonly see frequent rework of the front end, but radical changes in the back-end data platforms are rare. Front-end rework can radically change the perception of a product, while changes on the back end do not immediately impact the end users. Changing the product provider is nowadays quite frictionless, but porting a solution across different back-end tech stacks is, despite the eternal promise, very complex and costly, both financially and time-wise. Some options exist to ease the experience, but code compatibility and performance are never a 100% match. Furthermore, when talking about data solutions, performance consistency is key. Any change in the back-end technology is therefore seen as a high-risk scenario, and most of the time refused with the statement "don't fix what isn't broken." The fear of change blocks both new tech adoption and upgrades of existing solutions.
In summary, the world has plenty of examples of companies using back-end data platforms chosen ages ago, sometimes on old, unsupported versions. Therefore, any data decision made today needs to be robust and age well in order to support the company's future data growth. Having a standard methodology helps you understand the playing field, evaluate all the possible directions, and accurately compare the options.

The Risks of Being (Data) Stuck

OK, you're in the long-term game now. Swapping back-end or data pipeline solutions is not easy, so selecting the right one is crucial. But what problems will we face if we fail in our selection process? What are the risks of being stuck with a sub-optimal choice?

Features

When thinking about being stuck, it's tempting to compare the chosen solution with the new and shiny tooling available at the moment, and its promised future features. New options and functionalities could enhance a company's productivity, system management, and integration, and remove friction at any point of the data journey. Being stuck with a suboptimal solution, without a clear innovation path and without any capability to influence its direction, puts the company in a potentially weak position regarding innovation. Evaluating the community and the vendors behind a certain technology can help decrease the risk of stagnating tools. It's very important to evaluate which features and functionalities are relevant or needed and to define a list of "must-haves" to reduce time spent on due diligence.

Scaling

The SOFT methodology blog post linked above touches on several directions of scaling: human, technological, business case, and financial.
Hitting any of these problems could mean that the identified solution:

  • Could not be supported due to a lack of talent
  • Could hit technical limits and prevent growth
  • Could expose security or regulatory problems
  • Could be perfectly fine to run on a sandbox, but financially impractical on production-size data volumes

Hitting scaling limits, therefore, means that companies adopting a specific technology could be forced to either slow down growth or completely rebuild their solutions on a different technology.

Support and Upgrade Path

Sometimes the chosen technology advances, but companies are afraid of upgrading, or can't find the time or budget to move to the new version. The associated risk is that the older the software version, the more complex (and risky) the upgrade path will be. In exceptional circumstances, an upgrade path may not exist at all, forcing a complete re-implementation of the solution. Support needs a similar discussion: staying on a very old version could mean a premium support fee in the best case, or a complete lack of vendor or community help in the vast majority of scenarios.

Community and Talent

The risk associated with talent shortage was already covered in the scaling chapter: new development and workload scaling heavily depend on the humans behind the tool. Moreover, not evaluating the community and talent pool behind a technology decision could create support problems once the chosen solution becomes mature and the first set of developers and supporters leave the company without proper replacement. The lack of a vibrant community around a data solution can rapidly shrink the talent pool, creating issues for new features, new developments, and existing support.

Performance

It's impossible to know what the future will hold in terms of new technologies and integrations.
But selecting a closed solution with limited (or no) integration capabilities forces companies to run only "at the speed of the chosen technology," exposing them to the risk of not being able to unleash new use cases because of technical limitations. Moreover, not paying attention to the speed of development and recovery could expose limits on the innovation and resilience fronts.

Black Box

When defining new data solutions, an important aspect is the ability to make data assets and related pipelines discoverable and understandable. A black-box approach exposes companies to repeated efforts and inconsistent results, which decrease trust in the solution and open the door to misalignments in results across departments.

Overthinking

The opposite risk is overthinking: the more time spent evaluating solutions, the more technologies, options, and needs will pile up, making the final decision process even longer. An inventory of the needs, timeframes, and acceptable performance is necessary to reduce the scope, make a decision, and start implementing.

Conclusion

When designing a data platform, it is very important to ask the right questions and avoid the "risk of being stuck." The SOFT Methodology aims to provide all the important questions you should ask yourself in order to avoid pitfalls and create a robust solution. Do you feel all the risks are covered? Have a different opinion? Let me know!
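To make the idea of a standard, bias-free comparison concrete, here is a small sketch of a weighted assessment across some of the risk axes discussed above. The axes, weights, and candidate scores are purely illustrative and are not part of the SOFT Methodology itself:

```python
# Illustrative only: a real SOFT-style assessment covers far more evaluation
# points, and the weights below are made up for the example.
WEIGHTS = {"features": 2, "scaling": 3, "support": 2, "community": 2, "performance": 3}

def weighted_score(scores, weights=WEIGHTS):
    """Combine per-axis scores (0-5) into a single comparable number."""
    return sum(weights[axis] * scores.get(axis, 0) for axis in weights)

# Hypothetical candidate solutions scored by the evaluation team.
candidates = {
    "solution_a": {"features": 4, "scaling": 3, "support": 5, "community": 4, "performance": 3},
    "solution_b": {"features": 5, "scaling": 2, "support": 3, "community": 2, "performance": 4},
}

best = max(candidates, key=lambda name: weighted_score(candidates[name]))
```

Writing the weights down, even in a form as crude as this, forces the team to agree on priorities before comparing tools, which is exactly the personal bias the methodology tries to remove.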

By Francesco Tisiot
Behaviors To Avoid When Practicing Pair Programming

When two people get together to write code on a single computer, it is called pair programming. Pair programming was popularized by Kent Beck's book on eXtreme Programming, in which he describes the technique of developing software in pairs, which sparked the interest of researchers in the subject. Lan Cao and Peng Xu found that pair programming leads to a deeper level of thinking and engagement in the task at hand. Pair programming also comes in different styles, such as driver/navigator, ping-pong, strong-style, and pair development. All of them are well described by Birgitta Böckeler and Nina Siessegger, whose article explains how to practice each style. Here, we will focus on only two of them, driver/navigator and ping-pong, as they seem to be the most commonly used. The objective is to look at what should be avoided when developing software in pairs. First, we briefly introduce each pair programming style, and then we go over the behaviors to avoid.

Driver/Navigator

To my taste, driver/navigator is the most popular style among practitioners. In this style, the driver is the one writing the code and thinking about the solution in place, making concrete steps toward the task at hand. The navigator, on the other hand, watches the driver and gives insights on the task at hand. But not only that: the navigator is the one thinking in a broader way, and she's also in charge of giving support. Communication between the driver and navigator is constant. This style also fits well with the Pomodoro technique.

Ping/Pong

Ping-pong is the style that embraces the Test-Driven Development methodology; the reason lies in the way its dynamic works. Let's assume we have a pair that will start working together, Sandra and Clara.
A ping/pong session goes something like the following:

  • Sandra starts by writing a failing test
  • Clara makes the test pass
  • Clara can now decide if she wants to refactor
  • Clara then writes a failing test for Sandra
  • The loop repeats

It is also possible to expand ping/pong into a broader approach: one person might start a session by writing a class diagram, and the other implements the first set of classes. Regardless of the style, what is key to the success of pair programming is collaboration.

Behaviors To Avoid

Despite its popularity, pair programming seems to be a methodology that is not widely adopted by the industry. When it is, what "pair" and "programming" mean might vary in a given context. Sometimes pair programming is used at specific moments throughout the practitioners' day to fulfill specific tasks, as reported by Lauren Peate on the podcast Software Engineering Unlocked, hosted by Michaela Greiler. But in XP, pair programming is the default approach for developing all aspects of the software. Due to the varying interpretations of what pair programming is, companies that adopt it might face some misconceptions about how to practice it. Often, a poor pairing experience is rooted in:

  • Lack of soft (social) skills
  • Lack of knowledge in the practice of pair programming

In the following sections, we will go over some misconceptions about the practice. Avoiding them should lead to a better experience when pairing.

Lack of Communication

Driver/navigator is the style that requires the pair to focus on a single problem at once. Therefore, the navigator should give support and question the driver's decisions to keep both in sync. When that does not happen, the collaboration session might suffer from a lack of interaction between the pair.
The first misconception of the driver/navigator approach is that the navigator just watches the driver and does nothing; it should be the opposite. As much communication as possible is a sign that the pair is progressing. Of course, we haven't mentioned the knowledge variance that the driver and navigator might have.

Multi-Tasking

Checking the phone for notifications or deviating attention to something other than the problem at hand is a warning that the pair is not in sync. The advent of remote pair programming sessions might even facilitate such distraction. The navigator should give as much support as possible, and even more when the driver is blocked for whatever reason. Some activities that the navigator might want to perform:

  • Checking documentation for the piece of code that the driver is writing
  • Verifying that the work goes toward the end goal of the task (this should prevent the pair from going down a path that is out of scope)
  • Controlling the Pomodoro cycle, if agreed

On the other hand, the driver is also expected to write the code and not just be the navigator's puppet. When that happens, collaboration in the session might suffer, leading to a heavy load on the navigator.
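The ping/pong loop described earlier can be made concrete with a small sketch in Python's unittest; the fizzbuzz example and the hand-off comments are hypothetical, included only to show the rhythm of the style:

```python
import unittest

# Round 1 -- Sandra writes a failing test for a function that doesn't exist yet.
class FizzBuzzTest(unittest.TestCase):
    def test_multiples_of_three(self):
        self.assertEqual(fizzbuzz(3), "fizz")

    # Round 2 -- after Clara made the first test pass, she writes the next
    # failing test and hands the keyboard back to Sandra.
    def test_multiples_of_five(self):
        self.assertEqual(fizzbuzz(5), "buzz")

# Clara's turn -- the smallest implementation that makes the current tests pass.
# (She may refactor here before writing her own failing test.)
def fizzbuzz(n):
    if n % 15 == 0:
        return "fizzbuzz"
    if n % 3 == 0:
        return "fizz"
    if n % 5 == 0:
        return "buzz"
    return str(n)
```

The code itself is trivial on purpose: the point of ping/pong is the rhythm of handing over a failing test, not the problem being solved.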

By Matheus Marabesi

Top Methodologies Experts

Stefan Wolpers

Agile Coach,
Berlin Product People GmbH

Professional Scrum Trainer with Scrum.org, agile coach, and Scrum Master based in Berlin. Stefan also curates the weekly ”Food for Agile Thought” newsletter on best posts on agile practices, product management, and innovation—with 35,000-plus subscribers. (See @AgeOfProduct.) Also, he hosts the Hands-on Agile Slack community with more than 12,000 peers.
Søren Pedersen

Co-founder,
BuildingBetterSoftware

Søren Pedersen is a co-founder of BuildingBetterSoftware, a strategic leadership consultant, and international speaker. Pedersen is a digital transformation specialist with more than fifteen years of software development experience at LEGO, Bang & Olufsen, and Systematic. Using Agile methodologies, he specializes in value stream conversion, leadership coaching, and transformation project analysis and execution. Søren has spoken at DevOps Days London, is a contributor for The DevOps Institute, and is a Certified Scrum Master and Product Owner.
Hiren Dhaduk

CTO,
Simform

Hiren is VP of Technology at Simform, with extensive experience in helping enterprises and startups streamline their business performance through data-driven innovation.
Daniel Stori

Software Development Manager,
AWS

Software developer since I was 13 years old, when my father gave me an Apple II in 1989. In my free time, I like to go cycling and draw; to be honest, I like to draw during my working hours too :) Twitter: @turnoff_us

The Latest Methodologies Topics

Scaling Site Reliability Engineering (SRE) Teams the Right Way
Most SRE teams hit a moment when they seem to be juggling more demands than they can manage. This is when these teams may need to scale. Facing similar challenges? Let's unpack what scaling a team is all about.
May 27, 2023
by Biju Chacko
· 1,587 Views · 1 Like
Effortlessly Streamlining Test-Driven Development and CI Testing for Kafka Developers
Here’s how you can simplify test-driven development and continuous integration with Redpanda and Testcontainers and Quarkus.
May 25, 2023
by Christina Lin CORE
· 3,269 Views · 5 Likes
DevOps Midwest: A Community Event Full of DevSecOps Best Practices
DevOps Midwest 2023 brought together experts in scale, availability, and security best practices. Read some of the highlights from this DevSecOps-focused event.
May 24, 2023
by Dwayne McDaniel
· 1,996 Views · 1 Like
SRE vs. DevOps
In this article, you will gain an understanding of the distinctions between Site Reliability Engineering (SRE) and DevOps.
May 24, 2023
by Pradeep Gopalgowda
· 2,116 Views · 1 Like
DevOps and SRE, Chapter 3: Models for Cultural Change
The third post in this series discusses some of the methods for cultural transformation, as well as the way tools are used to affect change in enterprises.
December 27, 2019
by Ricardo Coelho de Sousa CORE
· 11,644 Views · 3 Likes
Mocking With the Mockito Framework and Testing REST APIs [Video]
This video tutorial covers the Mockito framework and testing REST APIs/services.
October 30, 2018
by Tarun Telang CORE
· 36,150 Views · 6 Likes
Mistakes Product Managers Make When Writing User Stories
Avoid these mistakes!
November 18, 2019
by Vikash Koushik
· 62,727 Views · 5 Likes
Measuring Productivity Of Your Software Development Team With Agile Metrics
Learn how!
Updated October 25, 2019
by Graham Church
· 19,215 Views · 2 Likes
Managing Technical Debt
When it comes to dealing with your technical debt, treat it like any other debt, and work to rid yourself of it incrementally, in a simple fashion.
May 9, 2017
by $$anonymous$$ CORE
· 4,784 Views · 3 Likes
Low-Code Development: The Future of Software Development
Stay ahead of the competition and deliver apps faster with low code.
March 17, 2023
by Silas James
· 3,037 Views · 1 Like
Liskov Substitution Principle: How to Create Beautiful Abbreviations
Want to learn more about the Liskov Substitution Principle? Check out this tutorial to learn more about creating abbreviations using SOLID principles and the LSP.
August 8, 2018
by Vadim Samokhin
· 10,850 Views · 6 Likes
Linking User Stories to a Data Model: A Tutorial for Agile Development With Jira and ERBuilder
In the following tutorial, we'll walk you through the process of creating user stories on Jira, exporting them as a CSV file, and importing them into ERBuilder.
March 14, 2023
by John Slaven
· 1,866 Views · 2 Likes
Kanban: The Scrum Master's Not-So-Secret Weapon
Growing your Flow and Kanban competencies are recommended steps in Scrum.org's Scrum Master learning journey.
June 27, 2019
by Yuval Yeret
· 4,628 Views · 1 Like
4 Kanban Core Practices to Improve Your Scrum Workflow
If you're frustrated with the state of your Scrum workflow, these Kanban core practices can help.
June 6, 2019
by Khoa Doan Tien
· 11,728 Views · 4 Likes
Should I Still Be a Programmer After 40?
This profession is hard on people who are in their thirties and beyond because we have more things to consider before we make each move. Plus, ain't nobody got time for robot HR people.
Updated August 29, 2019
by Deepak Karanth
· 129,097 Views · 208 Likes
Is My Product Backlog Just a Laundry List?
If you're treating your product backlog as a catch-all, you're going to have a tough time managing it.
June 28, 2019
by Sivasankari Rajkumar
· 8,362 Views · 6 Likes
Is Agile Accountability an Oxymoron?
How does your organization incorporate accountability? Is it part of your organization's value statement or part of your enterprise culture?
January 29, 2020
by Dwight Kingdon CORE
· 10,284 Views · 5 Likes
Retrospective Facilitation: A Simple Hack to Go from Good to Great
Check out the advantages of rotating the facilitator role among your teammates.
May 22, 2023
by Stefan Wolpers CORE
· 883 Views · 1 Like
What Is Test Pyramid: Getting Started With Test Automation Pyramid
A comprehensive Test Pyramid tutorial that covers what Test Pyramid is, its importance, benefits, best practices, and more.
May 22, 2023
by Praveen Mishra
· 1,425 Views · 2 Likes
Introduction to App Automation for Better Productivity and Scaling
Learn to analyze code stability in the development lifecycle, how to incorporate test automation into DevOps, and the limitations of manual regression testing.
September 1, 2017
by Soumyajit Basu CORE
· 5,839 Views · 5 Likes
