In our Culture and Methodologies category, dive into Agile, career development, team management, and methodologies such as Waterfall, Lean, and Kanban. Whether you're looking for tips on how to integrate Scrum theory into your team's Agile practices or you need help prepping for your next interview, our resources can help set you up for success.
The Agile methodology is a project management approach that breaks larger projects into several phases. It is a process of planning, executing, and evaluating with stakeholders. Our resources provide information on processes and tools, documentation, customer collaboration, and adjustments to make when planning meetings.
There are several paths to starting a career in software development, including the more non-traditional routes that are now more accessible than ever. Whether you're interested in front-end, back-end, or full-stack development, we offer more than 10,000 resources that can help you grow your current career or *develop* a new one.
Agile, Waterfall, and Lean are just a few of the project-centric methodologies for software development that you'll find in this Zone. Whether your team is focused on goals like achieving greater speed, having well-defined project scopes, or using fewer resources, the approach you adopt will offer clear guidelines to help structure your team's work. In this Zone, you'll find resources on user stories, implementation examples, and more to help you decide which methodology is the best fit and apply it in your development practices.
Development team management involves a combination of technical leadership, project management, and the ability to grow and nurture a team. These skills have never been more important, especially with the rise of remote work both across industries and around the world. The ability to delegate decision-making is key to team engagement. Review our inventory of tutorials, interviews, and first-hand accounts of improving the team dynamic.
Kubernetes in the Enterprise
In 2014, Kubernetes' first commit was pushed to GitHub. Ten years later, it is one of the most prolific open-source systems in the software development space. So what made Kubernetes so deeply entrenched within organizations' systems architectures? Its promise of scale, speed, and delivery, that is — and Kubernetes isn't going anywhere any time soon.

DZone's fifth annual Kubernetes in the Enterprise Trend Report dives further into the nuances and evolving requirements of the now 10-year-old platform. Our original research explored topics like architectural evolutions in Kubernetes, emerging cloud security threats, advancements in Kubernetes monitoring and observability, the impact and influence of AI, and more; the results are featured in the research findings.

As we celebrate a decade of Kubernetes, we also look toward ushering in its future, discovering how developers and other Kubernetes practitioners are guiding the industry toward a new era. In the report, you'll find insights like these from several of our community experts; these practitioners guide essential discussions around mitigating the Kubernetes threat landscape, observability lessons learned from running Kubernetes, considerations for effective AI/ML Kubernetes deployments, and much more.
What drives leaders to adopt the Product Operating Model (POM) after an Agile transformation, particularly one using the Scaled Agile Framework (SAFe), has failed? This article uncovers the psychological, organizational, and strategic reasons behind this seeming contradiction, exploring what motivates leaders to believe that a new approach will succeed where others have not.

Exploring the Contradiction

Next to resilience, one of organizations' foremost objectives is pursuing performance enhancement, fostering innovation, and maintaining a competitive edge. When prior initiatives — such as Agile transformations utilizing frameworks like SAFe — fail to yield the anticipated results, it might seem contradictory for leaders to pivot to another new approach, like the product operating model. Nevertheless, pursuing the product operating model is the talk of the town.

This apparent contradiction invites a deeper exploration of psychological, organizational, and strategic factors. Leaders might be influenced by an optimism bias, believing that "this time will be different," or they may be driven by a bias for action, feeling compelled to implement change regardless of past outcomes. The allure of industry trends and peer pressure can also motivate leaders to adopt new models to avoid being left behind. Additionally, there might be a misattribution of past failures, where leaders perceive that the issues with Agile transformations were due to poor implementation rather than flaws in the methodologies themselves. They may believe that the product operating model addresses specific shortcomings of Agile frameworks, such as better alignment with organizational culture or a more comprehensive approach to transformation.

Let us delve into five categories of reasons:

I. Psychological Drivers Behind Leaders' Shift to the Product Operating Model After Agile Failures

Several key factors explain why leaders embrace the product operating model despite previous Agile setbacks:

1. Perception of Fundamental Differences

Reason: Leaders may perceive the product operating model as fundamentally different from previous Agile initiatives, believing it addresses shortcomings that Agile did not.

Background: Leaders might view the product operating model as not just another methodology but a holistic approach to redefining the organization's operations. They may believe that while Agile focuses on process and delivery efficiency, the product operating model emphasizes value creation through customer-centricity, cross-organizational strategic alignment, and autonomous, empowered teams. This perception leads them to think that the product operating model inherently solves problems that Agile frameworks like Scrum or SAFe could not, such as bridging the gap between strategy and execution through alignment across business entities or fostering true innovation.

2. The Allure of a New Solution

Reason: The product operating model may be perceived as a panacea, offering an attractive solution to complex problems.

Background: There is often a temptation to believe that a new methodology or framework will solve entrenched problems. The product operating model, emphasizing customer-centricity and empowered teams, can appear as an ideal solution. This allure can overshadow practical considerations about implementation challenges or compatibility with the organization's context. Leaders may focus on the potential benefits without thoroughly assessing the risks or effort required.
3. Fresh Start Mentality

Reason: The desire for a fresh start can drive leaders to adopt a new model, hoping it will revitalize the organization and overcome previous obstacles.

Background: Leaders might seek a new beginning to reset the organization's trajectory after the disappointment of failed Agile initiatives. The product operating model represents an opportunity to wipe the slate clean and approach challenges differently. This fresh start mentality can be fueled by optimism bias, where leaders believe that past failures were anomalies and that the new approach will usher in success. It also provides a psychological boost to teams weary of previous efforts, reigniting enthusiasm and commitment.

4. Optimism Bias and Overconfidence

Reason: Leaders may exhibit optimism bias, overestimating their ability to successfully implement the product operating model despite past failures.

Background: Optimism bias can lead individuals to believe they are less likely to experience negative outcomes than others. Leaders might think that they can overcome obstacles that hindered previous initiatives with their skills, determination, or new strategies. Overconfidence in their capabilities or the organization's resilience can result in underestimating the challenges of implementing the product operating model effectively.

5. Change in Leadership or Organizational Structure

Reason: New leaders may bring different perspectives and experiences, leading them to favor the product operating model over previous methodologies.

Background: Leadership changes often lead to shifts in strategic direction. New executives might have had prior success with the POM or be influenced by their professional networks. They may view the previous Agile initiatives as misaligned with their vision or unsuitable for the organization's current challenges. This fresh leadership perspective can drive the adoption of new approaches, with new leaders believing that their leadership will make the difference in successful implementation.

6. Appeal of a Comprehensive Organizational Change

Reason: The product operating model promises a more comprehensive transformation that aligns strategy, culture, and operations, which can be appealing to leaders seeking significant change. (Even though very few positive examples of a successful "big leap" forward exist.)

Background: Unlike Agile frameworks focusing primarily on processes within teams or departments, the product operating model often entails a broader organizational shift. It encompasses changes in structure, roles, governance, and culture. Leaders who recognize the need for a more systemic change may find this approach more attractive. They might believe that only a comprehensive transformation can address entrenched issues and drive the organization toward its strategic goals.

II. Addressing Past Failures: Leaders' Renewed Hope in the POM

Leaders turn to the product operating model seeking solutions to previous Agile shortcomings and unaddressed root causes:

7. Learning From Past Failures

Reason: Leaders may believe that lessons learned from unsuccessful Agile initiatives can be applied to ensure the success of the product operating model.

Background: Reflecting on why Agile initiatives failed, leaders might identify specific factors such as inadequate training, lack of leadership support, or poor change management. Armed with this knowledge, they may feel better prepared to implement the product operating model effectively.
This time, they might plan for more comprehensive training, secure executive buy-in, or allocate more resources to change management. The belief is that by avoiding past mistakes, the organization can realize the full benefits of the new model.

8. Misattribution of Agile Failures

Reason: Closely related to #7, leaders might attribute the failure of Agile initiatives to poor implementation rather than flaws in the methodologies themselves, believing that a new model will avoid these pitfalls.

Background: There may be a tendency to externalize the reasons for past failures, blaming them on factors such as insufficient training, employee resistance, or inadequate tooling. Leaders might believe that Agile methodologies were sound but were not executed properly. Consequently, they might think that the product operating model will succeed if implemented correctly. This perspective allows them to maintain confidence in their ability to drive change without addressing deeper systemic or cultural issues that may have undermined previous efforts.

9. Misunderstanding Root Causes of Past Failures

Reason: Without a deep analysis of why Agile initiatives failed, leaders might mistakenly assume that a new model will succeed without addressing underlying issues.

Background: If organizations do not conduct thorough retrospectives to understand the root causes of past failures, they may overlook systemic problems such as cultural resistance, inadequate resources, or misaligned incentives. Leaders might attribute failure to the methodology itself rather than these deeper issues. Consequently, adopting the product operating model without addressing these root causes sets the stage for repeating the same mistakes.

10. Desire to Address Specific Shortcomings of Agile Initiatives

Reason: Leaders may identify specific limitations in Agile frameworks like Scrum or SAFe that the POM can overcome.

Background: Agile methodologies sometimes face criticism for issues such as scaling challenges, lack of strategic alignment, or overemphasizing processes at the expense of innovation and autonomy. Leaders may believe that the product operating model addresses these shortcomings by providing a more flexible, scalable framework that integrates strategic considerations with day-to-day operations. This targeted approach aims to retain the benefits of Agile while mitigating its perceived weaknesses.

III. Strategic Pressures and Market Demand

Leaders embrace the product operating model to meet transformative challenges and align with external expectations for innovation:

11. Need for Organizational Transformation

Reason: Leaders may recognize that incremental changes are insufficient and that a transformative approach, like the product operating model, is necessary.

Background: When organizations face significant market disruption, declining performance, or strategic shifts, leaders may conclude that radical change is required. The product operating model offers a comprehensive transformation that promises to revitalize the organization. This recognition can drive leaders to embrace the model, believing that only a bold approach can meet the challenges ahead.

12. External Market Signals and Investor Expectations

Reason: Market analysts, investors, or industry reports may advocate for the product operating model, influencing leaders to adopt it.

Background: External stakeholders can exert significant influence on organizational strategy.
If investors or analysts view the product operating model favorably, leaders may adopt it to meet their expectations. This alignment can be essential for maintaining investor confidence, securing funding, or enhancing the organization's market valuation.

13. Pressure to Demonstrate Innovation and Progress

Reason: Leaders may feel internal or external pressure to adopt new practices to showcase the organization's commitment to innovation and continuous improvement.

Background: Stakeholders, including boards of directors, investors, customers, and employees, often expect organizations to evolve and improve continuously. Leaders may adopt the product operating model to signal their dedication to staying at the forefront of industry practices. This move can be part of a broader strategy to enhance the organization's reputation, attract talent, or satisfy shareholder expectations. The adoption becomes not just an operational decision but also a strategic and symbolic one.

14. Belief in Enhanced Customer Focus

Reason: Leaders might be attracted to the product operating model's emphasis on customer value and believe it will improve market responsiveness.

Background: In highly competitive markets, customer satisfaction and responsiveness are critical. Leaders may feel that previous methodologies did not sufficiently prioritize the customer, leading to products that missed the mark. The product operating model's focus on continuous discovery and direct customer engagement can be appealing. Leaders might believe this approach will lead to better products, higher customer satisfaction, and improved business outcomes.

IV. External Advocacy and Influences

Leaders may adopt the product operating model due to influences from industry trends, internal champions, and trusted consultants:

15. Influence of Industry Trends and Peer Organizations

Reason: Observing competitors or industry leaders successfully implementing the product operating model can motivate leaders to follow suit, fearing they might be left behind.

Background: The bandwagon effect plays a significant role in organizational decision-making. If leaders see that other companies, especially industry frontrunners, adopt the product operating model and achieve positive results, they may feel compelled to do the same. This external validation reinforces the belief that the model is effective. Additionally, industry conferences, publications, and thought leaders often highlight success stories, creating a narrative that the product operating model is the next essential evolution in organizational practices.

16. Pressure From Internal Champions

Reason: Passionate advocates within the organization may promote the product operating model, influencing leaders to adopt it.

Background: Employees or middle managers enthusiastic about the product operating model may lobby for its adoption. Their passion and conviction can persuade leaders to consider the new approach. These internal champions might present compelling arguments, pilot results, or research highlighting the model's advantages. Leaders may be swayed by this internal momentum, especially if it aligns with their own inclinations.

17. Consultant and Vendor Influence

Reason: External consultants or vendors may promote the POM as the solution to the organization's challenges, influencing leaders to adopt it.

Background: Consultants and solution providers often play a pivotal role in shaping organizational strategies.
They bring expertise, frameworks, and success stories that can persuade leaders to embrace new approaches. In some cases, consultants may downplay the reasons for previous failures and position the product operating model as the superior alternative. Leaders may trust these external advisors, especially if they have a track record of delivering results in similar contexts.

V. Fostering Cultural Alignment and Simplifying Complexity

Leaders may use the product operating model to align with their culture and streamline organizational complexities:

18. Anticipation of Cultural Benefits

Reason: Leaders may expect the product operating model to foster a more innovative, collaborative, and agile culture.

Background: Cultural transformation is often a strategic goal for organizations seeking to enhance innovation and agility. The product operating model emphasizes empowered teams, cross-functional collaboration, and a learning mindset. Leaders might believe adopting this model will catalyze the desired cultural shift, improving employee engagement, retention, and performance.

19. Belief in Better Alignment With Organizational Culture

Reason: Leaders might believe that the product operating model aligns more closely with the organization's culture and values than previous Agile methodologies.

Background: Every organization has a unique culture shaped by its history, values, and people. Leaders may perceive that the product operating model resonates better with their organization's way of working. For example, the product operating model might seem like a natural fit if the culture emphasizes customer focus, innovation, and cross-functional collaboration. This perceived alignment can bolster confidence in its potential success.

20. Desire to Simplify Complexities

Reason: Leaders may perceive the product operating model as a way to simplify complexities in organizational processes and decision-making.

Background: Large organizations often struggle with bureaucracy, slow decision-making, and fragmented processes. The product operating model promises to streamline operations by aligning teams around products, clarifying roles, and reducing silos. Leaders might believe this simplification will increase efficiency, speed, and clarity, making the organization more agile and responsive.

Food for Thought

Our exploration has delved into why leaders might adopt the product operating model after previous Agile transformations have failed to deliver the desired results. We've examined psychological motivations, cognitive biases, organizational dynamics, and external influences that drive these decisions. Yet, you may ask many more questions to understand a leader's decision to pursue the product operating model where previous "Agile initiatives" have failed. Here are some questions to get you started:

Are We Addressing the Root Causes of Past Failures?

Have we thoroughly analyzed why previous initiatives, like Agile transformations, didn't deliver the expected results? Without understanding and addressing these underlying issues — cultural resistance, misaligned incentives, or inadequate resources — we risk repeating the same mistakes under a new banner.

Does the Product Operating Model Align With Our Organizational Culture and Values?

How compatible is the product operating model with our existing culture? Will it build upon our strengths, or will it require a cultural overhaul? Understanding this alignment is crucial to ensure smoother adoption and genuine commitment from all levels of the organization.
Are We Prepared for the Deep Organizational Changes Required?

Implementing the product operating model isn't just a procedural adjustment; it demands significant shifts in structure, mindset, and behavior. Are we ready to undertake this comprehensive transformation, and do we have a clear plan to manage the change effectively?

Do We Have Strong Leadership Commitment and Capability to Drive This Transformation?

Successful transformation requires unwavering support from leadership. Are our leaders equipped with the necessary skills, self-awareness, and adaptability to guide the organization through this change? How will they model the behaviors and values essential for the new operating model to take root?

How Will We Engage and Communicate With All Stakeholders to Ensure Buy-In?

Effective communication is critical to overcoming resistance and fostering collaboration. What is our plan for engaging employees, customers, investors, and other stakeholders? How will we craft messages that resonate and establish feedback mechanisms to address concerns and adapt as needed?

Are We Influenced by Cognitive Biases or External Pressures in Our Decision to Adopt This Model?

It's important to reflect on whether factors like optimism bias, overconfidence, or the allure of industry trends are driving our decision. Are we choosing the product operating model because it's genuinely the best fit for our organization or because of external influences and a desire to 'keep up' with others?

Conclusion

Leaders' belief in the potential success of the product operating model, despite previous failures with Agile initiatives, is multifaceted. It encompasses psychological factors like optimism bias, strategic considerations such as the need for comprehensive transformation, and external influences from industry trends or stakeholder expectations. While the product operating model offers promising benefits, it's crucial for leaders to critically assess their motivations, understand the root causes of past failures, and ensure that they address underlying organizational challenges.

Success with the product operating model requires more than adopting new practices; it demands a genuine commitment to cultural change, investment in capabilities, and alignment across the organization. By recognizing and addressing the factors discussed, leaders can increase the likelihood of a successful transformation that delivers lasting value. Therefore, start by analyzing why the previous Agile transformation did not work out as expected. My free Meta-Retrospective Facilitation Template is an excellent tool for that purpose.
Daily standups are a staple in agile software development, designed to keep teams aligned, unblock issues, and ensure smooth progress. Traditionally, these standups have been synchronous meetings, where the team gathers at a set time each day to share updates. However, with the rise of remote work and distributed teams, asynchronous standups have emerged as a viable alternative. In this blog, we'll explore the pros and cons of synchronous and asynchronous standups, helping you weigh the options and decide which model best suits your team's unique needs.

Synchronous Standups: Pros and Cons

A synchronous standup is a real-time meeting where all team members join at a set time, either in person or via video conferencing, to discuss what they worked on, what they're working on, and any blockers.

Pros

- Instant communication: Real-time conversations foster quick discussions, clarifications, and resolutions. If someone has a blocker, team members can offer immediate help.
- Promotes team bonding: Synchronous standups bring the team together, creating a sense of camaraderie. This is especially beneficial for teams that don't often work face-to-face.
- Encourages accountability: Regular live check-ins ensure that everyone is aligned and accountable for their commitments.
- Fosters spontaneous discussions: Issues not originally on the agenda can surface and be discussed on the spot, potentially solving problems quicker than in an asynchronous setup.

Cons

- Scheduling difficulties: Time zone differences, personal schedules, and remote work can make it hard to find a time that works for everyone, particularly in global teams.
- Potential for wasted time: Standups can sometimes veer off-track, leading to longer discussions that take up more time than necessary.
- Interruptions in deep work: For engineers deeply immersed in coding, pausing for a daily meeting can disrupt focus and productivity.

Asynchronous Standups: Pros and Cons

In an asynchronous standup, team members provide their updates in a written or recorded format (such as via Slack, a project management tool, or email), allowing each person to respond at their convenience.

Pros

- Flexibility: Team members can provide their updates whenever it works best for them, accommodating different time zones, personal schedules, and varied working hours.
- Less disruptive: Asynchronous updates allow developers to stay focused on their work without the need to stop mid-task for a meeting.
- Permanent record: Written updates create a log of daily progress that can be referred back to, aiding transparency and making it easier to track progress over time.
- Inclusive for all time zones: For distributed teams, asynchronous standups are more equitable, ensuring everyone can participate without scheduling conflicts.

Cons

- Lack of immediacy: Without real-time interaction, urgent issues might not be addressed as quickly, and discussions around blockers might take longer to resolve.
- Reduced team cohesion: Team members might feel more isolated when not regularly interacting face-to-face, which can hinder team bonding. This is especially significant for new team members, who may find it much harder to bond with the team.
- Potential for miscommunication: Written updates can sometimes lead to misunderstandings or incomplete information, as there's no immediate opportunity to clarify or ask follow-up questions.
- Can lead to disengagement: Without the structure of a live meeting, some team members might delay or neglect to provide their updates, reducing the overall effectiveness of the standup.
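If your team leans asynchronous, the daily prompt itself is easy to automate. Below is a minimal sketch in Python, assuming a Slack workspace, a bot token with the chat:write scope, and the slack_sdk package; the channel name and the three questions are placeholders to adapt to your own ritual.

```python
"""Post a daily asynchronous standup prompt to a Slack channel.

A minimal sketch, not a finished bot: the channel name and the three
questions below are placeholder assumptions; adapt them to your team.
Requires: pip install slack-sdk, and SLACK_BOT_TOKEN in the environment.
"""
import os
from datetime import date

from slack_sdk import WebClient

PROMPT = (
    f"*Async standup for {date.today():%A, %B %d}*\n"
    "Reply in this thread whenever it suits your schedule:\n"
    "1. What did you finish since your last update?\n"
    "2. What are you working on next?\n"
    "3. Any blockers?"
)


def post_standup_prompt(channel: str = "#team-standup") -> None:
    # chat_postMessage needs a bot token with the chat:write scope.
    client = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
    client.chat_postMessage(channel=channel, text=PROMPT)


if __name__ == "__main__":
    post_standup_prompt()
```

Scheduled via cron or a daily CI job, a prompt like this gives the asynchronous standup a consistent anchor point while keeping the replies, and the permanent record they create, in a single thread.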
Which Model Is Right for Your Team?

Ultimately, whether you choose synchronous or asynchronous standups depends on the specific needs and dynamics of your team. Both approaches have strengths and limitations, and neither is inherently better than the other.

- Synchronous standups may work best for co-located teams or teams in similar time zones who benefit from spontaneous problem-solving and team bonding.
- Asynchronous standups could be a better fit for distributed teams that need flexibility and autonomy in their daily routines, especially when time zones make scheduling live meetings challenging.

Conclusion

There is no one-size-fits-all approach when it comes to running efficient daily standups. The right choice — whether synchronous, asynchronous, or even a hybrid of both — depends on your team's needs, culture, and workflow. By carefully considering the pros and cons of each model, you can find the balance that best supports your team's productivity, communication, and overall success.
Leading a cross-functional software development team requires the perfect balance of technical expertise and people management. Most development managers know this from experience; the principle is easy to understand. When it comes to execution, however, things get tricky and difficult to implement. If it were that easy, 66% of enterprise-scale software development projects would not have faced budget overruns.

Software development leadership (such as the Scrum Master, Development Manager, and Product Owner) is crucial in successfully achieving project objectives and managing the cross-functional software development team. This article will explain how development leaders can demonstrate leadership in managing a cross-functional team.

5 Key Ways to Demonstrate Strong Leadership in Cross-Functional Teams

Leadership in cross-functional teams requires strong communication, emotional intelligence, and technical understanding. Here are five ways leaders can effectively guide their teams toward successful project outcomes:

1. Create a Shared Vision

The very concept of cross-functional collaboration is to ensure that every department, whether sales, marketing, finance, development, or stakeholders, has a say in product development. The primary goal of the Product Owner is to create a shared vision of the project so that team members from each department are aligned to achieve project goals. Thus, the Product Owner can demonstrate leadership by understanding each department's priorities and motivations and using accessible language that bridges the gap and brings everyone onto the same page.

2. Facilitate Communication Within the Team

A cross-functional team comprises members from different departments. An individual team member may be well acquainted with departmental team dynamics but not with cross-functional team dynamics. Thus, the role of the Development Manager is to facilitate collaboration between the heads of the other departments. The Development Manager can demonstrate leadership by establishing clear communication norms so that each individual is aware of the protocols and responsibilities.

3. Cultivate Emotional Intelligence for Team Dynamics

Development or engineering is a large part of overall product development, so team cohesion is very important at the development level in a cross-functional team. Moreover, most development teams today are Agile: they are self-organizing and make decisions on their own, which makes strong cohesion more important than ever. Here, the Scrum Master can demonstrate leadership by cultivating emotional intelligence in the team. The Scrum Master can facilitate daily standup meetings, sprint planning, and the establishment of other communication norms within the team. This ensures the Agile development team is highly collaborative, able to make decisions independently, and effective in conflict resolution.

4. Support the Team Through Technical Challenges

Software development is a challenging affair. At some point, the development team needs mentorship and support with technical issues related to product development. The Scrum Master can demonstrate leadership by participating in technical discussions and supporting the team when it faces technical obstacles. They can also support the team by providing the resources it needs to excel and by offering training grounded in strong technical skills. Even the sprint retrospective template focuses on this.
It asks what went well, where you had problems, and where you can improve. You can be more open, ask questions of team members, and embrace creativity to solve problems.

5. Clarity Through Effective Communication

In Agile software development, the Product Owner sets the project direction, the Scrum Master ensures that direction is communicated, and the development team follows it. With good communication, all three roles involved in software development can demonstrate leadership at the individual level. For example, customer experience shapes all aspects of the current software development landscape. The Product Owner's role is to interpret and convey customer requirements in an actionable way for the development team. Similarly, the Scrum Master can create clarity by effectively translating Product Owner messages to the development team and communicating feedback between engineers, stakeholders, and the Product Owner. Development team members can also take the initiative and start conversations to ensure a free flow of information and effective communication within the team.

These are the five main ways leaders can demonstrate leadership in cross-functional software development teams. Beyond them, development leaders can be more involved in leading and guiding the team based on the team's needs and experience.

Conclusion

In cross-functional software development teams, professionals from various functional areas and departments work together. To achieve success, it is important to create a shared vision and understanding so that all members can work together as a team. Demonstrating leadership in cross-functional teams helps you build a team that is focused in one direction, operating efficiently, and aligned with the project objectives.
DevSecOps is an increasingly popular framework for integrating security into every phase of the software development lifecycle. A key phase that organizations should target as soon as possible is their Continuous Integration and Continuous Deployment (CI/CD) pipeline. Doing so can accomplish more than increasing security and decreasing risk. It can also increase productivity and morale.

In 2020, the devastating attack on managed software provider SolarWinds brought software supply chain vulnerabilities into public consciousness. Attackers inserted malicious code into the company platform, which was then distributed via standard updates to around 18,000 customers, including several U.S. government agencies and Fortune 500 companies, leading to data theft and possible espionage. A similar attack against Kaseya VSA in 2021 impacted around 1,500 businesses, causing disruption and financial loss, including forcing a Swedish supermarket chain to close temporarily. These attacks demonstrate how a single vulnerability can impact thousands of companies and potentially millions of people.

However, despite the increased awareness of the threat, which even led to the 2021 Executive Order on Improving the Nation's Cybersecurity in the U.S., software supply chain vulnerabilities remain a major problem. Increased reliance on open-source dependencies and third-party integrations continues to expand attack surfaces, making it easier for attackers to infiltrate systems, while the competitive demand to deliver software within ever shorter timeframes often distracts teams from their security mission. As a result, every company that isn't actively seeking to improve its software supply chain defenses is vulnerable.

DevSecOps and the CI/CD Pipeline

A critical component of the software delivery supply chain is the CI/CD pipeline, a set of practices development teams use to deliver code changes more frequently and reliably. CI/CD pipelines have become the foundation of fast and efficient software delivery processes that allow for the frequent addition of new features and bug fixes. However, an insecure CI/CD pipeline opens multiple entry points for attacks, including malicious code, compromised source controls, vulnerable dependencies, a compromised build system, the injection of bad artifacts, compromised package management and artifact signing, abuse of privileges, and more. These vulnerabilities introduce a significant security gap in the software supply chain.

To secure CI/CD pipelines, many organizations are now integrating DevSecOps best practices into them. A DevSecOps approach ensures that security gets embedded into every phase of the pipeline. There are two main aspects of this:

- Incorporating application security testing strategies to establish a strong application security posture. These include static code analysis, dynamic application testing, open-source and third-party package management, secrets scanning, and vulnerability management.
- Instituting deployment security measures, including image and container security, infrastructure as code (IaC) and environment security, secrets management, cloud security, and network security.

DevSecOps processes and tools enable automated security checks and continuous monitoring that make securing the development process possible. By doing so, they also foster better collaboration among the development, operations, and security teams. This promotes a more secure development environment and more secure and resilient software.
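To make the idea of automated security checks concrete before walking through the pipeline stages below, here is a minimal sketch of a code-stage security gate. It is an illustration, not a prescription: it assumes a Python codebase and that the bandit (SAST), pip-audit (SCA), and gitleaks (secrets scanning) command-line tools are installed on the build agent; substitute whichever scanners your organization has standardized on.

```python
"""Fail the CI job if any embedded security scan reports findings.

A sketch under stated assumptions: a Python codebase with sources in
src/, and the bandit, pip-audit, and gitleaks CLIs on the build agent.
"""
import subprocess
import sys

SCANS = [
    ("SAST", ["bandit", "-r", "src", "-q"]),
    ("SCA", ["pip-audit"]),
    ("secrets scanning", ["gitleaks", "detect", "--source", "."]),
]


def main() -> int:
    failed = []
    for name, cmd in SCANS:
        print(f"--- running {name}: {' '.join(cmd)}")
        # Each of these tools exits nonzero when it finds an issue.
        if subprocess.run(cmd).returncode != 0:
            failed.append(name)
    if failed:
        print(f"Security gate failed: {', '.join(failed)}")
        return 1
    print("Security gate passed.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Wired into the pipeline as a required job, a gate like this enforces the shift-left principle: a pull request with a known-vulnerable dependency or a committed credential never reaches the build stage.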
The DevSecOps CI/CD Pipeline

The DevSecOps CI/CD pipeline uses automation to add key security measures and strategies to each of its five stages: code, build, test, deploy, and monitor.

1. Code

DevSecOps secures the coding stage of the CI/CD pipeline by automatically evaluating source code to detect vulnerable or faulty code that may have been checked in by developers. These automated evaluations include:

- Software Composition Analysis (SCA) to detect known vulnerabilities and license issues in open-source dependencies
- Static Application Security Testing (SAST) to automate scanning and analysis of code for early detection of vulnerabilities
- Secrets scanning and management to make sure sensitive information, including passwords, API keys, tokens, encryption keys, and more, is not added to the codebase

2. Build

This DevSecOps best practice automatically scans binaries and packages after the source code is committed, compiled into executable artifacts, and stored in an artifact repository. Automated scanning of binaries and packages should continue until the code is sent to the testing stage, where the code, still in the form of artifacts, is scanned in its entirety.

3. Test

At this stage, real-time testing of the application is conducted through attack simulations such as penetration testing, SQL injection, cross-site scripting, and more. Artifact scanning and Dynamic Application Security Testing (DAST) are also integrated into the CI/CD pipeline during this stage.

4. Deploy

The deployment stage can be leveraged as a key control point to automate security checks and enforce application security measures. Key practices include:

- Policy checks
- Compliance verification
- Digital scanning
- Infrastructure as Code (IaC) security
- Cloud security

5. Monitor

Continuous and automated monitoring must be implemented to ensure applications continue to operate as expected. A DevSecOps best practice is to add continuous scanning for Common Vulnerabilities and Exposures (CVEs) and integrate these scans with audit tools that can track and report on compliance with industry and organization rules.
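As with the code stage, the deploy-stage controls above lend themselves to automation. The sketch below is a hedged example, assuming Terraform definitions under an infra/ directory and the checkov and trivy CLIs on the deployment agent; the image tag is a placeholder.

```python
"""Deploy-stage security gate: block a release when IaC or image scans fail.

A sketch under stated assumptions: Terraform files live in infra/, the
checkov and trivy CLIs are installed, and IMAGE is a placeholder tag.
"""
import subprocess
import sys

IMAGE = "registry.example.com/myapp:1.2.3"  # hypothetical image tag

GATES = [
    # checkov exits nonzero when any IaC policy check fails.
    ("IaC policy scan", ["checkov", "-d", "infra/"]),
    # --exit-code 1 makes trivy fail the job when vulnerabilities are found.
    ("container image scan", ["trivy", "image", "--exit-code", "1", IMAGE]),
]


def main() -> int:
    for name, cmd in GATES:
        print(f"--- {name}: {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"Deployment blocked: {name} reported findings.")
            return 1
    print("All deploy-stage gates passed; proceeding with release.")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

The same pattern extends to the monitor stage: rerunning the image scan on a schedule against what is actually deployed catches newly published CVEs in artifacts that passed cleanly at release time.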
The Benefits of Taking a Best Practices DevSecOps CI/CD Approach

The most important DevSecOps best practices for CI/CD pipelines are:

- Shift Left: Integrate security tests and impose security checks as early as possible in the development process to make identifying and addressing vulnerabilities faster and easier.
- Automate Security Testing: Automatically conduct security tests and scans throughout the pipeline.
- Monitor Continuously: Identify and mitigate security incidents in real time through continuous monitoring.
- Collaborate Across Teams: Nurture collaboration among development, operations, and security teams to ensure security remains a top priority for everyone and to increase productivity.
- Utilize Regular Training and Awareness Programs: These programs are essential for ensuring that security best practices remain a priority for everyone with a stake in the CI/CD pipeline.

Companies that implement and maintain these DevSecOps best practices for CI/CD pipelines will enjoy several key benefits:

- Enhanced Security: Early identification and mitigation of vulnerabilities, leading to safer software, increased developer productivity, and cost savings.
- Accelerated Time-to-Market: Faster delivery of secure software that does not require time-consuming and expensive fixes.
- Happier and More Productive Teams: The ability to satisfy the needs of the development, operations, and security teams without compromising productivity or security standards.
- Reduced Security Risks: A dramatically lower risk of security breaches and data loss that can result in fines or damage to the brand.
- Proactive Compliance: Identification of regulatory and organizational compliance issues before violations occur.

Conclusion

As part of securing a company's software supply chain, every software organization should embrace DevSecOps for CI/CD pipelines. However, it's about more than just security. With DevSecOps for CI/CD pipelines, companies can accelerate software production and, equally important, improve the morale of everyone involved in the software development process, as well as those who track user satisfaction with that software.
By all means, the Product Operating Model (POM) has surged in popularity, especially among traditional organizations keen to prove their adaptability (and, of course, among the McBostonians who, now that “Agile” is dead, need a substitute to bill their junior consultants). On the surface, the product operating model promises a more customer-focused, outcome-driven approach. Empowered teams create value iteratively rather than following rigid, output-focused roadmaps. Best of all, they do so autonomously, well-aligned with the organization's overall strategy and the possibly myriad other teams working on different initiatives. Think of the Scaled Agile Framework (SAFe) done right.

Yet, for all its promises, the product operating model risks becoming another buzzword rather than an actual driver of transformation. Organizations that tout a “product-led” philosophy often do so without making the profound changes needed to live by it. This hollow adoption of product practices, or what we might call “Product Washing,” leaves companies stuck in the same old dynamics but with a new vocabulary: transformation by reprinting business cards. Does this sound familiar?

What Is the Origin of Product Washing?

To understand why product washing happens, we need to investigate the motivations of stakeholders who — intentionally or not — preserve the status quo, and the influence of consultancies eager to promote the product operating model as the latest cash cow. Here's how product washing unfolds and what it takes to implement a meaningful, sustainable product operating model:

Stakeholder Motivations: Keeping Control While Appearing “Product-Led”

Organizations may claim to adopt a product operating model, but the motivations of key stakeholders often reveal a different story. Executives, department heads, and even middle management have vested interests in preserving control over resources, project timelines, and strategic decisions. Here's how these motivations often block authentic POM adoption:

Protecting Established Power Structures

The product operating model fundamentally shifts decision-making to those closest to the customer and the technology — cross-functional teams that own their outcomes. However, this threatens the traditional hierarchy where the C-suite and senior leaders control what gets built, when, and how. Reluctant to relinquish this power, stakeholders may advocate for the new approach rhetorically but resist implementing the autonomy that defines it. As a result, decisions remain centralized, and teams operate under tight directives rather than genuinely owning their work.

Budget Control and Predictability

Executives accustomed to annual budgets, strict KPIs, and predictable outputs often view the product operating model as threatening the predictability they value. The POM requires flexibility, room for experimentation, and a tolerance for short-term uncertainty to achieve long-term outcomes. However, maintaining predictable, centralized budget controls often comes at the cost of team empowerment, likely reducing the approach to a new coat of paint on the same project-based, budget-driven model.

Fear of Accountability Shifts

A genuine POM shifts accountability to teams, who are responsible not just for shipping features but also for delivering business results. This shift is uncomfortable for leaders who measure success by project completion rather than customer impact.
These stakeholders may be wary of setting clear, outcome-driven goals, preferring to manage by tracking tasks or features completed — an approach that undercuts the entire product-model ethos but is much simpler to implement and follow through on, not to mention aligned with classic incentives still tied to the old way of working.

External Consultancies: POM as a Revenue Stream

Enter the consulting giants, McBoston and others. These firms have seized on the product operating model as the “next big thing” in corporate transformation, marketing it as a solution for every organization, regardless of context. Conveniently, this approach is also necessary if you want to bill junior consultants at senior consultant rates. This external influence often distorts POM adoption in several ways:

One-Size-Fits-All Approach

Consultancies often sell the POM as a turnkey solution, ignoring each client's unique culture, readiness, or needs. They focus on replicable templates and processes over context-sensitive transformation. This leads companies to mimic the structures of successful product companies without understanding or building the cultural foundations that make the POM work. When external consultants push a model without regard for an organization's maturity or constraints, the POM becomes another misaligned framework destined to fail.

Emphasis on Quick Wins Over Sustainable Change

Consultancies are incentivized to deliver visible, short-term results to justify their fees and retain clients. This necessity leads them to prioritize low-hanging fruit — creating “product teams,” renaming roles, or implementing surface-level changes — rather than addressing the fundamental cultural shifts the product operating model requires. These cosmetic changes provide the illusion of progress without actually enabling empowered, outcome-driven teams.

Continued Dependence on Consulting

When consultancies position themselves as the experts and orchestrators of transformation, they often retain decision-making power. This dependency is counter to the POM's principle of empowering internal teams. By embedding consultancies into core decisions, organizations bypass their own people's input and miss opportunities to cultivate product leadership from within. The result? An organization that looks product-driven from the outside but still leans on consultants for direction.

Anti-Patterns of a Hollow Product Operating Model Transformation

The signs of a superficial or misguided product operating model implementation — classic “product washing” — are often straightforward, yet they persist in many organizations. Here are the most common anti-patterns, along with their implications:

Surface-Level Structural Changes

To signal alignment with the product operating model, many companies reorganize teams, rename departments, or create “product-focused” titles but fail to provide the necessary empowerment or accountability. They add layers and labels without shifting the underlying culture or processes. This approach leads to a “feature factory” where teams are busy delivering outputs but are not genuinely accountable for customer outcomes.

Renaming Project Managers and Business Analysts as Product Managers

A particularly insidious example of product washing is rebranding existing project roles as “product managers” without redefining their roles or responsibilities. These “product managers” are probably still judged by delivery timelines and feature completion rather than the impact on customer experience or business goals.
This perpetuates a task-oriented mindset incompatible with true product thinking, leaving organizations stuck in project-based thinking with new titles.

Token Autonomy and Faux Empowerment

In a product-led organization, teams have genuine autonomy, including decision-making authority and accountability for outcomes. However, in many failed product operating model implementations, this autonomy is only skin-deep. Teams are empowered in name but restricted by top-down directives, rigid roadmaps, and a lack of access to the data, funds, or tools they need to make meaningful decisions. Faux empowerment is not merely a recipe for frustration, as teams are expected to “own” outcomes without real agency. It is also a genuine sign of product washing.

Metrics Without Meaning

Many organizations applying the product operating model continue using traditional metrics like feature velocity or on-time delivery, which gauge activity rather than value. Accurate product-led metrics focus on outcomes — customer satisfaction, usage, retention, and revenue impact. Yet, because outcomes are harder to quantify and take longer to realize, often beyond the tenure of the responsible individual, managers may advocate using familiar but meaningless metrics. This gives the organization the illusion of success, likely without genuine progress, but in alignment with existing bonus systems.

Steps Toward a Genuine Product Operating Model Transformation

Avoiding product washing requires organizations to go beyond labels and create conditions where proper product-led practices can thrive. Here's what it takes:

Establish True Leadership Alignment on Product Goals

Leadership must buy into the principles of the product operating model beyond appearances or quick wins. The CEO, CPO, CTO, and other leaders must visibly support the shift to outcome-driven work, even if it means relinquishing some control. This may require fundamental shifts in how budgets, priorities, and strategic initiatives are determined. Leaders must demonstrate commitment by promoting experimentation, creating a failure culture, valuing customer impact, and prioritizing long-term goals over short-term metrics.

Build Competency, Not Dependency

Instead of relying on consultants to drive the transformation, focus on building internal expertise. Train product managers, designers, and engineers in customer-centric thinking, data literacy, and business acumen. This investment will take (much) more time but, consequently, allows the organization to make informed decisions independently and empowers teams to drive their own product success without an external crutch.

Shift Metrics to Focus on Outcomes; Adapt Bonus Systems

Meaningful metrics are critical. Move away from delivery-based metrics like feature count or project completion toward outcome-focused metrics that reflect customer value and business impact. Customer retention, satisfaction, and usage are far more telling than feature velocity. This shift in measurement reinforces a culture focused on impact rather than activity. Moreover, ensure that the required change in metrics is reflected in everyone's bonus structure.

Start Small, Learn, and Scale Intentionally

Instead of a top-down, all-in POM rollout, begin with pilot teams that are genuinely empowered, accountable, and aligned with a clear product vision. Gather insights from these initial teams and iterate on what works and what doesn't.
Gradually scaling the product operating model adoption helps avoid over-commitment to a rigid model and provides opportunities to adjust based on actual outcomes and feedback.

Commit to a Culture of Learning and Adaptation

Product-led organizations are never static. Build mechanisms for continuous feedback, from customers and from within the organization, and establish a culture where teams feel safe to challenge assumptions and to fail in the process of creating value. This mindset of learning and iteration is the cornerstone of any successful product organization and enables lasting transformation over surface-level change.

Conclusion

The Product Operating Model's allure is powerful, but without real commitment, it's little more than another management fad. Product Washing — implementing the POM in name only — is worse than not adopting the product operating model at all, as it breeds cynicism and disillusionment within teams. True transformation requires profound, often uncomfortable cultural shifts, accountability, and leadership.

For organizations serious about the product operating model, the journey starts with clear intent and a willingness to address uncomfortable truths. Abandon the quick wins and invest in building product expertise, real team empowerment, and an unwavering focus on outcomes. Anything less is just another rebranding exercise destined to fail: product washing by another name.
DevOps and Agile are two distinct software development methodologies aimed at maximizing efficiency and speed in product releases. Agile gained prominence in the 2000s, revolutionizing product development practices, particularly in software. However, it initially overlooked the operational aspects of product deployment and management, prompting the need for DevOps to bridge this gap. As many organizations adopt both DevOps and Agile methodologies, it becomes crucial for product managers and their teams to understand the differences between DevOps and Agile in order to use each methodology effectively.

What Is DevOps?

DevOps is a development methodology emphasizing collaboration, integration, and communication among IT professionals to facilitate rapid product deployment. It promotes a culture of collaboration between development and operations teams, enabling faster and automated code deployment to production. DevOps aligns IT operations and development to enhance the speed of delivering services and applications, driving business competitiveness and customer service levels.

It offers specific benefits across three key categories: business, cultural, and technical. Business benefits include improved trust and team collaboration, resulting in stable operations and faster delivery. Cultural benefits lead to more efficient and productive teams, ultimately enhancing customer satisfaction. Technical benefits manifest in quicker problem resolution, continuous delivery, and reduced complexity, enabling faster and higher-quality code deployments compared to traditional methods. Some of the other benefits of DevOps are mentioned below.

- Trust in Collaboration: DevOps fosters shared transparency and responsibility, enabling quicker feedback and problem-solving and reducing finger-pointing and misaligned priorities.
- Smart Work and Fast Releases: Teams practicing DevOps release deliverables more frequently with higher stability and quality, thanks to automation and streamlined processes.
- Accelerated Resolution Time: Seamless communication and transparency in DevOps enable quick issue resolution, minimizing downtime and improving customer satisfaction.
- Better Management of Unplanned Work: DevOps enables clear prioritization and efficient handling of unplanned tasks, reducing distractions and enhancing productivity.
- Dependable Resolution of Issues: DevOps ensures swift response and continuous improvement through frequent small changes, replacing traditional waterfall models with agile operations.
- Better Organizational Flexibility: DevOps breaks down departmental barriers, promoting alignment and cooperation across global teams and enhancing business agility and consistency.
- Fostering Creativity Through Automation: Automation in DevOps frees up time for innovation, encourages cross-team collaboration, and supports continuous improvement and creativity.

What Is Agile?

Agile is an iterative software development and project management approach emphasizing customer feedback, collaboration, and rapid releases. It enables development teams to react and adapt to changing customer needs and market conditions through upfront design and planning followed by development in small increments. Continuous stakeholder collaboration and methodologies like eXtreme Programming (XP) and Scrum support quick adaptation to changes and frequent releases of usable product versions, in contrast with traditional waterfall models. Preparing for a Scrum Master role involves deeply understanding Agile methodologies.
Scrum, a popular Agile framework, is often a focal point in Scrum Master interview questions. Candidates should be well versed in Scrum principles, roles, and ceremonies to effectively manage Agile projects and address common questions in interviews. To learn more about Agile methodologies, follow this blog on Agile vs. Waterfall and understand how each software development lifecycle (SDLC) model differs.

Here are some benefits that Agile offers:

- Higher Quality Product: Agile prioritizes working software over extensive documentation, enabling modular software delivery and rapid updates, including security and feature enhancements.
- High Customer Satisfaction: Agile involves customers in the decision-making process, ensuring products meet their requirements quickly, reducing time-to-market, and fostering repeat business.
- Enhanced Business Value: Agile focuses on maximizing customer value through iterative cycles that align deliverables with customer needs, empowering teams at all levels to prioritize effectively.
- Adaptability: Agile emphasizes flexibility and responsiveness to changes, enabling teams to adjust priorities and plans quickly to meet evolving client demands.
- Improved Control: Agile provides transparency, quality control features, and integrated feedback mechanisms that enhance project management control and ensure high-quality outcomes.
- Fewer Risks: Agile environments support skilled individuals by fostering creativity, collaboration, and skill development, reducing organizational inefficiencies, and increasing personal responsibility.

So far, we have learned about DevOps and Agile and their benefits. Next, we will learn the key differences between DevOps and Agile. While the two are often interconnected and implemented together in projects, they also have distinct differences, as we will see below.

Key Differences: DevOps vs. Agile

DevOps and Agile are pivotal methodologies in the SDLC, each with unique focuses and benefits. Agile emphasizes iterative development, customer feedback, and collaboration within teams. In contrast, DevOps extends this collaboration to operations, ensuring a smooth Continuous Integration and Continuous Delivery (CI/CD) process. Below are the key differences that set these methodologies apart.
| Parameter | DevOps | Agile |
| --- | --- | --- |
| Definition | A practice that brings development and operations teams together. | A continuous, iterative approach focusing on collaboration, customer feedback, and small, rapid releases. |
| Purpose | Manages end-to-end engineering processes. | Manages complex projects. |
| Scope | A company-wide approach to software deployment. | Focuses on developing a single application or project. |
| Task | Focuses on continuous testing and delivery. | Focuses on constant changes and iterative development. |
| Team Size | Larger teams that integrate all stakeholders. | Typically small teams, facilitating faster movement. |
| Team Skill Set | Divides and spreads skill sets between development and operations teams. | Emphasizes a wide variety of skills among all team members. |
| Implementation | Lacks a single framework; focuses on collaboration. | Can be implemented using frameworks like Scrum, Kanban, etc. |
| Duration | Aims for continuous delivery, ideally daily or hourly. | Works in short cycles called sprints, typically less than a month. |
| Target Areas | End-to-end business solutions and fast delivery. | Primarily software development. |
| Feedback | Relies on internal team feedback for improvement. | Receives feedback directly from customers. |
| Shift Left Principle | Supports both "shift left" and "shift right" principles. | Supports only the "shift left" principle. |
| Focus | Operational and business readiness. | Functional and non-functional readiness. |
| Importance | Emphasizes development, testing, and implementation equally. | Prioritizes developing software as per requirements. |
| Tools | Puppet, Chef, AWS, etc., for automation. | Bugzilla, JIRA, etc., for project management. |
| Quality | Enhances quality with automation and early bug detection. | Ensures quality through iterative improvements. |
| Automation | Prioritizes automation for efficient software deployment. | Does not emphasize automation as much as DevOps. |
| Communication | Specs and design documents. | Daily Scrum meetings. |
| Documentation | Values process documentation for operational readiness. | Prioritizes a working system over comprehensive documentation. |
| Priority | Emphasizes the continuous deployment of software. | Prioritizes the continuous delivery of software. |
| Implementation Frameworks | CAMS, CALMS, DORA, etc. | Scrum, Kanban, XP, etc. |
| Alternatives | Contrasts with silo-based development and deployment. | Contrasts with Waterfall as a traditional alternative. |

Now that we have understood the key differences between DevOps and Agile, it is important to learn when to use which approach to improve project results.

When Do DevOps and Agile Work Together?

DevOps complements Agile in two main ways: as a missing puzzle piece that enhances Agile practices and as an evolved version of Agile itself. It integrates Agile innovations into operational processes, amplifying feedback loops and facilitating seamless communication between and across teams. Scrum supports this communication through practices like retrospectives, planning meetings, and daily stand-ups. Integrating DevOps and Agile methodologies enhances business performance, leading to significant revenue growth. It simplifies release processes, improves product offerings, and maximizes collaboration.
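To make the synergy concrete, consider a minimal sketch of DevOps automation supporting an Agile cadence: every push runs the team's test suite, and only a green build is promoted. This is a hypothetical illustration, not any specific product's pipeline; the `deploy.sh` script and `tests/` directory are invented placeholders.

```python
"""A hypothetical sketch of a CI/CD gate: test on every push, ship only green builds.
Commands and script names are placeholders, not a specific vendor's API."""
import subprocess
import sys

def run(cmd: list[str]) -> bool:
    """Run a shell command and report whether it succeeded."""
    print("+", " ".join(cmd))
    return subprocess.run(cmd).returncode == 0

def main() -> int:
    # Continuous integration: fail fast on broken functionality.
    if not run([sys.executable, "-m", "pytest", "tests/"]):
        print("Tests failed; stopping the pipeline.")
        return 1
    # Continuous delivery: promote the same tested artifact automatically.
    if not run(["./deploy.sh", "staging"]):  # hypothetical deploy script
        print("Deployment failed; alert the on-call engineer.")
        return 1
    print("Build tested and deployed to staging.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

In practice this logic lives in a CI server rather than a hand-rolled script, but the shape is the same: the Agile team's sprint work flows through an automated DevOps gate on its way to production.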
Implementing both methodologies supports the seamless implementation of CI/CD pipelines, ensures higher value and lower risk in releases, and offers benefits such as quicker issue resolution, fewer bugs, increased visibility, improved customer satisfaction, and higher-quality products. However, when implementing DevOps and Agile practices together, project teams, developers, and testers might face challenges due to the different sets of tools required to leverage the full capabilities of each approach. Maintaining continuous testing across multiple environments and devices can be resource-intensive and time-consuming, and scaling test processes to meet the demands of frequent releases and high performance standards can also be challenging. Organizations and teams can use a cloud-based platform that integrates with project management tools and facilitates continuous testing to accelerate the feedback cycle and improve testing efficiency. Leveraging such solutions allows teams to conduct manual and automated testing at scale across various devices, browsers, and operating systems. To further enhance your CI/CD process, advanced testing platforms can be employed to boost the speed and efficiency of automation testing. Integrating HyperExecute into your CI/CD pipeline can significantly accelerate test execution, providing swift feedback on code modifications. It lets you manage and execute extensive test suites concurrently, optimizing resource allocation to lower infrastructure expenses while ensuring robust and consistent tests across diverse environments, elevating overall software quality.

Conclusion

While Agile methodologies have brought significant successes to many teams, others have faced challenges due to misunderstandings or incorrect implementations. Incorporating DevOps can effectively address these gaps, offering a synergistic approach that enhances software development quality and speed. Embracing DevOps and Agile practices together positions teams for greater success in achieving their development goals.
(Note: This is a continuation of the article "What Is a Performance Engineer and How to Become One: Part 1.")

Create a Monitoring Framework/Strategy

A monitoring strategy is a plan for monitoring, measuring trends, and analyzing your systems, apps, components, and websites. A good monitoring system allows real-time visibility into the application's availability, performance, and reliability for end users, and a first look inside the application through logs, traces, and metrics allows performance engineers to fix problems early. Performance engineers must understand how to:

- Define goals and objectives
- Identify what to monitor (e.g., critical metrics)
- Set benchmarks, thresholds, and alerts
- Create dashboards
- Choose the right monitoring tool
- Collect data on monitored metrics
- Analyze the data regularly
- Develop an effective incident response plan
- Implement real-time monitoring
- Identify trends and patterns

They should continuously optimize the monitoring framework to maximize its effectiveness, ensuring optimal performance, preventing costly downtime, and creating a positive user experience. Finally, if you keep your systems in check, they will keep your business ahead.

Know How to Set Up Dashboards for Server Metrics and Application Logs

Setting up dashboards for server metrics and logs depends on how structured your logs are, how deeply nested the metrics are, and how your data correlates and provides business value. Understanding the key metrics and logs an organization wants and needs to track is probably the most critical aspect of creating meaningful dashboards. A solid dashboard can only be created by combining the needs and preferences of business users with the data, visualization, and automation capabilities of a dashboard solution. Performance engineers must spend as much time asking questions and learning about the business's needs as they spend building the dashboard itself. They must also understand how to monitor and gather statistics on their applications' custom metrics and on metrics not provided by Amazon Web Services (AWS) by default, for example. Creating a metric filter on the appropriate log group where the required data is captured, knowing the metric details, assigning the metrics, graphing the relevant metric, and adding it to the dashboard are key concepts for performance engineers to know. The most common approach is to start from one of the hundreds of pre-defined templates to create dashboards and use the system to correlate your logs, traces, and metrics.

Propose Effective Performance Optimization Solutions

Proposing effective performance optimization solutions for an application or system is not a straightforward exercise, and it requires considerable skill and experience. A frequent root cause of performance problems is the selection of the wrong algorithm for the task, sometimes compounded by choosing the wrong data structures. Performance engineers and developers need to figure out where the bottlenecks are and focus their time on them. Once we know what to optimize, we need to figure out the limitation: is it algorithmic complexity, disk I/O, memory thrashing, or something else? Implementing a permanent performance fix requires a deep understanding of the architecture design and a thorough analysis of hardware and software components to identify any areas that require improvement.
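To make the bottleneck hunt concrete, here is a minimal profiling sketch using Python's standard-library cProfile; the two lookup functions are invented examples, not code from any particular system. It shows how a profile exposes an algorithmic bottleneck (repeated linear scans) and confirms the fix (a set-based lookup).

```python
"""Minimal cProfile sketch: expose an algorithmic bottleneck, then verify the fix."""
import cProfile
import io
import pstats

def slow_lookup(items, targets):
    # O(n*m): scans the whole list once per target -- a classic algorithmic bottleneck
    return [t for t in targets if t in items]

def fast_lookup(items, targets):
    # O(n+m): a set membership test replaces the repeated linear scans
    item_set = set(items)
    return [t for t in targets if t in item_set]

def profile(func, *args):
    profiler = cProfile.Profile()
    profiler.enable()
    func(*args)
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
    print(out.getvalue())

if __name__ == "__main__":
    items = list(range(50_000))
    targets = list(range(0, 100_000, 2))
    profile(slow_lookup, items, targets)  # dominated by list membership checks
    profile(fast_lookup, items, targets)  # same work, a fraction of the time
```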
There is no point in optimizing parts of the code that do not contribute significantly to the application's overall CPU usage, disk I/O, or responsiveness. With the help of metrics, logs, and traces from monitoring tools, we can first isolate the guilty layer or component in the architecture and then start fine-tuning it for the desired performance. In today's constantly changing, distributed, complex environments, a structured performance optimization approach must be applied to all aspects of the system to avoid quick-and-dirty performance fixes. Performance engineers should not overlook the initial analysis; they should focus on optimizing the common cases and, for example, consider the impact of the newest part of the system first.

Build a CI/CD Pipeline for Doing Performance Testing

A continuous integration and continuous deployment (CI/CD) pipeline in performance testing is an automated process that runs performance checks at each stage of software development. Performance engineers must choose the right tools (e.g., LoadRunner, JMeter, Gatling, BlazeMeter) that integrate well with CI/CD performance testing, bringing load, stress, and spike tests into the development process. This helps catch performance issues early and prevents costly fixes and downtime. CI/CD pipelines automate the process of merging code changes, testing, and deploying applications, which results in faster and more consistent software releases. The primary goals of developing a CI/CD pipeline for performance testing are to:

- Understand the importance and benefits of incorporating performance tests into CI/CD workflows
- Become more familiar with the tools and technologies that support CI/CD performance testing
- Analyze test results ahead of deployment, before issues hit production

Additionally, performance engineers should be ready to change the test approach as the application evolves and new features roll out. In many cases, the tests should change and be reviewed regularly.
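As a hedged illustration of such a pipeline gate, the sketch below parses a JMeter CSV results file and fails the build when latency or error rate exceeds a budget. The file name and thresholds are assumptions to tune per pipeline; it assumes JMeter's default CSV output with `elapsed` and `success` columns.

```python
"""Sketch of a CI performance gate: parse JMeter CSV results (e.g., written with
`jmeter -n -t plan.jmx -l results.jtl`) and fail the build on a regression.
File name and budgets are assumptions -- adjust per pipeline."""
import csv
import statistics
import sys

JTL_FILE = "results.jtl"      # hypothetical results file from the load-test stage
P95_BUDGET_MS = 800           # hypothetical latency budget
ERROR_RATE_BUDGET = 0.01      # at most 1% failed samples

def main() -> int:
    elapsed, failures, total = [], 0, 0
    with open(JTL_FILE, newline="") as fh:
        for row in csv.DictReader(fh):  # JMeter CSV includes 'elapsed' and 'success'
            total += 1
            elapsed.append(int(row["elapsed"]))
            if row["success"].lower() != "true":
                failures += 1
    p95 = statistics.quantiles(elapsed, n=100)[94]  # 95th percentile cut point
    error_rate = failures / total
    print(f"samples={total} p95={p95:.0f}ms error_rate={error_rate:.2%}")
    if p95 > P95_BUDGET_MS or error_rate > ERROR_RATE_BUDGET:
        print("Performance budget exceeded -- failing the build.")
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

A nonzero exit code is all a CI server needs to stop the pipeline, which is what turns a load test from a report into a gate.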
Gain Knowledge of Microservices

Firstly, as performance engineers, we need to learn about our application's performance problems as early as possible in the software development life cycle. Software performance testing and engineering are challenging, and in some ways they are even more difficult than application correctness testing, which is itself extremely difficult. If you're developing a new application or microservice, performance testing is a task to do a little bit every single day, even before you begin writing code. Performance engineers and developers should profile their code as they write it so that they are continuously aware of how it spends its users' time. Microservice applications evolved from monolithic architectures to better handle highly erratic user traffic. As applications have grown in size and complexity, development time and cost have also increased, and monoliths bring major difficulties in scaling, synchronizing changes, delivering new features, and replacing frameworks or libraries. In this scenario, the microservices architecture is a possible solution for overcoming these difficulties. In a microservices application, many supporting services operate faster or slower depending on total application traffic or the state of the resource. With microservices performance testing, it is important to check that the tests cover all functionality in the service and exercise it completely with a variety of valid and invalid arguments. If the microservice is used in a repetitive or bundled manner, it is good to check its performance under those conditions as well. This ensures that the overall rendered time for any transaction is not affected. To test that, we must (a minimal sketch follows this list):

1. Create a single-user script with the microservice.
2. Execute it for a definitive period of time using the tool you are using, like LoadRunner or RPT.
3. Check the average response times, throughput, and performance degradations.
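Here is a minimal stand-in for that single-user check, using only Python's standard library instead of LoadRunner or RPT; the endpoint URL, duration, and output format are hypothetical.

```python
"""A minimal single-user microservice check: hit one endpoint for a fixed period
and report average latency and throughput. Endpoint and duration are placeholders."""
import statistics
import time
import urllib.request

ENDPOINT = "http://localhost:8080/api/quote"  # hypothetical microservice
DURATION_S = 60                                # the "definitive period of time"

def run_check() -> None:
    latencies_ms = []
    deadline = time.monotonic() + DURATION_S
    while time.monotonic() < deadline:
        start = time.perf_counter()
        with urllib.request.urlopen(ENDPOINT, timeout=5) as resp:
            resp.read()  # consume the body so the full response is timed
        latencies_ms.append((time.perf_counter() - start) * 1000)
    avg = statistics.mean(latencies_ms)
    throughput = len(latencies_ms) / DURATION_S
    print(f"requests={len(latencies_ms)} avg={avg:.1f}ms "
          f"throughput={throughput:.1f}/s max={max(latencies_ms):.1f}ms")

if __name__ == "__main__":
    run_check()
```

Running the same script before and after a change gives a quick signal on whether the service's response times or throughput have degraded.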
Mentor and Train People for a High-Performance Culture

High performance is about helping people become more of what they already are, not what we think they should be. One way we achieve a high-performance culture is by accomplishing big things together, which is why a shared mission, goals, and core values are so important for a team. This begins with hiring the right people for the job, but it goes far beyond that. A high-performance culture involves being passionate about creating an atmosphere that encourages and inspires employees to succeed. We must build team spirit by grouping people in ways that maximize training and skill-building, by occasionally spending fun time together, and by continually reading and staying knowledgeable. Staying aware of everything going on around you helps you recognize whether you are adding a burden to your clients' shoulders or helping strip it away.

Get Familiar With Chaos/Resilience Engineering Tools

How do you implement chaos engineering for your organization? There's lots of discussion along the lines of "You are not Netflix; you don't need it," but I think that misses the point of chaos engineering. Your production systems will have failures and downtime, and if those failures impact your business, chaos engineering is about testing and correcting latent issues while everyone is around, rather than waiting for them to occur on their own. Netflix and Gremlin are two companies that I know have strong engineering practices regarding chaos engineering, especially with tools like Gremlin, Chaos Monkey, and Kube-monkey. For example, Netflix runs over 1,000 chaos experiments daily, which has helped reduce outages by 70% since 2018. Another example is Amazon's 2018 Prime Day sale, which saw a 54% increase in orders per second compared to the previous year; without rigorous performance testing and chaos testing, their systems might have buckled under the pressure. It is very important for performance engineers to learn chaos tools when evaluating practices like chaos engineering, both to understand whether their own business can benefit from it and because such tools may be most appropriate for very large service providers with strict uptime requirements.

Why Should Every Performance Engineer Learn Chaos Engineering and Its Tools?

- If a performance testing team has a short turnaround before a deadline, the product or application team has to cut corners, often at the expense of quality protocols and testing time.
- As performance engineers, we sometimes test only the minimum features and hope they will deliver the desired performance. This is unacceptable, and we often end up with bad performance.
- In software, once your application is labeled as performing badly, you rarely get a second chance to achieve the expected ROI.

In such scenarios, performance engineers can craft effective chaos tests, starting from minimal scenarios and moving toward real-world complexity, to understand what will happen when they run a certain attack, what could go wrong, and what can actually happen. While serious chaos tooling can require a big investment, it's easy to get started with manual chaos testing. Simple attacks to start with include:

- Restarting a process
- Rebooting a host
- Introducing network latency or isolation to check the resilience of the systems

Know What the Operating Systems Are Doing

A performance engineer should have a good understanding of the operating system used by their applications and know how changes in OS settings can impact the performance of the application under test. Users run applications on many configurations spanning different operating systems (Windows, RHEL, CentOS, Ubuntu, etc.), and all of these need to be tested for performance. As performance engineers, we should understand how operating systems work, especially when:

- Writing a multi-threaded app
- Running a cron job with the help of scheduling
- Building a distributed app
- Dealing with memory management
- Developing an Android app

These activities all require knowledge of the OS to ensure proper interaction with system resources. For example, as many performance engineers now analyze fleets of servers to predict and fix performance failures, we need to understand the functionality of the OS and all its interactions with I/O to make sure data is collected properly. When an application is load tested, the programs being interpreted, compiled, or executed inside browsers and servers are all processed by the operating system, which decides how to allocate CPU cycles and memory for each program to run. Performance engineers need to understand how an OS manages resources like memory, CPU, and I/O devices. This knowledge helps in:

- Optimizing application performance issues
- Optimizing resource usage
- Debugging and troubleshooting issues related to system performance, crashes, and resource leaks

You do not generally need deep operating system expertise, but it proves useful in the situations that need the most attention.

Experience With SQL and NoSQL Databases

If you are serious about advancing your career in performance engineering, especially on the technical ladder, then experience with both SQL and NoSQL databases is a must, as we work in distributed environments with a different tech stack every time. Learning more about databases is never a waste of time for performance engineers. One can't say that SQL is better or NoSQL is better; it depends entirely on the application's needs and other requirements. For example, if the data is continuously growing at a high rate, NoSQL may be the better fit; otherwise, SQL may serve well, though this is a rule of thumb rather than a direct recommendation. SQL databases are vertically scalable, meaning an increase in load can be managed by adding CPU, RAM, SSD, and so on to a single server. NoSQL databases, by contrast, scale horizontally, handling increased traffic by adding more servers. Performance engineers must gain extensive knowledge of how the application stack creates and manages database connections: how the application uses connection pools, disposes of connections properly, performs updates, inserts, and deletes, and commits transactions. They must also become proficient in optimizing long-running queries (e.g., choosing the right indexes, avoiding full table scans, and using joins effectively), as sketched below.
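As a small, self-contained illustration of index-driven query optimization, the following uses SQLite from Python's standard library; the table and column names are invented for the example.

```python
"""Illustrating how an index changes a query plan, using in-memory SQLite.
Table and column names are invented for demonstration."""
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)")
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 1000, i * 0.5) for i in range(100_000)],
)

query = "SELECT COUNT(*), SUM(total) FROM orders WHERE customer_id = ?"

# Without an index, the planner must scan the whole table.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print("before index:", plan)   # expect: SCAN orders

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index, the same query becomes a targeted index search.
plan = conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall()
print("after index:", plan)    # expect: SEARCH orders USING INDEX idx_orders_customer
```

The same habit of reading the execution plan before and after a change applies to production engines such as MySQL, PostgreSQL, or Oracle, where the plans are richer but the reasoning is identical.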
From a performance engineering perspective, there may not be a single root cause; problems often involve multiple issues. Hence, performance engineers need to understand database design, execution plans, the role of indexes and their tradeoffs (for example, optimized read times versus write times), and caching mechanisms (e.g., Redis and Memcached, and how caching can relieve load on the database), along with familiarity with tools such as SQL Profiler, Oracle AWR, and the MySQL slow query log. Finally, although performance engineers don't usually handle data directly, being good with these concepts is essential for addressing performance issues originating in databases.

Cloud Knowledge

In today's world of cloud computing, performance is a very important requirement. If you don't measure it, how would you know whether you're meeting the expectations of your consumers, managers, investors, and end users? Performance testing on a cloud doesn't automatically guarantee the desired performance. Performance engineers need to understand the need for performance testing and how it changes with changes in technology; for instance, more and more applications are being moved to the cloud, and many cloud services on the market already provide comprehensive performance monitoring solutions as part of the package. The key benefits of moving performance and load-based application testing to the cloud include:

- Lower capital and operational costs
- Support for distributed development and testing teams
- The ability to simulate load tests with millions of concurrent users from multiple geographical locations

Cloud is a good choice for organizations that do not want a fully dedicated investment in testing infrastructure, as it fulfills all test environment needs and requirements. Automation and scripting are key components in cloud performance testing and engineering. For example, we can (a sketch appears at the end of this section):

- Develop a script that constantly monitors system performance
- Send alerts if any issues are detected, which helps minimize downtime and improve overall system stability

We must keep ourselves up to date with a cloud service of our choice (AWS, Azure, GCP, etc.); the core functionality is much the same across providers. If we are moving an existing application from physical machines to cloud VMs, it's good to:

- Test and compare the results for both
- Use server configurations in the VMs similar to the physical ones, to find out which performs better and why
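Here is that monitor-and-alert script as a minimal sketch, assuming the third-party psutil package (`pip install psutil`); the thresholds and the alert hook are placeholders, since a real setup would page an on-call rotation or post to a chat channel.

```python
"""Sketch of a constant system-performance monitor with threshold alerts.
Assumes the third-party psutil package; thresholds and alerting are placeholders."""
import psutil

CPU_THRESHOLD = 85.0   # percent, hypothetical
MEM_THRESHOLD = 90.0   # percent, hypothetical

def alert(message: str) -> None:
    # Placeholder: wire this to email, chat, or an incident-management tool.
    print(f"ALERT: {message}")

def monitor(interval_s: int = 5) -> None:
    while True:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        print(f"cpu={cpu:.1f}% mem={mem:.1f}%")
        if cpu > CPU_THRESHOLD:
            alert(f"CPU at {cpu:.1f}% exceeds {CPU_THRESHOLD}%")
        if mem > MEM_THRESHOLD:
            alert(f"Memory at {mem:.1f}% exceeds {MEM_THRESHOLD}%")

if __name__ == "__main__":
    monitor()
```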
Be Familiar With the Networking Concepts

Whether you are a small enterprise or a large organization, the performance of your network infrastructure can make or break your success. In today's interconnected world, the smooth flow of data is essential for virtually every aspect of business. Performance testing is not just a way to assess network performance; it helps identify areas where throughput is not as expected, causing network issues. For example, you might want to measure the throughput, jitter, packet loss, or response time of your network, and you need to specify the baseline and target values for each metric along with the acceptable range of deviation. Another challenge with most protocol-based load-testing frameworks is writing dynamic load-test scripts that involve sessions or cookies, and identifying performance bottlenecks is a very challenging task for performance engineers and network administrators alike. In most cases, it demands significant resources and manual effort to assess and measure network performance accurately. There are numerous network conditions that affect the performance of an infrastructure, such as the specifications of its routers and switches, the way it is designed and configured, the type of internet connection, and so on. As a performance engineer, you needn't learn everything about how networks work; rather, learn how to make your application traffic more resilient. Start by understanding how application data flows between systems: what talks to what, on which ports, which host initiates the connections, which protocols are involved, the OSI layer they operate at, and their method of transport (usually TCP, UDP, or another protocol). This will help performance engineers understand why an application fails or times out. It might be the network, and knowing how to use tools such as ping, traceroute, netcat, or Nmap will help diagnose network performance problems between applications.
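As a tiny programmatic complement to those command-line tools, the sketch below measures TCP connect latency to a host and port; the target is a placeholder, and a real diagnosis would combine this with traceroute-style path analysis.

```python
"""A tiny ping/netcat-style check: measure TCP connect (handshake) latency.
Host and port are hypothetical placeholders."""
import socket
import statistics
import time

HOST, PORT = "example.com", 443   # hypothetical target service
SAMPLES = 10

def connect_latency_ms(host: str, port: int, timeout: float = 3.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only time the handshake
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    samples = [connect_latency_ms(HOST, PORT) for _ in range(SAMPLES)]
    print(f"{HOST}:{PORT} min={min(samples):.1f}ms "
          f"median={statistics.median(samples):.1f}ms max={max(samples):.1f}ms")
```

A consistently high or highly variable connect time points toward the network path or the remote listener rather than the application code.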
Learn How to Leverage AI

The future of performance testing is bright, and AI is playing a key role in changing performance engineering. Not every performance engineer needs to learn AI exhaustively; instead, they should gain knowledge of how AI tools provide valuable insights, allowing them to quickly identify bottlenecks and make recommendations on application, system, and network performance. The role of AI in performance testing and engineering is expected to grow, continuing to provide deeper insights and more efficient performance testing processes and helping customers and businesses achieve high-performance, reliable, and resilient systems. By leveraging AI features in various performance testing tools, performance engineers can simplify complex processes, reduce errors, and accelerate development timelines. Many companies and stakeholders are now focusing on building AI performance testing tools that automatically create scripts, run tests, analyze results, and provide in-depth explanations of performance metrics with intelligent monitoring practices. For example, when a performance engineer runs a load test, AI-assisted tools can interpret graphs and analyze test results, identifying bottlenecks in the system under various anticipated traffic conditions. This helps in understanding not only current performance but also potential bottlenecks and anomalies.

Conclusion

Clearly, learning an extensive list of basic to complex technologies in full is hardly possible. Performance testing and engineering remains a challenging field for a number of reasons: it is subjective and complex, there may not be a single root cause, and problems often involve multiple issues. A number of job roles contribute to performance, including system administrators, site reliability engineers, application developers, network engineers, database administrators, web administrators, and other support teams. For many of these roles, performance is only one aspect of the job, and performance analysis focuses on that role's area of responsibility: the network team checks the network, the database team checks the database, and so forth. Companies are hiring multiple performance engineers, allowing individuals to specialize in one or more areas and provide deeper levels of support with strong root-cause analysis skills. For example, a large performance engineering team may include specialists in OS performance, client performance, network performance, cloud performance, language performance (e.g., Java, .NET), runtime performance (e.g., the JVM or CLR), performance tooling, and more. As performance engineers, we must be multi-skilled and regularly keep ourselves updated on emerging technology trends, best practices, and cutting-edge technologies to meet current market expectations and business requirements.
Test automation has become a mandatory requirement in the fast-paced software industry. It helps quickly test the functionality, stability, performance, and security of applications, and continuous testing using test automation allows us to deliver good-quality applications to end users. Though many companies these days start writing automation tests in parallel with development, some still lag in test automation due to a lack of personnel, tools, techniques, and skills. These companies rely on manual exploratory testing for functional validation and regression testing. Performing regression testing manually has multiple side effects:

- Tedious, repetitive tasks
- Time consumption
- Increased time to market
- Error-prone manual execution
- Less confidence in the builds

All of these side effects can lead to error-prone builds being shipped to end customers, affecting product quality and indirectly hurting the business. The solution is to automate the tests so the testing team can turn its attention to other aspects of the application. The QA team can focus on activities like usability, UI/UX, performance, and security testing to enhance the overall quality of the product, and on less-tested areas of the application to uncover more issues and fix them before the product is released to customers.

Five-Point Plan to Start Test Automation

The following five-point plan will help the test team implement test automation for software development:

1. Create a test automation plan
2. Check the stability of the application for test automation
3. Identify critical test scenarios that are a good fit for automation
4. Prioritize the tests according to the product's critical test journeys
5. Work step by step to add the test scenarios to the automation suite

Create a Test Automation Plan

The first point is to create an automation plan that helps the team understand the following:

- How the application platform can be automated: web, mobile, or APIs
- The test objective: functional, visual, performance, security, etc.
- Features to be automated
- Scope of the test automation
- Timeframe to complete the test automation
- Tools/frameworks to be used
- Entry and exit criteria
- Who will be participating in the test automation
- Test environment and test data
- Test cases, including expected and actual output with test steps

The plan should provide a brief overview of the test automation effort. It will also help the test team track progress and check for deviations, if any. It is recommended to get the plan reviewed and approved by team leads and product owners.

Check the Stability of the Application for Test Automation

Before jumping into test automation, it is better to check the stability of the application by performing a manual round of exploratory testing. This provides confidence that the application does not crash midway, so the automated tests, as designed and executed, will produce the desired results. If this check is skipped, the automation tester may get stuck midway, unable to determine whether flakiness in the test results is due to an application issue or an error in the automation test scripts.

Find Out the Critical Test Scenarios

Test automation should begin with the most critical scenarios.
As these scenarios are used frequently by end users in day-to-day business, it is best to prioritize them. Business analysts, product owners, and the experienced QAs on the team are the best people to contact to identify the critical test scenarios. These scenarios should be recorded in a spreadsheet or document for future reference. It is recommended to go for end-to-end automated tests, as these check both the integrity and the functionality of the application as an end user would use it. Automating critical scenarios also relieves the manual test teams and saves the time they spend performing regression tests on these scenarios. It benefits the whole team: once automated, all the critical scenarios can be tested quickly, giving more confidence in the builds.

Prioritize Test Journeys

After compiling the list of critical test scenarios, it is necessary to prioritize it. This prioritization can be discussed with business analysts, product owners, and customer success team members. The most frequently used scenarios, such as registration, login, and payments, should be taken care of first; scenarios of lower criticality should be taken up and automated next. Note that test data plays an important role in automation tests, so do not hesitate to ask for the test data at prioritization time, as it saves time and lets you automate the scenarios quickly.

Addition of Tests to the Automation Test Suite

This is the final point in the five-point action plan, where we delve in and start writing the test scripts for the prioritized scenarios according to the plan. The emphasis should be on correct assertions in the tests: each test should have an assertion statement that checks the actual output against the expected output. While writing the automation test scripts, make sure to follow coding best practices for maintenance, readability, and reusability. The Page Object Model can be used for writing the automation test scripts, as it offers great flexibility: it separates the page objects from the test actions, which aids code reusability, maintenance, and readability (a minimal sketch follows below). Similarly, efforts should be made to design for parallel execution of the tests. Parallel execution runs the tests faster and provides quick feedback on regression cycles, and thinking about it at an early stage helps in designing the test framework and saves the time that would otherwise be spent refactoring the code.
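Here is a minimal Page Object Model sketch, assuming the third-party Selenium package and a hypothetical login page; the URL, locators, and credentials are invented for illustration.

```python
"""A minimal Page Object Model sketch with Selenium (pip install selenium).
The page URL, element locators, and credentials are hypothetical."""
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginPage:
    """Encapsulates the login page's locators and actions, so tests stay
    readable and a locator change is fixed in exactly one place."""
    URL = "https://example.com/login"  # hypothetical

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get(self.URL)
        return self

    def login(self, username: str, password: str):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "submit").click()

    def banner_text(self) -> str:
        return self.driver.find_element(By.CSS_SELECTOR, ".welcome-banner").text

def test_valid_login():
    driver = webdriver.Chrome()
    try:
        page = LoginPage(driver).open()
        page.login("demo_user", "demo_pass")  # hypothetical credentials
        # The assertion checks the actual output against the expected output.
        assert "Welcome" in page.banner_text()
    finally:
        driver.quit()
```

Because the test only talks to `LoginPage`, a redesign of the login form changes the page object, not every test that logs in.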
Test Automation Next Steps

The test automation work does not end here; there are further steps to follow.

Code Review

After writing the test automation scripts, it is recommended to get the code peer reviewed. Code reviews benefit in multiple ways:

- Find flaws in the code and fix them
- Ensure best practices are followed and implemented
- Check that the required scenarios are accurately covered
- Check that the assertions are properly implemented
- Enhance the knowledge of the test automation engineer
- Check for consistent design and implementation
- Promote team collaboration

A code review should be done in a positive spirit, focusing on the code and on best practices: review the code, not the coder. Likewise, the test automation engineer should take the comments positively, implementing them or providing the necessary justification in response. All in all, code reviews help boost code quality.

Integrating With CI/CD Pipelines

The next step after the automation test scripts are written is to integrate them with continuous integration and continuous delivery (CI/CD) pipelines. I would recommend using the test pyramid when implementing the CI/CD pipelines, as it provides an efficient way of using the automated tests at the lower levels. The regression cycles can be run in the CI/CD pipelines in the dev environment before the build is pushed to further environments such as QA, staging, and UAT. For running the tests in the pipelines, cloud platform grids like LambdaTest, pCloudy, Perfecto, and Sauce Labs can prove beneficial. These platforms provide on-demand testing infrastructure: real mobile devices and multiple platforms like Windows and macOS, with a wide variety of browsers readily available. As these platforms support cross-browser and cross-platform testing, executing all the tests in parallel is an easy task, saving test execution time. By using these services, we don't have to worry about installing and setting up multiple devices, browsers, and platforms for testing, as the cloud platforms provide all of this for a nominal price.

Addition of More Test Scenarios

After all the test scenarios discussed in the automation plan are covered, a meeting should be set up with all the stakeholders, including business analysts, product owners, developers, and QA leads, to discuss the plan for additional scenarios that are part of the latest feature releases. Adding more test scenarios extends automation coverage, giving the test team more of an upper hand. This increases the overall product quality and builds a robust product.

Maintenance of Test Scripts

Time should also be allocated for the maintenance of the test scripts. Changes to the product are made frequently due to ever-increasing customer demands, which can render test scripts outdated. Hence, maintenance of the test scripts should be part of the test automation plan, and wherever required, appropriate time and resources should be allocated to bring outdated test scripts back to working order.

Summary

In my experience, it takes a lot of effort to start test automation, and sometimes, due to improper planning and limited experience, it does not go as expected. This five-point plan can help you organize test automation efficiently and allow for full testing of the application. It gives the QA team a much-needed advantage, letting them focus on uncovered areas of the application. This boosts product quality and allows the team to deliver a robust product to end users.
In the early 2000s, I was an American in Tokyo, serving as president of TurboLinux Japan, the largest Japanese Linux company. During this period, I encountered many exceptional Japanese open-source developers. Their skill and dedication were apparent, showcasing a meticulous focus on quality and detail, a true hallmark of Japan's approach to software development. This focus has led to contributions that have shaped global projects and produced standout examples, like the Ruby programming language, in the realm of open-source development. However, while Japan excels in contributing to open source, it often plays a supporting rather than a leading role on the international stage; the list of examples is long, including the Linux kernel, Kubernetes, Keycloak, and many more. A key difference between Japan and the United States in those days was the professional environment for open-source talent. In Japan, many of the most talented developers worked full-time in large, traditional technology hardware companies. These companies were well established, with hierarchical structures and a focus on long-term employment. This setting contrasted sharply with the more dynamic and freewheeling culture surrounding open source in the US, where contributors often worked independently or in small, fast-moving startups. This cultural contrast between the structured world of Japanese technology firms and the flexible, experimental ethos of the US open-source community fascinated me. It highlighted how different work environments could shape the contributions and careers of talented developers. Despite these differences, the commitment to excellence in both settings demonstrated the universal passion that drives innovation in software development.

Last month, I had lunch in Palo Alto, California, with a group of Hitachi engineers involved in open-source education. I learned about big changes underway in large Japanese companies: a new movement to embrace open source. This is a departure from the traditional Japanese strategy of building proprietary value internally and protecting the value generated with patents. I decided to read through Japanese articles on open source and found inspiring information about the work of Yuichi Nakamura, a Hitachi employee, open-source specialist, and advocate. Currently, Nakamura is the chief OSS strategist of the OSS Solution Center, as well as an evangelist at the Linux Foundation Japan and a founder of CNCJ, and he is committed to developing the open-source community in Japan. Nakamura got involved in open source while doing research at Tokyo University, an elite school in Japan. He started using Linux around 2001 because it was free. He got deeply involved with open source and joined Hitachi's research and development department to pursue a career in it. His initial research focused on SELinux (Security-Enhanced Linux). Hitachi has been contributing to various open-source communities since the 1990s and has strengthened and promoted proactive proposals for open-source utilization to customers. It is also active as a platinum member of The Linux Foundation. As most of the information is in Japanese, I have spent time translating and summarizing it to better understand Japan's open-source strategy, and I have compiled some of what Nakamura has been saying online. Nakamura believes that the era of building systems using only internal technology is over; companies around the globe regularly incorporate open-source software to build advanced solutions.
He says that it is important for companies to incorporate open-source software into their company strategies.

You Can't Do System Integration Business Without OSS

How did you get involved with OSS after joining Hitachi?

At first, I was young, so I was allowed to do whatever I wanted in research. Then I started incorporating open source into products and became involved in development projects built on it. But it didn't go well; I missed the mark on everything.

Why didn't it go well?

It was simply because I had no business experience. Up until then, I had only been doing research, so I didn't know how the software would be used on the front lines, what kinds of requests customers had, or how to proceed. After that, I left the research and development department to gain business experience, and during that time, I continued open source as a personal academic hobby rather than as a job. Then, about seven years ago, the current department, the OSS Solution Center, was established; I transferred there and have been working on OSS-related business ever since.

You've been doing OSS activities for a long time, but the specialized department was established only seven years ago.

That's because there were still few OSS personnel. Linux was an exception, but until about ten years ago, there wasn't much awareness within the company of actively using OSS that was not a commercial product. In that situation, an executive who understood OSS promoted the establishment of a specialized department, reasoning, "Without OSS, we can't do the system integration business," and I was very grateful.

I Want To Continue Creating Cloud-Native Technology Originating in Japan

Nakamura, you are also one of the founders of the recently launched CNCJ (Cloud Native Community Japan). Please tell us about the background of its launch.

The idea of "cloud native" has been spreading recently, and globally, about 850 companies and organizations, mainly large cloud vendors, have joined the Cloud Native Computing Foundation (CNCF) to develop foundational technologies and provide them as services while differentiating themselves. Looking at Japan, the large cloud services that have been created are simply used as they are, and original technology development is not being done. Frankly speaking, it's not interesting, and technology development in such an environment does not get much traction.

What is the biggest reason for the lack of excitement?

I think it's that we can't decide the direction of the technology ourselves. There are many things that are difficult to use and that we want to fix, but since the technologies come from foreign vendors rather than being made in Japan, we are not well positioned to fix them. Of course, many people are working hard individually, but it is difficult at the company level. The habit of a closed strategy that differentiates through patents is ingrained, and while patents are important, I feel the big problem is that open source has not been incorporated into technology and management strategy. With this background, we launched CNCJ to continuously produce cloud-native technology originating from Japan. Hitachi also started saying "cloud native" a little late, but there are few companies in Japan doing both open source and cloud, so as a company, we are participating with that sense of purpose.

The establishment of CNCJ was announced on November 8, 2023. Please tell us about the situation since then.

Our biggest activity is holding meetups.
We have already held more than ten meetups, and from Golden Week in 2024 onwards, many people have participated, with each subgroup leading its own meetups. Beyond that, we promote the organization and help attract people to events. We also discussed CNCJ's efforts at KubeDay Japan 2024, held in August 2024.

What do you want to work on at CNCJ in the future?

Currently, we have just under 500 members, but we would like to expand our reach and become number one in Asia. It is important to increase contributions to the CNCF itself, but I think one of the challenges in doing so is getting open source incorporated into technology and management strategy, as I mentioned earlier. That is a matter specific to each company, so it seems it will be difficult to move forward quickly. From that perspective, I hope CNCJ can function as a kind of external pressure and promote this by introducing examples from other organizations. Over the past ten years or so, there has been a lot of momentum for "using open source in business," and the number of companies actually getting involved is increasing, so things are changing. On the other hand, it is still mostly a matter of using open source, so I think the next step is to take it to a strategic level.

I Think "I" Am the One Who Benefits the Most From Job-Based Human Resource Management

Please tell us about the positive impact of OSS activities on Hitachi.

I think this was mentioned in a previous interview with Tabata from our company: by developing a business that utilizes OSS as a company, we end up contributing to the community, and promoting our efforts outside the company also comes back to Hitachi's business. I think this cycle is a valuable system for Hitachi, and we are currently receiving more inquiries from customers about using OSS. That's the joy of OSS!

Please tell us about the appeal of being involved with OSS as an employee of Hitachi, a major Japanese company.

I am very grateful that the company officially recognizes these activities and evaluates them as a specialist career path. Hitachi is a company with a long history, so it has traditionally been a generalist company, and it was rare for people to work as specialists. With the introduction of job-based human resource management and a clear career path for specialists, I feel that I can be involved in open source freely. 2021, when you interviewed us previously, was a transitional period, and I feel that the scope of my career has expanded since then.

So such a change has occurred in about three years.

If you ask me which member has benefited most from job-based human resource management, I would answer without hesitation, "It's me." Again, I think one of the attractions of the company is that it recognizes the importance of OSS to its business and has set up a specialized department in the form of the OSS Solution Center. After all, there are limits to what you can achieve working as an individual.

For example, what are the limitations?

For example, travel expenses to attend conferences must be borne by the individual, and cultivating personal connections is also difficult, so if you want to make an impact on the world, it will be hard without the backing of a company. In the case of Hitachi, we are a platinum member of The Linux Foundation, so when we make contact, we are given top priority and introduced to people. Only three companies in Japan enjoy platinum status.
Another good point is that we can engage in OSS activities while taking advantage of the Hitachi Group's vast resources and know-how. Generally, when a business company tries to engage in OSS activities, it has to work only on things used in its own services, but Hitachi develops so many services that we need a wide range of OSS. I feel that this is a really big advantage.

I Want to Change the Culture of Open Source in Japan

What do you find rewarding about OSS, Nakamura?

It's the fact that the technology is open, so you can see what's inside. As someone who originally studied basic academic subjects such as physics, I personally find it attractive and rewarding to be able to explore the internals. And then there are the people: you can meet people from all over the world and have academic discussions, regardless of competition. I find it fun in that sense, too.

Please tell us about your future efforts and challenges.

I would like to make the OSS cycle I mentioned earlier something that goes beyond the OSS Solution Center and is carried out throughout Hitachi. If we do this across Hitachi, we can use the Hitachi Group's vast resources and know-how in our OSS activities. I believe that if we work seriously while collaborating with and revitalizing communities outside the company, Japan's OSS culture will change. I would also like to develop my career as an OSS specialist while creating a path for other members to improve their skills and follow a specialist career path.

References

"I Want to Transform Japan's Open Source Culture! Exploring the Multifaceted Contributions of a Hitachi OSS Specialist"
As a data scientist, I've learned that our ideas are more than just abstract concepts; they're a part of us. We pour our hearts and minds into developing solutions, and it's only natural to feel a sense of pride and ownership over our work. But this close connection between our ideas and our identity can be a double-edged sword.

Learning a Valuable Lesson

I remember the first time I presented a complex machine-learning model to my team. I had spent weeks fine-tuning it, convinced it would revolutionize our approach. When my colleagues started pointing out potential flaws, I felt a knot in my stomach. It wasn't just my idea being critiqued; it felt like a personal attack. This experience taught me a valuable lesson: the need to separate our ideas from our ego. It's a challenge that many of us in the data science field face, yet it's often overlooked amid the technical hurdles and resource constraints we deal with daily. Learning to depersonalize feedback is an important first step in navigating the complex landscape of the industry. But even as I grew thicker skin, I faced another daunting reality: the inherent uncertainty of our ideas' success in the real world. No matter how brilliant a concept seems on paper or in our minds, its practical application can yield unexpected results. I've had algorithms that performed flawlessly in test environments fail spectacularly when deployed. These moments can be crushing if we tie our self-worth too closely to the success of our ideas. These challenges underscore the importance of dissociating our ideas from our ego. By creating this separation, we protect ourselves from the emotional rollercoaster that often accompanies innovation. More importantly, it allows us to approach our work with greater objectivity and resilience.

My experience at an AI startup that optimized call center operations really drove this point home. We were working on a complex problem: predicting agent performance to intelligently pair callers with agents for optimal outcomes. Our existing model used a Bayesian posterior technique, which had been working quite well for most of our clients. However, for one particular client, we started noticing unstable agent performance predictions. To investigate, we employed a time series analysis technique to examine the temporal consistency of agents' performance. The results were puzzling: there was little correlation between an agent's past performance and their future results. This volatility in performance metrics didn't align with our understanding of human behavior or of how call center skilling worked. Convinced that I had identified a critical flaw in our existing algorithms, I proposed a new ML model. My approach aimed to better correct for the difficulty of each call taken, something I thought our current model was failing to account for adequately. I spent a significant amount of time collecting more nuanced features that I thought would better estimate the difficulty score for each interaction. My ego was fully invested in this idea, and I became a vehement supporter of the new approach. We deployed the new model, and for the first few days, it seemed to be working. After those early days of success, however, we realized that the results were just as unstable as before. The model, despite its complexity and additional features, wasn't capturing the underlying issue. This failure hit me hard; I had been so certain of my solution that I started questioning my abilities as a data scientist.
My ego, so tightly bound to the success of this idea, took a significant blow. Weeks later, we discovered that the real problem was far more fundamental than our algorithmic approach: the agent IDs, which we had assumed were unique identifiers for individuals, were being reused at the call center. What we thought was a single agent's performance was actually an amalgamation of multiple individuals' work. This realization explained the volatility and lack of correlation in performance that both our original approach and my new model had failed to resolve. No matter how sophisticated our algorithms were, they were working with fundamentally flawed data. This experience taught me a valuable lesson about the dangers of tying my ego to my ideas. If I had critically challenged my own assumptions and approached my new model with the same skepticism I had for the original algorithm, I would have tried to validate it more thoroughly offline. That process would likely have revealed that my idea suffered from the same inaccurate assumption as the original algorithm: that agents could be uniquely identified by their IDs.

Conclusion

By dissociating my ego from my ideas, I could have saved time, resources, and personal distress. More importantly, I might have identified the real issue sooner. This experience reinforced the importance of maintaining a critical and curious mindset, even (or especially) toward our own ideas. It's a reminder that in data science, as in many fields, our assumptions can be our biggest blind spots, and our ability to question them is often our greatest strength.