Measuring Agile Testing Maturity: An Interview with Bob Galen
Learn more here about the characteristics of a mature Agile team, from the top to the bottom, and how such a team influences overall production.
Bob Galen is an agile methodologist, practitioner, and coach with more than 30 years of experience as a software developer, tester, project manager, and leader. We recently sat down with Bob to learn more about what separates mature agile testing teams from the rest, and how leaders can best support those teams in order to consistently deliver working software.
Noel: We hear a lot about the requirement of shortened feedback loops, and development, test, and release cycles that deliver continuous feedback. Continuous feedback is certainly a must-have, but what I don’t hear people talk about a lot is how important it is to actually be able to quickly respond to that feedback, and make informed decisions.
How important is being able to do that, and not just collect or note that feedback, but adjust on the fly to make new decisions, pivots, and responses continuously as well?
Bob Galen, president, RGalen Consulting Group
Bob: I think this is a really important point, Noel. And it’s not simply related to continuous feedback, but also to the continuous improvement cycle. For example, feedback on adjustments that are raised in team, project, or organizational retrospectives. To your point, our bias should be toward doing something about our discoveries, toward taking action.
It’s one of the reasons I talk about “stop the line” as a mindset.
If we were part of an assembly line that was building Toyota Corollas and we noticed that current cars were being delivered without rear doors, we would pull the cord and stop the line.
Then what would we do?
I would hope that we would fix the cars. In fact, the sooner we stopped the line, the fewer doors we would have to repair (less rework, less time impact).
But are we done? No!
We have to examine root cause in our process and figure out what is creating the issue. Then we need to…fix it! Only then are we done with the event and can start the line again.
This somewhat contrived story focuses on:
- Early detection
- Immediate action – stopping what you’re doing
- Immediate repair
- Thoughtful root cause analysis
- Finally, repairing the “process” so that faults don’t continue
- Then, restart the line
I’d like to reframe the question to focus more on continuous improvement, with a preference for fast feedback, fast understanding, and fast adjustments.
Noel: You gave a session at DevOps East that talked about what makes agile teams “mature.” What qualities do mature teams have that other teams may not?
Bob: I guess, first of all, they behave like a team. They have shared goals and they share the work in order to get results. So, skills are not an impediment to collaborating and working together. Activities like pairing on work are commonplace.
Mature teams also hold each other accountable. Accountable to quality deliverables, accountable to each other’s commitments, and accountable to delivery results. You can see this in their work, but very clearly in their retrospectives—where hard conversations are surfaced when necessary.
They also have the courage to challenge outward and upward. For example, if they feel leadership isn’t giving them the landscape to succeed, then they’ll push back on distrust, over-commitment of work, and lack of support resources.
They’re in it together. They succeed and fail as a team. In fact, they defend each other. They help each other. And they work hard to maximize their collective strengths and minimize their weaknesses.
And finally, they get sh** done. My friend Josh Anderson spoke about his 3-part hiring requirements for high-performance agile teams during his presentation. His criteria are:
- Smart, Quick Learner
- Team Player
- GSD – Get Sh** Done
Josh was focused on candidates who could exhibit all three characteristics (the intersection in the Venn diagram).
I think that’s another way of effectively thinking about a “mature” or a “high-performance” agile team. But please don’t get stuck on those terms or goals.
I also want the team to have fun together. To discover the joy in doing great work that drives great customer value.
Noel: You mentioned in your session the need for agile leaders—and other leaders—to have “an increased understanding and respect for software testers and to recognize testers as first-class citizens.” How did it get this bad for testers? How did we get to a point where “leaders” might need to be reminded to do that?
Bob: I’ve been involved in software development for nearly 40 years, Noel. And, yes, that number frightens me from a variety of perspectives.
I think testers have had a second-class citizen rap that entire time. Developers have always been perceived as a value center and testers as a commodity and cost center.
I think a big part of it is many technology leaders come from a software developer background, so they more easily understand the value proposition of development. I’ve actually had the opportunity to lead both development and testing organizations over the years. I also sent myself to many classes so that I could understand the profession and craft of software testing more deeply.
Given that, I think I achieved a much more balanced view towards the value of each. Many leaders lack that broader perspective.
In fact, I think they trivialize the notion of testing. For example, in agile contexts, they think it simply surrounds “checking” user story functionality on a sprint-by-sprint basis. But it is SO much more than that. Or, at least it should be. When you minimize it to simple functional checking, it becomes a commodity that anyone can perform. Or, at least that’s the perception.
They also don’t think of the quality practices that testers can drive, which I discuss in question #5, below.
Noel: You also describe the need for leaders to “understand the power of basing decisions off of the objective evidence of working software.” To me, defining “working” is a little like defining “done,” meaning, extremely difficult. How important is it to learn how other stakeholders define “working,” and then, once you know, does everyone need to define it the same way, or is there room for some variation there?
Bob: I don’t think defining “done” should be the focus. I stand by my comment, and what I’m trying to say is that we, as software teams, often work on many things to build our software. Some of those things include:
- Architecture and design documentation
- UX design
- Requirements – from traditional to user stories
- Project plans, development plans, and test plans
- A wide variety of checklist and process elements
- Status reports, metrics capture, tracking
- Various sorts of tooling infrastructure
- Test cases and test execution
- Internal documentation and customer documentation
NONE of which directly delivers value to our customers or stakeholders. In other words, they don’t pay us for the above elements. Sure, all of them are necessary to some degree, but they’re not directly in the value stream.
So, I think the point is not the definition of working software, but the delivery of working software. So that we can determine our direction not from documentation or plans or talking, but from the very thing that we’re building to deliver customer value.
It’s tangible. It reflects the value we’re delivering. The customer can touch it, feel it, interact with it, and give us feedback as to whether it meets their needs and solves their problems.
Working software is the ultimate goal and measure of, “Are we there yet?” and, “Are we done?”
Noel: You’ve talked about how in mature agile teams, “testing is not their only quality practice.” I think that would, at least initially, surprise some people. What, outside of testing helps build quality in, and, secondly, does doing whatever that is relieve some of the burden placed on testers?
Bob: Actually, to be clear, testing is not a quality practice. It is a verification process.
Quality practices are different. They focus on pre-verification activity. For example, the “3-Amigos” is a common practice within agile teams. It’s where developers, testers, and the product owner all work together to vet user stories all along their lifecycle. It’s a collaboration metaphor where each perspective weighs in on the story definition.
Design reviews, code reviews, and pairing are also hallmarks of quality activities within agile teams. As is working with the customer/stakeholder and product owner to ensure we understand the problem we’re trying to solve.
Sometimes folks refer to some of this as “shifting left” in testing. That is, the testers focus on ensuring that we’re building the right thing and building it right, before verifying it.
Think about it. In many ways, testing is too late to catch defects. Even within the tight feedback loops of agile teams, I’d much rather testers spend some of their time in “up front” activities to ensure we’ve got the recipe right before they focus on testing things.
And yes, if we get the shift-left balance right, then the testing becomes more of a rote, safety-net activity. And it loses its typical, repetitive test-fix-test-again nature.
Or, that’s the hope when focusing on building quality in vs. testing quality in.
Published at DZone with permission of Noel Wurst, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.