Big Teams & Agility - Take 2
In Big Teams & Agility, I talked about a macro process for agile development on large teams (those up to, and possibly exceeding, 100 developers). The article was posted on AgileZone at JavaLobby (where I now help out as zone leader, though I did not at the time the article was posted), and there I was accused of having my head in the cloud. While I could construe that as a compliment given the buzzwords du jour, I probably shouldn't make that mistake. So I want to take a moment and respond, because some good points were made and clarification is necessary.
First, you can apply agile practices on big teams, and it does work. I was pretty clear that I’ve used the structure on teams up to 100 developers, and have done so for a good share of the work I’ve done since 2001. It’s worked marvelously. However, I never said it was easy. In many ways, it’s the most difficult approach to developing software I have ever taken, but it’s also the most successful. One question posed by a commenter follows:
Do you really expect big company to go through integration/testing/whatever every week?
Absolutely! The more often you integrate, the earlier you’ll discover problems before they’ve had an opportunity to fester within the system for prolonged periods of time. I recognize this is against the grain of conventional wisdom we’ve been taught for decades. As I mentioned in the original post:
The economies of scale lead us to believe we need longer iterations because there is so much more to manage. But that's flawed because it delays risk mitigation and discovery.
At the end of the post, I stated that there were many micro process details omitted, such as how to keep the build running quickly. To address slow build times on very big systems, you may have to implement staged builds. A staged build is basically a pipeline of builds that perform different build activities.
For instance, a stage 1 build might perform a subset of the overall build steps to provide rapid feedback to the team. A stage 1 build is performed hourly, or any time new code is checked into the source code repository. A stage 2 build is a more complete build process. It might integrate all system components, or execute a complete suite of tests. The actual tasks will vary with context, but the idea remains the same. It's OK to have multiple build processes for a single software system. It's up to the development team to identify the components of each build, and they may change throughout the life of the project. But the key element is the rapid feedback the team receives.
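The staged build idea above can be sketched as a small pipeline driver. This is a minimal illustration, not a prescribed implementation: the stage names and the echo placeholder commands are hypothetical, and a real project would substitute its own compile, unit test, integration, and acceptance steps.

```python
import subprocess

# Placeholder steps; a real project substitutes its own build commands.
STAGE_1 = [                       # fast feedback: run on every check-in
    ["echo", "compile"],
    ["echo", "unit tests"],
]
STAGE_2 = [                       # complete build: run less frequently
    ["echo", "integrate all components"],
    ["echo", "full test and acceptance suite"],
]

def run_stage(name, steps):
    """Run each step in order; stop at the first failure so the team
    gets feedback as early as possible."""
    for step in steps:
        result = subprocess.run(step, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"{name} failed at: {' '.join(step)}")
            return False
    print(f"{name} passed")
    return True

# Stage 2 only runs once stage 1 has succeeded, so slow activities
# never delay the rapid-feedback loop.
ok = run_stage("stage 1", STAGE_1) and run_stage("stage 2", STAGE_2)
```

The essential design point is the short-circuit: a broken stage 1 build stops the pipeline immediately, so developers learn about integration problems within minutes rather than after the full suite runs.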
Another point of contention with the article centered on the misunderstood suggestion to release frequently to the customer, and the perceived lack of QA, acceptance testing, and so on. In fact, I stated the exact opposite, and it's the continuous integration strategy that allows us to close the loop and perform these types of testing frequently. I stated pretty clearly the following:
We should also frequently execute a variety of tests. Not just unit and acceptance tests, but usability tests, performance tests, load tests, …
The key element here is that we increase project transparency because we can get our product in front of the customer frequently. The customer experiences the growth of the application with the development team. They see its evolution, and can provide valuable feedback along the way. There are fewer surprises at the end of the project.
However, I have never said that each build should be released to the customer for use as a production software system. Never! I did say the following:
Once the build executes successfully, the application can be deployed to an environment where it’s accessible by the customers.
The environment is likely a test environment the customer can access to experiment with the system and provide feedback, or a QA environment where acceptance testing can be performed. It's also a place that can be used for system demonstrations. I recognize the pain in delivering large enterprise software systems, and I also realize the impossibility of releasing every build to production. But we should be striving for that level of quality each time we write a line of code.
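The post-build step described above can be sketched as a simple deploy-on-green routine. The artifact name and directory layout here are hypothetical, and the demonstration targets a temporary directory rather than a real server, purely to keep the sketch self-contained.

```python
import shutil
import tempfile
from pathlib import Path

def deploy_to_test_env(artifact: Path, test_env: Path) -> Path:
    """Copy a successfully built artifact into a customer-accessible
    test (or QA) environment. The layout is a stand-in for whatever
    file server or app container the project actually uses."""
    test_env.mkdir(parents=True, exist_ok=True)
    target = test_env / artifact.name
    shutil.copy2(artifact, target)
    return target

# Demonstration with a throwaway artifact in a temp directory.
tmp = Path(tempfile.mkdtemp())
artifact = tmp / "app-build-42.war"        # hypothetical artifact name
artifact.write_text("built application")
deployed = deploy_to_test_env(artifact, tmp / "test-env")
```

In practice, the build server would invoke a step like this automatically after every green build, so the customer-facing environment always reflects the latest working system.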
Regardless, the key takeaway is that because we always have a functional system, we are able to perform various lifecycle activities at any time, and many times, throughout the development effort. This increases project transparency where the customer, management, and developers have a consistent understanding of the current state of the system. We avoid those nasty surprises late in the development effort that plague many projects.
This is not overly zealous. It is not unrealistic. It is not some abstract theory born of academia that has never been proven in the real world on a large enterprise development effort. In fact, while it adopts various practices from popular agile methods, it is not Scrum, nor is it XP. But it is agile, and it does work. It captures the essence of agile development - rapid feedback through software that works while maintaining the ability to respond quickly to change. This is, quite simply, one of the best approaches to software development I’ve ever used. But it’s not easy, nor will it ever be easy. Software development is hard work, and it will always be hard work. The comments are welcome, the questions are valid, and the discussion is important.
Published at DZone with permission of Kirk Knoernschild . See the original article here.