Minimal Viable Feature
The concept of a Minimal Viable Product has a lot of traction, and for good reason. It is about building the smallest thing you can that will let you learn whether the product is going in the right direction. The same strategy can be applied within established products when working on new features. Sometimes it’s clear what a new feature should be; in that case, go build it. Other times, what is clear is the general direction where value lies, but the exact feature set isn’t clear to you or your customers.
This is where the Minimal Viable Feature (MVF) strategy comes in: quickly deliver the smallest incremental capability, then learn from your customers whether you are on the right track.
Example: Load Balance a Build Farm
Shortly after releasing AnthillPro 3.0 in 2006, we began to look at distributing build load. As a central tool for a large enterprise, AnthillPro would manage a build farm whose servers varied in both capabilities and capacity. We had addressed capabilities through filtering, but wanted to account for some boxes being faster than others. Builds are tricky performance-wise, as they tend to alternate between hammering I/O, memory, and CPU. Ideally, we would track those characteristics and assign each build to the server whose spare capacity best matched the build’s profile, while leaving as much capacity available for as many build types as possible. As the whiteboard filled up with stubs of algorithms, the development TODO grew:
- Build native components for each supported platform to measure maximum disk, network, CPU and memory capacity, and consumption of each build.
- Bunches of database tables and analysis to track typical consumption
- Predict types of builds that will be required so we know which resources to conserve
- Build lots of user interface elements around all that stuff
- Etc., etc., etc.
Instead, the minimal viable feature tracked just two simple values per build machine:
- Max jobs at once
- Throughput metric
The assignment algorithm was equally simple:
- Eliminate from consideration all build machines that don’t meet the criteria of the build (wrong platform, lacking a compiler, etc.)
- Eliminate from consideration all build machines already running their max jobs
- If no machines remain, queue the job until one is available
- For the remaining machines, estimate load by dividing the number of running jobs (plus one) by the throughput metric
- Assign the build job to the machine with the lowest estimated load.
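The steps above can be sketched in a few lines of Python. This is a minimal illustration, not AnthillPro's actual data model or API: the `Machine` class and its field names are hypothetical, and the capability filter is reduced to a single platform string for brevity.

```python
from dataclasses import dataclass

@dataclass
class Machine:
    name: str
    platform: str          # stand-in for the build's capability criteria
    max_jobs: int          # configured "max jobs at once"
    throughput: float      # configured relative throughput metric
    running_jobs: int = 0  # jobs currently assigned to this machine

def pick_machine(machines, required_platform):
    """Return the best machine for a build, or None to queue the job."""
    # 1. Eliminate machines that don't meet the build's criteria.
    candidates = [m for m in machines if m.platform == required_platform]
    # 2. Eliminate machines already running their max jobs.
    candidates = [m for m in candidates if m.running_jobs < m.max_jobs]
    # 3. If no machines remain, the caller queues the job.
    if not candidates:
        return None
    # 4 & 5. Estimated load = (running jobs + 1) / throughput;
    # pick the machine with the lowest value.
    return min(candidates, key=lambda m: (m.running_jobs + 1) / m.throughput)
```

Note the effect of the throughput metric: an idle machine with throughput 1.0 and a machine with throughput 2.0 already running one job both score 1.0, so a box that is twice as fast is treated as able to absorb twice the work.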
Published at DZone with permission of Eric Minick, DZone MVB. See the original article here.