Let's take a trip back up to 10,000 feet and revisit the first point of the Agile Manifesto:
Individuals and interactions over processes and tools
It's entirely possible to get so bent on following the tenets of Scrum, XP, or any other methodology that you forget software is developed by people, not processes. It's also developed by people, not practices.
Let's say you get started working with XP and pair programming just isn't working for you. That doesn't mean you're just "not trying hard enough." Perhaps pair programming DOESN'T work for the people in your environment. Perhaps it runs too counter to the culture of your organization. Does that mean you can't be agile? Absolutely not.
Any practice's potential success depends entirely on the context in which it is applied. A practice could work wonderfully at company A and then fail spectacularly at company B. The bottom line is: if it doesn't work for you, throw it out and find something that does. Insanity has been aptly defined as "doing the same thing over and over and expecting different results."
Be agile, don't be insane!
In keeping with this theme, let's look at an agile way to evaluate agile practices.
1. Evaluate: Look back over the last few development cycles and identify areas that need improvement. Of those, focus on the one or two that, if effectively addressed, would return the most value to your team and your customer. Perhaps you have a lengthy verification cycle preceding each release, with a great deal of thrashing between defect identification by QA and defect fixing by development.
2. Research: Look into development practices targeted at addressing these needs. One practice aimed at the need identified in step 1 is acceptance test-driven development: tests are defined in an executable form before the features are developed, and a feature is considered done if and only if its tests pass.
3. Metrics: You need a way of deciding whether the new practice is having its intended effect. Pick a metric that your team agrees will give you this answer. An obvious choice for this problem would be the average length of the verification cycle. Try to gather several data points from before you start the new practice (if possible) so that you have a baseline for comparison.
4. Install the Practice: Start the new practice, and make sure you keep doing it for several iterations/cycles before moving on to step 5. You'll need time to move through what Martin Fowler calls the "Improvement Ravine" (http://martinfowler.com/bliki/ImprovementRavine.html). Any time you try a new practice, you'll usually get worse before you get better, because it takes time to adapt to the new way of working. Only when you work through this dip will you start to truly see how well the practice helps (or doesn't).
5. Evaluate: Pause and look back over your metrics. Do you see a gradual dip into the improvement ravine followed by gradual improvement in your metric? Or do you see a downward spiral? Only your team can decide whether you've given the practice long enough, so decide together whether it's time to keep the practice or toss it.
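To make the acceptance test-driven development idea concrete, here's a minimal sketch in Python. The feature (a hypothetical `apply_discount` function for loyal customers) is invented purely for illustration; the point is the order of events: the acceptance test is written first, in executable form, and the feature counts as done only when that test passes.

```python
# Acceptance test, written BEFORE the feature is implemented.
# The order and its discount rule are hypothetical examples.
def test_loyal_customer_gets_ten_percent_discount():
    order = {"total": 100.0, "customer_years": 3}
    assert apply_discount(order) == 90.0

def test_new_customer_pays_full_price():
    order = {"total": 100.0, "customer_years": 1}
    assert apply_discount(order) == 100.0

# The feature, written afterward. It is "done" if and only if
# the acceptance tests above pass.
def apply_discount(order):
    if order["customer_years"] >= 2:
        return order["total"] * 0.9
    return order["total"]

test_loyal_customer_gets_ten_percent_discount()
test_new_customer_pays_full_price()
print("acceptance tests passed")
```

In practice you'd run these through a test runner (pytest, a Cucumber-style tool, etc.), but the discipline is the same: the executable definition of "done" exists before the code does.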
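The metrics step doesn't require anything fancy. A sketch, using invented numbers, of how you might compare a baseline against post-adoption data for the verification-cycle metric, improvement ravine and all:

```python
# Hypothetical verification-cycle lengths in days, one per release.
baseline = [12, 14, 11, 13]          # recorded before adopting the practice
after_adoption = [15, 13, 10, 8, 7]  # note the initial dip into the "ravine"

def average(cycle_lengths):
    return sum(cycle_lengths) / len(cycle_lengths)

print(f"baseline average: {average(baseline):.1f} days")
print(f"current average:  {average(after_adoption):.1f} days")
```

The first post-adoption releases are *worse* than the baseline, exactly as Fowler's Improvement Ravine predicts. That's why the evaluation step compares trends over several cycles rather than reacting to the first data point.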
This process looks an awful lot like the scientific method, and that's totally intentional. It's only by looking at our work in such a disciplined manner that we can get to objective answers about how well a practice fits in our environment. It's a much better way of moving forward than doing something just because "the ScrumMaster said I had to!"