When Done is Too Hard

Scrum teams that apply too much customization to the Definition of Done risk finding themselves not practicing Scrum at all.

By $$anonymous$$ · Mar. 01, 19 · Opinion

"The Development Team consists of professionals who do the work of delivering a potentially releasable Increment of "Done" product at the end of each Sprint." — The Scrum Guide

The Scrum Guide is markedly ambitious in the standard of professionalism it demands of a team. Development Team members must be self-organizing and cross-functional, to the point that they will repeatedly and sustainably create a valuable product increment — fit for immediate release — in no more than one calendar month. All of the design, coding, testing, and integration work for the increment must be completed in each Sprint time-box, with none of that work left undone. This cannot be dismissed as an idle gloss of how Scrum might arguably function in an idealized scenario. The Scrum Guide provides the actual specification of the Scrum Framework, all of which must be implemented in its entirety. If any element or rule is elided then the result is not Scrum.

Yet if you were to examine how Scrum is interpreted and applied in the field, you would rapidly establish that faster and looser interpretations are widespread. Sensible choices about how to implement the framework — its roles, events, artifacts, and rules — are not always made. Instead, so-called "customizations" are often made to the Scrum specification itself, on the grounds that "pure Scrum" is inappropriate to the organizational context. The result might be described as a kind of impure Scrum, although by definition it certainly means that Scrum is not being implemented at all. More importantly, though, the expected benefits of the framework are unlikely to accrue.

The problem usually boils down to this: organizational change is hard. When push comes to shove, it is all too often the Scrum Framework which is modified, and only lip service gets paid to the principles.

Scrum can be butchered in many ways and for all sorts of supposed "reasons." There is, however, a common denominator which typifies a broken Scrum implementation: the standard for release will not be achieved by the end of a Sprint. In other words, the Definition of Done will be inadequate or inadequately observed. There can be many underlying causes. In some situations, a team might be restricted by the tooling they have, such as a weak continuous integration and deployment capability. No less often, they will be impeded by structural or cultural issues within the organization, such as external dependencies. A company release process might be vested in a Change Control Board, for example, which leaves a team disenfranchised from production. The work the team performs is not then "Done" because it cannot be safely released into a live environment should the Product Owner choose. Additional work will be needed to complete it. The outcome is that empirical process control cannot be established. Development becomes a game of obfuscation and prevarication, of uncertainty and unmet promises, of smoke and mirrors.

Such dysfunction is not only endemic; it has also set expectations across the industry as to how Scrum is meant to be implemented. Definitions of Done that are of less than release standard have become, more or less, "industry normal." Scrum professionals with a clear understanding of what "Done" genuinely means are widely assumed to be unreasonable nit-pickers, or disconcertingly naive about organizational reality, or simply in error. Yet any deficit in the standard needed to deploy an increment into production by the end of a Sprint really does mean that Scrum is not being implemented. Empiricism is lacking: the team can't see how each increment performs in reality. It isn't possible to test and prove even the smallest hypothesis, nor to inspect and adapt the product by means of validated learning.

The Scrum Guide tells us that transparency, inspection, and adaptation are the three pillars upon which Scrum stands. Of these three, transparency can arguably be said to come first. If a situation is not clear then it can hardly be inspected, and no sensible adaptation will be possible. Hence, the first thing to do with a broken Scrum implementation — such as a deficit for release — is to make the problem clear to all. It must be clear to the teams doing the work, the stakeholders who will be impacted, and the executives who are accountable for corporate reputation.

Bear in mind that a Development Team has the right to refuse to do any work, if they cannot commit to its completion by the end of the Sprint. No one can force them to take on technical debt, for example. Remember also that the Scrum values are commitment, focus, respect, openness, and courage. They may need the courage to assert, "We are not yet a Scrum Team, we will not pretend we are until the deficit for release we have highlighted is closed, and we will limit any commitments we are prepared to make accordingly."

In practice, of course, few teams consider themselves in a position to be so exacting and forthright in their transparency. Hence, they proceed on "best efforts" instead, using a Definition of Done which is inadequate or poorly observed, and for which they incur a deficit for release. Nevertheless, it is still important to bring transparency to the matter, even if it cannot be readily solved. They might, perhaps, consider enumerating the "deficit" within their Definition of Done. However they approach the issue, team members must make it clear that Scrum isn't yet being implemented, and that the gap, and its consequences, are understood.

Example Definition of Done

Remember that a Definition of Done properly applies to an increment.

Environments Are Prepared for Release

First, check that no unintegrated work in progress has been left in any development or staging environment. Next, check that the continuous integration framework is verified and working, including regression tests and automated code reviews. The build engine ought to be configured to schedule a build on check-in. It may also trigger hourly or nightly builds. Also, check that all of the test data used to validate the features in the release has itself been validated.
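As a minimal sketch of that last check, assuming JUnit 5 and a purely hypothetical fixture path and CSV layout, a guard test along these lines could fail the build whenever the release test data is itself malformed:

import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

import org.junit.jupiter.api.Test;

// Fails the build if the shared release test data is missing or malformed.
class ReleaseTestDataTest {

    // Hypothetical location; substitute whatever fixture your team actually relies on.
    private static final Path FIXTURE = Path.of("src/test/resources/release-fixtures/customers.csv");

    @Test
    void fixtureExistsAndIsNotEmpty() throws Exception {
        assertTrue(Files.exists(FIXTURE), "Release fixture is missing");
        assertFalse(Files.readAllLines(FIXTURE).isEmpty(), "Release fixture is empty");
    }

    @Test
    void everyRowMatchesTheHeaderColumnCount() throws Exception {
        List<String> lines = Files.readAllLines(FIXTURE);
        // The header row defines the expected shape; every data row must match it.
        int expectedColumns = lines.get(0).split(",", -1).length;
        for (String row : lines.subList(1, lines.size())) {
            assertTrue(row.split(",", -1).length == expectedColumns, "Malformed fixture row: " + row);
        }
    }
}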

Handover to Support Is Complete

(Note: This may be elided in a DevOps context or where the Dev Team will follow the product through to support).

All design models and specifications, including user stories and tests, must be accepted by support personnel who will maintain the increment henceforth. Note that they must also be satisfied that they are in competent control of the supporting environment.

Review Ready

Part of the work in a Sprint includes preparing for the review. Sprint metrics ought to be available, including burn-down or burn-up charts. Any user stories which have not been completed ought to be re-estimated and returned to the Product Backlog.
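To make the burn-down concrete, here is a small sketch, with purely illustrative numbers, of the calculation such a chart plots: the story points remaining at the end of each Sprint day.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Computes the data points a burn-down chart plots: remaining story points per Sprint day.
public class BurnDown {

    public static Map<Integer, Integer> remainingPerDay(int committedPoints, List<Integer> pointsDonePerDay) {
        Map<Integer, Integer> remaining = new LinkedHashMap<>();
        int left = committedPoints;
        for (int day = 1; day <= pointsDonePerDay.size(); day++) {
            left -= pointsDonePerDay.get(day - 1);
            remaining.put(day, left);
        }
        return remaining;
    }

    public static void main(String[] args) {
        // Illustrative only: 40 points committed over a ten-day Sprint.
        System.out.println(remainingPerDay(40, List.of(3, 5, 2, 6, 4, 5, 3, 4, 4, 4)));
        // Prints {1=37, 2=32, 3=30, 4=24, 5=20, 6=15, 7=12, 8=8, 9=4, 10=0}
    }
}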

Code Complete

  • Any and all “To Do” annotations must have been resolved, and the source code must have been commented to the satisfaction of the Development Team. Source code should have been refactored to make it understandable, maintainable, and better able to support future change. Note that the Red-Green-Refactor pattern found in Test-Driven Development is helpful here.

  • Unit test cases must have been designed for all of the features in development, and they must allow requirements to be traced to the code implementation, for example by clear, feature-relevant naming conventions (a sketch of such a test follows this list). The degree of code coverage should be known, and should meet or exceed the standard required. The unit test cases should have been executed and the increment proven to work as expected.

  • Peer reviews ought to be done. (Note: If pair programming is used, a separate peer review session might not be required). Source code is checked into the configuration management system with appropriate, peer-reviewed comments added. The source code should have been merged with the main branch and the automatic deployment into elevated environments should be verified.
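As a sketch of the traceable naming convention and the Red-Green-Refactor rhythm mentioned above, assuming JUnit 5 and an entirely hypothetical backlog item and shopping-cart feature:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

// Hypothetical feature, used only to illustrate test names that trace back to a requirement.
class ShoppingCart {
    private double total;

    void addItem(String name, double price) {
        total += price;
    }

    double totalAfterDiscount() {
        // Story PB-142 (hypothetical): orders over 100.00 receive a 10% discount.
        return total > 100.00 ? total * 0.9 : total;
    }
}

class ShoppingCartDiscountTest {

    @Test
    @DisplayName("PB-142: orders over 100.00 receive a 10% discount")
    void ordersOverOneHundredReceiveTenPercentDiscount() {
        // Red: written first and failing; Green: discount logic makes it pass;
        // Refactor: production code is then tidied without changing behaviour.
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("keyboard", 120.00);
        assertEquals(108.00, cart.totalAfterDiscount(), 0.001);
    }
}

The method name and @DisplayName carry the backlog item identifier, so a failing test or a coverage report traces straight back to the requirement it verifies.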

Test Complete

Functional testing should be done. This includes both automated testing and manual exploratory testing, and a test report should have been generated. All outstanding defects (or incidents such as build issues) should be raised and resolved, or accepted by the team as not being a contraindication to release. Regression testing should have been completed, and the functionality provided in previous iterations shown to still work.
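One way to keep that regression check cheap to repeat, assuming JUnit 5, is to tag the tests that cover previously delivered functionality so the build can run them as a suite. The class, tag name, and assertion below are hypothetical:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

// Tagging lets the build select the regression suite, e.g. via the test runner's
// tag-filtering options (Maven Surefire groups or Gradle includeTags).
class CheckoutRegressionTest {

    @Tag("regression")
    @Test
    void featureDeliveredInAnEarlierSprintStillRoundsTotalsToTwoDecimalPlaces() {
        // Hypothetical assertion standing in for behaviour delivered in a previous iteration.
        assertEquals(19.99, Math.round(19.994 * 100.0) / 100.0, 0.0001);
    }
}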

Deficit for Release

"Done" criteria which are needed to effect a release, but which cannot yet be achieved by the team, constitute a deficit. They should be enumerated here (e.g. by moving them out of the Definition of Done). A deficit implies that Scrum is not yet being implemented, and that there is likely to be technical debt.

  • Performance, security, and user acceptance testing must have been done, and the product should be shown to work on all required platforms.

  • Release authorization must be obtained.



Opinions expressed by DZone contributors are their own.
