I tell you this story to introduce a problem I've run into many times on many different scales. This story is probably the most egregious example, but it is certainly not isolated. The problem stems from the fact that we rarely define the word "done." It is assumed that everyone shares a definition, but that is rarely true. Is a feature done when it compiles? When it is checked in? When it can run successfully? When it shows up in a particular build? All of these are possible interpretations of the same phrase.
It is important to have a shared idea of where the finish line is. Without that, some will claim victory and others defeat even when talking about the same events. It is not enough to have a shared vision of the product; it is also necessary to agree on the specifics of completion. To establish a shared definition of done, it is necessary to talk about it. Flush the latent assumptions out into the open. Before starting on a project, it is imperative to have a conversation about what it means to be done. Define in strict terms what completion looks like so that everyone will have a shared vision.
For large projects, this shared vision of done can be exit criteria: "We will fix all priority 1 and 2 bugs, survive this many hours of stress, etc." For small projects or individual features in a large project, less extensive criteria are needed, but it is still important to agree on what the state will be on which dates.
While not strictly necessary, it is also wise to define objective tests for done-ness. For instance, when working on new features, I define "done" as working in the primary scenarios. Bugs in corner cases are acceptable, but if a feature can't be exercised in the main way it was intended to be, it can't be tested and isn't complete. To ensure that this criterion is met, I often insist on seeing the feature demonstrated. This is a bright line: either the feature can be seen working, or it cannot. If it can't, it isn't done, and more work is needed before moving on to the next feature.