Lean Tools: Perceived Integrity
We will now explore the Lean principle "Build Integrity In" and its applications to everyday coding, starting with software integrity. Integrity has two components: conceptual integrity and perceived integrity.
These two concepts resurface in many software development models over the years: for example, I think they can be mapped to internal and external quality as defined in Growing Object-Oriented Software. If you're a Christopher Alexander fan, you could define conceptual integrity as the relation between form and function, and perceived integrity as the relation between form and the surrounding context. In short, conceptual integrity relates to the internal architecture of an application, while perceived integrity relates to its customer-facing side.
"Perceived" by people outside the team
The holy grail of software development is the perfect process for deriving a form (code and working software) from a context (the user requirements and related acceptance tests). We still work on heuristics to improve this process, with the goal of achieving maximum perceived integrity: software that fits exactly the mental model of the user and the assumptions they make about the system.
For example, there are many ways to show a map to a person and display widgets for working with it; the best imagery (satellite photos or road networks) and the best tools (zooming in and out, panning, directions calculation) depend greatly on how the information about the user requirements is transformed into working code. If we simply settle on a satellite map, we lose the many users who wanted to consult the map from their car. If the analysts and the product owner do not interview any potential user, or there is no geography expert on the team, the result is impoverished. If you only communicate with product-oriented people via documents, the feedback cycle will be very long and you will have many unanswered questions by the time your software is generally available.
This example is taken to the extreme, but the point stands: every time there is a handoff or an additional step on the road between requirements and the finished product, we risk losing information or degrading its quality. We risk misinterpreting some terms or the goal of a screen, or mixing up the features that make the application competitive with the nice-to-have ones.
Improving perceived integrity
That's the problem with a waterfall division of labor: different phases are tackled by different people, in a long pipeline (of value) that loses a bit of water at each junction. Dividing the work by user story instead forces a single developer to work on requirements definition, analysis, coding, testing and even releasing.
There are, however, a few tools that can improve the communication channel between the parts of the pipeline that cannot be integrated into the team (or in a single person):
- customer-facing tests (like acceptance tests) tell us whether the product works for a client, and also document our understanding of what the system should do. Discussing a test (written in code, in Gherkin, or even as an English example) is more concrete than debating a mathematical model.
- a glossary defines a Ubiquitous Language that the customer and everyone on the team can speak. These days I talk about Mobile Terminated and Mobile Originated messages, and everyone in the room with me understands what we're talking about.
- intermediate models may be useful on the road between specifications and code. However, they are costly and should be reserved for the core domain of your project and the components most difficult to get right. Many implementations of Domain-Driven Design I've seen mostly use code as the final model, to cut maintenance costs.
- non-functional requirements can also be transmitted to the team. If a maximum response time of 2 seconds is mandatory, stories can be created to model this constraint too.
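To make the first and last points of the list concrete, here is a minimal sketch of what customer-facing tests might look like in plain code. The service class and function names (`MessagingService`, `submit_mo_message`, `inbox_of`) are hypothetical stand-ins chosen to mirror the glossary terms above, not a real API; the 2-second budget comes from the non-functional requirement example:

```python
import time


class MessagingService:
    """Toy stand-in for the system under test (an assumption, not a real service)."""

    def __init__(self):
        self._inboxes = {}

    def submit_mo_message(self, sender, recipient, text):
        # A Mobile Originated message from the sender's handset ends up
        # as a Mobile Terminated message in the recipient's inbox.
        self._inboxes.setdefault(recipient, []).append((sender, text))

    def inbox_of(self, recipient):
        return self._inboxes.get(recipient, [])


def test_mo_message_is_delivered_as_mt():
    # Acceptance test written in the Ubiquitous Language of the domain:
    # readable by the customer, executable by the team.
    service = MessagingService()
    service.submit_mo_message("alice", "bob", "hello")
    assert ("alice", "hello") in service.inbox_of("bob")


def test_delivery_meets_response_time_budget():
    # Non-functional requirement turned into a story-level check:
    # submission must complete within the mandated 2 seconds.
    service = MessagingService()
    start = time.monotonic()
    service.submit_mo_message("alice", "bob", "hello")
    assert time.monotonic() - start < 2.0
```

The point is not the toy implementation but the shape of the tests: each one states an expectation in the customer's vocabulary, so a failing test is immediately meaningful to people outside the team.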
Agile welcomes change: even if your customers are not going to scrap half of the user stories each week, the Poppendiecks note that being able to respond to change means being able to maintain perceived integrity in the long run, even after N versions of the software have shipped and half of it has been rewritten over time. Yet in many cases both types of integrity are compromised by sweeping changes. We'll encounter conceptual integrity shortly, and we'll see how it is a prerequisite for perceived integrity.