Object Oriented Analysis and Design (OOAD) is certainly not new to the enterprise software workforce, but how is it actually carried out in industry? Are software engineering groups reaping the benefits that OOAD claims?
Software systems today are far more complicated than the simple data-entry applications of the past. Many companies are intertwined with proprietary software suites that let them compete at a level above their competition. With software becoming so valuable and so tightly integrated with inherently complex business domains, what strategies are software engineers using to understand and cope with these domains, and are they working?
One strategy to cope with a complex business domain is to use a model.
Modeling complex systems is probably one of the hardest things we do as software engineers. I think we're more comfortable exploring new technology or learning new design patterns, and we don't pay enough attention to learning the details of a domain beyond what we need to "make it work." Trying to use technology to hammer out a solution to a complex domain without understanding that domain ultimately leads to crappy software.
UML diagrams are not models. Nor are flow diagrams, sculptures, or code. A model is a set of concepts and understanding constrained to a given context. The concepts are chosen to illuminate the important parts and details of a problem context and to avoid the non-essential ones. Dealing with complicated problems can be very difficult and overwhelming if the details are not well organized and conceptualized. Models can be explained using UML, flow diagrams, or code, but by themselves these artifacts are not the model.
The OOAD prescription tells us that an analysis model (http://en.wikipedia.org/wiki/Object-oriented_analysis_and_design) is a distinct and separate artifact from a design model. We use the analysis model to understand the domain (the "what"), and then we develop a separate design model to implement it (the "how").
Can two models like this solve the same problem? Sure: since a model is just a set of concepts, any number of models could probably solve the same problem. But what happens in practice when you have two separate models, one in your head and one in the code? In my opinion and experience, the two deviate so much that the model in your head becomes misleading and the model in the code becomes lost. The analysis produces artifacts that focus on details that may be irrelevant when actually implemented in code, while the discoveries made during implementation (which can be incredibly important) rarely, if ever, get fed back into the analysis model. Instead, the developers effectively go down the track of developing a different model (if any at all) to compensate for unexpected details.
The solution is to use a single model that satisfies both the analysis requirements and the design requirements. The model should focus on the problem domain and be applicable to the design of the code. If the model cannot be expressed properly in code, a new model should be chosen. An iterative process of learning more details about the domain, implementing them in code, and then going back to modify the model if the code is awkward will slowly produce a model that is useful and contains key knowledge of the domain. Changes to the model must imply changes to the code, and the reverse is also true: changes to the code imply changes to the model.
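To make this concrete, here is a minimal sketch of what "the model expressed directly in code" can look like. The domain (cargo shipping), and every name in it (`Leg`, `Itinerary`, `is_connected`), are hypothetical and chosen only for illustration; the point is that the class and method names mirror the domain concepts, so renaming or refining a concept with a domain expert forces a matching change in the code, and an awkward implementation signals that the model itself needs rework.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Leg:
    """One segment of a shipping route, named exactly as the domain names it."""
    origin: str
    destination: str


class Itinerary:
    """An ordered sequence of legs -- a concept lifted straight from the model."""

    def __init__(self, legs):
        self.legs = list(legs)

    def is_connected(self):
        """Domain rule, stated in domain terms: each leg must start
        where the previous leg ended."""
        return all(a.destination == b.origin
                   for a, b in zip(self.legs, self.legs[1:]))


itinerary = Itinerary([Leg("Hong Kong", "Rotterdam"),
                       Leg("Rotterdam", "New York")])
print(itinerary.is_connected())  # True
```

If a domain expert later explains that routes can also be "split" or "merged," that new concept would be added to both the conversation and the code in the same step, keeping the single model intact.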
Coming up with a model that satisfies both understanding and coding requirements is hard, but it certainly can be done. In the long run it can lead to systems that are easier to extend and maintain, because the core principles of the domain are embedded in the code. The fact that the domain model (the set of concepts) drives the design while the design feeds back into the model is very powerful, and it becomes a very practical approach to developing complex software.