The Death Star: An Ambiguous Requirements Issue?
If you want to transform your organization so that it can continuously deliver quality code at high velocity in a fully automated fashion, you have to completely overhaul your design process.
You read it right! I’m not convinced that an architect left a thermal exhaust port on the Death Star without a reason. Could the most powerful battle station in the galaxy really have been destroyed due to an ambiguous design? I’m sure that in the Star Wars universe, they have very sophisticated engineering design processes and tools. Maybe we’ll find out when “Rogue One: A Star Wars Story” opens on December 16.
Coming back to our own galaxy… when we bought our first home, a condo, we had to do a gut renovation on our unit, so we hired an architect at a local civil engineering firm. To keep it short, we learned that deadlines don’t mean much in a construction project and that handling detailed finishing is an art that very few contractors master.
On the other hand, the design experience was amazing. I did not expect such a traditional field of work to be so modern, flexible, and mature when it came to designing our new home, especially when we compare the civil engineering design process with our own in software engineering. It’s like night and day.
In my last article, I talked about how civil engineering and architecture design practices have evolved tremendously through CAD (computer-aided design) software over the past 30 years. Civil engineers and architects can continuously make changes to their designs, and the CAD software ensures that any impact on other stakeholders in the project is properly identified; it even suggests potential fixes.
The Best of Both Worlds: Civil and Software Engineering Design Techniques
So, as I think about software engineering and how we could truly accelerate the continuous delivery pipeline by starting with the requirements, I can't imagine why we wouldn't learn from a more mature field of engineering. We should study how civil engineers have evolved their design techniques and apply those lessons to our own field in order to achieve the acceleration we all aim for.
If we’re serious about transforming an organization so that it can continuously deliver quality code at a high velocity and in a fully automated fashion, we have to completely overhaul our design processes and techniques, not just focus on the build-test-deploy-operate cycle acceleration.
So, how do we come up with more agile and better designs? Let's look at an example: the description of an Epic for a flight booking application.
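An Epic description in user-story form typically looks something like this (an illustrative reconstruction, not the exact wording of the example):

```
Epic: Flight Booking Path

As a traveler, I want to search for flights, select my departure and
return flights, enter passenger information, choose my seats, and pay,
so that I can book a round trip online.

Acceptance criteria:
- The user can search flights by origin, destination, and dates.
- The user can select departure and return flights.
- The user can enter passenger information.
- The user can select seats and complete payment.
```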
This is a structured way of describing what the application is expected to do, and it is much more effective than what we used to do years ago. However, it still leaves developers, testers, and other stakeholders with questions. For example:
- Do you want to view the departure and return flights as a combined pair or do you want to select each independently?
- When should the user select seats? After selecting the departure and return flights or just before payment?
- Does the user need to be logged on to do the search? If not, will the user have the opportunity to log on at some point? When?
To answer those questions, developers will make their own assumptions and create their code. Testers will create their test cases based on their own interpretation, and so on. That's when defects are introduced into the code, inaccurate tests are created, and unnecessary rework slows down the delivery pipeline.
So, how can we create unambiguous requirements using civil engineering design techniques and tools that will ensure the issues above don’t happen? By using multilayered visual models and CAD-like technology.
I’ve had teams use visual models to represent the requirements in many different types of companies across different domains, from banking and financial services to travel and hospitality and government. Also, we’ve modeled requirements in new applications as well as legacy systems, from web to mainframe and mobile. So, what is a visual model?
- A model in systems engineering and software engineering is a structured representation of the functions (activities, actions, processes, and operations) within the modeled system or subject area.
- The purposes of the model are to describe the functions and processes, assist with discovery of information needs, help identify opportunities, and establish a basis for determining product and service costs.
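As a minimal sketch of that definition, a visual model's steps and transitions can be captured as a small directed graph. The class and method names below are illustrative, not taken from any particular modeling tool:

```python
# Minimal sketch: a visual model as a structured set of steps and
# transitions. Names are illustrative, not from a specific tool.

class VisualModel:
    def __init__(self, name):
        self.name = name
        self.steps = []          # ordered step names
        self.transitions = []    # (from_step, to_step) pairs

    def add_step(self, step):
        self.steps.append(step)
        return self

    def connect(self, src, dst):
        # Both ends must already be steps in the model.
        assert src in self.steps and dst in self.steps
        self.transitions.append((src, dst))
        return self

    def successors(self, step):
        # Which steps follow a given step in the flow.
        return [d for s, d in self.transitions if s == step]

flow = (VisualModel("Flight Booking Path")
        .add_step("Search")
        .add_step("Select Departure Flight")
        .connect("Search", "Select Departure Flight"))
```

The point is that the model is a queryable structure, not a picture: every downstream layer can be attached to a named step.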
The First Layer of the Model: Foundation
It is very common today for a Product Owner to draw an initial sketch on a whiteboard describing what he or she wants to be built. That sketch is then further refined through multiple iterations until the product owner is satisfied and accepts it.
That initial sketch for our Flight Booking Path example could look something like this:
Then, through multiple conversations with the Product Owner, developers, testers and other stakeholders, the person assigned to formally model the Epic could come up with the following model:
As you can see, many of those initial questions have been answered through conversation and are reflected in the model. We now know that the Product Owner wants the user to select the departure flight first and then the return flight. It is also clear that before going to the passenger information step, the user must be prompted to log in. Lastly, it was clarified that seats must be chosen only after the passenger information has been entered into the application.
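The resolved ordering can be written down as a simple sequence. The step names below are paraphrased from the text, not the exact labels used in the model:

```python
# The Flight Booking Path order as resolved with the Product Owner.
FLIGHT_BOOKING_PATH = [
    "Search Flights",
    "Select Departure Flight",     # departure chosen first...
    "Select Return Flight",        # ...then the return, independently
    "Log In",                      # user prompted to log in here
    "Enter Passenger Information",
    "Select Seats",                # seats only after passenger info
    "Payment",
]
```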
Through the mere representation of the Epic in a visual model, ambiguities are removed and defects are prevented from ever entering the application code. This means testing is truly "shifting left" in the lifecycle. We're already starting to "build quality into" the application.
The visual model of the Flight Booking Path Epic becomes the foundational layer for other stakeholders in the lifecycle, exactly the same way the floor plan created by the architect in the earlier analogy was the basis for all other stakeholders to create their own designs and artifacts.
The Second Layer: Test Data
The next layer to be added on top of the foundation is Test Data. For each step in the model, we specify the test data necessary for that step to be executed. In the example below, we’re looking at the Test Data layer of the Search step. We do the same for all the other steps that compose the Flight Booking Path.
Depending on the organization and environment complexity, that layer could simply contain examples of data that a developer or a tester should use, or it could actually contain links to a test data warehouse or dynamic real-time processes that automatically fetch and mask or generate synthetic data.
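A sketch of what the Test Data layer for the Search step could hold, with both options from the paragraph above; all field names and values here are illustrative:

```python
# Test Data layer attached to the "Search" step: literal example
# values, plus an optional pointer to a dynamic source (test data
# warehouse or synthetic-data generator). Names are illustrative.
search_test_data = {
    "step": "Search Flights",
    "examples": [
        {"origin": "BOS", "destination": "SFO",
         "depart": "2016-12-16", "return": "2016-12-23"},
    ],
    # Alternative to literals: resolved at run time by a data service.
    "source": {"type": "synthetic", "generator": "flight_search_v1"},
}

def resolve_test_data(layer):
    # Prefer literal examples when present; a fuller implementation
    # would otherwise call out to the configured source.
    return layer["examples"] or []
```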
Services and API Layer
Similar to the Test Data layer, this is where we specify the interfaces required in order to execute the Search step and whether they are real or virtual.
We would do the same for the other steps in the Flight Booking Path process.
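The Services/API layer for the Search step could be sketched like this, recording which interfaces the step needs and whether each is real or virtual; the interface names are hypothetical:

```python
# Services/API layer for the "Search" step. Interface names are
# hypothetical placeholders, not real services.
search_services = {
    "step": "Search Flights",
    "interfaces": [
        {"name": "FlightAvailabilityAPI", "mode": "virtual"},
        {"name": "PricingAPI", "mode": "real"},
    ],
}

def virtual_interfaces(layer):
    # Interfaces that must be served by a virtualized endpoint.
    return [i["name"] for i in layer["interfaces"]
            if i["mode"] == "virtual"]
```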
Test Automation Layer
You may think you have the best test automation framework and engine, but if you still have to write or update script code every time a test is created or changed, then what you really have is manual scripting with automated test execution. When I talk about adding this Test Automation layer to the visual model, I mean automating the creation of the actual script, which is the most time-consuming activity for a test automation engineer.
Here’s how we do this: for each step in the Flight Booking Path process, we tie keywords from your existing test automation framework (in any language or tool) to the actions that step must perform.
We can get fancy here: tie keywords only, write the actual scripting code, or do a combination of both. From a best-practice perspective, since test automation tools evolve so fast, I always recommend that my teams keep the automation engine separate from the visual model. That way, if tomorrow you want to change the automation language or tool, the Test Automation layer in the model is not impacted.
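That separation can be sketched as follows: the model binds steps to keywords only, while a hypothetical engine (all names below are made up for illustration) expands keywords into tool-specific script lines:

```python
# Test Automation layer: each step maps to framework keywords only.
AUTOMATION_LAYER = {
    "Search Flights": ["open_search_page", "enter_route",
                       "submit_search"],
}

# Hypothetical engine, kept outside the model: keyword -> tool-specific
# script line. Swapping tools means swapping only this mapping.
ENGINE = {
    "open_search_page": "driver.get(BASE_URL)",
    "enter_route": "fill(origin, destination)",
    "submit_search": "click('search')",
}

def generate_script(step):
    # Expand a step's keywords into executable script lines.
    return [ENGINE[kw] for kw in AUTOMATION_LAYER[step]]
```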
Binding It All Together
With all the layers specified and tied together in a visual model in a CAD-like requirements design tool, we’re able to immensely accelerate the application lifecycle. Just like the CAD tools in civil engineering, our software engineering CAD-like tool maintains full traceability across all layers as shown below.
So, if there is a change to any of those layers, the impact is automatically identified and communicated to the owner of each impacted layer, prompting the owner for a decision in order to address that impact.
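The impact-propagation idea can be sketched as a walk over the layer dependencies; the dependency map and owner titles below are illustrative assumptions, not a prescribed structure:

```python
# Change-impact propagation across the model's layers.
BUILT_ON = {  # layer -> layers built on top of it
    "foundation": ["test_data", "services", "automation"],
    "test_data": ["automation"],
    "services": ["automation"],
    "automation": [],
}
OWNERS = {
    "test_data": "test data engineer",
    "services": "service virtualization engineer",
    "automation": "automation engineer",
}

def impacted_layers(changed, seen=None):
    # Walk the dependency map, collecting every downstream layer.
    seen = set() if seen is None else seen
    for layer in BUILT_ON[changed]:
        if layer not in seen:
            seen.add(layer)
            impacted_layers(layer, seen)
    return seen

def owners_to_notify(changed):
    # Each impacted layer's owner gets prompted for a decision.
    return {OWNERS[layer] for layer in impacted_layers(changed)}
```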
The real kicker is that if you integrate a CAD-like requirements design tool like this into your continuous delivery pipeline, as your user stories are created or updated on the visual model, your continuous integration agent triggers the automatic generation of manual test cases; automatic generation of test automation scripts in any language; automatic finding, generation and provisioning of test data for each test; and activation of the real or virtualized services so that the tests can actually be executed.
All these artifacts are also automatically stored in the respective repositories that are unique to each organization. For example, manual test cases can be pushed to a tool like ALM, Rally/Agile Central, or JIRA; automation scripts can be pushed to a Git repository for proper source code management; test data can be provisioned in the Development, QA, or Stage environments; and virtual services can be enabled automatically.
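A continuous integration hook along these lines could tie the pieces together. Every function and field name here is an illustrative placeholder, not the API of any real pipeline tool:

```python
# CI hook sketch: when stories in the visual model change, regenerate
# the downstream artifacts and route them to their repositories.
def on_model_change(steps):
    return {
        "manual_tests": [f"Verify step: {s}" for s in steps],    # -> ALM/JIRA
        "automation_scripts": [f"{s}.script" for s in steps],    # -> Git repo
        "test_data_requests": [f"data-for-{s}" for s in steps],  # -> QA env
        "virtual_services": ["FlightAvailabilityAPI"],           # activated
    }

artifacts = on_model_change(["Search Flights", "Payment"])
```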
Just as civil engineering design was revolutionized with CAD software a few decades ago, leveraging a CAD-like tool for designing requirements as visual models in software engineering helps organizations truly accelerate the entire continuous delivery pipeline, starting all the way to the left with the requirements. This prevents ambiguity in those requirements, which ultimately prevents defects from ever getting into the code.
Published at DZone with permission of Alex Martins, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.