Over time I will write up a number of articles that touch on the many aspects of developing SOA applications. I'm starting with testing, since without a sound testing strategy there's no point in starting software development.
As an introduction to testing SOA applications, I'll explore the typical organization of a SOA project with regard to testing.
SOA practitioners don't care about implementation or unit testing. They expect developers to do their job and deliver quality software. They'd rather spend time on analysis, design and quality control. The latter includes functional and non-functional testing.
In a typical SOA project you'll thus find two kinds of testing:
- Functional and non-functional testing, most likely the responsibility of a test team or quality assurance team
- Unit and integration tests, developed and maintained by developers
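To make the first kind concrete, here's a minimal sketch of a developer-owned unit test for a hypothetical message-handling component (all names are invented for illustration; they're not from any real project):

```python
import unittest

# Hypothetical service component; the function and field names are illustrative.
def build_order_response(order_id, status):
    """Assemble the response payload a service operation would return."""
    if not order_id:
        raise ValueError("order_id is required")
    return {"orderId": order_id, "status": status}

class BuildOrderResponseTest(unittest.TestCase):
    """Developer-owned unit test: fast, local, run on every build."""

    def test_returns_expected_fields(self):
        response = build_order_response("42", "CONFIRMED")
        self.assertEqual(response["orderId"], "42")
        self.assertEqual(response["status"], "CONFIRMED")

    def test_rejects_missing_order_id(self):
        with self.assertRaises(ValueError):
            build_order_response("", "CONFIRMED")
```

A developer runs this with `python -m unittest` dozens of times a day; nobody outside the team ever sees the results. That's the crucial contrast with the second kind of testing.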
If you look at the schedule or planning of SOA projects you'll typically recognize the waterfall: requirements gathering, analysis, design, development, testing, and so on. SOA applications by definition do Big Design Up Front.
Why's that? If you think about it, before a SOA application is developed the stakeholders have to agree on the interfaces (communication channels and XML schema), reliability, Service Level Agreements and so on.
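Once the message schemas are agreed upon, conformance can be checked mechanically. The sketch below uses only the standard library to assert that a message carries the agreed-upon elements; a real project would validate against the actual XSD with a schema-aware tool, and the element names here are invented:

```python
import xml.etree.ElementTree as ET

# Invented contract for illustration; a real project would have a full XSD.
REQUIRED_ELEMENTS = {"customerId", "orderId", "amount"}

def conforms_to_contract(xml_text):
    """Check that an order message contains every agreed-upon child element."""
    root = ET.fromstring(xml_text)
    present = {child.tag for child in root}
    return REQUIRED_ELEMENTS <= present

message = (
    "<order>"
    "<customerId>C-1</customerId>"
    "<orderId>42</orderId>"
    "<amount>9.99</amount>"
    "</order>"
)
```

Calling `conforms_to_contract(message)` returns `True` here; a message missing, say, `amount` would fail the check. The point is that once the contract is fixed up front, both sides can verify against it independently.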
It's hard to imagine any other approach. Stakeholders need to verify whether their problem domain will be properly handled by the application. They also need to start working on integrating the SOA application in their organizations.
To enable these two activities there has to be a documented design that goes through many iterations of reviewing and approving before software development starts. Notice that while this is typical for SOA applications it does not put a lot of constraints on the way software is developed.
In my experience software development teams inside SOA projects can choose their own methodologies. The design documents will describe XML schema and most likely also the architecture. Again, this does put constraints on software development but it does not mean at all that developers are working in a waterfall model.
This relative liberty of software development teams comes with a serious risk, namely that the quality of the software falls below what's expected and required, and that this is discovered only when it's too late.
The root cause of this risk is the meaning of the words "test" and "testing". For developers, tests are tools they use on a daily basis. Tests verify the quality of their software and produce happy developers. For quality assurance people, tests are tools they work on for many months and run once or a couple of times to verify that a particular piece of software complies with requirements and expected quality standards.
These two interpretations are very different. Tests produced by quality assurance teams are typically pretty complex pieces of engineering. They will send messages to the SOA application, expect certain results, report errors and generally serve to convince the client or stakeholders that a certain level of quality has been achieved.
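In spirit, such a QA-side functional test is a driver that sends requests, compares responses against expected outcomes, and tallies a report for the stakeholders. A toy sketch of that shape (the service call is stubbed out here; a real suite would go over the agreed channel, e.g. SOAP over HTTP):

```python
# Toy stand-in for the deployed application; a real QA suite would call
# the service over its agreed channel instead of a local function.
def call_service(request):
    if request["operation"] == "getStatus":
        return {"status": "OK"}
    return {"error": "unknown operation"}

def run_suite(cases):
    """Execute each (request, expected) case and report pass/fail counts."""
    passed = failed = 0
    for request, expected in cases:
        actual = call_service(request)
        if actual == expected:
            passed += 1
        else:
            failed += 1
    return {"passed": passed, "failed": failed}

cases = [
    ({"operation": "getStatus"}, {"status": "OK"}),
    ({"operation": "bogus"}, {"error": "unknown operation"}),
    # Deliberately wrong expectation, to show a failure being counted.
    ({"operation": "getStatus"}, {"status": "SHIPPED"}),
]
report = run_suite(cases)
```

Here `report` comes out as two passed, one failed. The engineering effort in a real suite goes into the thousands of cases, the transport, and the reporting, but the skeleton is this simple loop.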
Quality assurance teams have the responsibility of facing stakeholders and defending their results against tough analysis, even skepticism. Software developers are typically not used to defending their work in this way, and in SOA applications they're not expected to do so.
Tests written by the quality assurance team are a great way to test your development work. However, they typically come in too late and they don't serve the purpose you'd expect.
Why would these tests come in late? First of all, it typically takes several months of work to write these tests and the process is very intensive. Secondly, before they start working on their actual tests the quality assurance team typically has to write tons of documents: test plans, quality assurance documents, and so on. Before that, people on the quality assurance team are typically involved with design and analysis.
Typically the software development and quality assurance teams start their work at about the same time. However, it will take several months, if not more, before the quality assurance team delivers the first automated tests.
When the first version of the functional and non-functional test suite becomes available, developers will already have worked on the application for many months. The worst thing that can happen when the test suite is actually run against the application for the first time is bad test results.
If that happens, the development team has to reconsider the quality of their work, reorder their priorities, change their planning and basically go into panic mode. This is something you want to avoid.
The ideal situation after months of development is that the functional tests report a good score. No doubt there will be some errors or bugs found but overall the large majority of tests should work.
Functional tests typically consist of thousands of individual tests and many of them test outlandish situations. Overall, a score of 50% or more on the first run should be considered a success. Obviously 60% or even 70% is more desirable (remember that tests may contain bugs as well). Nevertheless, a decent test score not only proves that the application is complete to a large degree, it also shows that most test scenarios already work.
So how to get there? How do you avoid that the first execution of the official test suite turns into a disaster and ruins the project's planning?
There is of course only one way and that is to aggressively test yourself. How to approach that is the subject of a future installment.