A couple of years ago, when hiring for some of my teams, we tried hard to stress the difference between a software tester and a quality assurance engineer. I think the situation has got much better, and it is now obvious to people that quality assurance is much closer to development than it might appear at first sight. Especially at Red Hat in Middleware, where people mostly write integration tests that are like small applications deployed on one of our products. That is basically the same thing customers do, except that you learn the technology to its core, bend it to the maximal extent, and demonstrate your understanding the best way you can, without the mundane copy-pasting of the same solution a thousand times to build an end-user application.
But wait! You have to run those small applications, don't you? Not exactly. If you are a smart quality engineer, you automate your work so as not to create technical debt. And in the end, you get nice charts of test results in your Continuous Integration environment, such as Jenkins.
However, there are two concepts that might be hard to get under your belt when you are serious about quality assurance. And I mean it: it is not easy to get used to them. As a quality engineer, you probably have the same educational background as an application developer, and you are strongly inclined to make your applications work smoothly.
The first concept that is hard to get is: Your goal is to make your tests fail!
Nobody needs a quality engineer who does not discover any bugs. The opposite is true: your only value is in finding bugs! How many bugs did you find in the last month? Divide your monthly wage by that number and you will see how expensive it is for your company to discover a single bug. Are you an efficient quality engineer?
Well, it is quite obvious that a quality engineer needs to discover bugs, so what is so difficult about that? The problem is that we don't like failing applications and we do like to release our products. We want everything smooth and running. We don't know whether we misunderstood the system under test or really discovered a bug. What if we are just wrong? Plus, the charts in our Continuous Integration system show failed tests in RED, and we don't like this colour. Red always meant wrong back at school: if your exam came back full of red notes, it wasn't good. So we'd rather be good kids and make everything pass.
As opposed to a developer, a quality engineer needs to work towards avoiding releases.
I once had a junior guy in my team who said that he had made the assert command fault tolerant so that the test would pass. This does not make much sense, and yet we are subconsciously pushed towards it. If it helps you, consider switching the colors in the output charts.
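To make the anti-pattern concrete, here is a minimal sketch of what a "fault-tolerant" assert amounts to, next to a proper one. The method names and values are purely illustrative, not taken from any real test suite:

```java
// Sketch of the "fault-tolerant assert" anti-pattern: the failure is
// swallowed, so the test passes no matter what the system under test does.
public class FaultTolerantAssert {

    // Anti-pattern: catch and log the failure, report success anyway.
    static boolean tolerantAssertEquals(int expected, int actual) {
        try {
            if (expected != actual) {
                throw new AssertionError("expected " + expected + " but was " + actual);
            }
        } catch (AssertionError e) {
            // The bug is masked right here.
            System.out.println("ignored: " + e.getMessage());
        }
        return true; // always "passes"
    }

    // Correct: a mismatch propagates to the test runner as a failure.
    static void assertEquals(int expected, int actual) {
        if (expected != actual) {
            throw new AssertionError("expected " + expected + " but was " + actual);
        }
    }

    public static void main(String[] args) {
        tolerantAssertEquals(1, 2); // silently "passes", bug masked
        try {
            assertEquals(1, 2);
        } catch (AssertionError e) {
            System.out.println("test failed: " + e.getMessage()); // bug reported
        }
    }
}
```

The tolerant version produces a green chart and zero information; the strict one produces the red result that is, for a quality engineer, the whole point.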
Chart colors for developers: (image)
Chart colors for quality engineers: (image with the colors switched)
You might also face a question similar to: "What if there is a bug I am aware of and it is not fixed in the current release? Should I delete my test or make it pass?" Obviously, such a test would always fail, so do we want to keep it running?
You must definitely avoid forcing the test to pass, and avoid not writing it at all. If it really causes you trouble (for example, it breaks other tests), consider using a tool that annotates the test so that it executes only once the bug is resolved (see issue-keeper or Arquillian Governor). By applying this principle, you don't even need to verify the bug fix separately: it gets verified automatically as part of your automated test suite.
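The idea behind such tools can be sketched in a few lines. This is not the API of issue-keeper or Arquillian Governor; the issue key, the hard-coded set of open issues, and the `shouldRun` helper are all hypothetical stand-ins for a real tracker query:

```java
import java.util.Set;

// Minimal sketch of gating a test on a known bug: while the issue is
// open the test is skipped, not deleted and not forced to pass.
// A real tool would query JIRA or a similar tracker instead of this set.
public class IssueGate {

    // Hypothetical stand-in for the tracker's list of unresolved issues.
    static final Set<String> OPEN_ISSUES = Set.of("PROJ-1234");

    // True when the test should run; once the bug is resolved, the same
    // test automatically verifies the fix.
    static boolean shouldRun(String issueKey) {
        return !OPEN_ISSUES.contains(issueKey);
    }

    public static void main(String[] args) {
        if (shouldRun("PROJ-1234")) {
            System.out.println("running test for PROJ-1234");
        } else {
            System.out.println("skipped: PROJ-1234 still open");
        }
    }
}
```

The test stays in the suite the whole time, which is exactly why the fix gets verified for free.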
The second concept I would like to talk about is not that hard to get used to; junior quality engineers usually just do not think about it at all. Many times I have faced a situation where a quality engineer did not distinguish between a test failure and a test error. When the test method exits abruptly, no matter how, there is a problem that needs further investigation, full stop. Not so fast! It matters whether you start investigating the system under test (in the case of a test failure) or the testing environment and the test itself (in the case of a test error). If the environment is the culprit, it might be sufficient and perfectly fine to just rerun the tests and see whether the error is gone. In the case of a test failure, however, you might be facing a race condition that rerunning the test would simply mask.
I once met a quality engineer who completely avoided assert commands and used if conditions with throw new Exception() instead, perfectly avoiding any test failures. His comment on this was: "It is just a matter of getting accustomed to it." For the reason just mentioned, it is obviously not just a question of habit. It is a huge mistake!
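The distinction is easy to see in code. JUnit-style runners report an AssertionError as a failure and any other exception as an error; the sketch below imitates that classification with illustrative names (I use an unchecked RuntimeException where the engineer above threw a checked Exception, since Runnable cannot throw checked exceptions):

```java
// Sketch of how JUnit-style runners tell failures from errors:
// AssertionError -> failure (go investigate the system under test),
// any other exception -> error (go investigate the environment or the test).
public class FailureVsError {

    static String classify(Runnable testMethod) {
        try {
            testMethod.run();
            return "pass";
        } catch (AssertionError failure) {
            return "failure"; // an assertion did not hold
        } catch (RuntimeException error) {
            return "error";   // the test blew up outside its assertions
        }
    }

    public static void main(String[] args) {
        // A broken assertion is reported as a failure...
        System.out.println(classify(() -> { throw new AssertionError("expected 1 but was 2"); }));
        // ...but throwing a plain exception buries the same finding as an error.
        System.out.println(classify(() -> { throw new RuntimeException("expected 1 but was 2"); }));
    }
}
```

Replacing asserts with thrown exceptions collapses both columns of your CI report into one, so every genuine bug looks like an environment hiccup that a rerun might "fix".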
Although it might seem funny, even smart people can get pulled into masking issues in their daily routine. I am not sure whether any of you have had a similar experience, but I have seen this so many times that I am going to put these two concepts into our new hire guide.