So, you and your team decided that this time, you're going to do it right. You've been to too many conferences and learned that the only good code is code with tests, and lots of them.
Also, you like a good night's sleep, knowing that what you did during the day shouldn't cause any major problems. Coding without fear; that's the dream, right? And the QA folks may smile once in a while, because they don't have to run the same trivial test cases for the hundredth time and can focus on more important aspects of the software, like security or UX.
This looks good on paper, but the sad truth is that many will fail on this hard and tiresome journey. I'll list five things that can easily lead to failure.
Ideally, new software is built in a way that enables developing, deploying, and testing with ease; the difficulties have already been overcome through careful design and implementation. In practice, a few kinds of teams start from a harder place.
The first candidates are young companies. For them, it's acceptable to ship some basic features just to stay relevant, rather than to produce code that can be easily changed. After all, software used by no one requires no change.
The second is when you have a legacy system and need to add a new component, but now, as mentioned in the introduction, you want to write quality code and make life better for your fellow developers.
There's a third category, where you already have end-to-end tests but see no benefit from them. You don't trust them, and they just take up space on your dashboard. New hires may ask, "What happened to the tests? Is the software really in that bad a shape?" You just mutter, "Oh, my sweet summer child..." and go for coffee with your veteran buddies. These people have already experienced the dysfunctions, and they're in a difficult spot if they want to try again: it's hard to sell management on something that has already failed. What unites all of these groups is that they want end-to-end testing.
1. Test Environments and Data

People love services that work in a stateful way. It gives them an anchor point in today's mad world. Your application probably stores a lot of this state, and a lot of its functionality is built on that data. Can I list all the records with the correct attributes? Can I change one property and find the rest unchanged after a certain amount of time has passed? To verify functionality like this, the test data has to be under your control.
Symptom 1: Test environments cannot be created with ease. Data used in testing cannot be loaded dynamically and cleared after test suites. Code changes are hard to deploy and track.
2. Execution Time

Execution time is not just a fancy metric to show off to management. You need to keep it under control. Otherwise, test results take too long to evaluate, slowing down the development process. People will have less motivation to take part in the automation if they can check the same functionality faster by manual testing.
Symptom 2: Tests run even after the heat death of the universe. There's no control over individual tests and test suite execution time.
3. Parallel Runs
To respect these strict time constraints, you have to run your tests in parallel. This is why test environment setup is so important. Say you have split your tests into two suites: A and B.
Suite A always modifies A-related data, creating temporary variations of it. Suite B then does assertions on listings that include A's data. On their own, they run fine, but run in parallel, they produce flaky results. And flaky tests are the worst.
Symptom 3: Tests cannot run in parallel and if they do, they make other tests unstable. (This is enabled by #1, and a requirement of #2.)
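The usual cure for the A/B conflict above is data isolation: each suite touches only records carrying its own tag, so a concurrent writer elsewhere cannot change what a listing assertion expects. The sketch below is an illustration only; the in-memory `store` stands in for the real backend.

```javascript
// Sketch: two suites share one store but stay parallel-safe, because each
// reads and writes only records namespaced with its own suite tag.
// The Map-backed `store` is a stand-in for a real backend.
const store = new Map();

// An A-style suite: mutates data, but only its own records.
async function suiteA(tag) {
  store.set(`${tag}:item`, { value: 1 });
  store.get(`${tag}:item`).value += 1;
  return store.get(`${tag}:item`).value;
}

// A B-style suite: lists records, but filters to its own tag so concurrent
// writers elsewhere cannot change the expected count.
async function suiteB(tag) {
  store.set(`${tag}:x`, {});
  store.set(`${tag}:y`, {});
  const mine = [...store.keys()].filter((k) => k.startsWith(tag + ':'));
  return mine.length;
}

// Both suites run concurrently against the same store without interfering.
Promise.all([suiteA('A'), suiteB('B')]).then(([a, b]) => {
  console.log(a, b); // prints: 2 2
});
```

Note how suite B's assertion is phrased in terms of *its own* data, never a global count; that is what makes the result stable regardless of what suite A does in parallel.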
4. Respect for Tests, Ownership, and Visibility
Think of your test suite as an entity that is part of every team working on the product. It's never late to a meeting, is always there to provide results, never complains about the monotony of its work, and only occasionally does irrational things. Treat it as a valuable team member. After all, you started this whole endeavor because part of your job was miserable enough to need automating; that is the real driver of innovation. Evaluate the results together and act on failures immediately. Some form of physical visibility can also have great benefits, like a dashboard where you can spot a failing test case even from the furthest corner of the office.
Symptom 4: Tests are handled like some magical entity that, once done, will require no maintenance. They are not part of the daily discussion and workflow; results are not visible.
5. Test Simplicity, Scope of Testing, and Aim
Surely, many of you are familiar with the test pyramid.
Take it seriously, and always ask before writing a new end-to-end test: "Can I verify this behavior with lower-level tests? Is it really worth automating at full scale?" Remember, if a test is hard to write, it will probably require a lot of effort to maintain later. Avoid implementing complex logic inside the end-to-end tests themselves. Rather, make the application testable for your tests.
Symptom 5: Tests are generally hard to implement, and there is little will to write them, because complex logic outside your application domain has to be understood first.
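One common way to keep end-to-end specs simple is the page-object pattern, which Protractor's documentation also encourages. The sketch below is an illustration: the `driver` is a hand-rolled stub standing in for the browser/element API, and the selectors are assumptions. The point is that the spec states business intent in one line, while selectors and sequencing live in one reusable place.

```javascript
// Sketch: a page object keeps the end-to-end spec thin and declarative.
// The `driver` here is a stub for illustration; in a real suite it would be
// Protractor's browser/element API.
class LoginPage {
  constructor(driver) {
    this.driver = driver;
  }

  // The page object hides selectors and sequencing; specs only state intent.
  async login(user, pass) {
    await this.driver.type('#user', user);
    await this.driver.type('#pass', pass);
    await this.driver.click('#submit');
  }
}

// A spec using the page object reads as one business-level step.
async function loginSpec(driver) {
  const page = new LoginPage(driver);
  await page.login('alice', 'secret');
  return driver.log; // what the stub recorded, for demonstration
}
```

If a selector changes, only the page object changes; none of the specs that use it need touching, which is exactly the maintainability the test pyramid asks for at this expensive level.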
I'm not telling any earth-shattering truths here; it's rather some reassurance that comes from experience. We did a year of end-to-end testing and had some hard times. We asked many times, "Is it really worth the struggle?" And right now, I'm confident that it's the right way to go. We ran into every problem along the way, and while we are far from resolving them all, we did try to counter every point listed here to some degree. For me personally, the hardest part was convincing every team that tests need a lot of attention. The runner-up is probably the first item on the list. Take a look at your team and your tests and try to recognize these warning signs.
We used Angular 1 as the front-end framework and Protractor (the recommended layer over WebDriver) as the testing framework. The project was to create a web-based client with high quality standards. At the end of the year, we'd created ~1,100 unit tests and 167 Protractor tests. Tests are separated into suites that run in under half an hour; every individual test runs in under one minute. On the back-end side, we did a little facelift with Spring and created ~700 unit tests, about a quarter of them Spring MVC test cases. This, of course, only covers the functionality needed for the new client. For the new components, we implemented hot deployment (no downtime on release) to ease testing and releases. By the end of the year, we were able to achieve biweekly releases.