When it comes to improving the speed of app releases, it's critical to have a reliable build process. Broken builds, missing files and failing unit tests are symptoms of a more fundamental problem: the engine that keeps your team rolling needs maintenance. But before we try to fight one fire at a time, let's consider what got us here in the first place.
Continuous Integration Is Not a Nanny
With a continuous integration (CI) server, you get out what you put in. If you expect a completely working app or website, what you commit to your repository needs to be complete. This includes code, configuration and data assets. If any of these items are not tracked (not added to the code commit), the build won't work. What's worse than a broken build is one that looks successful but fails at runtime.
What is Unit Testing? Unit testing is a software development process in which the smallest testable parts of an application, called units, are individually and independently scrutinized for proper operation. Unit testing is often automated but it can also be done manually. (Source: WhatIs.com)
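To make the definition concrete, here is a minimal sketch of an automated unit test using Python's built-in `unittest` module. The function under test, `calculate_discount`, is a hypothetical example, not from the article; the point is that the unit is small, pure and exercised in isolation.

```python
import unittest

# Hypothetical unit under test: a pure function with no external dependencies.
def calculate_discount(price, percent):
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class CalculateDiscountTest(unittest.TestCase):
    def test_applies_discount(self):
        self.assertEqual(calculate_discount(100.0, 25), 75.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            calculate_discount(100.0, 150)
```

A CI server would run this on every build (e.g. with `python -m unittest`), so a commit that breaks the discount logic fails immediately rather than at runtime.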
Running test automation on every build is a great way to solve this problem. Is this a lazy way to make sure all assets are checked in as part of a commit? No. It shows us when we fail to do what we're supposed to do. With increased visibility comes greater attention to the root cause of specific problems.
Your Unit Testing Strategy Reveals as Much as You Ask It To
A multi-layered approach to testing provides maximum coverage throughout the entire development and delivery process. In the following diagram, there's a big emphasis on unit tests and less attention on the GUI (graphical user interface). Why? Because there's a lot of code to check, and unit tests are a minimalist way to do that; they're also typically quick to run.
Many DevTest teams require developers to include unit tests for the functionality they add or change during a commit, but are then often surprised to find bugs that are only exhibited in the wild (on real mobile devices). This is because unit tests only exercise small portions of the system in isolation; they are necessary, but not at all representative of how an entire system will function.
I also see many teams running "integration" tests as part of their build verification process. This is fantastic, but still not a complete solution. Integration tests verify how a few components of the larger system work together. They typically focus on system boundaries and designed behavior, but they don't use actual data or real scenarios. Integration tests are necessary, but again, they're not representative of how a user will interact with the software (app, service, etc.). We need one more step before we're at a minimum-viable testing strategy.
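A short sketch of what "a few components working together" means in practice: the `AccountStore` and `TransferService` classes below are hypothetical stand-ins, but the test exercises the boundary between them rather than either one in isolation.

```python
# Hypothetical components: an in-memory store and a service that depends on it.
class AccountStore:
    def __init__(self):
        self._balances = {}

    def deposit(self, account, amount):
        self._balances[account] = self._balances.get(account, 0) + amount

    def balance(self, account):
        return self._balances.get(account, 0)

class TransferService:
    def __init__(self, store):
        self.store = store

    def transfer(self, src, dst, amount):
        if self.store.balance(src) < amount:
            raise ValueError("insufficient funds")
        self.store.deposit(src, -amount)
        self.store.deposit(dst, amount)

# The integration test checks that the two components cooperate correctly.
def test_transfer_moves_funds():
    store = AccountStore()
    store.deposit("alice", 100)
    TransferService(store).transfer("alice", "bob", 40)
    assert store.balance("alice") == 60
    assert store.balance("bob") == 40
```

Notice what the test still doesn't cover: no real database, no network, no UI. That gap is exactly why integration tests alone don't represent a real user's experience.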
Automated UI Tests Improve Build Cycles
At the top of the testing triangle shown below are UI (user interface) tests. Far fewer end-to-end UI tests are written than unit tests because they usually take longer to write and require more resources (e.g. an actual UI runtime) to execute. So why do we write them in the first place? Because UI tests represent how a real user interacts with our system. This is how we get real feedback; we must ask real questions of our software and verify how it reacts to real human interaction.
In other words, we need to use a combination of partial, simulated and fully interactive automated testing to obtain the fast feedback we need to improve software velocity without reducing quality.
Build engineers want a reliable build process and output that's functionally complete. The trick is to establish fast feedback to development teams by automating tests of key scenarios. The way to accomplish this is to include the most important UI tests into the tail end of the build verification process (post-integration testing). These first few tests will help you establish a reliable feedback loop to developers about how code changes affect real users.
What is UI Testing? UI testing is the process of ensuring proper functionality of the graphical user interface (GUI) for a given application and making sure it conforms to its written specifications. (Source: WhatIs.com)
Typically, these are called "smoke tests" because they verify whether it's worth doing more extensive testing (UI, regression, or manual) on a build once it passes basic validation. UI workflows are often fast, minimally complex and well-known actions that other testing depends on (e.g. login). This is precisely why UI workflows used as smoke tests are perfect candidates for being automated for inclusion into the build process: they're the easiest and most valuable tests to map the development process to the real world.
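A login smoke test can be sketched like this. The `AppClient` class below is a self-contained stand-in; in a real suite the same test would drive a browser or device session (e.g. via Selenium or Appium) instead.

```python
# Hypothetical stand-in for a real UI client; credentials are illustrative only.
class AppClient:
    _VALID = {"demo_user": "demo_pass"}

    def login(self, username, password):
        """Return True when the credentials are accepted."""
        return self._VALID.get(username) == password

def test_login_smoke():
    """Smoke test: if login fails, don't bother running the expensive suites."""
    client = AppClient()
    assert client.login("demo_user", "demo_pass")
    assert not client.login("demo_user", "wrong_pass")
```

The value isn't in the test's complexity; it's that a failure here tells the build pipeline to stop early instead of burning device time on a build nobody can log in to.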
By their nature, UI tests take resources. They require interaction with the tool an actual person would be using. In web apps, this means using a real browser. For mobile apps, it means real mobile devices. The environment you host these real user conditions in must be reliable enough so that your build process continues to work dependably. It also has to allow for various independent builds to run parallel testing, and therefore must be scalable to the development team's velocity needs.
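One minimal sketch of the parallelism point, using Python's standard `concurrent.futures`: independent test callables run concurrently so total wall time stays close to the slowest test rather than the sum of all of them. The two test functions are hypothetical placeholders; in practice each would hold its own browser or device session so runs don't interfere.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical independent UI tests; each returns (name, outcome).
def login_test():
    return ("login", "pass")

def checkout_test():
    return ("checkout", "pass")

def run_in_parallel(tests, workers=4):
    """Run independent tests concurrently to keep build duration flat
    as the suite grows."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(test) for test in tests]
        return [f.result() for f in futures]

results = run_in_parallel([login_test, checkout_test])
```

Scaling the worker count (and the pool of real browsers or devices behind it) is what lets several independent builds test in parallel without queuing behind each other.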
Once you have established reliable UI testing, your development team will be able to see where they need to improve their own behavior and commit process. Fast feedback from UI tests helps developers see defects and understand the impacts of their code changes. By reducing escaped defects as early as possible in each stage of the development cycle, DevTest teams can improve the quality and velocity of their software.
Next Steps for More UI Testing on Every Build
Start by identifying a few short but critical business use cases (e.g. the login process, checking balances, etc.)
Connect these use cases up with key stakeholders at your company that would be most impacted if they failed
Write automated tests for these use cases; start with static data at first, and improve them later with real user data
Use a service (such as the Continuous Quality Lab) to run tests on real devices and operating systems
Pipe the results back to your CI system for easy access to feedback (see this CI example on YouTube)
Add more tests of other important use cases that experienced problems in the past; when they fail, fix them
Share your results with stakeholders; it's important for everyone to see that fast feedback is working!
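As a sketch of the "pipe the results back to your CI system" step above: most CI servers can ingest JUnit-style XML, so one portable approach is to emit results in that format with Python's standard library. The result data and file name here are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def write_junit_report(results, path):
    """Write test results as JUnit-style XML, which most CI servers can parse.
    `results` is a list of (test_name, error_message_or_None) tuples."""
    failures = sum(1 for _, err in results if err)
    suite = ET.Element("testsuite", name="ui-smoke-tests",
                       tests=str(len(results)), failures=str(failures))
    for name, err in results:
        case = ET.SubElement(suite, "testcase", name=name)
        if err:
            ET.SubElement(case, "failure", message=err)
    ET.ElementTree(suite).write(path, encoding="utf-8", xml_declaration=True)

# Example: one passing and one failing smoke test (hypothetical data).
write_junit_report([("login", None), ("check_balance", "timeout")],
                   "smoke-results.xml")
```

Once the report lands in the CI workspace, the server's test-report step surfaces pass/fail trends to developers without anyone digging through logs.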