DevOps concepts such as containerization and build/deploy pipelines empower developers to get feedback about code changes much faster. Rather than checking in a week's worth of code, creating a build that takes hours, and then spending maybe another hour setting up a new environment, the DevOps flow moves from lines of code to test environment in minutes. The same challenges exist in mobile software development: developers write some code, check it in, create and distribute a build, and then someone still has to install the new software.
Simulators, emulators, and parallelization can help speed things up.
Simulators & Emulators
Let's say you have an iOS app that is in production and used by a wide customer base, but that is also in active development. New changes are pushed to the App Store every couple of weeks, and each release is high stakes. A serious bug in production translates to an emergency release and an expedited push through the App Store review process. You can increase how much of the app is regularly covered by writing automated UI tests, probably through a combination of ios-driver and Appium.
The next choice is what environment you run the tests against.
An obvious place to start is with a real device: the exact hardware and operating system your customer base will have. This gives realism, but provisioning and maintaining an internal mobile device cloud is difficult and time-consuming. Most development teams have a limited number of devices on hand. Developers want some for testing their changes before creating and distributing builds, and testers need devices to install and test new builds. That doesn't leave many devices for a local device cloud dedicated to test automation. Real mobile devices are the ideal testing platform, but they should probably remain at the tip of the mobile automation pyramid.
Simulators, on the other hand, are nearly instant, and impossible to run out of. A new simulator of any hardware and OS combination can be created in seconds and accessed over the web. (Emulators differ from simulators in some ways, but both are virtual alternatives to real devices; to keep things simple I'll refer to simulators throughout this post.) There is little to no timing delay when automated tests run in these environments: rather than waiting for software and hardware to respond to an event like a button press or navigation, the event happens almost immediately. When you're done with the simulator, just turn it off, and it is gone as fast as it was created.
Running an automated UI test suite on a simulator rather than a real device might cut the runtime by a quarter, or a third in some cases. That still leaves a potentially long delay between when the tests run and when a developer gets crucial information about what their code changes broke. The next step is parallelization: running pieces of your test suite on multiple simulators at the same time.
Start by splitting your test suite in half and running the halves on two simulators at the same time. This gives you a baseline for how quickly you can get feedback. Continue doubling the number of suite partitions, and simulators in play, until you get feedback in 15 minutes or less. Simulators are effectively endless; you can create as many as you are willing to pay for. If you need a test suite to return faster, you can either purchase more simulator time or reduce the number and complexity of tests in your suite.
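The split-and-double loop can be sketched mechanically. In this toy model (the test names and per-test runtimes are invented; a real harness would hand each chunk to an actual simulator), the suite is partitioned round-robin across N workers, and feedback time is simply the runtime of the slowest chunk:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical suite: test name -> runtime in minutes (60 minutes total).
SUITE = {f"test_{i:02d}": (i % 5) + 1 for i in range(20)}

def partition(tests, n):
    """Split test names into n roughly equal chunks, round-robin."""
    chunks = [[] for _ in range(n)]
    for i, name in enumerate(sorted(tests)):
        chunks[i % n].append(name)
    return chunks

def run_chunk(chunk):
    """Stand-in for running one chunk on one simulator; returns its wall-clock cost."""
    return sum(SUITE[name] for name in chunk)

def feedback_time(n_simulators):
    """Wall-clock feedback time: the slowest of the parallel chunks."""
    chunks = partition(SUITE, n_simulators)
    with ThreadPoolExecutor(max_workers=n_simulators) as pool:
        return max(pool.map(run_chunk, chunks))

# Keep doubling simulators until the suite "returns" in 15 minutes or less.
n = 1
while feedback_time(n) > 15:
    n *= 2
print(n, feedback_time(n))  # prints: 4 15
```

Round-robin is the simplest split; teams often go further and bin-pack tests by historical runtime so every chunk finishes at roughly the same moment.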
The combination of running automated UI tests on simulators, and running those simulators in parallel until the suite returns in 15 minutes, will get a development team pretty far. Ideally, this catches the majority of the low-hanging-fruit bugs that test teams spend so much time on. Parallelization is easiest to engineer when tests are atomic, meaning all setup and teardown happen within a test and no test depends on another. Combine this with unit and service-layer tests where appropriate, and a healthy amount of testing on real mobile devices, and you have a good start on a test strategy. Fast feedback and in-depth testing are both needed; tooling can help with both.
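Atomicity mostly comes down to disciplined setup and teardown. Here's a minimal sketch using Python's unittest (the FakeSession class is an invented stand-in; in a real suite it would be something like an Appium driver session): each test creates and destroys its own state, so a runner can distribute tests across simulators in any order.

```python
import unittest

class FakeSession:
    """Stand-in for a driver/simulator session (hypothetical)."""
    def __init__(self):
        self.logged_in = False
    def login(self):
        self.logged_in = True
    def quit(self):
        self.logged_in = False

class AtomicLoginTests(unittest.TestCase):
    # setUp/tearDown run around every single test, so each test owns
    # its state and none depends on another test having run first.
    def setUp(self):
        self.session = FakeSession()
        self.session.login()

    def tearDown(self):
        self.session.quit()

    def test_user_is_logged_in(self):
        self.assertTrue(self.session.logged_in)

    def test_logout_clears_state(self):
        self.session.quit()
        self.assertFalse(self.session.logged_in)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(AtomicLoginTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Because neither test reads anything the other wrote, the two could run on two different simulators at the same time with identical results.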