I am a big fan of the CMD+U Conference and wanted to share the key highlights of the event with you.
The CMD+U Conference offered not only a series of tech talks about the different options we have as developers for code testing on iOS, but also workshops that showed the advantages and disadvantages of each technique. We were lucky to see real-time coding and to check the results of applying each technique in situ, which contributed a lot to the success of the event and, in my opinion, was one of its most interesting aspects. It’s worth mentioning that being both theoretical and practical at the same time was one of the main objectives the organizers expressed in the event’s manifesto.
The high technical profile of the speakers already suggested that the conference would be quite interesting, and I knew I would learn new code testing techniques for iOS applications. To put this in context: in the past couple of years, the number of tools, frameworks, and techniques for software testing on iOS has increased dramatically. Three or four years ago, it was unthinkable to have all these options for this platform, because you could hardly find any documentation or standardized testing practices for Apple development. Developers were largely on their own, each one applying customized solutions and trying to bring to the Apple platform the same systems that had already been applied for years on other platforms such as Java or .NET.
One of the things that most motivated me at the conference was discovering that using software design patterns such as MVP and MVVM to develop applications that respect the SOLID principles has become a trend in the world of Apple development. I also noticed that developers are more interested in building better software: more reusable and of higher quality. Today, it’s not only about building the best application that reaches the top of the App Store; it’s about having an app that is well built internally. To accomplish this, it’s very important to rely on code testing at all levels.
Code Testing for iOS – CMD+U Conference Talks
Testing Functional Reactive Programming
The event started with a talk on testing FRP (functional reactive programming), where Rui Peres explained that reactive programming can be thought of as a combination of iOS’s KVO technique (Key-Value Observing) and NSOperation (an iOS system for performing work asynchronously), plus a certain amount of logic layered on top of these two systems. In functional reactive programming, the role of KVO is played by the Observer object, which is responsible for monitoring when the data undergoes changes.
The role of NSOperation would be played by the Signal Producer, as this is the point where the data change occurs and the change has to be notified in some way. In reactive programming, a signal is emitted, permitting other objects to register to receive the data and react to the changes. In some examples, we applied reactive programming to validating login forms, driving the game loop in video games, and so on. Finally, he explained that in order to test this type of program we should use third-party tools such as those ReactiveCocoa provides, where what ends up being validated is the end result of the data transformation. The problem is that the asynchronous nature of this type of programming makes testing seriously difficult.
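To make the Observer / Signal Producer roles concrete, here is a minimal sketch of the idea in plain Swift. The `Signal`, `observe`, `send`, and `map` names are invented for illustration; they are not ReactiveCocoa’s actual API, but the test at the end validates the end result of the data transformation, as the talk described.

```swift
// A minimal reactive sketch (illustrative names, not ReactiveCocoa's API).
final class Signal<Value> {
    private var observers: [(Value) -> Void] = []

    // Register an observer, much like KVO watching a key path.
    func observe(_ observer: @escaping (Value) -> Void) {
        observers.append(observer)
    }

    // Emit a new value, as a signal producer would after async work completes.
    func send(_ value: Value) {
        observers.forEach { $0(value) }
    }

    // A functional transformation: its end result is what a test validates.
    func map<U>(_ transform: @escaping (Value) -> U) -> Signal<U> {
        let mapped = Signal<U>()
        observe { mapped.send(transform($0)) }
        return mapped
    }
}

// Example from the talk's domain: validating a login form field.
let username = Signal<String>()
let isValid = username.map { $0.count >= 3 }

var lastResult: Bool?
isValid.observe { lastResult = $0 }

username.send("ab")     // too short
print(lastResult!)      // false
username.send("alice")
print(lastResult!)      // true
```

A real reactive framework adds schedulers and disposal on top of this shape, which is exactly where the asynchronous testing difficulties mentioned above come from.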
The next presentation was carried out by Jorge Ortiz, and through a practical example, we all understood he had a great talent for teaching. The example was developing a small application for supervillains using TDD. He explained the Red-Green-Refactor cycle of TDD and the use of different test doubles (stubs, mocks, and fakes). The use of test doubles allowed him to teach us how to test the model logic and the view separately using the MVP pattern (Model, View, Presenter).
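The separation Jorge demonstrated can be sketched as follows: the presenter owns the logic, the view sits behind a protocol, and a stub view (a test double) lets us verify the presenter without touching UIKit. All type and method names here are invented for illustration, not taken from his talk.

```swift
// MVP sketch: the view is a protocol, so a test double can replace it.
protocol VillainView: AnyObject {
    func show(name: String)
}

struct Villain {
    let name: String
    let power: Int
}

final class VillainPresenter {
    private weak var view: VillainView?
    init(view: VillainView) { self.view = view }

    // Logic under test: find the strongest villain and display it.
    func presentStrongest(from villains: [Villain]) {
        guard let strongest = villains.max(by: { $0.power < $1.power }) else { return }
        view?.show(name: strongest.name)
    }
}

// Test double: a stub view that records what the presenter asked it to show.
final class StubVillainView: VillainView {
    var shownName: String?
    func show(name: String) { shownName = name }
}

let stub = StubVillainView()
let presenter = VillainPresenter(view: stub)
presenter.presentStrongest(from: [Villain(name: "Doc Ock", power: 7),
                                  Villain(name: "Thanos", power: 10)])
print(stub.shownName!)  // Thanos
```

In the Red-Green-Refactor cycle, the assertion against `stub.shownName` would be written first (red), the `presentStrongest` logic added to make it pass (green), and the code then cleaned up (refactor).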
Using the Snapshot Technique to Test the Graphic Interface
After that, it was the turn of Luis Ascorbe, who told us how to test a graphic interface using a technique called snapshot testing. It consists of taking a screenshot and comparing it with the result you expect to get after executing any changes. To carry out this task, we use frameworks such as Expecta and Nimble, and tools such as Kaleidoscope. The idea is to compare the screens before and after the test to validate that the changes are consistent with what is expected. He also mentioned a test tool from Facebook called FBSnapshotTestCase, which works at the level of CGContextRef, the mechanism iOS uses internally to draw screens on devices. What this tool does is compare the rendered output before and after the changes to verify that the results are consistent with what is expected.
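The record-then-compare mechanic behind snapshot testing can be shown with a toy example. This is not FBSnapshotTestCase’s real API; `FakeView` and `verifySnapshot` are invented stand-ins where the “screenshot” is just a pixel buffer, but the flow (first run records a reference, later runs compare against it) is the same.

```swift
// Toy snapshot testing: render to a pixel buffer instead of a real screen.
struct FakeView {
    var backgroundColor: UInt8
    var width = 4, height = 4

    // Stand-in for drawing into a graphics context: raw pixel bytes.
    func render() -> [UInt8] {
        Array(repeating: backgroundColor, count: width * height)
    }
}

func verifySnapshot(of view: FakeView, reference: inout [UInt8]?) -> Bool {
    let rendered = view.render()
    guard let expected = reference else {
        reference = rendered     // first run: record the reference snapshot
        return true
    }
    return rendered == expected  // later runs: compare pixel by pixel
}

var reference: [UInt8]? = nil
let view = FakeView(backgroundColor: 0xFF)
print(verifySnapshot(of: view, reference: &reference))    // true (recorded)
print(verifySnapshot(of: view, reference: &reference))    // true (matches)
let changed = FakeView(backgroundColor: 0x00)
print(verifySnapshot(of: changed, reference: &reference)) // false (regression)
```

Real snapshot tools store the reference image on disk and produce a visual diff on failure, which is where a diffing tool like Kaleidoscope becomes useful.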
Code Testing for iOS Without Using Xcode
After a quick pause to regain strength and grab a coffee, the session continued with Kyle Fuller talking about how to test iOS code without using Xcode. The main topic of the talk was the cross-platform character that the Swift language acquired in its latest version and the ability to use it to develop applications on platforms other than the Mac. Kyle taught us how to validate code using only asserts to check conditions and ensure that the code works correctly. Then he showed and briefly explained how some testing tools he developed himself, such as Spectre and swiftenv, work.
Spectre is a testing framework based on validating expectations (expect): you set up the test and finish by validating the expected result, much like other BDD frameworks. One of its advantages is that it is cross-platform, allowing you to run tests directly from the command line and, through different parameters, get reports of the test run. swiftenv is a tool that enables you to install and manage different versions of Swift on the same system, so you can validate your code against each version and be sure there are no version-related problems.
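The shape of this style of testing is easy to reproduce with nothing but the standard library. The sketch below is loosely modeled on the describe/it/expect vocabulary of BDD frameworks like Spectre, but `it` and `expect(_:toEqual:)` here are hand-rolled stand-ins, not Spectre’s actual API; the point is that such a test runs from the command line with no Xcode involved.

```swift
import Foundation

// A hand-rolled expectation helper in the BDD style (not Spectre itself).
struct ExpectationError: Error { let message: String }

func expect<T: Equatable>(_ actual: T, toEqual expected: T) throws {
    if actual != expected {
        throw ExpectationError(message: "\(actual) is not \(expected)")
    }
}

// Run one named test case, reporting pass/fail to the console.
func it(_ name: String, _ body: () throws -> Void) -> Bool {
    do { try body(); print("PASS \(name)"); return true }
    catch { print("FAIL \(name): \(error)"); return false }
}

// Code under test: a tiny string utility.
func slugify(_ title: String) -> String {
    title.lowercased().replacingOccurrences(of: " ", with: "-")
}

let passed = it("slugifies a title") {
    try expect(slugify("Hello World"), toEqual: "hello-world")
}
```

Saved as `main.swift`, this runs with a plain `swift main.swift` on any platform with a Swift toolchain, which is exactly the workflow swiftenv helps manage across Swift versions.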
The next talk was carried out by Pedro Piñera and was about how to test code when the application has to make network connections to handle data. The first thing he did was explain the difference between unit testing, integration testing, and acceptance testing (the focus of the talk). The problem with network connections is that they are usually asynchronous and suffer all sorts of availability and connectivity issues. His proposal for network testing is a framework called Szimpla. It is based on combining UI tests with the snapshot testing technique, but instead of recording the screen, we ‘freeze’ the network request and turn it into a JSON file that can later be used to validate or compare the expected results of the test.
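The “freeze a request as JSON, then compare” idea can be sketched with Codable. To be clear, `RecordedRequest`, `freeze`, and `matches` are illustrative names I am inventing here, not Szimpla’s real API; the sketch only shows the underlying mechanism of recording a request snapshot and validating later requests against it.

```swift
import Foundation

// Illustrative request-snapshot mechanism (not Szimpla's actual API).
struct RecordedRequest: Codable, Equatable {
    let method: String
    let path: String
    let body: String?
}

// "Freeze" a request into JSON data, as if writing a snapshot file.
func freeze(_ request: RecordedRequest) throws -> Data {
    try JSONEncoder().encode(request)
}

// Validate a live request against the frozen JSON snapshot.
func matches(_ request: RecordedRequest, snapshot: Data) throws -> Bool {
    let recorded = try JSONDecoder().decode(RecordedRequest.self, from: snapshot)
    return recorded == request
}

let login = RecordedRequest(method: "POST", path: "/login", body: "user=ana")
let snapshot = try! freeze(login)
print(try! matches(login, snapshot: snapshot))            // true
let tampered = RecordedRequest(method: "POST", path: "/login", body: nil)
print(try! matches(tampered, snapshot: snapshot))         // false
```

Because the comparison happens against a recorded file rather than a live server, the test sidesteps the availability and connectivity problems mentioned above.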
Testing on Apple Watch and in Application Extensions
Then it was the turn of Boris Bügling (known as @neonacho), who spoke about testing on Apple Watch and in Application Extensions. Basically, what he came to say is that this area currently has no official testing system from Apple, so to carry out code testing here you need to resort to some tricks and recommendations. To carry out the tests, he uses Cedar (from Pivotal), a BDD framework quite well adapted to the needs of testing on different types of devices. Once the testing framework is chosen, Boris recommends modeling the entire application with the MVVM pattern to decouple all our business logic from system classes and dependencies, and, once the project is correctly modeled, converting it into a library or framework to ensure the tests are independent of the system.
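In sketch form, the MVVM decoupling Boris recommends means keeping the watch UI trivially thin and putting all presentation logic in a view model with no WatchKit dependency, so it can be tested on any platform. The `StepCountViewModel` type below is my own illustrative example, not something from the talk.

```swift
// A WatchKit-free view model: pure logic, trivially testable anywhere.
struct StepCountViewModel {
    let steps: Int
    let goal: Int

    // Presentation logic the watch interface would merely display.
    var progressText: String {
        "\(steps) / \(goal) steps"
    }

    var goalReached: Bool {
        steps >= goal
    }
}

let model = StepCountViewModel(steps: 12000, goal: 10000)
print(model.progressText)  // 12000 / 10000 steps
print(model.goalReached)   // true
```

Since nothing here imports WatchKit, the type can live in a shared framework, exactly as the talk suggested, and be exercised by any test runner.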
After this talk came Paul Stringer, who talked about acceptance tests. He started by explaining what, for him, are the key points of a good acceptance test: a good combination of automation and living documentation (technically, a test is a piece of self-documenting code) whose execution responds to what is expected. For him, the bigger tests are and the more they try to cover, the harder they become to maintain, update, and correct. To keep maintenance manageable and build a good test system, Paul proposes a tool called FitNesse, which is both an acceptance testing tool and a wiki system.
This tool is a web portal that allows you to define acceptance tests in relation to your source code. The advantage is that all the tests are written in a very “human-friendly” language, allowing non-technical people to write application requirements. The way it works is very simple: it is installed as a standalone application and run from a JAR on the command line, which starts a web server containing plugins to connect with mobile applications. Those plugins are installed in the project and relate the acceptance tests to it. During the talk, we saw how fast it is to write requirements with this tool, along with the validations and corrections made during execution; it is highly recommended.
Code Testing for iOS Using Protocols and View Models
Ayaka Nonaka taught us how to test code using protocols (interfaces in Java or .NET) and view models. In Ayaka’s opinion, a good test system is based on breaking problems down into manageable pieces to build modular systems. Her demonstration consisted of refactoring code by abstracting classes behind protocols and using view models to encapsulate the logic of the operation. With this technique, we saw how to model and decouple the logic within an application and make very generic abstractions that can later become frameworks and be reused in other applications.
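The combination of the two ideas, protocols for abstraction and view models for logic, can be sketched like this. The `FavoritesStore` protocol and `TalkViewModel` are invented names for illustration, not code from the demonstration: the concrete store (which in production might wrap UserDefaults) hides behind a protocol, and the view model’s logic is tested through a lightweight in-memory conformer.

```swift
// Dependency abstracted behind a protocol.
protocol FavoritesStore {
    func isFavorite(_ id: String) -> Bool
    func setFavorite(_ id: String, _ value: Bool)
}

// Lightweight conformer for tests; production code could use UserDefaults.
final class InMemoryFavorites: FavoritesStore {
    private var storage: [String: Bool] = [:]
    func isFavorite(_ id: String) -> Bool { storage[id] ?? false }
    func setFavorite(_ id: String, _ value: Bool) { storage[id] = value }
}

// View model encapsulating the logic, depending only on the protocol.
struct TalkViewModel {
    let talkID: String
    let store: FavoritesStore

    var starSymbol: String { store.isFavorite(talkID) ? "★" : "☆" }

    func toggleFavorite() {
        store.setFavorite(talkID, !store.isFavorite(talkID))
    }
}

let vm = TalkViewModel(talkID: "frp-talk", store: InMemoryFavorites())
print(vm.starSymbol)   // ☆
vm.toggleFavorite()
print(vm.starSymbol)   // ★
```

Because the view model only knows the protocol, the same type could be moved into a shared framework and reused with a different store in another application, which is the genericity the talk was aiming for.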
Introducing Testing in a Legacy Code Application
Last but not least was Michael May, who spoke about how to introduce testing in a legacy code application. This talk was so practical that there were not even slides; we started coding straight away. He began by saying that the best ally for discovering the extent or scope of a method or function is the compiler, because through small changes it can tell us which areas of the application are related and where our changes have an effect. Another tip was to make Core Data objects immutable in order to control possible modifications when introducing tests and making changes. A further recommendation was to extract common code into small functions or methods, gradually modularizing the system and revealing where there is repeated or similar code.
Sometimes this is not intuitive or easy to see because, depending on the complexity of the class, the duplications may not be evident. May’s final recommendation for managing and modularizing code was to make static those methods that do not really depend on self (the keyword an object uses to refer to its own instance in iOS); this eventually allows us to extract these methods into helper classes or categories.
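This last tip can be sketched in a few lines. The `InvoiceViewController` example is my own illustration of the general technique: once the pure part of a method is made static, the compiler guarantees it cannot touch `self`, which makes it trivial to test and safe to move into a helper later.

```swift
import Foundation

final class InvoiceViewController {
    var invoiceTotal: Double = 0

    // Before: an instance method mixing formatting with controller state.
    func refreshLabel() -> String {
        InvoiceViewController.formatted(total: invoiceTotal)
    }

    // After: the pure part extracted as a static method. The compiler now
    // proves it has no hidden dependency on self, so it can later be moved
    // into a helper type or category without surprises.
    static func formatted(total: Double) -> String {
        String(format: "Total: %.2f EUR", total)
    }
}

print(InvoiceViewController.formatted(total: 19.9))  // Total: 19.90 EUR
```

The test for `formatted` needs no view controller instance at all, which is exactly the decoupling that makes legacy code approachable.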
In conclusion, I would like to say that this event about code testing for iOS was very interesting from a technical point of view, because all the talks included practical demonstrations of the topics they covered. This is what really enables us developers to try out these testing techniques and assess them. Although some talks covered the same or very similar topics, I think the speakers managed to take different approaches that made them all useful. If the organizers continue along this line and we are fortunate enough to enjoy new editions with speakers of such quality and knowledge, the CMD+U Conference can become a benchmark.