Using Gray Box Test Automation With Unity Games
Learn about the potential benefits of implementing a test automation solution for your Unity 3D mobile game application.
Without access to source code, there are limited ways to develop test automation solutions for mobile games. With the increasing popularity of Unity amongst mobile game developers, we at Bitbar began to explore gray box test capabilities with Golden Rat Studios. In collaboration with Golden Rat Studios, we’ve engineered a test automation framework with Unity tools to supplement an image recognition-based approach.
With many of Bitbar's Professional Services (PS) projects, where we help develop test automation solutions, we’ve been using a black box approach with image recognition. However, the approach takes a lot of foresight, requires persistent maintenance, and presents various technical challenges:
Processing images through the image recognition library takes time and may not be fast enough to support dynamic test cases.
Image references take a considerable amount of time to produce and optimize before the approach works reliably and maps coordinates accurately for Appium to reference.
It is cumbersome to manage a large database of images with such a high variance of mobile device screen sizes and resolutions on the market.
We have been able to work through these challenges with brute force, but it no longer has to be this way. The PS team started looking into another approach that the Unity mobile automation team has experimented with. They found that “the easiest way to communicate information from the game to the tests scripts is to read them from logs. Logcat for Android or syslog for iOS. In Unity3d, you can do this simply by using the Debug class. If you have a lot of information you need to read from the game to the test-scripts you can use a standard format such as JSON. The itest-niacin libraries used in this repository can handle the JSON parsing in your tests scripts.”
“For native UI-elements of both Android and iOS, this is solved by the platform and you can use Appium to find elements on the screen. For games, this gets more difficult as the platform cannot help you find the elements. A solution which will work for many cases (especially menus and 2D games) is to instrument the game to write positions of the elements/objects into the device log. Once again using a standard format such as JSON is recommended.” -Source.
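Following the quoted guidance, the test-script side of this pattern boils down to scanning raw log lines for a known tag and parsing the JSON that follows it. Here is a minimal sketch of that parsing step; the `UNITY_TEST:` tag and the log-line format are assumptions for illustration, not a format from the article.

```python
import json

# Hypothetical log-line format: the game prefixes JSON payloads with a
# marker tag so test scripts can pick them out of the device log.
MARKER = "UNITY_TEST:"  # assumed tag, not from the original article

def extract_payloads(log_lines):
    """Return the parsed JSON payloads found in raw device log lines."""
    payloads = []
    for line in log_lines:
        idx = line.find(MARKER)
        if idx == -1:
            continue
        try:
            payloads.append(json.loads(line[idx + len(MARKER):]))
        except ValueError:
            pass  # ignore truncated or malformed payloads
    return payloads

# Example logcat-style lines
lines = [
    '01-01 12:00:00.000 I Unity: UNITY_TEST:{"event": "card_burned"}',
    "01-01 12:00:01.000 I ActivityManager: unrelated noise",
]
print(extract_payloads(lines))  # [{'event': 'card_burned'}]
```

On Android the raw lines would come from logcat (for example via Appium's `driver.get_log("logcat")`), and on iOS from the system log, as the quote describes.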
This concept/example helped us tackle a problem that was difficult to solve with just image recognition. Explorer Corporation Limited has been kind enough to share some screenshots of their Baccarat game to help illustrate the challenges we encountered during the test development process.
At this point of the game, our test is waiting for an opportunity to place a bet at the Baccarat table. With a Unity Debug class, the test can look for Unity messages within the device logs to signify a card is being burned, prompting the test to place a bet thereafter.
Here, an Appium function waits to see “burnbetting” in the device logs before it clicks the desired betting area of the Baccarat table.
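A sketch of that wait-then-act step might look like the helper below. The polling helper is self-contained; the Appium calls in the comment use the standard Python-client API (`get_log`, `tap`), but the tap coordinates are placeholders, not values from the article.

```python
import time

def wait_for_log_marker(fetch_logs, marker, timeout=10.0, interval=0.5):
    """Poll device logs until `marker` appears, or raise on timeout."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if any(marker in line for line in fetch_logs()):
            return True
        time.sleep(interval)
    raise TimeoutError("never saw %r in the device log" % marker)

# With a real Appium session (hypothetical coordinates):
#
#   logs = lambda: [e["message"] for e in driver.get_log("logcat")]
#   wait_for_log_marker(logs, "burnbetting", timeout=10)
#   driver.tap([(540, 1200)])  # tap the chosen betting area
```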
Even with other players at the table, who may be doing other things, the test is designed to wait until a card is burned and then place a bet on the table. With OpenCV and AKAZE, the image recognition processor had difficulty placing a bet within the 10-second time limit while Appium juggled other processes at the same time.
The function also checks the log to see how much the player wins/loses. The image recognition library was unable to do this consistently because the value appears on screen for only a short amount of time.
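Because the win/loss value vanishes from the screen quickly, reading it from a logged JSON payload is far more reliable than screen matching. A minimal sketch, assuming a hypothetical `RESULT:` line format that is not from the article:

```python
import json

# Hypothetical result line: the game logs the round outcome as JSON so
# the test can assert on it even after the on-screen value disappears.
line = 'I Unity: RESULT:{"player": "p1", "net": -25}'

payload = json.loads(line.split("RESULT:", 1)[1])
assert payload["net"] == -25  # the test can check the win/loss amount
print("round net change:", payload["net"])
```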
This implementation has proven to be a simple yet elegant solution to a problem that was challenging to address with only image recognition capabilities. However, this strategy takes close coordination between development and QA. To adopt this type of approach, an SDET would need a clear understanding of Unity's capabilities and how Appium works.
Image recognition is still a good option for automating games that are turn-based or that do not have much variation in when and where user input is required. With our Hill Climb Racing example, a gas pedal is always on screen, so it’s easy to find with the image recognition processor. Elements and objects that appear and disappear, on the other hand, are difficult to handle without careful planning.
There are still other test cases where Unity can help make test development more efficient. The Unity console can also log x,y coordinates for any clickable object on the screen, a capability the Unity team has already validated. These logs would make tests more scalable and reduce the need to manage a large database of images in order for tests to work on a wide variety of devices.
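With logged coordinates, a test can tap objects by name instead of matching reference images per device. The sketch below assumes a hypothetical `COORD:` log format and object names; only the commented Appium `tap` call is a real API.

```python
import json

# Hypothetical coordinate log: the game writes the screen position of
# each clickable object, so the test taps by name instead of by image.
coord_lines = [
    'COORD:{"name": "bet_button", "x": 540, "y": 1650}',
    'COORD:{"name": "deal_button", "x": 540, "y": 1800}',
]

def coords_by_name(lines):
    """Map object names to (x, y) positions parsed from coordinate logs."""
    objs = (json.loads(l.split("COORD:", 1)[1]) for l in lines if "COORD:" in l)
    return {o["name"]: (o["x"], o["y"]) for o in objs}

targets = coords_by_name(coord_lines)
print(targets["bet_button"])  # (540, 1650)

# With an Appium session, tapping then becomes device-independent:
#   driver.tap([targets["bet_button"]])
```

Because the game reports positions in its own coordinates at runtime, the same test logic works across screen sizes without a per-resolution image database.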
There are surely many other game developers and testers out there who have researched or worked on mobile test automation for games. We’re curious to hear about your experiences with automated mobile game testing. Please share your thoughts and any methods you’ve implemented in the comment section below.
Whether our customers are using black, gray or white box testing, we always encourage them to add important assertions and take plenty of screenshots. Having these test artifacts can help manual testing, localization and documentation efforts. Bitbar Cloud aggregates and organizes all of this information along with video recording and performance metrics to help our customers understand the root cause of issues in an agile way.
Stay tuned for more details as we continue to explore the possibilities with Unity games.
Published at DZone with permission of Lingkai Shao. See the original article here.