My Bot Cheated and Automated Shooting, I Hacked
The bot has been through a few iterations:
0 - every few seconds it uses an electromagnetic pulse (EMP) and then gives itself a new EMP, therefore having infinite EMP devices
1 - every 100 milliseconds it iterates through the available letters and shoots each one (watch bot 1 beat a human)
2 - fixes a bug in (1) so that it only shoots letters which are used on the level
3 - once it knows which word is currently being shot at, it uses the letters in that word; when it doesn't know the word, it shoots all the letters on the level
4 - only shoots letters on screen, every 10 milliseconds: either the start of a word, or the next letter of the word it is already shooting
5 - only shoots letters on screen which are the start of a word, then focuses on that word to shoot it to bits (doesn't wait 10 milliseconds between letters to finish the word) - 100% efficiency
6 - essentially bot 5, but only waits 2 milliseconds (watch bot 6 in action)
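Bot 5's targeting rule (lock on to a word, finish it without waiting, then pick a new target) can be sketched as a pure function against a mock model of the screen. The real ZType internals are different; every name here is an assumption for illustration:

```javascript
// A minimal sketch of bot 5's targeting logic against a mock model.
// "words" are the enemy words still on screen; "target" is the word
// the bot has locked on to, with "typed" counting letters already shot.

// Pick the next letter to shoot: finish the current target first,
// otherwise lock on to a new word and shoot its first letter.
function nextShot(state) {
  if (state.target && state.target.typed < state.target.word.length) {
    return state.target.word[state.target.typed];
  }
  // No live target: lock on to the first word still on screen.
  const word = state.words.find(w => w.length > 0);
  if (!word) return null; // wave cleared
  state.target = { word, typed: 0 };
  return word[0];
}

// Register a hit so the bot advances along the word (bot 5 does not
// wait between letters of the same word).
function registerHit(state) {
  state.target.typed += 1;
  if (state.target.typed === state.target.word.length) {
    state.words = state.words.filter(w => w !== state.target.word);
    state.target = null; // word destroyed, pick a new one next tick
  }
}

// Drive the loop: shoot letters until the screen is empty.
const state = { words: ["jet", "ace"], target: null };
const shots = [];
let letter;
while ((letter = nextShot(state)) !== null) {
  shots.push(letter);
  registerHit(state);
}
console.log(shots.join("")); // "jetace"
```

The only difference between bot 5 and bot 6 is how often this loop is allowed to start on a new word.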
The different versions represent ‘cheating’ or ‘automating’.
I had to ‘hack’ to get the information I needed to ‘cheat’ with bot zero.
Cheating or Automating?
Bot zero is essentially a cheat bot. It breaks the rules of the game to get ahead. The game only ever allows 3 EMP devices, so I ‘cheated’ when my bot used an EMP device and then generated a new one.
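In principle the EMP 'cheat' is just restoring the counter after use. A minimal sketch against a hypothetical game model (ZType's real state and function names are different; these are assumptions):

```javascript
// Hypothetical game model: the game only ever allows 3 EMP devices.
const game = {
  emps: 3,
  fireEmp() {
    if (this.emps <= 0) return false; // the rule we are about to break
    this.emps -= 1;
    // ...in the real game this would clear the enemies off the screen...
    return true;
  },
};

// The 'cheat': fire the EMP, then amend internal state to give
// ourselves a new one, so the bot never runs out.
function cheatEmp(g) {
  const fired = g.fireEmp();
  if (fired) g.emps += 1; // bypasses the 3-device limit
  return fired;
}

// Fire far more EMPs than the rules allow.
let fired = 0;
for (let i = 0; i < 10; i++) {
  if (cheatEmp(game)) fired += 1;
}
console.log(fired, game.emps); // 10 3
```

Note that this amends internal model state directly, which is exactly what makes it 'cheating' rather than 'automating'.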
I also had to ‘hack’ to build the information I needed to automate.
Bots 1-6 are where the automating steps in.
I’m using the game interface code as the entry point. Rather than amending internal model state, I use the functions that would be called by keyboard events. All I’m really doing is bypassing the keyboard and triggering the game events very quickly.
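Bypassing the keyboard just means calling the function the key handler would have called. A sketch with a hypothetical handler name (ZType's real entry point is different):

```javascript
// Hypothetical game interface: in the browser this handler would be
// invoked by a 'keydown' event; the bot calls it directly instead.
const game = {
  remaining: "rocket", // letters left on the word being shot at
  destroyed: 0,
  onLetterTyped(letter) {
    if (this.remaining[0] === letter) {
      this.remaining = this.remaining.slice(1);
      if (this.remaining.length === 0) this.destroyed += 1;
    }
  },
};

// The bot: no synthetic keyboard events, no screen scraping; it
// triggers the same game code a keypress would, as fast as it likes.
function shootWord(g, word) {
  for (const letter of word) g.onLetterTyped(letter);
}

shootWord(game, "rocket");
console.log(game.destroyed); // 1
```

In the real bots this was driven from a timer (every 100, 10, then 2 milliseconds), but the principle is the same: the keyboard is bypassed, the game events are not.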
Lessons for Testing
I do cheat when I’m testing.
change values in the database,
amend messages, and send messages to the backend that the front end would never send
All to achieve my aims.
But with ‘cheating’ comes ‘more risk’.
There is a risk that I put the application in a state it would never get to in the real world — particularly if I’m amending the application using mechanisms that a ‘user’ could never trigger.
This might be worth the risk, and might expose a valid problem that would be very hard to surface in any other way.
the risk of false positives is higher.
you would have to be able to justify this approach in your testing.
you might find it harder to convince people that any problems are real problems.
Some of the cheats are less risky, e.g. amending a message. This is easier to justify because, in theory, if it is a Web application then any user can feed the communication traffic through a proxy and amend it. Malicious users, like hackers, certainly would, and if the 'cheat' exposes a vulnerability then it is easier to justify. Really, I'm using the external interface; I'm just triggering messages that the GUI would never send.
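Amending a message is the same kind of move a user with a proxy could make. A sketch in which a hypothetical send function is wrapped so the payload is amended before it reaches the backend (all names here are made up for illustration; in a real test this would be an HTTP request through an intercepting proxy):

```javascript
// Stand-in for the transport layer: echoes back what the backend received.
function send(message) {
  return { received: message };
}

// Wrap the send path to amend the message in flight: here we submit a
// quantity that the GUI's validation would never allow a user to enter.
function sendAmended(message, amendments) {
  const amended = { ...message, ...amendments };
  return send(amended);
}

const guiMessage = { item: "widget", quantity: 1 };
const response = sendAmended(guiMessage, { quantity: -1 });
console.log(response.received.quantity); // -1
```

Because this only exercises the external interface, any failure it exposes is one a malicious user could trigger too.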
When we use the external interfaces it is much harder to discount what we are doing.
Bots 1-6 don't cheat.
They have the advantages that:
they don’t have to read the screen, they read the internal models.
they don't have human reaction times, they are as fast as the machine can handle.
This allows us to put the application into states very quickly and push the application to extreme states that the user would never reach.
I can’t imagine a user reaching wave 95 in ZType. In essence, we performed a ‘stress’ test on the application and it performed without any execution issues.
It doesn’t matter what application we are testing, we will probably want to ‘hack’ it to gain more information about how it is ‘really’ implemented. We can use that information to ‘cheat’, which might introduce additional risk to our process. But if that ‘cheat’ is available to a ‘user’ then it is easier to justify the approach. When we automate using the interfaces that the application provides us, the automating itself targets more risk than it introduces (probably).