A few years ago, I joined a young startup and, within the first few weeks, it became apparent that writing automated software tests was considered optional. A senior engineer even said to me, “I don’t believe in automated software testing.” I was shocked and wondered what I had gotten myself into. After a few more weeks I realized that my teammates and the management team did not understand a) the purpose of software testing and b) what constitutes an effective software test. Without understanding both, it is hard to imagine software testing having much value.
Purpose of Software Testing
So what’s the purpose of software testing? The definitions you will often encounter are “to minimize defects” or “to catch defects early.” These are developer-centric definitions, and we as developers tend to repeat them to non-developers. But what if we defined software testing as “making software simple, efficient, and stress-free for the people developing, selling, supporting, and buying it”? As developers, we want to minimize defects and catch them as early as possible, but software testing is so much more than that. It is about having an outward focus and serving internal and external clients.
The software business is not like most industries. We are not like a real estate developer, who is finished with a construction project once it’s completed, or a widget manufacturer, who moves from one widget to the next in the pipeline. We are expected to maintain and enhance the same software, often without the luxury of comprehensive requirements or the adequate development and testing cycles afforded to other industries. We face constant changes to the same software by different people on our team and even by new team members. We have to develop mechanisms that make the development process simple, efficient, and stress-free. Software testing is that mechanism. It’s not optional, and it’s not something you do only when you’re in a good mood and have nothing else to do. It’s a process that should be part of your development, support, and sales workflows, and a success criterion for your software products or services.
Not convinced? Well, imagine being part of a sales team trying to demo and sell buggy software, or trying to resell a client to whom you’ve sold buggy software in the past. At best, the client will require a demanding Service Level Agreement (SLA) in the contract. But more likely than not, they will select your competitor or politely entertain you until they find a suitable replacement for your product or service.
Maybe you’re in a support role responsible for diagnosing and resolving client support requests. How do you diagnose software that lacks tests? You can either build your own ad hoc test cases and hope they don’t become stale and outdated, or you can ask the software development team for help and hope someone on the team is knowledgeable. Doing your own thing or constantly asking the development team for help is neither simple nor efficient. It certainly doesn’t scale as the organization grows, nor does it withstand the test of time given the tech industry’s turnover rate.
We have all purchased defective products, and it is not a pleasant experience. Think about how it made you feel. Were you upset? Angry? Did you try to get a refund? Did you end up buying a competitor’s product? We try to avoid that sort of experience, often by soliciting recommendations from people we trust or reading online reviews. We also share our experiences with friends and write online reviews to warn others. In the long term, defective products don’t do well in the marketplace, and neither do those who produce them.
Or perhaps you are a developer who joins a team that doesn’t test its software, or half-heartedly tests only the “critical parts.” How long will it be before you start hearing “It works on my machine” or “It’s broken,” or noticing an ever-increasing backlog? Not very long. Even worse, every day you come into work you will be reminded of an inefficient and ineffective development process. Nothing is more demoralizing than working on buggy software, and it won’t be long before you realize you are a hamster on a wheel, looking for a way off.
Effective Software Tests
If you have been a software developer long enough or have worked at multiple organizations, you have probably heard tests referred to as unit test, sanity test, acceptance test, performance test, functional test, integration test, system test, regression test, stress test, security test, component test, black-box test, gray-box test, white-box test, validation test, end-to-end test, verification test, smoke test, scenario test, contract test, intake test, alpha test, beta test, destructive test, accessibility test, concurrent test, usability test, etc.
What the heck is a sanity test? Aren’t all tests sanity tests? What is the difference between a functional test and an integration test? Or a system test and an end-to-end test? There does not seem to be consensus on what to call tests between teams, let alone between organizations. Worse, what constitutes a functional test at one organization is called a sanity test at another. We do not have a shared lexicon when we communicate, so how can we define what constitutes an effective test?
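To see how slippery these labels are, consider a small sketch (the class and test names below are purely illustrative, not from any real codebase). One team might call this a unit test because it verifies a single behavior; another, a functional test because it checks a requirement; a third, an integration test because it exercises two classes together with no mocks. The test itself is identical in every case:

```python
# Illustrative example: the same test, three plausible labels.

class TaxCalculator:
    """Computes a flat 10% tax (hypothetical business rule)."""

    def tax_for(self, amount: float) -> float:
        return amount * 0.10


class OrderTotaler:
    """Totals an order, delegating tax to a real collaborator (no mock)."""

    def __init__(self) -> None:
        self.tax = TaxCalculator()

    def total(self, items: list[float]) -> float:
        subtotal = sum(items)
        return subtotal + self.tax.tax_for(subtotal)


def test_total_includes_tax() -> None:
    # Unit test? Functional test? Integration test? The answer depends
    # on which organization you ask -- the code does not change.
    total = OrderTotaler().total([10.0, 20.0])
    assert abs(total - 33.0) < 1e-9  # 30.00 subtotal + 3.00 tax


test_total_includes_tax()
```

The label we attach changes nothing about what the test does or whether it is effective, which is exactly why the proliferation of terminology adds confusion without adding value.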
Language affects the way we think and the decisions we make. Do we really need 27-plus ways to refer to our tests? Can we make software testing easy to understand and define a consistent approach to applying software testing best practices and principles? With Semantic Testing, I believe we can. It is incumbent upon us as an industry to take care in how we communicate, not just with each other but also with a broader audience.
Semantic Testing is the belief that software testing is not idiosyncratic. Testing a web application at Company X is the same as testing a web application at Company Y. The software may differ in form, function, and deployment environment, but how you fundamentally test it is the same. This holds true whether you are testing a library, a SaaS product, an RPC client/server, a cloud service, a mobile application, embedded software, or AI software.
Semantic Testing applies systems thinking to software testing to develop a specification that prescribes habits and practices for testing software effectively. At the heart of Semantic Testing is the concept that all test cases can be grouped into five testing categories, each with a specific goal and its own objectives, strategies, and tactics. These concepts are described further on the Semantic Testing Specification website. For implementations of the Semantic Testing specification, check out Testify Project (an open source Java implementation).
Software was once written for mainframe computers; then came the microcomputer revolution. We started writing software for personal computers, and not long after came the mobile device revolution. Now the era of the Internet of Things is upon us, and software is being written for everything from toaster ovens to medical devices. Given the trend toward invisible and ubiquitous computing, software testing matters more than ever. As software developers, it is incumbent upon us to write effective tests and to explain why software testing is important to a wider audience.