Quality Assurance and Software Testing: A Brief History
Ever wondered how we got from the early days of programming to the modern world of Selenium and cloud-based testing? Keep reading for a brief history of quality assurance and software testing.
Developers have been testing software since they first started building it, in the years following World War II. And quality assurance as a whole has a history that stretches back much further than that, of course.
But have you ever wondered how we got from the early days of programming – when developers relied on ad hoc methods for finding bugs in their code – to the modern world of Selenium and cloud-based testing?
Keep reading for a (brief and totally non-exhaustive) history of quality assurance and software testing.
The Origins of Quality Assurance
I could start by describing quality assurance processes in preindustrial societies, long before anyone had ever heard of software. But that would actually require writing a book.
So I’ll just quickly note some things that are probably obvious if you think about them, but that you might take for granted. Before the Industrial Revolution and the advent of modern capitalism, the calculus of quality assurance was different from what it is today. Markets were usually monopolized by guilds. Without free market competition, assuring quality wasn’t necessarily important for keeping customers happy. And in the absence of strong governments, attempts by the state to prevent defects in products tended to be rare or ineffectual.
That is why, for example, bakers in eighteenth-century France could get away with cutting their flour with sawdust or lime, and selling bread that weighed less than they claimed. But when markets became more open in the nineteenth century, making sure that the things one sold were as free as possible of defects became a means of attracting buyers.
Software Testing in the Early Days
How does that apply today? Let’s jump ahead to the software age.
To understand where software testing and quality assurance fit within the history of software, it’s important to keep in mind that programmers need to fulfill several distinct goals in order to make users happy. One of those is debugging. Another involves configuration testing, or making sure a program works in all of the environments for which it’s designed. Another is assuring user-friendliness. And the list goes on.
It’s also worth noting that, early on, programmers tended to work in small teams. They adhered to the “cathedral”-style approach to software development advocated by Fred Brooks, who argued in his 1975 book The Mythical Man-Month that programming is easiest when projects are small and finite, and when a lot of testing can be done before releasing products to the public.
In the first decades of computing, when cross-platform programming languages like C did not yet exist and programs frequently incorporated assembly code that worked on only one specific type of computer chip, software was rarely designed to run in many different environments. That made configuration testing less important, since there were fewer configurations to test for. Your users’ computers had to be nearly identical to your own, or your software wouldn’t run at all.
Under these conditions, the type of software testing that platforms like Sauce Labs deliver today was done as part of the broader debugging process. With small teams of programmers, relatively few environment variables for a given software program, and little pressure to release code on a frequent basis, an ad hoc approach to software testing worked well enough.
Modern Software Testing
Fast forward to the 1990s and 2000s, however, and quite a bit changed.
IBM’s introduction in 1981 of the PC (and the many clones it spawned) revolutionized hardware. For the first time, at least in the consumer market, programmers could write for a single hardware platform.
PCs in the 1990s were not identical, of course; the specifics of each machine’s hardware and software could vary widely. But programmers faced increasing pressure to release software that worked well on any computer advertised as PC-compatible.
Another change was increasing demand for more frequent software releases. This had many causes: the commercialization of software and businesses’ desire to keep customers happy by providing new and updated products on a consistent basis; the growing importance of the Internet, which provided a much faster way to distribute new versions of programs; and the advent of open source, heralded by projects like Linux. Those projects dispensed with Brooks’s slow-and-steady development mantra, adopting in its place a release-early-and-often approach. (I have not mentioned GNU or “free software” here because GNU originally followed more traditional development methods.)
These changes raised the stakes for software testing. Releasing software that worked on any PC required careful configuration testing of the many possible environment variables. At the same time, the fact that users had come to expect more frequent releases meant that programming teams had to optimize their testing processes so they could deliver faster.
And while the Linux crowd showed that it was possible to develop complex software by releasing code to the public and asking users to help find defects, the companies that started trying to sell Linux in the early 1990s quickly learned that better configuration testing and other quality assurance work was needed to make open source commercially viable. Red Hat didn’t become a billion-dollar company by inventing Linux – it became successful by assuring that its versions of Linux actually worked under many different hardware and software configurations, then selling support services for Linux on those platforms.
The Future of Software Testing
The pressures described above are what ushered in tools like Selenium. But today, developers face a new set of needs, and those needs require even more sophisticated innovations.
For instance, take Continuous Delivery, which puts enormous pressure on programmers to test and update code on an ongoing basis. Testing in occasional, discrete batches no longer works in the age of Continuous Delivery; tests have to run automatically, alongside every change.
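To make that concrete, here is a minimal sketch of automated testing as a release gate: a test suite that runs programmatically on every change, with the result deciding whether delivery proceeds. The test case and the `run_gate` function are hypothetical stand-ins, not any particular tool's API.

```python
import unittest


class CheckoutTests(unittest.TestCase):
    """Hypothetical stand-in for a real application's test suite."""

    def test_total_includes_tax(self):
        # Trivial placeholder for real business logic.
        subtotal, tax_rate = 100.0, 0.2
        self.assertAlmostEqual(subtotal * (1 + tax_rate), 120.0)


def run_gate():
    """Run the suite programmatically; a delivery pipeline would call
    this on every commit and block the release if it returns False."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(CheckoutTests)
    result = unittest.TestResult()
    suite.run(result)
    return result.wasSuccessful()


if __name__ == "__main__":
    print("release allowed" if run_gate() else "release blocked")
```

In a real pipeline, a CI server would invoke this kind of gate automatically on every push rather than waiting for a scheduled test phase.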
The advent of mobile computing, IoT devices, and the like also means that environments vary more widely than ever. Yet a single program often has to run across all of these platforms. That means more testing, too.
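One common answer to that variety is matrix testing: running the same suite once per combination of environment attributes. The sketch below uses an invented platform/version matrix and a placeholder `check_app` function; a real harness would provision each environment (locally, in containers, or in a cloud grid) instead of simulating a pass.

```python
from itertools import product

# Hypothetical environment matrix; a real one would list actual
# browsers, OS versions, device types, and so on.
PLATFORMS = ["windows", "macos", "linux"]
RUNTIME_VERSIONS = ["3.10", "3.11", "3.12"]


def check_app(platform, version):
    """Stand-in for running the full test suite in one environment."""
    return {"platform": platform, "version": version, "passed": True}


def run_matrix():
    """Run the suite once per combination in the configuration matrix."""
    return [check_app(p, v) for p, v in product(PLATFORMS, RUNTIME_VERSIONS)]


results = run_matrix()
```

Note how quickly the matrix grows: three platforms times three versions already means nine full test runs, which is exactly the pressure that makes cloud-hosted test grids attractive.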
Fortunately, developers are now better equipped to handle these pressures. The cloud has made it easy to offload testing from local environments, and make it scale. And parallel testing allows programmers to test software much faster than they could in the past.
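The speedup from parallel testing can be sketched with standard-library threads: independent test cases that each take a fixed amount of wall-clock time finish in roughly the time of one case when run concurrently, rather than the sum of all of them. The check names and sleep-based "tests" here are illustrative stand-ins.

```python
import time
from concurrent.futures import ThreadPoolExecutor


def slow_check(name):
    """Stand-in for one independent test; sleeps to mimic real work."""
    time.sleep(0.1)
    return (name, True)


CHECKS = [f"case-{i}" for i in range(8)]


def run_parallel(workers=8):
    """Run all checks concurrently; serially these would take ~0.8s."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(slow_check, CHECKS))


start = time.perf_counter()
parallel_results = run_parallel()
parallel_time = time.perf_counter() - start
```

Cloud testing platforms apply the same principle at a larger scale, fanning test cases out across many remote machines instead of local threads.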
So new problems have led to new answers. And it’s a safe bet that this trend will hold true whenever the next programming revolution rolls around.
Published at DZone with permission of Chris Tozzi, DZone MVB. See the original article here.