Why Are Bugs So Common in Software Engineering?
Everyone makes mistakes, but why does there always seem to be a conversation about how many bugs are in software?
There are several kinds of software defects, and all of them have completely different causes.
First, let's start with technical defects. These are common among newer, less experienced developers. For example, suppose you need an application that sums two integers, and the requirement clearly states that. When your developer delivers the software, these things happen:
- You can enter letters, special characters, or decimal numbers; use an improper numeric format; or enter numbers too large for the program to handle. What happened in this case is that the developer didn't validate the input. This is usually the number one technical defect, and the most neglected. There is a saying in software development: "All input is evil until proven otherwise." Proper data validation is the first line of defense against defects.
- The application crashes, stops working, or shows an error. It is also very common to see the message, “Object reference not set to an instance of an object.” This error likewise stems from a lack of experience: the developer is leaving loose ends in the code, so its quality is poor. Error handling strategies should be part of the software design so that this doesn't happen.
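To make both bullets concrete, here is a minimal sketch in Python (the article is language-agnostic; the function name `sum_integers` and the range limit are illustrative assumptions, not the author's code) that treats all input as evil until proven otherwise:

```python
def sum_integers(a_raw: str, b_raw: str) -> int:
    """Sum two integers supplied as untrusted raw input.

    Rejects letters, decimals, malformed numbers, and values
    outside an explicit range instead of crashing later.
    """
    MAX_ABS = 2**31 - 1  # illustrative limit; pick one that fits your domain

    values = []
    for raw in (a_raw, b_raw):
        try:
            # int() raises ValueError for "abc", "1.5", "1,000", etc.
            value = int(raw.strip())
        except (ValueError, AttributeError):
            raise ValueError(f"not a valid integer: {raw!r}")
        if abs(value) > MAX_ABS:
            raise ValueError(f"value out of range: {value}")
        values.append(value)
    return values[0] + values[1]
```

Validating at the boundary and raising a clear error is what turns a mysterious crash ("Object reference not set…") into a handled, user-visible message.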
Technical defects are the easy ones and should never leak into the production environment. If this kind of defect is reaching your production environment, you desperately need software testers to prevent it.
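As a sketch of how testers (or developers writing automated tests) can stop this class of defect before production, here is a hypothetical test suite using Python's `unittest`; `safe_sum` is an invented stand-in for whatever code is actually under test:

```python
import unittest


def safe_sum(a_raw: str, b_raw: str) -> int:
    # Hypothetical function under test: parses and sums two
    # integers, rejecting anything that is not a plain integer.
    return int(a_raw) + int(b_raw)


class SafeSumTests(unittest.TestCase):
    def test_valid_input(self):
        self.assertEqual(safe_sum("2", "3"), 5)

    def test_rejects_letters(self):
        # int() raises ValueError on non-numeric text
        with self.assertRaises(ValueError):
            safe_sum("abc", "3")

    def test_rejects_decimals(self):
        with self.assertRaises(ValueError):
            safe_sum("1.5", "3")
```

Run with `python -m unittest`. Once such tests exist, a regression of the "didn't validate the input" kind fails in CI instead of surfacing in production.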
Usually, technical defects are transient; with a good strategy, training, and follow-up, they tend to disappear or happen less often. Developers should gain experience in preventing them, and testers should learn to detect them before they hit the production environment. If not, your team is only focused on writing code, not on learning from its mistakes or improving the quality of the delivered code.
Second are business rule defects. These are the difficult ones, the ones that make projects fail or take far longer than expected to complete. Would you think it fair if somebody asked you for a blue ball and, when you brought one, said, “It is blue, but I need navy blue, and that is not navy blue”? Or, even worse, “You know what, it was actually a red one we needed,” or, “After trying to fit the ball in place, we see that we actually needed a cube”?
The problem with business rules is that requirements gathering is not as simple as it seems, and it tends to be badly neglected. It takes working time from several roles to properly gather the requirements from the users and stakeholders, create stories or use cases (pick your preferred weapon), develop the software based on those requirements, test that the software meets them, and deploy to production. It is an expensive way for the user to find out that the software is well built but doesn't do what is needed. Essentially, all the work done is converted to waste and must be done over.
Some common scenarios: 1) The team has no product owner or business analyst who owns this responsibility; 2) The analysis is done superficially, at a high level (or, as others call them, low-resolution requirements), leaving the developers to work out the details; or 3) The developers end up doing the requirements analysis themselves during the development phase. All of these are perfect ingredients for a lot of business rule defects.
During the development phase, developers have no way to tell whether they are building defective software. A defect in the requirements is a silent time bomb, so proper requirements gathering and analysis is crucial. In the best case, the user acceptance test (UAT), where the user reviews the software against the requirements, catches these defects before deployment; in practice, it is common to discover them in production.
The more phases of the development process a defect passes through unnoticed, the more expensive it becomes to fix. Formal methodologies put proper controls in place to catch this kind of defect before it reaches the development phase, but Agile methodologies don't have this safety feature. That is why Agile is fine for small or medium projects, but for large ones it is better to use a formal methodology to minimize the risks.
Part of the project manager's or team lead's job is to keep a separate follow-up list of every issue that occurred in the production environment (not only software defects), for two reasons. First, to make sure no issue or defect is lost in the work pile, and that each one is properly resolved as soon as possible, since it is a production issue. Second, to analyze the issues and defects and create strategies, such as developer and tester training, process implementation, and sometimes user education, that prevent the same issue or defect from happening again in the future.
So, as you can see, it is not a single cause but several that, without proper training and management, lead to delivering defective software. Each needs to be addressed separately, and none should ever be neglected.
Published at DZone with permission of Alfredo Pinto . See the original article here.