Software is like a house.
According to Lev Lesokhin, senior vice president of strategy and analytics at CAST, a house has to be continuously maintained or you may start to notice, as its paint begins to peel, that its foundation is not as stable as it once was. The analogy could not be truer for big data environments: they are inherently distributed systems, and the probability of issues such as partial failure or unexpected latency only increases with scale.
Constructing a scalable big data system is a formidable software architecture challenge for engineers and program managers. And while there is a trend for development teams to focus on analytics that go from data to decision, there also needs to be a focus on quality analytics that look directly at source code. Without that, big data management can quickly become problematic.
IT managers should therefore put a quarterly clean-up on their teams’ checklist to head off future issues. This is especially prudent if you don’t want to end up spending hours answering help desk tickets and customer complaints, hours that could otherwise go toward developing new features.
To keep this from becoming your IT team’s reality and stalling innovation, there are two factors to keep an eye on: reliability and technical debt.
U.S. organizations are losing up to $26 billion a year in revenue due to downtime. Much of that loss can be prevented by implementing routine application benchmark testing.
For example, consider the recent discovery at the Washington State Department of Corrections, where more than 3,200 prisoners were accidentally released over a span of 12 years because of a software glitch. Some officials were aware of the issue but did not handle it adequately, allowing the system to break down and potentially dangerous criminals to go free. It is therefore imperative that IT managers care not only for the output of data in their systems, but also for their structural quality. Establishing a reliability benchmark gives managers visibility into their system’s stability and data integrity before it is too late, exposing vulnerabilities in critical systems that could disrupt services and damage customer satisfaction and the organization’s reputation.
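As a rough illustration of what a reliability benchmark can look like in practice, the sketch below times an operation repeatedly and reports its 95th-percentile latency and error rate, the kind of figures you would track from release to release. The function names and trial count are illustrative assumptions, not any particular tool’s API:

```python
import statistics
import time

def benchmark(operation, trials=100):
    """Run an operation repeatedly, recording latency and failures."""
    latencies, failures = [], 0
    for _ in range(trials):
        start = time.perf_counter()
        try:
            operation()
        except Exception:
            failures += 1
        latencies.append(time.perf_counter() - start)
    # statistics.quantiles with n=20 yields 19 cut points; index 18 is p95.
    return {
        "p95_latency_s": statistics.quantiles(latencies, n=20)[18],
        "error_rate": failures / trials,
    }

# Example: benchmark a stand-in for a real service call or pipeline stage.
report = benchmark(lambda: sum(range(10_000)))
```

In a real system the operation under test would be a service call or data-pipeline stage, and the resulting numbers would be compared against an agreed baseline before each release.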
Besides measuring reliability, IT managers should also measure the technical debt present in their systems. Technical debt can be defined simply as the accumulated cost and effort needed to fix problems that remain in code after an application’s release. The average-sized application carries around $1 million of technical debt, and according to Deloitte, CIOs are starting to turn much of their focus to handling technical debt by building business cases for core renewal projects, preventing business disruption, and prioritizing maintenance work.
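The business case for paying debt down usually rests on a simple remediation-cost estimate: count the outstanding violations, weight them by the effort to fix each, and multiply by a developer rate. A minimal sketch of that arithmetic, where every count, fix time, and the hourly rate is an assumed figure for illustration only:

```python
# Hypothetical violation counts by severity for one application.
violations = {"high": 40, "medium": 250, "low": 900}
# Assumed average hours to remediate one violation of each severity.
fix_hours = {"high": 8.0, "medium": 2.0, "low": 0.5}
HOURLY_RATE = 75  # assumed blended developer cost in USD

# Debt = sum over severities of (count x fix effort x rate).
debt = sum(count * fix_hours[sev] * HOURLY_RATE
           for sev, count in violations.items())
print(f"Estimated technical debt: ${debt:,.0f}")
```

Even a coarse estimate like this gives business stakeholders a dollar figure to weigh against the cost of new features.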
Despite this new awareness, however, they still struggle to estimate the actual level of technical debt they carry and to make the case for paying it down to business stakeholders. In big data environments, technical debt is made worse by the urgency that often comes with trying to make sense of scattered information sources. To deal with this, IT managers should have their teams put structural quality tools in place that measure the cost of remediating and improving core systems, taking into account both the code quality and the structural quality of the organization’s software.
Structural quality metrics help identify code defects that pose a risk to the business if they are not fixed before a system’s release; the cost of remediating them gets tacked onto future releases. Code quality measures, meanwhile, account for coding practices that make code more complex and harder to change in the future, which is how technical debt manifests itself.
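One widely used code quality measure of the “complex and hard to change” kind is cyclomatic complexity, roughly one plus the number of branch points in a function. The sketch below approximates it for Python source using only the standard library’s ast module; the set of branch node types counted here is a simplification of what dedicated analyzers track:

```python
import ast

# Node types treated as branch points (a simplified selection).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.Try,
                ast.BoolOp, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> dict:
    """Approximate per-function complexity: 1 + number of branch points."""
    tree = ast.parse(source)
    scores = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            branches = sum(isinstance(n, BRANCH_NODES)
                           for n in ast.walk(node))
            scores[node.name] = 1 + branches
    return scores
```

Tracking a score like this across releases makes the “harder to change” trend visible long before it shows up as missed deadlines.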
With adequate estimates of technical debt, businesses can plan and allocate resources for future projects to pay it down. Business success relies ever more on the ‘software house’ that IT builds, so visibility into these systems is key to ensuring they are stable and won’t pose serious risk in the future.