Technical Debt: Avoiding Technical Bankruptcy
The issues that come with technical debt can be avoided, even in a mature product.
This is part two of my piece on technical debt. The first explored what technical debt is and what it means for your product. Now, it’s time to talk about how to avoid these issues and how to tackle that debt in a more mature product.
If your product is live, you could visualize it as an iceberg. It looks fine from afar, but the technical debt beneath the surface could sink the product, or even the organization.
Avoiding Technical Bankruptcy
Here are a few recommendations based on past experiences. It's all about adopting and adapting best practices.
Modularize your system architecture and take a firm stance on technical debt in new components or libraries in the application.
Prevent bugs with automated tests and continuous integration.
Don’t allow people to build release software from local machines; use Continuous Integration instead.
When a new bug is found, write a new test to reproduce it and then fix the issue. If that bug ever resurfaces, the automated test will catch it before customers do.
Ingrain quality in your culture to avoid reckless and inadvertent technical debt.
Encourage clean code and get people to work together in pair programming instead of in isolation.
Get all of your engineers a copy of Clean Code.
Make sure your team is participating in code reviews.
Use tools to build, analyze, and find areas of concern in your system, such as Sonar, Structure101, and Jenkins.
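The "write a test to reproduce the bug" point above can be sketched in a few lines. The `format_price` function here is hypothetical, standing in for any code that shipped with a bug: first a test reproducing the report, then the fix, and the test stays behind as a regression guard.

```python
# Hypothetical bug: format_price once crashed on zero amounts.
# The regression test below was written first (failing), then the fix applied.

def format_price(amount_cents: int) -> str:
    if amount_cents < 0:
        raise ValueError("amount must be non-negative")
    # Fixed: divmod handles the zero case the original math choked on.
    euros, cents = divmod(amount_cents, 100)
    return f"€{euros}.{cents:02d}"

def test_zero_amount_regression():
    # Added when the bug was reported; fails on the old code, passes now.
    assert format_price(0) == "€0.00"

def test_normal_amount():
    assert format_price(1250) == "€12.50"
```

Run under any test runner (pytest will pick up the `test_` functions automatically); if the bug ever resurfaces, CI catches it before customers do.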
Addressing Tech Debt
There are three common approaches that teams take to try to address technical debt.
1. A Technical Backlog
Maintain a technical backlog: many sprints' worth of technical debt items targeting problem areas in the system. This becomes a laborious, drawn-out process, and it's very hard to show progress and value to stakeholders.
2. Allocate a Percentage of the Capacity
Allocate a percentage of the team's capacity to addressing technical issues. It's hard to show the value that this slice of work brings to the product. Features normally take precedence over technical tasks, so the debt work ends up continually deprioritized unless you ring-fence the allocation.
3. Fixing Debt While Adding Value
Each new feature you work on could get tech debt work included in the scope so your product matures as you add new features. This takes discipline, as it can be easy for engineers to continually fix and rewrite code. The scope of rework needs to be agreed upon and should have demonstrable value — whether it's increased code quality, better test coverage, or better performance. It’s hard to estimate how long it will take to deliver that new feature while including a rework. You are going to come across landmines when reworking features (particularly when working with legacy systems).
Fix the Upfront Pain
There is no exact formula for fixing tech debt, so it is important to hear feedback from the team about what they think is going on. Your engineers, who interact with the system day to day, will know where to start, so talk to them and find out what the main points of contention and pain are. Get the team to estimate the effort involved in fixing that pain and frontload the work. The value delivered always has to be higher than the cost and effort of fixing the pain.
It is worth looking at buying solutions rather than trying to build them yourself. For example, at Phorest, our engineering team uses products such as NewRelic, Datadog, and Logentries. It is more practical for us to stick to our core competencies than to build and host our own logging stack or wrangle monitoring tools to suit our needs.
Quality investments should be a business decision and should be quantitative. You need to ensure they have a positive effect and the tax is paid wisely. We could spend all our time improving the codebase, writing perfect code, but what if nobody uses that feature or it's not a critical area of the system? There is no point in that.
If you have a long, slow deployment time and release cycle, invest in a new deployment stack and infrastructure. AWS, Google Cloud, Puppet, and Docker are technologies that enable you to apply modern deployment patterns to your system.
Manual testing, low release confidence, and the continual breaking of features (regression) can be addressed by improved automated testing and coverage. Low test coverage makes things hard to change: you break stuff without noticing. Increase your coverage with automated acceptance tests to combat this issue.
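An acceptance test exercises the system through its public behaviour rather than its internals, which is what gives you release confidence. A minimal sketch, with an entirely hypothetical `BookingSystem` standing in for your application:

```python
# Acceptance-style test: drive the system through its public API only.
# BookingSystem is a hypothetical stand-in for the application under test.

class BookingSystem:
    def __init__(self):
        self._free_slots = {"10:00", "11:00"}
        self.bookings = {}

    def book(self, client: str, slot: str) -> bool:
        """Book a slot for a client; refuse if the slot is taken or unknown."""
        if slot not in self._free_slots:
            return False
        self._free_slots.remove(slot)
        self.bookings[client] = slot
        return True

def test_client_can_book_a_free_slot():
    system = BookingSystem()
    assert system.book("Anna", "10:00") is True
    # The same slot cannot be double-booked (guards a past regression).
    assert system.book("Ben", "10:00") is False
```

Because the test never touches `_free_slots` directly, the internals can be refactored freely as long as the behaviour holds.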
Messy code is hard to maintain and makes adding new features slow. Removing duplicate code and refactoring complex areas will address many issues. It needs to be done through automated testing and TDD, though, or else you will break more features. Never touch code without ensuring it has unit tests covering it.
Continuous Integration and automated build workflows remove manual tasks and increase productivity. This saves time and reduces the risk of human error. Jenkins is our choice for this at Phorest, but there are plenty of other options.
Introducing analysis tools gives you the ability to make decisions on code quality. You can automatically break the CI build if quality degrades, which helps keep standards high.
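The "break the build on degraded quality" idea reduces to a simple gate function. Real tools like SonarQube expose similar metrics through their own APIs; the report shape and threshold below are assumptions for illustration.

```python
# Toy quality gate: decide whether a build should pass based on metrics
# from an analysis tool. The metric names and threshold are hypothetical.

COVERAGE_THRESHOLD = 80.0  # assumed minimum line coverage, in percent

def quality_gate(metrics: dict) -> bool:
    """Return True if the build should pass the quality gate."""
    return (
        metrics.get("line_coverage", 0.0) >= COVERAGE_THRESHOLD
        and metrics.get("critical_issues", 0) == 0
    )

# In CI, a False result would map to a non-zero exit code, failing the build.
healthy = quality_gate({"line_coverage": 86.5, "critical_issues": 0})
degraded = quality_gate({"line_coverage": 62.0, "critical_issues": 3})
```

Wiring this into the pipeline means quality regressions surface immediately, instead of accumulating silently as more debt.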
Many great tech companies such as Spotify, Gilt, and even Amazon have had huge success by embracing microservices and breaking away from a traditional monolithic architecture.
Microservices are isolated services in your product that are each responsible for performing a single function. This is a smart approach when trying to tackle tech debt, as it alleviates many of the points of contention I have discussed. It breaks the tech stack into small autonomous functions and allows the product to change direction and iterate fast. Deploying or changing one part of the system no longer involves touching everything else, because the services are isolated.
Maintaining and testing small codebases is a lot simpler. If a microservice isn't scaling well with your product, you can rewrite it without much disruption to other areas of the system, because services communicate over an HTTP contract. If you have a problematic monolithic application, you could start rewriting it or breaking functionality out into microservices.
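The HTTP-contract point is the crux: as long as consumers only see requests and responses, the service behind them can be rewritten freely. A minimal sketch using only the standard library, with a hypothetical single-purpose "pricing" service and made-up sample data:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Hypothetical pricing microservice: one function, one HTTP contract.
class PricingHandler(BaseHTTPRequestHandler):
    PRICES = {"haircut": 30, "colour": 75}  # assumed sample data

    def do_GET(self):
        service = self.path.lstrip("/")
        if service in self.PRICES:
            body = json.dumps({"service": service,
                               "price": self.PRICES[service]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):  # keep the example quiet
        pass

# Serve on an ephemeral port in a background thread, then act as a consumer.
server = HTTPServer(("127.0.0.1", 0), PricingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

response = json.loads(urlopen(f"http://127.0.0.1:{port}/haircut").read())
server.shutdown()
```

The consumer depends only on the JSON shape of the response, so swapping the handler's internals (or rewriting the service in another language entirely) leaves every other part of the system untouched.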
That’s a wrap on technical debt. I hope some of my past experiences and advice can help you in dealing with technical issues you might come across with your product. Here’s my Twitter handle if you’d like to get in touch to discuss this topic more.
Published at DZone with permission of John Doran . See the original article here.