hello2morrow CEO Alexander von Zitzewitz Talks Technical Debt, Code Quality, and Future Projects
I had the wonderful opportunity to chat with hello2morrow CEO Alexander von Zitzewitz about technical debt, the state of code quality, and future projects.
Recently, I had the great pleasure of chatting with hello2morrow CEO and co-founder Alexander von Zitzewitz. Alexander has been a huge champion of reducing technical debt, and his ground-breaking methodologies have helped many big-name companies, including Ford and BMW, improve their business practices. We had a great talk about code quality, the business advantages of reducing technical debt, and hello2morrow’s current project, “a domain specific language for architectural description.”
You co-founded hello2morrow along with Dietmar Menges in 2005. What inspired you to start hello2morrow?
The idea was inspired by a project I did with my former company for BMW. At that time (1999), I had an integration company that did software projects for other companies. BMW asked us to build something like a simple static analysis tool to check architecture rules. So we did this, and the project was quite a success for BMW, because it got them much better project metrics after they introduced what we call a software measuring facility. That tool was able to check simple architecture rules based on an architecture model described by an XML file. It also computed some metrics and checked for cyclic dependencies. That allowed the definition of project quality gates for BMW. No project could go into production that wouldn’t pass the quality gate. And the quality gates were quite strict: you could not have cyclic dependencies, architecture violations, or outlying metric values.
That project was such a success, and we always had in the back of our minds the idea of creating a new company to build a real product out of it. In 2005 we had this opportunity, discussed the idea, and said, “Great, let’s do it.”
You’ve been a big champion of code quality and very vocal on the topic of technical debt. What’s the current state of code quality?
I would say it’s moving in the right direction, but we still have a long way to go. There’s a growing concern about the quality topic, but it’s usually put on the back burner. As soon as the pressure gets too high, quality just falls off the cliff, and people don’t take it seriously enough. On the other hand, there’s a big movement to use tools like SonarQube, which is really a step in the right direction.
But I think with those tools it’s always important to know what you’re measuring, because if you use SonarQube in its default configuration, it’s mostly basic bug checkers and code style checkers. They can help you detect problems, but a lot of the problems reported are not really critical. If you don’t configure the tools in the right way, you get lots of warnings about things that are not really harming your code quality. And the stuff that really is harming your code quality, structural and architectural debt, which I think is the most toxic form of technical debt you can have, is for the most part not reported at all.
The aspects regarding the architectural structure of a system are not checked, but people say, “Oh, I have SonarQube, we’re all good.” That gives you a false sense of security: your system can be totally unmaintainable even if SonarQube shows mostly decent metrics. But one thing you can already do with SonarQube is check for cyclic dependencies between packages, or similar elements in other languages, and I think that’s a very good step, because too many cycles are a first indicator of rotten architecture and structural problems. It boils down to this: you need to understand what you are measuring and put the focus on the right aspects. I wrote a blog post specifically about that: “Not all technical debt should be treated equally.”
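To make the cycle discussion concrete, here is a minimal sketch of the kind of package-cycle check such tools perform: find groups of packages that depend on each other, directly or transitively. The dependency graph below is a hypothetical example; real tools extract it from the code base’s import statements.

```python
def find_cyclic_groups(deps):
    """Return cyclic groups of packages, i.e. strongly connected
    components with more than one member, using Tarjan's algorithm.
    `deps` maps each package name to the packages it depends on."""
    index, lowlink = {}, {}
    stack, on_stack = [], set()
    counter = [0]
    cycles = []

    def strongconnect(v):
        index[v] = lowlink[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in deps.get(v, ()):
            if w not in index:
                strongconnect(w)
                lowlink[v] = min(lowlink[v], lowlink[w])
            elif w in on_stack:
                lowlink[v] = min(lowlink[v], index[w])
        if lowlink[v] == index[v]:
            # v is the root of a strongly connected component
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            if len(component) > 1:   # single nodes are not cycles
                cycles.append(sorted(component))

    for v in list(deps):
        if v not in index:
            strongconnect(v)
    return cycles


# Hypothetical package dependencies for illustration:
deps = {
    "ui": ["business"],
    "business": ["persistence"],
    "persistence": ["business"],   # back edge: creates a cycle
}
print(find_cyclic_groups(deps))    # prints [['business', 'persistence']]
```

A quality gate of the kind described above would simply fail when this list is non-empty (or grows).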
Your core ideas and operating principles actually lead to cost reductions and improved efficiency, but as you mentioned, this often gets relegated to the back burner. How have you been able to achieve these results, and why is this so often deprioritized?
I think it gets put on the back burner because the pressure is too high, and many companies don’t have formal quality gates, or only very simple ones that just don’t look at the right metrics. And if you don’t have quality gates and you have a deadline looming, you do everything to meet your deadline. That means you will accumulate technical debt at a higher speed, especially structural and architectural debt. And, OK, you made your deadline, but the next deadline is already more difficult, because you have structural debt accumulated and now every change becomes a little more difficult. This goes on, and the problem compounds and compounds and compounds, until the cost of change is just too high. And that’s what I see in a lot of companies. I do a lot of software assessments, and when I come to those places I see many mission-critical software systems that are structurally rotten. There are a lot of cyclic dependencies, a lot of cyclic groups that have basically eaten the whole system; everything is connected to everything, and then you can imagine that change becomes really difficult. Unfortunately, people mostly call me only when it’s too late.
If structural and architectural debt goes over a certain threshold, there’s not much you can do about it, because the cost to repair a system that has reached that stage is just unbearable in most cases. So you can’t do it, and you have to live with the mess, which means you’re not really efficient in what you’re doing. On the other hand, if you can keep a software system well structured, if you can adhere to some sound architectural principles, the cost of change will grow a lot slower. It will still grow with the complexity and size of your system; software is complex, after all. But if you can adhere to some principles, like avoiding cyclic dependencies and following an architectural blueprint, you’re a lot better off. And we know this from customers. Our biggest customer in the United States is Ford, and they’ve been using this technology for four years now, and they’ve made some big improvements in their projects because they don’t fall into this trap any more. And we’ve heard from many other customers that they were able to cut something like 50% of maintenance costs and 20 or 30% of the lifetime cost of a project just by following these principles.
But [these results] require a lot of discipline, and that’s the point. Most of the time I think we have a big disconnect between the people who pay for the software and the people who write the software. The people who pay for the software, of course, want to get results as cheaply and as quickly as possible, but they don’t understand the concept of technical debt and its impact. And when the teams come back to them and ask for refactoring time and budget, they really don’t get enough of it, and then the problem compounds. The only way to get out of this is to have a strategy in the company to define quality top down and bottom up, so everyone in management is on board that quality gates are important. Then you have to implement your development operations accordingly, define those quality gates, and not let them slip away as soon as there’s a little bit of pressure in the system. And there’s always pressure in the system; that’s a fact.
Hello2morrow has a lot of high-profile partners, including BMW, Ford, Barclays, BP, and Black Duck. Which of your customers are adopting your philosophies, and how have they been implemented?
We have two groups of customers, basically. One group of customers buys the tool, but they don’t use it on a regular basis, they don’t integrate it into their build system, and they don’t really have tough quality gates. Those customers don’t have a lot of success. It still has some effects that improve certain code bases, but they don’t get the real cost-cutting effects we’re talking about. To get those kinds of cost-cutting effects, first of all you must integrate static analysis with a focus on structure and architecture into your build system, so that builds will fail when new problems are introduced. The second thing is that it has to be driven from the top down. People at the top have to say, “We want those quality metrics to get better over time, or at least make sure that they’re not getting worse.” If you combine those two things, that is the recipe for success. As soon as you leave out one element, either no management support or no continuous integration into your build, you will probably not get the benefits. About 50% [of customers] really do it; they get it. Many others get distracted after they buy. The tool alone won’t help you; you need the tool and the changes in your development strategy to be successful.
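The “don’t let metrics get worse” gate described here can be sketched as a simple ratchet: the build compares current structural metrics against a stored baseline and fails only when a metric regresses, so existing debt is tolerated but can never grow. The metric names and baseline shape below are illustrative assumptions, not Sonargraph’s or SonarQube’s actual output format.

```python
def check_quality_gate(baseline, current):
    """Return the names of metrics that regressed versus the baseline.
    An empty list means the quality gate passes. Metrics absent from
    the baseline are treated as zero, so any new problem fails the gate."""
    return sorted(metric for metric, value in current.items()
                  if value > baseline.get(metric, 0))


# Hypothetical metrics from a previous release vs. the current build:
baseline = {"cyclic_groups": 3, "architecture_violations": 10}
current = {"cyclic_groups": 3, "architecture_violations": 12}

regressions = check_quality_gate(baseline, current)
if regressions:
    # In a CI job this branch would fail the build (e.g. sys.exit(1)).
    print("Quality gate FAILED:", regressions)
```

The key design choice is that the gate fails the build automatically; as the interview stresses, a gate that can be skipped under deadline pressure provides no protection.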
You’ve mentioned cost reductions a few times. What are other tangible effects of improving your code quality and reducing your technical debt?
The major tangible benefit is that people will lose less time staring at the screen. If you analyze what your developers do all day long in organizations with a tangled and rotten code base, a lot of cycles and not much architectural structure left, people spend 80% of their time staring at the screen to figure out what the damn code is doing before they can even think about making changes. They first have to figure out how this piece is related to that piece, and how that piece is related to other pieces in the system, and it takes a lot of time to understand it. And even then, if you make the changes, you don’t have a good feeling doing it, because you never know if you’ll break something else. If the tangle index is very high, you have a lot of connections between different parts of the system, and the probability of regressions just skyrockets. If you can avoid this trap, people have more time to actually work on code, and that means more time to produce stuff that delivers business benefits.
Another benefit is that if you hire new people, they’ll be operational faster. They don’t need such a long time to figure out how the system is built.
From a software side, is this having an impact on software performance?
Not necessarily; that really depends. You can say that bad performance is more probable in poorly structured code, because people don’t understand what the code is doing. But good structure by itself doesn’t mean you get better performance. If you can understand your code better, though, if it’s readable, I think the chances are much higher that you will be able to write better-performing code.
But it does improve the maintainability and comprehensibility of your code, which is important. If it’s not maintainable, then it costs you a lot of money to keep it alive.
What’s been the impact of so many open source libraries on quality and maintainability?
Well, that’s a tricky question. First of all, it’s interesting to look at open source code itself: we can analyze open source code with our tools and find out which open source libraries do well on quality and which don’t do so well. That’s a really interesting experiment in itself. We already know that the Spring Framework is extremely well maintained, because they follow their architectural rules. Other projects, like Hibernate or Apache Cassandra, are not in such a good state when it comes to structure and architecture. They have a lot of tangles, they are very tightly coupled, and it’s much harder to maintain those code bases. The quality of projects that use open source libraries is only indirectly affected. You consider the open source library as something external; it’s not your code base, it’s just a tool you’re using. So just because you’re using Hibernate doesn’t mean the quality of your software is bad or good. You write something that is completely independent; you’re just using Hibernate.
What projects does hello2morrow currently have planned?
I’m working on something really cool right now; it’s really fun, honestly. I’m working on a domain specific language for architectural description. Basically, you write your architecture in this language by defining what we call architectural artifacts and the relationships between them, and that seems to be a very promising concept. We’re working with it right now, we have different experiments going, and we are very excited about it.
We just weren’t very happy with the flexibility of our architectural model in Sonargraph 7. Its meta model is a little bit restrictive. You can model most things, but there are some things you have to model in a certain way, and you’re not very flexible in the way you can model things. And if you have a big architecture, things can get a little bit complex, especially if you want to have any special rules. So we thought about how we could improve that, and we said, “Why don’t we think about architecture as a much more general concept, where you basically have architectural artifacts that are containers for architectural components, or atoms?” Architectural atoms are the smallest unit you can assign to an architectural element: in Java that would be a Java file, in C# a C# file, and in C++ a combination of a source file and a header file.
So you assign these components, as we call them, to architectural artifacts, you group the software system into nested artifacts, and you define the relationships between them. Then we experimented with different features for such a language, like interfaces for artifacts and connectors for artifacts, and we found that you can have an incredibly powerful tool for modeling the structure in many different ways. You’re not stuck with one way to model architecture, so you can basically do whatever you’d like. You can even describe different aspects of your architecture in different architecture files, and that’s another cool feature. If you don’t want to describe your architecture in one big monolithic file, you can spread it out: one file that describes the relationship between client and server, another that describes the layering, another that describes your business components. That gives you a lot of flexibility. And the cool thing is, many people prefer to work with text instead of graphical editors, so you can basically use text to describe your architecture.
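To give a feel for the idea, an architecture description in such a DSL might look something like the sketch below. The syntax and keywords are purely illustrative guesses at the concepts described in the interview (artifacts, includes, interfaces, connections), not the actual language hello2morrow built.

```
// Hypothetical syntax, for illustration only
artifact Client
{
    include "**/client/**"
    connect to Server.API       // Client may only use the server's interface
}

artifact Server
{
    interface API { include "**/server/api/**" }

    artifact Business
    {
        include "**/server/business/**"
        connect to Persistence  // Business may use Persistence, never the reverse
    }

    artifact Persistence { include "**/server/persistence/**" }
}
```

In a scheme like this, each `artifact` is a named container, `include` patterns assign components (the atoms mentioned above) to it, and the `connect` statements enumerate the only permitted dependencies, so any dependency outside them would be flagged as an architecture violation.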
Opinions expressed by DZone contributors are their own.