How to Analyze a Complex Solution
This article covers using NDepend on massive solutions containing hundreds of .NET projects: running static analysis to improve code quality and to guide consolidation into a saner solution structure.
Editorial Note: I originally wrote this post for the NDepend blog. You can check out the original here, at their site.
I’ve made no secret that I spend a lot of time these days analyzing code bases as a consultant, and I’ve also made no secret that I use NDepend (and its Java counterpart, JArchitect) to do this analysis. As a result, I get a lot of questions about analyzing codebases and about the tooling. Today, I’ll address a question I’ve heard.
Can NDepend analyze a complex solution (i.e. more than 100 projects)? If so, how do you do this, and how does it work?
Can NDepend Handle It?
For the first question — in a word, yes. You certainly can do this with NDepend. As a matter of fact, NDepend will handle the crippling overhead of this many projects better than just about any tool out there. It will be, so to speak, the least of your problems.
How should you use it in this situation? Use it to help yourself get out of the situation: as an aid to consolidating assemblies and partitioning your codebase into separate solutions.
The Trouble with Scale
If you download a trial of NDepend and use it on your complex solution, you’ll be treated to an impressive number of project rules out of the box. One of those rules that you might not notice at first is “avoid partitioning the code base through many small library assemblies.” You can see the rule and explanation here.
We advise having less, and bigger .NET assemblies and using the concept of namespaces to define logical components.
You can probably now understand why I gave the flippant-seeming answer above. In a sense, it’d be like asking, “How do I use NDepend on an assembly where I constantly swallow exceptions with empty catch blocks?” The answer would be, “You can use it to help you stop doing that.”
So, what’s the problem with scaling your solution like this? Well, first and foremost, it makes dealing with your codebase absolutely brutal. If you’re used to a codebase like this and it’s simply your day-to-day life, you might not realize it, in the way that one adjusts to having a perennially stuffed nose or upset stomach. But there’s a world out there where you don’t need to suffer like this. It takes a minute or two just to open the IDE, and if you want to do a build all, that’s a few minutes, even on a really nice development rig. Running the solution’s unit tests is another huge time sink, and any IDE plugins you have will slog or crash.
That’s not just a question of wasting your time — it has a negative impact on code quality. If you build less frequently and run tests less frequently, more bugs creep into the solution. If you turn off your productivity tools, you miss opportunities for learning, correction, and refactoring. A massively complex codebase invites low quality.
There are ramifications to your pipeline as well. Configuration/deployment management becomes a headache if you’re shipping hundreds of assemblies. In production, the application’s startup time takes an enormous hit. And, while there can be some benefit in granularity and independent deployment of assemblies, my experience has been that shops with solutions like that are rarely doing so in practice. Rather, dividing up the solution this way tends to be a means of partitioning the codebase to allow teams to work in parallel and to tell the junior developers, “don’t touch these assemblies — only the anointed may go there.”
Moving Toward Sanity
How, then, does NDepend help you escape this situation? It will produce some extremely helpful graphs, charts, and matrices for viewing your dependency structure among assemblies. Not only will it show you what assemblies depend on what other assemblies, but you can also drill into the coupling of namespaces, types, and methods in those assemblies to see that coupling at any granularity you need.
Visualizing and understanding these relationships is a required precursor to forming a consolidation plan. If, for instance, you have an assembly that is used by only one other assembly, it makes an excellent candidate for being absorbed into the consuming assembly. Simply create a new namespace in the consuming assembly and move all of its classes into that.
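As a concrete starting point, you can hunt for those single-consumer assemblies with a CQLinq query. The sketch below assumes CQLinq’s standard code model (`Application.Assemblies` and the `AssembliesUsingMe` property); the exact query you want will depend on your NDepend version and your codebase:

```csharp
// Sketch: list application assemblies referenced by exactly one other assembly.
// Each result is a candidate for being merged into its single consumer.
from a in Application.Assemblies
where a.AssembliesUsingMe.Count() == 1
select new { a, ConsumedBy = a.AssembliesUsingMe.First() }
```

Running a query like this periodically as you consolidate gives you a shrinking worklist, rather than a one-time snapshot of the dependency graph.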
Another path is to figure out what is truly a library or general business logic for your company, and to start pulling that code out of your codebase and into some common, company-wide repository. A great way to manage this type of code is by hosting your own company NuGet feed. In other words, you don’t need to constantly recompile the “string utils” package that no one ever touches. Make it a proper library dependency so that you can focus on your application’s business logic.
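Wiring up a company feed is mostly a matter of registering an extra package source. In this sketch, the “CompanyFeed” name and URL are placeholders for whatever server you host internally:

```xml
<!-- nuget.config: add a company-hosted feed alongside nuget.org. -->
<!-- "CompanyFeed" and its URL are hypothetical; substitute your own server. -->
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="CompanyFeed" value="https://nuget.example.com/v3/index.json" />
  </packageSources>
</configuration>
```

Once the feed exists, extracted libraries get packed and pushed to it (for example, with `dotnet nuget push`), and the main solution consumes them as ordinary package references instead of project references.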
If You Must…
If you absolutely can’t live without your 100+ assemblies (or if you don’t have the decision-making authority to get away from them), you can still put NDepend to good use. In fact, NDepend doesn’t much care about this state of affairs. The assembly in which a piece of code is found is simply another data point, so you can group code by assembly if you want, but you don’t have to do so.
Because NDepend essentially treats code as data, you can filter, group, and select information in any way that you please. With a giant, dependency-complex codebase, those abilities become important in the same way that they are in a massively complex database. You’ll need to learn to take advantage of custom queries that are, perhaps, segmented by assembly. Get used to creating graphs and matrices for visualizing subsets of your codebase. Approach your use of NDepend in the same way that you approach the actual code.
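Segmenting by assembly can be as simple as grouping a standard metric. This sketch again assumes CQLinq’s usual code model (`Application.Methods`, `CyclomaticComplexity`, and a `ParentAssembly` property); treat it as a starting point rather than a canned rule:

```csharp
// Sketch: surface overly complex methods, ordered by containing assembly,
// so each team can see its own hot spots in a 100+ assembly solution.
from m in Application.Methods
where m.CyclomaticComplexity > 15
orderby m.ParentAssembly.Name, m.CyclomaticComplexity descending
select new { m, m.ParentAssembly, m.CyclomaticComplexity }
```

Queries in this style let each team pull reports scoped to “their” assemblies, which fits the parallel-team partitioning that usually motivates these giant solutions in the first place.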
Published at DZone with permission of the author, a DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.