Coverity Architecture Analyzer Improves 'Software DNA'
Coverity, Inc., a provider of static source code analysis tools, today unveiled Coverity Architecture Analyzer, giving development teams the ability to verify the integrity of their application architecture, analyze the complexity and dependencies of software systems, and identify errors that can create crash-causing defects or security vulnerabilities.
The new product, which incorporates the company's patent-pending Software DNA Map technology, automatically maps hierarchies and dependencies in C/C++ and Java code bases, providing the visibility and control development teams need to detect potential defects and ensure build modifications align with original design specifications. The tool offers a web-based interface and an IDE plug-in for Java, allowing teams to navigate code, check for architectural accuracy, correct dependency defects, and set complexity limits.
DZone had a chance to interview Coverity CTO Ben Chelf to learn more about the latest release:
DZone - What is the Software DNA Map technology and how does it work?
Ben Chelf - There are two aspects to the Software DNA Map technology – the map itself and the generation of that map.
First, the map itself is a complete, semantically equivalent representation of an entire software system that is suitable for automated analyses of various types (architectural analysis, static code analysis, standards checking, complexity analysis, and so on). It includes not only a parse of every source file in a software system but also an understanding of how that software is assembled into a final set of object files, executables, jar files, and so on. By essentially understanding all of the code and how it's built, automated analysis techniques can be sure that they have a complete view of the world they are analyzing. The supporting technology includes source code front ends and bytecode readers, very similar in nature to the front ends compilers use when translating code into their target format. However, we simply store the result of the front-end parse in an Abstract Syntax Tree format that is better suited to analysis than bytecode, CLR code, or object files.
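As a rough illustration of what such a stored representation might look like (the node structure and names below are hypothetical, not Coverity's actual schema), here is a minimal AST sketch in Java:

```java
// Hypothetical sketch: a tiny AST node model of the kind a "Software DNA Map"
// might persist after parsing, instead of object files or bytecode.
// All class and field names are illustrative, not Coverity's actual schema.
import java.util.List;

public class AstSketch {

    // One node in the abstract syntax tree: a kind tag, an optional token,
    // a source position, and child nodes.
    record AstNode(String kind, String token, String file, int line,
                   List<AstNode> children) {}

    public static void main(String[] args) {
        // Hand-built tree for the statement: return a + b;
        AstNode ret = new AstNode("RETURN_STMT", null, "Adder.java", 3, List.of(
            new AstNode("BINARY_EXPR", "+", "Adder.java", 3, List.of(
                new AstNode("IDENT", "a", "Adder.java", 3, List.of()),
                new AstNode("IDENT", "b", "Adder.java", 3, List.of())))));

        print(ret, 0);
    }

    // Pre-order dump, roughly what an analysis pass would traverse.
    static void print(AstNode n, int depth) {
        System.out.println("  ".repeat(depth) + n.kind()
            + (n.token() != null ? " [" + n.token() + "]" : "")
            + " @" + n.file() + ":" + n.line());
        for (AstNode child : n.children()) print(child, depth + 1);
    }
}
```

An analysis pass walks a tree like this directly, with full source positions intact, rather than reverse-engineering structure from compiled artifacts.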
The second aspect is how we can generate such a map automatically. Clearly there's a tremendous amount of information about source files and how they are assembled, but asking an organization to collect that information manually for the purpose of automated analysis is antithetical to the promise of any type of automation. Hence, we need to be able to generate the Software DNA Map automatically. We accomplish this through technology that observes any native build system and the operations it performs. By observing the build as it interacts with the operating system, we can automatically capture not only the entire software assembly process but also how each file is compiled, and hence compile it a second time with the aforementioned front ends. Since we observe builds at the operating system level, there's no dependence on the type of build system (ant, make, perl or shell scripts, etc.) that an organization leverages.
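Coverity's interception happens at the operating system level; as a much simpler illustration of the idea, the sketch below (with invented command lines and deliberately naive flag parsing) folds a log of observed compiler invocations into a map of which source produced which object, and which objects link into the final binary:

```java
// Illustrative sketch only: given a log of compiler invocations captured by
// watching a native build, reconstruct which source file produced which
// object. Real build observation happens at the OS level; this just parses
// invented command strings.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BuildObserverSketch {
    public static void main(String[] args) {
        // Pretend these command lines were observed while `make` ran.
        List<String> observed = List.of(
            "gcc -c src/parser.c -o build/parser.o",
            "gcc -c src/lexer.c -o build/lexer.o",
            "gcc build/parser.o build/lexer.o -o bin/compiler");

        Map<String, String> objectToSource = new LinkedHashMap<>();
        for (String cmd : observed) {
            String[] argv = cmd.split("\\s+");
            String src = null, out = null;
            boolean compileOnly = false;
            for (int i = 0; i < argv.length; i++) {
                if (argv[i].equals("-c")) compileOnly = true;
                else if (argv[i].equals("-o")) out = argv[++i];
                else if (argv[i].endsWith(".c")) src = argv[i];
            }
            if (compileOnly && src != null && out != null) {
                // A compile step: this is where a front end would re-parse src.
                objectToSource.put(out, src);
            } else if (out != null) {
                // A link step: we now know how objects assemble into the binary.
                System.out.println(out + " links " + objectToSource.keySet());
            }
        }
        objectToSource.forEach((obj, src) ->
            System.out.println(obj + " compiled from " + src));
    }
}
```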
The Software DNA Map, together with our technology for creating that map automatically from any given build system, is the key to the practical deployment of next-generation automated analysis products.
DZone - Is this a tool I would use during the testing phase?
Ben - Absolutely. The goal of testing, of course, is to weed out as many defects as possible before releasing the software. However, as everyone knows, it's impossible to test all the different possible execution scenarios. Hence, it's important to focus testing efforts on the most crucial parts of the code and on the parts that are most susceptible to defects. One way to understand the lay of the land is through the architectural analysis that we provide with Coverity Architecture Analyzer. This product can help you see the overly complex parts of the system – the places that are most likely to have problems. By understanding this and focusing additional testing resources there, development organizations can have more confidence in their overall testing plan.
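One simple proxy for "overly complex" (a crude stand-in for illustration, not Coverity's actual metric) is cyclomatic complexity, which can be approximated by counting decision points per method and flagging anything over a limit:

```java
// Rough sketch, not Coverity's analysis: estimate cyclomatic complexity by
// counting decision keywords in a method body, then flag hot spots that
// deserve extra testing attention.
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ComplexitySketch {
    // Each decision point adds one independent path through the code.
    private static final Pattern DECISIONS =
        Pattern.compile("\\b(if|for|while|case|catch)\\b|&&|\\|\\|");

    static int complexity(String body) {
        Matcher m = DECISIONS.matcher(body);
        int count = 1;               // a straight-line method has complexity 1
        while (m.find()) count++;
        return count;
    }

    public static void main(String[] args) {
        // Hypothetical method bodies extracted from a code base.
        Map<String, String> methods = Map.of(
            "parseHeader", "if (x) { while (y) { if (z && w) {} } }",
            "getName", "return name;");
        int limit = 3;               // hypothetical complexity limit
        methods.forEach((name, body) -> {
            int c = complexity(body);
            System.out.println(name + ": complexity " + c
                + (c > limit ? "  <-- focus testing here" : ""));
        });
    }
}
```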
DZone - What would you say are some of the greatest challenges in ensuring that code artifacts and build modifications adhere to original design specifications?
Ben - The biggest problem is the ability to enforce the design intent on the system as it's being written and assembled. Architects have grand designs, but currently communicate those designs with UML diagrams, Word documents, and whiteboards. If a construction worker looked at a blueprint for a building and didn't follow it, he or she would be fired. If a developer looks at the blueprint for a software system and doesn't follow it, chances are no one will notice. Automated analysis of the code and assembly process enables architects to specify precisely what they want to happen with respect to the dependencies within the code base, with the knowledge that every developer will follow that design intent as they implement the system.
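To make that concrete (the rule format and module names below are invented for illustration), design intent about dependencies can be written as machine-checkable rules and tested against every dependency actually found in the code:

```java
// Toy sketch of enforcing design intent as code: the architect forbids
// certain dependencies, and every observed code-level dependency is checked
// against the rules. Module names and the rule format are invented.
import java.util.List;
import java.util.Set;

public class DesignRuleSketch {
    record Dep(String from, String to) {}

    public static void main(String[] args) {
        // Architect's intent: the UI must never talk to the database directly.
        Set<Dep> forbidden = Set.of(new Dep("ui", "db"));

        // Dependencies extracted from the code base (e.g., from the DNA map).
        List<Dep> observed = List.of(
            new Dep("ui", "service"),
            new Dep("service", "db"),
            new Dep("ui", "db"));        // a developer took a shortcut

        for (Dep d : observed) {
            if (forbidden.contains(d)) {
                System.out.println("VIOLATION: " + d.from() + " -> " + d.to()
                    + " breaks the architect's design rules");
            }
        }
    }
}
```

The point is the workflow, not the ten lines of Java: once the rules live next to the build, the shortcut gets flagged automatically instead of going unnoticed.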
DZone - The automated architectural visualization feature shows you code-level and other architectural dependencies; however, is this enough to detect inherently bad design decisions?
Ben - Yes and no. Some bad design decisions are very easy to spot with Coverity Architecture Analyzer. Most would agree that code designed with a plethora of cyclical dependencies is inherently a bad idea, and CAA can point that out right off the bat. But of course there are other design decisions, good or bad, that it has no chance of detecting. Picking an O(n²) algorithm over an O(n log n) algorithm when n will be large would be dumb. Likewise, choosing a more difficult-to-implement (hence more bug-prone) design to save a couple of cycles when it never will matter is also a bad design decision. These types of things can't be helped by this type of automation.
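Cycle detection of the kind described here is mechanical: a depth-first search over the dependency graph reports any back edge as a cycle. A minimal sketch, using a made-up graph:

```java
// Sketch of how cyclical dependencies can be detected mechanically:
// depth-first search over the dependency graph, reporting any back edge
// (a return to a node on the current path) as a cycle. The graph is invented.
import java.util.*;

public class CycleSketch {
    public static void main(String[] args) {
        Map<String, List<String>> deps = Map.of(
            "A", List.of("B"),
            "B", List.of("C"),
            "C", List.of("A"),   // closes the cycle A -> B -> C -> A
            "D", List.of("A"));

        Set<String> done = new HashSet<>();
        for (String node : deps.keySet())
            dfs(node, deps, new ArrayDeque<>(), done);
    }

    static void dfs(String node, Map<String, List<String>> deps,
                    Deque<String> path, Set<String> done) {
        if (path.contains(node)) {   // back edge: node is on the current path
            System.out.println("cycle: " + path + " -> " + node);
            return;
        }
        if (!done.add(node)) return; // already fully explored
        path.addLast(node);
        for (String next : deps.getOrDefault(node, List.of()))
            dfs(next, deps, path, done);
        path.removeLast();
    }
}
```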
DZone - What are some of the common architectural design patterns that are enforced/recommended by the tool?
Ben - Coverity Architecture Analyzer deals primarily with the dependencies of a software system. While not a design pattern per se, it suggests an architecture that is layered (either strictly or not) and discourages cyclical dependencies (A depends on B, which depends on C, which depends on A). However, when thinking about automated analysis for design patterns as described in the literature, the key is to start with a representation of the software system (the Software DNA Map) and then a picture of all its dependencies (as provided by Architecture Analyzer). From there, we feel it makes perfect sense to extend our work in the automated architecture analysis space to include some higher-level design pattern analysis, to help organizations understand precisely which design patterns they are using and how well they are using them.
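A strict layering rule of the kind mentioned above is also easy to check mechanically: assign each component a layer and allow dependencies to point only one layer down. The sketch below uses invented components and layer assignments:

```java
// Sketch of checking a strictly layered architecture: each component is
// assigned a layer, and a dependency is legal only if it points exactly one
// layer down. Components and layer numbers are invented for illustration.
import java.util.List;
import java.util.Map;

public class LayerCheckSketch {
    record Dep(String from, String to) {}

    public static void main(String[] args) {
        // Architect's layering: lower number = lower layer.
        Map<String, Integer> layer = Map.of("ui", 2, "service", 1, "storage", 0);

        List<Dep> observed = List.of(
            new Dep("ui", "service"),      // ok: one layer down
            new Dep("service", "storage"), // ok
            new Dep("ui", "storage"),      // skips a layer: illegal when strict
            new Dep("storage", "service"));// points upward: always illegal

        for (Dep d : observed) {
            int delta = layer.get(d.from()) - layer.get(d.to());
            if (delta != 1) {
                System.out.println("layering violation: " + d.from() + " -> "
                    + d.to() + (delta <= 0 ? " (upward dependency)"
                                           : " (skips a layer)"));
            }
        }
    }
}
```

Relaxing the check to `delta < 1` would permit layer-skipping while still forbidding upward dependencies, i.e., the non-strict layering the answer alludes to.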
DZone - How do you see the ‘Agile Tools’ space evolving in the next 6 to 12 months and what are some of the new features/tools we can expect to see from Coverity in the coming months?
Ben - With the Agile movement, organizations are seeing more and more firsthand evidence of the value of rapid feedback in the development cycle and of detecting problems in the requirements, design, and code as quickly as possible. We need products to support this new mindset. The agile products that accompany the agile process will be those that help organizations discover problems as early as possible through automation. As an industry, we must stop dealing with software problems downstream – we have to build it right in the first place. Agile supports this by shortening the feedback loop. Automated analysis during the requirements, design, code, and test phases will also support the build-it-right mentality. You are going to see deeper and faster analysis in the next few quarters in the earliest phases of the development process. You'll see this analysis sitting on developers' desktops as the "automatic pair programmer" whispering "did you consider this case?" or "here's a bug!" or "you're about to violate the design of this system." And what's better, it'll be right almost every time. You'll see the computation for builds, tests, and automated analysis happen across multiple cores and multiple machines, distributed to saturate all the computational power that is available. If the next generation of software development products succeeds, development organizations will be able to deliver products to market faster and with higher quality simply by adding more machines to the arsenal to speed up their existing development infrastructure and automatically analyze the software for problems.