Q&A: Designing "Testable Architectures" with Savara
DZone recently caught up with Gary Brown, Senior Software Engineer at Red Hat, and Steve Ross-Talbot, Chief Architect at Cognizant, to talk about Savara. Savara is an open source JBoss project that leverages the notion of 'Testable Architectures' to ensure that any deployed system can be indirectly shown to conform to its business requirements, both at design time and at runtime. As a system evolves through subsequent enhancements to the business requirements, having such a precise understanding of the system, from requirements through deployment, enables more sophisticated change management techniques to be considered. We spoke with Gary and Steve to get their ideas about architectural agility and how Savara fits in.
The complete transcript of the interview has been provided below.
DZone: Gentlemen, it's a real pleasure to have you with us today. Can you tell us a little bit about some of the work you are currently doing?
Gary: I am currently the project leader for Savara, which is the project we will be talking about today. But I am also working on the BPM platform within Red Hat. And I am also on the finalization task force of the BPMN 2 certification.
Steve: Within the scope of the work I do at Cognizant Technology Solutions, I run a fairly large practice of enterprise and solution architects. So some of my day job is spent ensuring that they are gainfully employed. The rest of the job is really about trying to change the way in which we build software, and promoting, providing, collecting, and effecting innovations, of which Savara is certainly one of our major themes. [This is] largely because we see it as making a huge difference to the cost of the engagements we do with our customers, so that we can pass some of the benefits back to them and still make money.
DZone: What exactly is Savara?
Steve: Well, let me put it this way. I will hand it over to Gary in a second. Obviously it is an open source project, and I will let Gary say a little bit more about that. But it is not like a normal open source project. It is a little different from the norm, largely because it has been co-founded by companies: Red Hat, Cognizant, and Amentra. And we have structured it more formally than most open source community projects, really because we need to effectively steer and monitor the progress, because we have a much higher level of usage by end users--by large corporates--that want to gain the benefits.
One example of how it is structured is we actually have co-chairs. Gary is one of the co-chairs on the Savara board, and Bhavish Kumar, who was formerly from Cognizant, is another of the co-chairs. And we have set up some working groups.
In some ways, although it is really an open source community project, it almost behaves like a W3C model because it is standards based. But what we are doing is adding and layering innovation over the top.
That gives you an idea of the structure, and I will let Gary tell you more about essentially what the project is about.
Gary: The project aims to define a methodology and tools to support a concept called Testable Architecture. A Testable Architecture is one where any artifact defined at any stage in the development life cycle can be verified for conformance against artifacts in a preceding stage.
So for example, an architecture defined using a choreography can be validated against the business requirements. Service designs can be verified against the architecture and implementations can be verified against the designs. As the requirements are defined in a machine readable manner, it is also possible to use them as the basis for service unit tests.
Ultimately, the fundamental technical problem the project is looking to solve, is to ensure that any deployed system can be indirectly shown to conform to its business requirements, both at design and runtime.
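To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of conformance check described above: an observed message trace is verified against a simplified choreography, given as an ordered list of required interactions. The data structures and names are illustrative only, not Savara's actual formats, which are based on choreography description languages.

```python
def conforms(trace, choreography):
    """Check that an observed message trace follows the ordering required
    by a (toy) choreography: a list of (sender, receiver, message)
    interactions that must occur in order, though unrelated messages
    may be interleaved between them."""
    it = iter(trace)  # single pass: enforces ordering, not just presence
    for expected in choreography:
        if not any(observed == expected for observed in it):
            return False  # required interaction missing or out of order
    return True

# Hypothetical choreography from the requirements:
# buyer orders, seller confirms, seller ships.
choreo = [
    ("buyer", "seller", "order"),
    ("seller", "buyer", "confirm"),
    ("seller", "shipper", "ship"),
]

ok_trace = [
    ("buyer", "seller", "order"),
    ("seller", "audit", "log"),     # unrelated interleaved message is fine
    ("seller", "buyer", "confirm"),
    ("seller", "shipper", "ship"),
]
bad_trace = [
    ("buyer", "seller", "order"),
    ("seller", "shipper", "ship"),  # shipped before confirming
    ("seller", "buyer", "confirm"),
]

print(conforms(ok_trace, choreo))   # True
print(conforms(bad_trace, choreo))  # False
```

The same check can run at design time (service designs against the architecture) or at runtime (observed messages against the choreography), which is what "both at design and runtime" amounts to.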
DZone: How did Red Hat and Cognizant cross paths in terms of collaborating on this project? What is the common link?
Steve: Well, the common link, I guess, is that Red Hat provides tools and technology, and at Cognizant we use that technology. We use it from other vendors too. So it made a lot of sense to us to foster a closer collaboration, so that we can assist in making Red Hat's SOA platform easier for us to use--and, which is why standards are so important, in such a way that it could be bound to other SOA platforms, not just Red Hat's. That is very important for Cognizant. And it made sense to do this in an open source environment because it removes IP impediments to collaboration. And if you are going to talk about large-scale, community-based open source projects, well, Red Hat is really the biggest player in town.
Gary: Yeah. And from Red Hat's perspective, obviously it is about the tools and the technology. But I think having a partner like Cognizant was important for a project of this size; the project is really aimed at being able to deliver very large scale solutions. So that means targeting either large end users or SIs that deal with big user organizations, and we are hoping to build up partnerships with a number of SIs and large user groups.
DZone: So Steve, I know you've been doing a lot of work in the area of SOA governance in terms of orchestration and choreography. How does that fit into the Savara project?
Steve: It really extends the notion of governance. If you look at the earlier collaboration that we've had with Red Hat, it's founded on, again, an open source community project called Overlord. Overlord was really about the run-time governance of a service oriented architecture solution. It's different to any other governance. It's trying to ensure that if we were to observe what happens between a set of services as they play out whatever they are doing, that they are actually doing the right thing as opposed to the sort of governance that you generally get, which is more to do with the versioning of service contracts and nonfunctional requirements for their policy attachments. So, this is more behavioral, so it has a more profound impact on understanding what's happening. What we've done is take that notion of run time governance and apply it to the process by which we gather requirements, create some sort of solution model, and then are able to test that model upfront against those requirements to ensure that it really meets the needs that it says it's going to. That has a massive impact, both in terms of the speed with which we can frame a solution and gather requirements and an even bigger impact downstream in terms of the reduction in system integration testing cycles.
Governance extends from the point that you engage with a customer gathering requirements, all the way through to the delivery and the run-time behavior. So what we've done is extend that notion of governance right across the software development cycle and right across the life cycle of the solution. That's really the key difference, for instance, between Overlord and what Overlord is intending to do, and Savara, which will also leverage Overlord for the run-time aspects.
Let me also say that you can only govern effectively that which is captured effectively, in terms of what the application and the system is supposed to do. That in large part is dictated by the end user.
DZone: Gary, given this idea of the testability of artifacts, is there any empirical data that you've gathered to support going in this direction?
Gary: Well, actually, I think this is probably one area where Steve is best qualified to answer.
The good thing about having a partner like Cognizant is that they get to test the theory that we're developing in terms of methodology, in practice--in customer situations. So, I think I'll just pass that one off to Steve.
Steve: Yes, there is empirical evidence that we've compiled. We're working with a fairly large global insurance player where we tried all this. We set off two parallel streams: one using the classic methodology for requirements gathering all the way through to the generation of technical artifacts--the WSDLs, the BPELs, the state machine diagrams and any other supporting collateral to allow a development shop to go in and actually implement--and one using the testable architecture approach. The results were quite surprising. They surprised me, and it's not easy to surprise me with these things. We found an eighty percent speed-up from requirements gathering to technical contracts. The actual numbers were, I think, something like three people in three days versus four people in fifteen days for a specific exercise. That massive speed-up was absolutely stunning.
What we also found, downstream in terms of testing, was that testable architecture caught two major design flaws which the classic approach never found. So we were able to navigate our way around those errors at design time. That resulted in savings in excess of twenty percent on the entire SDLC program, on a budget in excess of a million and a quarter. So the results were quite profound.
Now, in the former--requirements to technical contracts--that saving was in large part due to the fact that the tool enabled a very agile approach to requirements gathering and solutioning. That obviated the need to schedule meeting upon meeting with business analysts and subject matter experts. So that's one of the reasons it was so compressed.
We removed any debate about whether a requirement was real or not, or whether a requirement was or was not in conflict with others, simply by having tools which could show it. So that was really the reason for the eighty percent speed up. In terms of the twenty percent saving over the entire SDLC, it was entirely due to the fact that we found those errors at design time. I think the perceived wisdom is that if you find them when you hit systems integration testing, which is where they would have found them, it costs two hundred times more, usually, to fix. So, the results are quite profound.
So from a systems integration perspective, which is what we do at Cognizant, this completely changes the nature of what we can do in engagements that are like this. It allows us to pass on some of the benefits to our customers and allows us to make better margins, so everyone's a winner.
That's really the single reason why Cognizant is very excited about the prospect of using testable architecture in many more engagements, which is embodied by Savara. We're already doing that. We've got about four engagements on the go, as I speak, fairly large ones.
DZone: What component of Savara is methodology-centric, and what component of it is software-centric in terms of managing projects and gathering requirements? I want to understand if the success of this is really resting on an organization already having adopted Agile methods or whether Savara itself facilitates a lot of the transition to a new methodology?
Gary: I think what we're trying to do with the methodology is actually center it around a lot of standard approaches. For example, if an organization is already using Agile approaches, we don't want to force them to adopt a very top-down, structured approach. And because in many SOAs you're going to have existing services that you want to reuse, we want to be able to understand their behavioral components and use that to help in building other architectures. What we're trying to do, by analogy, is have a set of structures that can be used in a very flexible way, fitting around the way existing customers work, or in conjunction with other standards. For example, one of the working groups that Steve mentioned before is all about looking at standards like TOGAF and ArchiMate to see how testable architecture could fit in with those types of methodologies.
Similarly, from a development angle with a programming point of view, we need to make sure we're not constraining -- for example -- how developers work. We don't want to impose anything that would make them unhappy to use it with their methodology.
Steve: In one sense, the tools are really about capturing descriptions in what is ultimately a formal way, while making it not obvious that that's what we're really doing. If you can capture the descriptions in a formal way and you can capture the design of the solution in a formal way, then you actually have a chance of testing or measuring one against the other, in a similar way to how engineers might inspect an implementation with a micrometer. That's the approach we're taking. The methodologies that allow you to test one thing against another are really pluggable. That's very important because, while we take our own methodologies to customers, some of our customers suggest methodologies back to us.
What tends to happen in real life is that we work hard on framing the governance and methodology approaches that will be deployed in a large engagement. We try to blend the two so we can use what we have as best we can, and yet still provide all of the necessary governance aspects that the customer's methodology requires. One good example: if you're working in life sciences with a large pharmaceutical company, the methodologies are prescribed. So you can add to them, but you certainly can't take things away.
DZone: So Steve, Gary, this sounds like a very innovative approach. Are there any approaches out there that are similar to or that compete with Savara?
Gary: There are a lot of enterprise architecture tools and methodologies available. But the unique aspect of what we're doing is this testable aspect: the behind-the-scenes validation of the different artifacts. Part of the project is to look at how we can fit in with a lot of the other standards-based methodologies. But I don't think there's direct competition in terms of this testable aspect.
Steve: Most interestingly, if you look at TOGAF and at some of the common definitions of enterprise solution and architecture, they often quote an IEEE description saying that the architecture must be described in some formal way, in order to allow us to reason over it. And yet none of them actually say how to do that. Effectively, what we're doing is answering that question; we're providing that facility. The only company I know of that is doing anything similar to this--and I think they're announcing some stuff in November--is Microsoft. But what they're providing is some definition of testability or performance from the bottom up. So what you can do is test your code against the design, but that of course means you've already written the code. The problem is, quite often, that's too late.
DZone: So this is still very different from the whole model-driven architecture notion we were hearing about several years ago. It's more than just code generation from requirements-driven artifacts. Is that correct?
Steve: In a sense, yes, you're right. I'd like to think of it as UML and MDA on steroids. All of the same concepts that underpin model-driven architecture you will find in Savara. The real difference, as Gary has said, is this notion of testability and conformance, which is always missing. If you look at TOGAF and the other enterprise architecture tools, and indeed MDA, people layer governance over them, which is very people-intensive and based largely on manual inspection and review. This was simply designed by a bunch of software guys--myself, Gary, Mark, Nicole and several others--to try to automate some of the processes that we do every day.
Gary: Yeah, I think one of the other points, though, is that MDA seems to be very top-down. With the kind of technology that we're looking to build, the business can more easily identify conformance between the different artifacts at the different levels, and you could potentially start bottom-up with adopted services, understanding their behavior. As part of the work we're doing with academic partners, we should be able to effectively reverse engineer the choreography, and potentially from that reverse engineer the requirements. You can take a legacy system and actually reverse engineer your design and architecture artifacts.
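As a toy illustration of the reverse-engineering idea, the sketch below (hypothetical, and nothing like Savara's actual tooling, which targets full choreography reconstruction) mines pairwise ordering constraints from runtime message logs: orderings that hold in every observed trace are kept, while orderings that vary between runs suggest independent, possibly parallel interactions. It assumes each message appears at most once per trace.

```python
def before_pairs(trace):
    """All (a, b) pairs where message a occurs before message b in
    one trace. Assumes each message name is unique within the trace."""
    pos = {m: i for i, m in enumerate(trace)}
    return {(a, b) for a in pos for b in pos if pos[a] < pos[b]}

def inferred_orderings(traces):
    """Intersect the orderings observed in every trace: only constraints
    that always hold survive, a crude first cut at mining control flow."""
    result = None
    for t in traces:
        result = before_pairs(t) if result is None else result & before_pairs(t)
    return result

# Two logged runs of the same (hypothetical) system:
traces = [
    ["order", "confirm", "ship"],
    ["order", "ship", "confirm"],  # confirm/ship swap -> likely parallel
]
print(sorted(inferred_orderings(traces)))
# -> [('order', 'confirm'), ('order', 'ship')]
```

Here "order must precede both confirm and ship" survives, while no ordering between confirm and ship is inferred, hinting that they can run concurrently in the reconstructed choreography.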
DZone: Who should be interested in Savara primarily? Is it the business analyst or the enterprise architect?
Steve: I think it's a mix of people. The business analyst is interested because the business analyst is still expected to frame the requirements with the subject matter expert. The enterprise and technical architects, because they frame the solution. The developers, because they use the generated artifacts to develop against; even where code generation might not be appropriate, we rely on people to implement by hand from those artifacts. I also think CIOs would be interested, because one of the things this allows you to do is ask questions and get answers that today you simply can't get. Imagine the CIO of a large investment bank or a large program of work saying, "Hey look, there are a thousand requirements for this new system; how many have you modeled in your new solution?"
With Savara we'll be able to say, "Well, we've modeled 658," and we'll be able to defend that proposition because we can show computationally that the solution really does meet that number. That will be a brave new world for many CIOs, because suddenly this mitigates the risk they operate under. It's no longer statistical, with people holding their fingers in the air saying, "Well, I'm fifty percent done," or, as Jon Bentley put it in Programming Pearls, "eighty percent done"--and when you ask what's left to do, they say the other eighty percent.
This way it becomes, really there is a hundred percent and we can absolutely give you an accurate picture as to where we are. So I think it's all of those people.
DZone: Is Savara in a sort of 'incubation phase' right now?
Gary: Yeah, I think classifying it as an incubation phase is probably right. There is some support out there; for example, Cognizant have been using it to do their initial work with this testable architecture, based on early work in the Overlord project. But at this stage, because we want to build a community with a number of early adopters and large standards organizations, the current phase is really focused on the methodology. Cognizant has done a lot of work on the methodology from the business perspective, and within Red Hat the SOA development methodology has incorporated aspects of testable architecture as well, though it focuses more on the governance side.
So what we want to try and do is bring the different experiences and different aspects of the testable architecture approach together and we'll define a definitive methodology. This is the stage where we want to try and get as many interested organizations involved. And then once we've done more work on the methodology that will then help to define the requirements of the first main release of the tooling. So we're going to incorporate some of the existing components we already have. It would hopefully be more geared towards supporting the agreed upon methodology.
DZone: Is there currently a shared vision between both partners in terms of the main deliverable of this project?
Gary: I think the methodology is the first one, but we're also aiming to release initial tooling sometime later this year, and that will probably be Eclipse-based. For later next year--one of the reasons I'm on the finalization task force for BPMN 2 is that the organizations behind Savara feel BPMN 2 is probably going to be the future modeling language, and that it's going to be important for the methodology--it's likely the release will be BPMN 2 based. We've also widened the scope, from being Eclipse-based for the first release to probably being more web-tooling based.
DZone: Now how can individual organizations help shape and guide the project?
Gary: Well, it's an open source project, so at the end of the day, we would be interested for anyone to register and join. But because of the nature of the project, I think we'd also be looking for, as we said, the SIs and this large user/standards group to sort of join up and become partners. But in terms of what types of contributions and things we'd be looking for, initially, as I said, the methodology is the important part at the moment, but then as you start to roll out the tools, providing feedback on the capabilities, helping us define requirements of features, and even getting involved in developing capabilities would be good.
Steve: And just to emphasize that: I mean, while Cognizant is an SI, and we don't really develop product, we are actually developing some of those capabilities very much alongside the Red Hat people that are involved in Savara on a day-to-day basis. So we're putting our money where our mouth is. It's not just a case of us providing requirements, and reviewing methodologies, we're actually helping to define the methodologies. We're doing all those other things, too, but we're also developing capabilities for Savara. I would imagine that some SIs and some companies that would want to get involved, may want to do something similar, as well as developing their own plug-ins and add-ons that will be their differentiator, and rest assured, Cognizant's thinking in that way for sure.
DZone: Where can I go to learn more about the project?
Gary: There's a project page that's going to be set up at jboss.org/savara. But we're still in the process of loading up content there, so we'd encourage people to sort of keep revisiting, looking at the forums and registering questions they have.
Steve: And the Savara blog, definitely.
DZone: What are the future directions for the project?
Steve: Well, I think what we would expect is that we would have seen and delivered and been through the pain and the pleasure of delivering several solutions with some of our customers who are already using it, and that would yield further concrete benefits that support the empirical observations to date. And that, hopefully, will then deliver some case studies which we can stick on the Savara blog and the website. So, that's from my perspective. Certainly in a year's time I'd expect to see that. And in two years' time, I would expect the community using Savara to have reached pretty much the critical mass it needs to be truly successful.
Gary: I think, from my perspective, I would hope that we would have a number of SIs and [Inaudible 31:28] standards groups as partners to the project. I'm looking forward to delivering on the web tooling into next year. I think that's going to be good.
DZone: What's the long-term vision for Savara?
Steve: One of the joys of working with Savara--for pretty much everybody--is the way in which it has enabled us to work effectively with academics, to provide much more far-reaching benefits. Those benefits won't come quickly; academia works on a different time scale, though we do hurry them along sometimes. One of the wonderful potential visionary benefits: if we can reverse engineer an architecture, then the time it takes to model up the as-is of some customer will be significantly reduced. Then there are the additional benefits the academics are working on: given the description of an as-is, and given the description of the to-be in terms of some of its requirements, can we determine accurately, one, whether the to-be is optimal--is it the best to-be that you can do?--and two, the gap from the as-is to the to-be.
Currently today this is all done manually and it's prone to ambiguity and error. There may be some work going on -- certainly at Imperial College and Queen Mary College in London -- that will allow us potentially to do something like that.
And the idea that you could come up with the optimum to-be over a large program means that the transformations that you head towards, from the as-is to the to-be, become quite defined and refined. As opposed to the way it is now, which is a little bit like Columbus sailing off from Europe thinking that he might find something but never quite knowing where he's going, and he sort of stumbles into the United States.
Actually we know exactly where we're going. That would be fantastic. For me, that's the big prize.
The academic work that's underway was initially looking at one of the things in Savara: having a communication model of how all the services work together to deliver some sort of concrete business outcome. For the moment, forget about the machinery it's based on, the non-functional requirements, the SLAs and what have you. Half of the battle is understanding whether you have the maximum amount of parallelism in your system, because that's often the dominant cause of things going slow and throughput not being what it could be.
That's one of the metrics that we can actually optimize against. Then you can start thinking about layering in the others: how do we layer in the policy attachments for the rest of the SLAs, to ensure that when we do the testing, the model supports the SLA. And one of the things in Savara will be: given the non-functional requirements, and given the description of how the system should work, what should be the hardware and operating systems you run all these services on, if you're going to deliver to spec at the right level of performance?
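A rough sketch of the parallelism metric Steve describes, under the assumption that the interaction model can be reduced to a task dependency graph (all task names here are illustrative, not from Savara): tasks are grouped by the earliest step at which they could run, and the widest group bounds the parallelism the model permits.

```python
from collections import defaultdict

def parallelism_profile(deps):
    """deps maps each task to the tasks it must wait for (an acyclic
    graph). Returns tasks grouped by their earliest possible step;
    the widest group is the maximum parallelism the model allows."""
    memo = {}
    def level(t):
        # a task with no dependencies runs at step 0; otherwise one
        # step after its latest dependency
        if t not in memo:
            memo[t] = 1 + max((level(d) for d in deps.get(t, ())), default=-1)
        return memo[t]
    groups = defaultdict(list)
    for task in deps:
        groups[level(task)].append(task)
    return dict(groups)

# Hypothetical interaction model reduced to dependencies:
deps = {
    "order":   [],                   # nothing to wait for
    "confirm": ["order"],            # confirm after order
    "ship":    ["order"],            # ship can run alongside confirm
    "invoice": ["confirm", "ship"],  # invoice after both
}
profile = parallelism_profile(deps)
print(profile)
print(max(len(g) for g in profile.values()))  # max parallelism: 2
```

In this toy model, confirm and ship can proceed in parallel at step 1, so the model's maximum parallelism is 2; if the implementation serializes them, throughput is lower than the model permits, which is exactly the kind of gap this metric would flag.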
So all of those are things that Savara will be looking at in the longer term.
Gary: But also, I think, when you're defining an as-is to to-be model and you have an understanding of the behavioral requirements and how they will change, you have a better chance of controlling that transition. The last thing you want to do is deploy incompatible services into production environments. So having that better understanding helps you manage the rollout of new services and understand what the impacts might be. There's very practical value in looking at it this way.
DZone: Gary and Steve, on behalf of DZone I want to thank you very much for your time today, and for enlightening us on the Savara project. We look forward to learning more about this in the coming months.
Steve: Thanks very much.
Gary: Thank you.