
Quality Levels: the Hole in Software Methodologies

Introduction

About six years ago, one of the authors of this document began two different software projects almost simultaneously. One was a signal processing desktop application that needed to be extensible through plugins; the tool had to provide an API to plugins to facilitate their construction. This project would involve several developers. The second project's goal was to perform a sensitivity analysis of a three-dimensional model reconstruction of electron microscopy images. This required developing a piece of software to invoke the model (a program) and to analyze the results obtained using several tools (other existing programs).

Both projects were carried out successfully. The first one was developed in Java and is still maintained and extended today. It has begun to be used outside of the organization where it was created, and it is expected to still be in use ten years from now. The second one took the form of a Python script about three hundred lines long. It ran successfully once (it took about six days on a big machine) and gave appropriate results. At present, the author does not even retain a copy of this program.

The question that this paper seeks to answer is: does it make sense to use the same software development process for both projects? Obviously not. A project designed to be extended, which is expected to be used (and therefore maintained) for many years and which will involve several developers, is not the same scenario as a "project" (barely a script) that will run only once, that is developed by a single person, that will not be extended, and that will never need to be maintained. From the point of view of ROI[1]: what sense does it make to invest time in making that script more readable and maintainable if, once it runs successfully, its usefulness is over?

That script was the first program that I (Abraham) wrote in Python. I took many shortcuts and the code was horrible. When I got it to work, I automatically entered "refactoring mode" and started using my text editor to give better names to variables and to structure the code into functions. After a while, a question began to haunt me: why was I doing this? My developer instinct told me I should write readable and maintainable software. However, in reality I was wasting my time. It did not make any sense to invest more effort into that piece of software. All I had to do was run it and wait six days to get the results.

The elephant in the room of software development methodologies

Not all software we develop requires the same quality. Developing software that will run only once, and probably will never need to be changed except to correct bugs, is not the same as developing software that is expected to be used for years and that will be continually extended. Developing software that will be used in isolation is not the same as developing a piece of software that will be integrated with other applications, or a piece of software that exposes an API on top of which more software will be built. The amount of effort and resources required to develop a framework that will be publicly available is not the same amount needed to develop an Intranet web application, even though both applications may have exactly the same number of lines of code. The first development should have a higher quality and therefore requires more resources and more time.

What is stated above is a truism. Any experienced developer knows it. Why, then, do software development methodologies not take this into account? All software development methodologies, from agile to the more traditional ones, tend to define a process that is geared to achieving the highest quality possible. But we do not always want, or cannot always afford, the highest possible quality. Time and resources are always limited, and therefore they dictate that a public framework and a simple Intranet web application do not require the same quality and, therefore, should not use the same development process.

We argue that from the beginning of the development process, the quality level that makes sense to be reached should be explicitly defined, and that this level should influence the development process. We want to make perfectly clear that we are not looking for excuses to develop poor quality software. Perhaps in an ideal world all software developed would have the highest quality (often, this seems to be the goal that software development methodologies try to reach). But in the real world this is not the case, and software development methodologies should take this reality into account by defining different processes to achieve different levels of quality.

It might seem an elementary observation, but since we had this realization, we began to address software projects differently. Explicitly defining, at the early stages of development, what level of quality we wanted/needed to achieve and, based on that, what techniques and tools should be used, has changed the way we face software projects. In fact, before embarking on a software project, developers do this analysis in their minds, and on that basis they make a series of decisions that will guide the development. However, this process is currently very informal, and it is often undocumented.

We believe that this situation resembles the one refactoring was in 20 years ago: good developers refactored their software, but it was an informal process. We argue that the definition of the level of quality to be achieved in a software project should be formal: it should be done explicitly, it should be documented, and perhaps it should even be part of a business contract for the development of an application. Formalizing these tasks that we all do informally will enable sharing knowledge on how to pursue different quality levels in a project more effectively, in the same way that the concept of refactoring helped to create a common vocabulary to talk about and share knowledge about it, and facilitated the creation of a set of tools to support these tasks.

Good practices

The main idea of this paper is that the definition of the overall level of quality to be achieved in a software project must be one of the explicit steps in the analysis stage of the project and it should influence the software development process.

This is the most important message we want to convey. Now we shall reflect on various factors that impact the quality level required by a project. Later we will discuss a series of recommendations (which we found useful, but which everyone will have to adapt to their circumstances) to achieve different quality levels corresponding to different stereotypes of projects. As with almost any good practice in the software development world, these recommendations should be taken with caution and adapted to the context where they will be applied. These recommendations are only "good practices" that worked for the authors of this paper. Possibly others in the future will share their own best practices, and if this occurs, we are sure they will disagree totally or partially with ours. There will never be a consensus in this regard, just as there is currently no consensus on which software development methodology to use. We believe the details are not as important as the overall concept itself.

Factors that impact the quality level

We’ll distinguish two different types of factors: intrinsic and extrinsic. The former are related only to the nature of the project itself, while the latter are related to the particular environment where development will take place.

Intrinsic Factors

How much code will depend on our code?

The first intrinsic factor that influences the required quality level is: how much code will depend on this project? As a general rule, the more code will use (depend on) our project, the more quality it should have.

Suppose, for example, that the piece of code for which you wish to establish a quality level is a library that we expect to reuse internally within our company in several projects. If the library is buggy, it will impact all projects that use it, while a bug in any one project only affects that project; the impact on the organization of a failure in the library is therefore higher. The library is thus a critical component, and it makes sense to make a greater effort to achieve a higher quality level. As said, the more code depends on our project, the more critical it will be and therefore the more quality should be required.

We can create a hierarchy of project stereotypes according to how much code will use them, and therefore the quality level they require (from higher to lower):

1.  Core programming language Library / Framework.

2.  Library/framework to be distributed to institutions other than the one where it was developed.

3.  Library/framework to be used internally within an institution.

4.  Custom project whose code will not be reused in any way, but will be executed directly from UI interactions.

These are two very different scenarios: one in which the code runs only as a result of interactions with the user interface, and one in which the code runs as a result of calls from other code. In the first case, the number of possible execution paths in the code will be much lower. It is a limited number, and it may even be possible to test it completely, either manually or using tools such as FEST and Swinger.

For example, let’s say we have a class named XMLParser; to interact with it you have to specify (via method invocations) the path of the file to be loaded, then the encoding of the file, and finally you invoke a method that parses it. Let’s suppose that in the current XMLParser implementation, if we first specify the file encoding and then its path, the specified encoding is ignored and the default one is used instead. This has not been documented by the developer of the class; let’s even assume that the developer is not aware of this behavior. However, in a project where this class is only used to load a properties file, and where both actions are performed in the "right order" (the one in which it works properly), it could result in a perfectly functional product which completely satisfies our client. A different issue is that this undocumented and/or unknown behavior can be a time bomb in the future, when we have to make changes to the project. But if we had the certainty that it will not be necessary to make changes to the project, is it worth correcting/documenting the XMLParser behavior? A purist would say yes, because the current solution is not optimal. A pragmatic person, such as the authors of this document, would say that the current solution is acceptable, and that improving code that already does what it is expected to do, and that will never need to change (for business reasons), is a waste of time.

Let's suppose now that the XMLParser class is part of a library that our company sells to others, who use it to build their products. Developers at these customer companies could make the calls in the wrong order in their programs: first specifying the encoding, then the file path, and finally invoking the method that parses. Their programs will not work properly, and they have no way of knowing why unless they read the XMLParser source code. In this case the XMLParser behavior is not acceptable. It should have been identified by unit tests and should have been corrected or documented.

When a library is called from other code, which may be written by developers other than the library's authors, classes and methods may be used in ways the authors never imagined. Changing the order in which methods are invoked is just a small example. When code is executed only as a result of interactions in the user interface, we can consider the code "acceptable" if, after making every possible interaction in the user interface, it behaved as expected. But when the code is a library that will be used by other code, the potential execution paths are not limited to what a user interface allows; they are potentially infinite. In these cases, it is necessary to think carefully about what API our library is exposing, and we must try to prevent misuse of this API by using the mechanisms provided by programming languages for this purpose (for example, using the appropriate visibility modifiers for methods and classes, throwing exceptions ...). In those cases in which the limitations of our programming language do not allow us to prevent the user from making mistakes, we have to clearly document how to use the library correctly.
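
These defensive mechanisms can be sketched for the XMLParser scenario above. The class, method, and string names here are hypothetical (this is not the real class discussed, just a minimal illustration of failing fast instead of silently misbehaving):

```java
// Hypothetical sketch: an XML-parser-like API hardened against call-order
// misuse, using the mechanisms mentioned above (restricted visibility,
// exceptions with clear messages).
public final class XmlParserSketch {

    private String path;                // null until setPath() is called
    private String encoding = "UTF-8";  // documented default

    /** Remembers the file to parse; must be called before parse(). */
    public void setPath(String path) {
        if (path == null) {
            throw new IllegalArgumentException("path must not be null");
        }
        this.path = path;
    }

    /** May be called before or after setPath(); the value is never ignored. */
    public void setEncoding(String encoding) {
        this.encoding = encoding;
    }

    /** Fails fast with an explanatory message instead of silently misparsing. */
    public String parse() {
        if (path == null) {
            throw new IllegalStateException("call setPath() before parse()");
        }
        // Placeholder for the real parsing work.
        return "parsed " + path + " with encoding " + encoding;
    }
}
```

Because setEncoding() stores its value regardless of call order, the silent-ignore bug cannot occur; and a missing path produces an immediate exception the caller can diagnose without reading the library's source.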

Generally speaking, developing a custom project consists of implementing the functionality specified by the business requirements, and nothing else. Developing a library requires the same, plus the mental exercise of asking ourselves over and over: "How can a developer, by making calls to my API, make my code fail, and how do I prevent it?"

The more developers use the library, the "more weird things" can happen, and therefore the more careful we must be. Moreover, the more code is using our library, the more complicated it will be to make a change that affects the public API. If we change a public method in a custom project, the change only affects this project. If we make the same change in an internal library of our company, we may have to tell all our colleagues to modify their projects accordingly. Not ideal, but surely permissible. But if we make a similar change in a public library, we have to notify all our users, and they will probably take up arms: they will criticize our decision in discussion forums and on their blogs, and they will partially lose their confidence in us. If we do these things often, sooner or later they will stop using our library.

In special cases, like the Java core API, the cost of changes is even higher. If we commit a big blunder (e.g., java.util.Date), the error is "forever", at least as far as Java applications are concerned. Millions of developers will suffer the error, but it will not be feasible to modify the API and ask millions of developers around the world to change hundreds or thousands of millions of lines of code. Therefore, it makes sense to devote more resources to the design and development, maximizing the chances of getting it right the first time.
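
The java.util.Date blunder is easy to demonstrate: its constructors (deprecated since JDK 1.1, yet impossible to ever remove) count years from 1900 and months from 0, so the code reads nothing like the date it creates:

```java
import java.util.Date;

// Two well-known java.util.Date design mistakes, frozen into the platform
// forever: 1900-based years and 0-based months in the deprecated constructors.
@SuppressWarnings("deprecation")
public class DatePitfalls {
    public static void main(String[] args) {
        Date d = new Date(120, 11, 31);   // reads as "120/11/31"...
        System.out.println(d);            // ...but means December 31, 2020
        System.out.println(d.getYear());  // prints 120, not 2020
        System.out.println(d.getMonth()); // prints 11, not 12
    }
}
```

The API could not be fixed without breaking millions of programs; it could only be supplemented (Calendar, and much later java.time), which is exactly the "forever" cost described above.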

Code longevity

For how long will this code be used? At one extreme would be a script that we want to execute once to achieve a specific objective, such as the script in the three-dimensional reconstruction problem. Once this code has been executed successfully, we can throw it away. All we need is to know that it will work properly "here and now".

At the opposite extreme would be something like the Java core library. As we have already argued, longevity here is very long, at least as regards Java programs. If we make an error, we will carry it forever. That is why we must ensure that we get it right the first time.

Longevity affects code quality through several different mechanisms. One is functionality changes, whether adding, changing or deleting features. The longer your code lives, the more likely you will have to make changes to its functionality. If we know that a portion of code will probably change in the future, we should aim for a higher level of quality to facilitate these changes. Generally speaking, the number of changes increases proportionally with the project's service life. The more changes we anticipate, the more we should strive to create the code with a higher quality; in other words, it has to be easier to modify.

Another mechanism through which longevity affects the quality of the code is changes in the development team. When a project is going to be developed and maintained throughout its lifetime by the same team and we do not anticipate that there will be changes (either because the project will have a very short life, or because the team composition is very stable) we can afford shortcuts that we cannot afford when we anticipate that over the life of the project there will be significant changes in the team. If the team is always the same, there are things we can entrust to the team's collective memory; for example, why ​​certain design decisions were made, or how calls to a certain internal API should be performed. However, if this team is going to change, this kind of knowledge should be captured in some way in the documentation. And to prevent new developers from making mistakes, we need to have tests that verify that they are not violating the assumptions made in the design of the project.

A third mechanism is changes in the runtime environment. The longer the code lives, the more likely it is that it will run on different operating systems (or versions of them), different versions of the virtual machine or of the application server, with different versions of the database, after patches are installed... As every developer knows, it is much easier to write code that runs smoothly "on one machine" (especially if that machine is the developer's own) than code that must run smoothly on any machine.

If (due to the project's short lifespan) we know that the runtime environment will remain constant, it may be acceptable (again, we emphasize that this is not ideal) to take shortcuts that would not be acceptable if we knew that the execution environment will change. If our application will always run on the company's server, which runs Linux, and the application will only be used for one month for a campaign of the marketing department, does it make sense to check that the application works correctly when the operating system uses paths with "\"? We argue that it does not. It is enough to perform tests using the Linux path separator ("/").
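
For the cases where portability does matter, the JDK already abstracts the separator away, so portability costs almost nothing at writing time; the directory and file names below are illustrative:

```java
import java.io.File;
import java.nio.file.Path;
import java.nio.file.Paths;

// Building paths through the platform API instead of hardcoding "/" or "\"
// makes the same code correct on both Linux and Windows.
public class PortablePaths {

    // Joins the components with the separator of whatever OS we run on.
    public static Path reportFile(String baseDir) {
        return Paths.get(baseDir, "reports", "campaign.csv");
    }

    public static void main(String[] args) {
        System.out.println(File.separator);     // "/" on Linux, "\" on Windows
        System.out.println(reportFile("data")); // e.g. data/reports/campaign.csv
    }
}
```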

The number of environments in which a project will run also depends on how much code and how many developers are going to use our code. When building a piece of code that will run only on our company's server, we have a much more controlled environment than when building a library for which we do not know a priori in which environment (OS, database, server, ...) it will be executed. This links with the criterion of "How much code will use this project?". As a general rule, there is a direct relationship between the amount of code that will use our project and the heterogeneity of the environments where it will run. All this leads to the need to achieve a higher quality level.

Extrinsic factors

It would be great if the type of project were the only factor bearing upon the quality level that we set as a goal. But the real world is not ideal, and we have to deal with budget constraints, deadlines imposed by business needs, instability in the definition of requirements... All of these are factors external to the nature of the project, but they clearly influence the level of quality that we can achieve.

The development team

Although it sounds like a truism, we must say that the better the development team is, the higher the quality we can try to achieve. In this sense, some variables that influence this are: the level of knowledge and maturity of the team, whether it is a cohesive team or a team with internal tensions, the interest of its members in the project, and their desire to improve as developers.

If we have a mature team with a high level of knowledge, cohesive and motivated, we can aim for the moon. If we are not working in these ideal conditions, we'll have to make compromises to accommodate the goals to the reality of what the team can achieve.

A development environment that uses Jenkins for continuous integration; PMD, FindBugs and Checkstyle for static code analysis; Google Guice to inject dependencies; Mockito to create mocks for tests; JUnit for unit testing and Selenium for integration tests; Git as version control system... perhaps sounds like paradise to some readers. But if the development team is composed of junior staff, and none of them have worked with any of these tools, trying to introduce all these technologies at once, in addition to developing the project, is not an optimal solution: we have to prioritize. Perhaps the most important thing for this project is to have tests that will help us to maintain it in the future. We may have to forget about continuous integration, and instead of Git we'll use CVS, which (let's say) is already known by all team members.

Stability of requirements

If requirements are stable, it is more likely that the code we are writing is the one that will be in production and the one that will be maintained in the future; therefore a higher level of quality will be required. But if the requirements can change easily, most probably the code we are developing will not be deployed, and therefore it does not make a lot of sense to create it with a high level of quality.

Let’s suppose that 3,000 lines of code that provided certain functionality are finally thrown into the trashcan because, after seeing the initial prototypes, the customer has decided that this functionality is not needed anymore. It was a waste of time to create them, but much more time would have been wasted if, in addition to the 3,000 lines of code, we had written another 6,000 lines of unit tests that now also have to be thrown away.

If the requirements are not stable, it does not make any sense to invest extra resources in obtaining high quality code; at least not until the requirements are reasonably stable. For example, from our point of view, writing comprehensive automated tests in a project does not make sense until the requirements are clear and relatively stable. There are projects where this happens from the beginning; there are others where it does not.

One of the authors, Abraham Otero, is a biomedical engineering researcher. In the research field, when you have an idea to solve a problem, it is not known in advance whether it will work or not. Many times, several ideas need to be tested before finding the one that solves the problem in an acceptable way; this is the reason why it is called "research". Long ago, I (Abraham) wanted to try the TDD (Test Driven Development) paradigm in my scientific work; in a few weeks it became clear that it was not feasible: it is painful enough to throw away lines of code you have written over several days, or even weeks, because the idea does not work; having to also throw away as many lines of testing code, or even more, is just too much wasted effort. I'm not saying that tests are not useful, or that they should not be used. Among other benefits, automated tests are very useful for identifying regressions in code. But you have to evaluate whether it is worth writing them after an idea has been coded and you know that it really solves the problem you are dealing with. From my point of view, TDD does not fit in research (at least in what I do), nor in projects where requirements are not well defined from the beginning.

Deadlines

Sometimes business requirements impose shortcuts on software development projects. Taking shortcuts means accepting that we will create a final result with lower quality than we would like to have. This happens in the real world, and you cannot do anything about it. What should be done, as many authors have written before us, is to try to repay the "technical debt"[2] that we have acquired.

Taking these shortcuts means that we "borrowed" time from the future, and in the future we will have to "return" this time by fixing the shortcuts. That is, in the future we will have to work to raise the quality level of the project up to the level it should ideally have had from the beginning, but which we could not reach because of time constraints. We'll not go deeper into this issue, as many authors have already written about the problem of technical debt.

Budget

Contrary to the belief of some technicians and many managers, creating high quality software is not more expensive; in fact, it usually lowers the total cost of a project over its life, because:

●  The technical debt is minimized, and we know the size and scope of the existing one, which is crucial in software development (think about the consequences of the opposite case)

●  It decreases implementation times

●  It decreases maintenance costs

●  The application is extensible and therefore increases its longevity

●  Knowledge transfer is much more fluid

●  It increases general longevity of the project

●  It reduces Total Cost of Ownership[3]

However, in all projects (at least all that I, Francisco, have encountered), we have a lower budget than we would like; i.e., we have fewer people than we need and less time than we asked for. This is why it is crucial to make good use of the available resources (we'll not go into how to manage teams, because it is outside the scope of this document).

In our opinion, the amount of resources (time and money) available is inversely proportional to the effort needed in Initial Planning, Requirements Engineering and Change Management:

●  A good Change Management is always important because, even for small projects, there will always be changes. This is especially important when the project is external or outsourced.

●  In an external project (when the client is not our company) it is crucial to make a good Requirements Engineering.

●  The tighter the resources are, the greater must be the effort invested in initial planning.

Speaking bluntly: the customer will always try to introduce all the changes that come to his mind (both improvements and extensions) for free, and therefore he will argue that "this" (whatever he is asking for) was part of the project's initial specifications. Therefore, it is crucial that those specifications are clear: Requirements Engineering has to be done flawlessly; otherwise we will have very serious and unpleasant discussions with the customer. Even supposing we have done it properly (we can show the customer that what he is asking for is not part of the project), we still have to deal with two other difficult tasks: evaluating the impact of the change, and negotiating both with the customer and with the team. The customer will think the changes have been estimated at too much cost and time, while the development team will think they have been given too little time (and therefore cost).

That is why when resources are scarce we need high quality Change Management and Requirements Engineering.

Anything else that influences the quality level

The items above are not an exhaustive list of external factors, but a compilation of the most common factors that influence software development. There may be others, like how critical the project is for the future of the company, and many things that the authors of this document cannot know about the particular environment where you work. For example, if the most senior developer of a team of three is going through a tough divorce, and another one broke his leg and will be on sick leave for two months, this situation will clearly affect the project. It could also be that the manager refuses to replace the ill developer, and sometimes it would not even make sense to do so: that team is the one in the whole company that always takes care of this type of project, and adding someone new is not going to help when the deadline is just two months away; quite the contrary, it would delay the delivery date. That broken leg and that divorce are likely to be factors to consider when determining the quality that can be achieved in the project. Every team has to do the exercise of analyzing the characteristics of their concrete environment and take all these factors into account when determining the level of quality that they can reach in their projects.

Rules of thumb

According to Wikipedia, a rule of thumb is: “a principle with broad application that is not intended to be strictly accurate or reliable for every situation. It is an easily learned and easily applied procedure for approximately calculating or recalling some value, or for making some determination. Compare this to heuristic, a similar concept used in mathematical discourse, psychology, and computer science, particularly in algorithm design”[4].

Below we provide some “rules of thumb” regarding the levels of quality required by different processes involved in the production of software, for different stereotypes of projects. It is true that we will not discuss all processes; it is also true that some processes have been grouped. And probably the classification of the projects is not the most appropriate one; but after all, these are just “rules of thumb”.

We did not make a distinction between internal developments (those made for the company you work for) and external ones (those made for a customer of the company you work for), because we understand that the former are too specific to each case; therefore, we will focus on the latter. In any case, we can provide a rule of thumb also for the former: generally speaking, internal developments require a quality level that is one or two stars lower for all processes (see tables below).

Finally, we would like to clarify that what is said below is only the result of the experience of the authors. Thus, surely, the reader will disagree with some (or many) of the following recommendations.

Framework or library

By the terms "framework" or "library" we refer to any piece of software that will be used by other developers to build new pieces of software or final products on top of it. Therefore, it is code that will be used by a possibly high number of developers to build applications, and which will be executed on heterogeneous and diverse platforms. Moreover, the project in question will likely have to be maintained for a long period of time.

Requirements acquisition (quality level: 3 stars)

●  Define clearly what is within the scope that the framework is intended to cover.

●  Define even more clearly what is not within the scope that the framework is intended to cover.

●  In contrast to the "scope", the functionality must be defined generically, as it will be refined during product implementation.

Architecture and Design (quality level: 5 stars)

●  Choose staff with previous experience solving problems in the area addressed by the framework.

●  Sample applications testing the framework and its APIs should be built before a design decision can be considered valid.

External documentation (quality level: 5 stars)

●  There must be a manual/tutorial of the framework.

●  All publicly accessible parts of the code (classes, methods, interfaces ...) must be documented using Javadoc in the case of Java, or a similar tool in other languages.

●  Good practices to use the framework should be provided.

●  It is desirable to have sample code showing how to perform common tasks with the framework.
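
As an illustration of the level of Javadoc expected on a framework's public API, here is a hypothetical utility method (class name, method name, and behavior are invented for the example):

```java
/**
 * Utility illustrating the Javadoc coverage expected on a framework's
 * public API. The class and its behavior are hypothetical.
 */
public final class VersionUtil {

    private VersionUtil() { } // non-instantiable utility class

    /**
     * Compares two dotted version strings (e.g. {@code "1.2.10"} vs {@code "1.3"})
     * component by component; missing components are treated as zero.
     *
     * @param a the first version; must not be {@code null}
     * @param b the second version; must not be {@code null}
     * @return a negative number, zero, or a positive number if {@code a} is
     *         lower than, equal to, or higher than {@code b}
     * @throws NumberFormatException if a component is not a decimal integer
     */
    public static int compare(String a, String b) {
        String[] pa = a.split("\\.");
        String[] pb = b.split("\\.");
        int n = Math.max(pa.length, pb.length);
        for (int i = 0; i < n; i++) {
            int va = i < pa.length ? Integer.parseInt(pa[i]) : 0;
            int vb = i < pb.length ? Integer.parseInt(pb[i]) : 0;
            if (va != vb) {
                return Integer.compare(va, vb);
            }
        }
        return 0;
    }
}
```

Note that the comment documents preconditions and failure modes, not just the happy path: that is precisely the information a developer calling the API "in the wrong order" will need.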

Internal documentation (quality level: 5 stars)

●  Internal design decisions should be well documented.

●  The source code should be well documented.

Source code (quality level: 5 stars)

●  Coding conventions should be defined and standardized throughout the code base.

●  It is desirable to use an automated tool like Checkstyle to identify violations of these conventions.

●  It is essential to use a version control system and to define a clear policy on how often commits should be performed, whether or not it is acceptable to commit code that does not pass all the tests, the comments to be included when committing, etc.

●  The versioning will be twofold: evolutionary and corrective. The first involves major jumps in the version number and the second minor jumps. Both tasks must be performed simultaneously.

●  Before making major changes (to the application architecture, a technology refresh, deep refactoring, etc.) a branch should be created to avoid affecting current development. It will be merged back if the change proves appropriate.

Testing

5 stars

●  Use static code analysis tools such as FindBugs.

●  Develop automated unit tests reaching high coverage (> 90%).

●  It is desirable to use integration tests, using a tool such as, for example, Selenium.

●  Especially if there are no integration tests, an application exercising all the functionality of the framework should be developed and maintained throughout the life of the project; the correct operation of this application gives some assurance that we are not introducing regressions.

●  The tests must be run on different platforms (multiple operating systems, multiple databases, multiple application servers, etc.).
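The unit-testing bullet above can be sketched as follows. The `Stats` helper is a hypothetical stand-in for framework code, and the test is written with plain runtime checks so it is self-contained; in a real project JUnit or a similar framework would be used:

```java
// Hypothetical framework utility under test.
class Stats {

    private Stats() { }

    /** Returns the arithmetic mean of the given values. */
    static double mean(double[] values) {
        if (values == null || values.length == 0) {
            throw new IllegalArgumentException("values must be non-empty");
        }
        double sum = 0;
        for (double v : values) {
            sum += v;
        }
        return sum / values.length;
    }
}

class StatsTest {
    public static void main(String[] args) {
        // Typical case.
        if (Stats.mean(new double[]{1, 2, 3}) != 2.0) throw new AssertionError();
        // Single element.
        if (Stats.mean(new double[]{5}) != 5.0) throw new AssertionError();
        // Error case: empty input must be rejected.
        boolean rejected = false;
        try {
            Stats.mean(new double[0]);
        } catch (IllegalArgumentException expected) {
            rejected = true;
        }
        if (!rejected) throw new AssertionError();
    }
}
```

High coverage (> 90%) means tests like these must exercise the error paths as well as the typical ones, as the last check does.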

Developing a framework to deal with a set of problems requires extensive experience solving precisely those kinds of problems. It is therefore crucial to assemble a team with that experience, so that an appropriate design for the framework can be abstracted from it.

Requirements acquisition is usually not crucial in these projects, because new features are normally added in future versions. However, we must be careful that the implementation of new requirements does not break backwards compatibility.

Errors in the analysis and design will be carried by the project throughout its life, and they hamper the inclusion of improvements and new functionality in later versions. This is why this process is crucial.

Given that the project will be used by other developers, it has to be very well documented and rich in examples to facilitate its use. Since the code will be used for a long time, it must be readable and well structured, and its internal documentation must be very good. The project's longevity also makes it more likely that the composition of the development team will change over time, which further increases the need for good documentation, code clarity, and a good battery of tests.

Mission Critical Application

Some of the typical applications that may fall into this category are: application servers, web servers, medical and industrial device control, applications that manage health data, banking applications, etc.

Process

Quality Level

Rules of thumb

Requirements acquisition

5 stars

●  It will be comprehensive and will use one of the existing methodologies specialized in the area.

●  It should not only reflect the functional requirements but also (if applicable) the service level: maximum response times for each process, minimum number of transactions per unit of time, etc.

●  It is useful to document the service levels on cards. Besides the desired levels, the levels measured after each execution of the performance tests should be recorded on them.

Architecture and Design

3 stars

●  Meeting the requirements is more important than the quality of the design. The design is subordinate to the requirements, and we may violate some good practices if, for example, doing so ensures that the application processes the information within the required time.

External documentation

5 stars

●  The external documentation tends to be little more than an installation and/or operations manual, but it must be clear and complete, so that anyone outside the development team can install (if necessary) and operate the application.

Internal documentation

5 stars

●  Internal design decisions should be well documented.

●  The source code should be well documented.

●  Given that the design and code sometimes do not follow best practices, it is imperative that the internal documentation counteract these deficiencies by being flawless and by explaining the compromises made to achieve the required service levels.

Source code

4 stars

●  Coding conventions should be defined and standardized throughout the code base.

●  It is desirable to use an automated tool like Checkstyle to identify violations of these conventions.

●  It is essential to use a version control system and define a clear policy on how often commits should be performed, whether or not it is acceptable to commit code that does not pass all the tests, comments to be included when committing, etc.

●  The code should be as clear and readable as possible, although in these applications it is often necessary to compromise code clarity to boost efficiency.

●  Before making major changes (to the application architecture, a technology refresh, deep refactoring, etc.) a branch should be created to avoid affecting current development. It will be merged back if the change proves appropriate.
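The trade-off between clarity and efficiency mentioned above can be illustrated with a small example of ours (not taken from any particular application): a micro-optimized check whose intent is preserved by a comment, exactly the kind of compensating documentation the internal-documentation process demands:

```java
class FastChecks {

    /**
     * Returns true if n is a positive power of two.
     * Optimization note: instead of a loop or logarithms, we exploit the fact
     * that a power of two has exactly one bit set, so n & (n - 1), which
     * clears the lowest set bit, must yield 0.
     */
    static boolean isPowerOfTwo(int n) {
        return n > 0 && (n & (n - 1)) == 0;
    }
}
```

Without the comment, the bitwise expression would force every future maintainer to rediscover the reasoning; with it, the efficiency gain costs nothing in maintainability.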

Testing

5 stars

●  Use static code analysis tools such as FindBugs.

●  Develop automated unit tests reaching high coverage (> 90%).

●  It is desirable to use integration tests, using a tool such as, for example, Selenium.

●  The tests will verify not only the functionality but also the performance requirements related to the service level (for example, through stress tests).

●  The tests will be run on all execution environments in which the application will be used in production.
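A minimal sketch of the service-level verification described above: it times a hypothetical operation and fails if an agreed maximum response time is exceeded. The `processBatch` operation and the 200 ms threshold are illustrative assumptions, not from any real system:

```java
class ServiceLevelCheck {

    /** Hypothetical operation whose response time is covered by a service level. */
    static long processBatch(int n) {
        long acc = 0;
        for (int i = 0; i < n; i++) {
            acc += i;
        }
        return acc;
    }

    /** Returns true if processing n items completes within maxMillis. */
    static boolean meetsServiceLevel(int n, long maxMillis) {
        long start = System.nanoTime();
        processBatch(n);
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
        return elapsedMillis <= maxMillis;
    }

    public static void main(String[] args) {
        // Illustrative service level: the batch must be processed within 200 ms.
        if (!meetsServiceLevel(1_000_000, 200)) {
            throw new AssertionError("service level violated");
        }
    }
}
```

Checks like this one are what make it possible to record, on the service-level cards, the levels actually measured after each run of the performance tests.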

Because they handle sensitive information, and/or because of their nature as technological infrastructure, these applications are very delicate. They require a high level of quality in all their aspects: they should not fail, but above all, they must not process data incorrectly. The only parts of the system where we may relax these criteria are the user interface, which is usually neither flashy nor "state of the art", and the external documentation (the documentation delivered to the user).

Meeting the requirements (response time, number of transactions per unit of time, etc.) is imperative; therefore, following good practices is less important than meeting them. Any trick, shortcut, or bad practice is acceptable if it achieves the required performance. This does not mean that you should not try to achieve the highest quality; it simply means that if following good practices does not yield the desired results, you must skip them. However, you should always try to mitigate such malpractice through good internal documentation.

Standard management application

Typical applications that we consider fall into this category are: applications oriented towards information management, B2B (business-to-business), B2C (business-to-consumer), etc.

Process

Quality Level

Rules of thumb

Requirements acquisition

4 stars

●  As much as we may strive to get it right from the beginning, in these applications new functionality is always added along the way. This does not mean that we should not approach this process with a high level of quality, but we must be prepared for new requirements.

Architecture and Design

4 stars

●  These applications are usually quite large, and therefore good analysis and design will allow code to be written more easily and quickly.

●  Although the architecture does not usually undergo many changes, the design itself does, so it is desirable to have mechanisms (and tools) that facilitate the management of change in this area.

External documentation

4 stars

●  The external documentation tends to be little more than a deployment manual and an operations manual; the clearer and more concise, the better.

Internal documentation

3 stars

●  The main purpose of this documentation is to facilitate ongoing maintenance.

●  Internal design decisions should be well documented.

●  The source code should be reasonably documented.

Source code

4 stars

●  Coding conventions should be defined and standardized throughout the code base.

●  It is desirable to use an automated tool like Checkstyle to identify violations of these conventions.

●  It is desirable to use a version control system.

Testing

4 stars

●  It is desirable to use static code analysis tools such as FindBugs.

●  It is desirable to develop automated unit tests reaching moderate coverage (> 70%).

●  It is desirable to use integration tests, using a tool such as, for example, Selenium.

●  The tests must be run on different platforms (provided that the application will run on different platforms).

The level of quality required by these applications is in direct relation to the expected longevity of the application and the number of different execution environments in which it will have to run. A management application to be sold as a product to multiple companies, and around which we expect to do business for many years, should at least meet the quality standards set out in the table above. For a management application that we do not expect to have a long life, and of which there will only be a single installation, we may cut one or two stars from the quality level of each of the processes listed in the table above.

Informative applications

Typical applications that we consider fall into this category include corporate websites, bulletin boards, online magazines, wikis, etc.

Process

Quality Level

Rules of thumb

Requirements acquisition

2 stars

●  In these applications customers tend to adapt their requirements to the functionality provided by the tool (usually a CMS or similar) which is going to be used. This tends to simplify the requirements acquisition.

Architecture and Design

1 star

●  Since these applications typically rely heavily on some framework/tool (CMS or similar), it is usually not necessary to write much code; thus the analysis and design work is minimal or nonexistent.

External documentation

1 star

●  Most often the external documentation is provided by the base tool (CMS or similar) used.

Internal documentation

2 stars

●  The main purpose of this documentation is to facilitate ongoing maintenance.

●  The source code should be reasonably documented.

Source code

3 stars

●  It is important to follow the best practices suggested by the manufacturer of the product used (CMS or similar).

●  It is desirable to define and standardize coding conventions throughout the code base.

●  It is desirable to use a version control system.

Testing

1 star

●  It is desirable to use static code analysis tools such as FindBugs.

●  It is desirable to develop automated unit tests reaching moderate coverage (> 70%). Although it is not optimal, this type of application can often be reasonably tested manually through the user interface.

In these applications comparatively little code is written. Much of the work is related to system configuration and to transferring information to and from other systems. On the other hand, the functionality of these applications does not usually present much variability. This has fostered the creation of a number of products (CMSs and the like) that greatly facilitate their creation and ongoing maintenance, covering "out of the box", in most cases, 90% of the customer's needs.

Disposable application

Typical applications we consider fall into this category are proofs of concept, and applications for data collection or data analysis when these processes will not be repeated in the future, etc.

Process

Quality Level

Rules of thumb

Requirements acquisition

1 star

●  The requirements are usually quite simple; quite often they are not worth defining formally, and keeping them in mind is enough.

Architecture and Design

Proof of concept: 0 stars

Other cases: 1 star

●  If we are going to do a proof of concept, it is best not to have any preconceptions; the freer we are, the better the proof of concept will be.

●  If it is anything else, it is advisable to spend a couple of hours thinking about how to structure the code, which third-party libraries will be needed, etc. But we will do this informally, without analysis or design tools; we may simply take some notes and perform some Internet searches.

External documentation

1 star

●  Only if we anticipate needing to run this code again in the future should we generate the minimum documentation required to execute it.

Internal documentation

0 stars

●  It makes no sense.

Source code

1 star

●  It is sufficient that we are able to understand what we have written for the duration of the development of the application.

Testing

0 stars

●  It makes no sense.

Sometimes, especially when the purpose of the application is not to be a proof of concept but to import, export, extract, or analyze data, "anything goes" in these developments as long as the program executes correctly once; after that, the life of the program is over.

Conclusions

The authors assume that most readers of this document will probably not agree with many of the details presented here, especially with most of the rules of thumb in section 3. But we hope that this document can at least make explicit something that was surely already in the reader's mind, at least implicitly: environmental constraints, limited resources, and the nature of the project all influence the level of quality that it is possible, and that it makes sense, to try to reach. Moreover, to achieve different quality levels, it makes sense to use different development processes. If we have managed to convey this, we are satisfied.

The authors

Abraham Otero Quintana is Associate Professor of Computer Science at the University of San Pablo CEU. He is a founding member of the javaHispano non-profit association (2001) and has been its president since 2008. In this time frame he has participated in the organization of several conferences on Java and software development.
Francisco Morero Peyrona works at Oracle as "EMEA Java Community Leader" and helps customers in the areas of "Software Quality Assurance" and "Software Architecture". Until Oracle's acquisition of Sun, he worked in the same fields in the positions of "Java Ambassador" and "Awarded Sun Engineer". He is also a member of the javaHispano Association Management Board and leads the publications department at javaHispano.

[1] Regarding this concept, please refer to: http://en.wikipedia.org/wiki/Return_on_Investment

[2] Regarding this concept, please refer to: http://en.wikipedia.org/wiki/Technical_debt

[2] Sidenote: the authors of this document also handle the opposite concept, "Technological Credit", which refers to the extra and (sometimes) unnecessary work that is done in anticipation, thinking that it may be useful in the future.

[3] Regarding this concept, please refer to: http://en.wikipedia.org/wiki/Total_cost_of_ownership
