# We Can't Seem to Escape the Problem of Complexity in Software Development

### We have A LOT of complexity to contend with when getting from point A to point B in most any software project.


A typical project in the IT industry often looks like a giant roller coaster. At first, it seems that everything is fine, and superhero programmers will find a solution for any kind of problem.

But a little while later, we learn that even good ideas can fail, and we have to keep building the project, this time propped up by workarounds.

In the end, all of the project's developers no longer even dream about a normal solution and just want to assemble something that will work.

This ridiculous story lasts until a revolution happens: the current version of the project is abandoned, and everything starts from scratch. But alas, no one can guarantee that the second attempt will not follow the first one's scenario.

With most projects, this cycle repeats practically the same way -- at first there is an ambitious plan, then something unexpected happens, and then all decisions are thrown into the trash.

This doesn't look quite so scary when it comes to a small project. But sometimes we just have to admit, no matter how much it pains us, that it's better to rewrite a multi-million dollar project from scratch than to keep it perpetually on life support, fixing the bugs that will inevitably recur.

Now, we all know that IT specialists are anything but lazy and as smart as the day is long, but why are projects still failing?

### The problem of the hundredth brick

The problem of complexity seems to be an unavoidable issue in the software industry. As Grady Booch explains in his book *Object-Oriented Analysis and Design with Applications*:

"Alas, this complexity we speak of seems to be an essential property of all large software systems. By essential we mean that we may master this complexity, but we can never make it go away."

We could also explain it like this: The amount of resources, connections, and components that are required to solve a task grows non-linearly as the size of the project increases.

To dive a bit deeper into this phenomenon, let's imagine the work of a bricklayer and a programmer.

If a bricklayer builds a wall, we can expect that he will place the first brick in roughly two minutes, and the second one in about the same amount of time. But when he gets to the hundredth brick, he will place it much faster because he has developed the skill over time. By the thousandth brick, he may very well be a virtuoso and will do everything at lightning speed and with his eyes closed.

And what about the programmer? Well, this is a completely different scenario.

The programmer places his first virtual brick in two minutes, the second in the same, but the hundredth brick takes somewhere in the neighborhood of an hour, while the thousandth brick takes the better part of a week.

But why? The problem of complexity strikes again: the project grows, needs change, and the necessary effort, time, and labor costs grow along with it.

While we may have started the project with two other people in a garage somewhere, fast forward a couple of years, and thousands of people could be involved in it.

### Types of Complexity Problems in Software Development

Let's now dive even deeper and explore four separate manifestations of this issue: algorithmic complexity, development complexity, information complexity, and testing complexity.

### Algorithmic complexity

Also called computational complexity, algorithmic complexity is a function that maps the size of an algorithm's input data to the amount of work the algorithm performs. That work is usually measured in abstract units of time and space, known as "computational resources."

But how does algorithmic complexity affect development? First, we have a variety of data structures and algorithms. Stacks, deques, trees, B-trees, and hash tables are all different ways of representing information. The developer chooses a structure depending on which operations will be used most often, so he must understand these data structures thoroughly.
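As a toy illustration (mine, not from the article), here is a sketch in Python of how the dominant operation drives the choice of structure: a membership test is linear-time in a list but roughly constant-time in a hash-based set.

```python
import timeit

# Build the same data as a list and as a hash-based set.
n = 100_000
as_list = list(range(n))
as_set = set(as_list)

# Worst-case membership test: the element sits at the very end of the list.
list_time = timeit.timeit(lambda: (n - 1) in as_list, number=100)
set_time = timeit.timeit(lambda: (n - 1) in as_set, number=100)

# The set lookup is orders of magnitude faster. If lookups dominate,
# the hash table is the right structure, despite its extra memory.
print(f"list membership: {list_time:.4f}s, set membership: {set_time:.6f}s")
```

If, instead, the dominant operation were "iterate in insertion order" or "index by position," the list would win; the point is that there is no universally best structure, only a best structure for a given mix of operations.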

Another manifestation of algorithmic complexity is the 'P versus NP' problem, a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified (technically, verified in polynomial time) can also be solved quickly (again, in polynomial time). It has remained among the central open problems of the theory of algorithms for decades.

In simple terms, the question of whether P = NP comes down to this: if a positive answer to a question can be checked quickly, can that answer also be found quickly? In other words, is verifying a solution really no easier than finding one? It is believed that if humankind can answer "yes," then, in theory, many hard problems could be solved much faster than they are today.
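To make the verify-versus-find asymmetry concrete, here is a small sketch (my illustration, not from the article) using the NP-complete subset-sum problem: checking a proposed subset takes linear time, while the naive search tries exponentially many subsets.

```python
from itertools import combinations

def verify(numbers, target, subset):
    """Checking a proposed solution (a 'certificate') is fast: linear time."""
    return all(x in numbers for x in subset) and sum(subset) == target

def solve(numbers, target):
    """Finding a solution by brute force tries up to 2**n subsets."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return list(subset)
    return None

numbers = [3, 34, 4, 12, 5, 2]
certificate = solve(numbers, 9)          # slow: exponential search
print(certificate, verify(numbers, 9, certificate))  # fast: polynomial check
```

Nobody knows whether a polynomial-time `solve` exists for problems like this; that is exactly what P versus NP asks.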

### Development complexity

This problem is described as follows: the speed of development decreases as the project grows.

How does this problem affect our work? First, as we have already mentioned, it makes timelines impossible to predict correctly. Second, technology changes rapidly. Third, the complexity of development is why projects need more and more programmers, each of whom accomplishes less and less.

What do companies do in this situation? Luckily, a number of companies have been able to use this aspect of complexity for their own benefit, building a business on the complexity of development. They do not make money by breaking deadlines, but they also aren't trying to produce the perfect product. These companies constantly release new versions of their software and sell them to users.

Agile methodologies come in handy here, since they work on the principle that if chaos cannot be prevented, you should channel it. The trick is to find the upside in all the chaos and turn *that* into a benefit for the client.

### Information complexity

Also called Kolmogorov complexity, this concept was introduced in 1965 by the famous Soviet mathematician Andrey Kolmogorov. Briefly, it can be described as follows: the information complexity of a task is determined by the number of characters that need to be written down to solve it.

In other words, the complexity of a problem equals the number of characters describing its solution: in practice, the size of the code programmers need to write to solve it. Of course, everyone wants to use notations and techniques that reduce the number of characters, so the hope was that this complexity could be reduced, too.
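There is a handy, if crude, way to feel this idea in code (my illustration, not from the article): a general-purpose compressor gives an upper bound on an object's description length. A highly regular string compresses to almost nothing, while random bytes barely compress at all; the true minimal description, the Kolmogorov complexity itself, cannot be computed.

```python
import os
import zlib

repetitive = b"ab" * 500        # 1,000 bytes with an obvious short description
random_ish = os.urandom(1000)   # 1,000 bytes with (almost surely) no pattern

# Compressed size is only an upper bound on description length:
# the regular string shrinks dramatically, the random one does not.
print(len(zlib.compress(repetitive)))   # a couple dozen bytes
print(len(zlib.compress(random_ish)))   # roughly 1,000 bytes or more
```

No compressor, however clever, can tell us the *shortest possible* description; it can only ever demonstrate that one of at most a certain length exists.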

Unfortunately, it has been proven that determining informational complexity is an algorithmically unsolvable problem. No algorithm can tell us the minimum number of characters necessary to solve a particular task.

So, where does this leave us? First, it is impossible to estimate how large our applications will become. Second, more and more new programming languages appear, with constructs designed to reduce the number of characters in code and make it more compact. No one can say when this process will stop, because no one knows the minimal number of characters needed to describe any given operation.

Third, adherents of various programming languages regularly provoke holy wars, and we can watch the Blub Paradox, described by the American programmer Paul Graham, play out. The essence of the paradox is that a programmer who knows a certain language (in Graham's example, the fictional language Blub) thinks in it and expresses the solution to any task in its terms, while the more powerful tools of another language seem useless to him, because he does not even realize how to apply them.

At the same time, the limitations of Blub cannot become a stimulus for learning a more powerful language, because to recognize those limitations, the programmer would already have to know a more powerful language.

By the way, the larger the subject area (for which the program is written), the larger the task, and the greater its information complexity will be. And the process of creating a more compact code still does not keep up with the needs of the business.

### The complexity of testing

The complexity of testing a program also grows nonlinearly, and very quickly, as the input data grows.

There is a book called *The Art of Software Testing* that gives an example of a program of barely a hundred lines with 10^10 possible execution paths. Even if checking each path took only five seconds, testing them all would keep a tester busy for over fifteen hundred years.
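The explosion is easy to reproduce with a toy model (my own, not the book's): suppose each pass through a loop contains a handful of independent branches. The number of distinct execution paths, and hence the exhaustive-testing time, grows exponentially in both the branch count and the iteration count.

```python
def path_count(branches: int, iterations: int) -> int:
    """Toy model: a loop whose body has `branches` independent if-statements,
    executed anywhere from 1 to `iterations` times. Each pass multiplies
    the number of distinct paths by 2**branches."""
    per_pass = 2 ** branches
    return sum(per_pass ** i for i in range(1, iterations + 1))

SECONDS_PER_CHECK = 5
paths = path_count(branches=5, iterations=10)
years = paths * SECONDS_PER_CHECK / (60 * 60 * 24 * 365)
print(f"{paths:.2e} paths, ~{years:.2e} years to test exhaustively")
```

With just 5 branches and up to 10 loop passes, the path count already exceeds 10^15, which is why exhaustive path testing is abandoned in favor of sampling strategies such as coverage criteria.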

In fact, the industry's demand for software-testing resources is also very high. And we have to accept that any program may contain an error, and that we cannot claim any program works correctly, because we cannot prove that it does.

In 1969, the British computer scientist Sir Charles Antony Richard Hoare (Tony Hoare) tried to find a solution to the problem by publishing the article "An Axiomatic Basis for Computer Programming." He attempted to axiomatize programming, that is, to present a formal logical system within which the correctness of programs could be proved.
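For context, a Hoare triple `{P} C {Q}` reads: if precondition `P` holds before command `C` runs, then postcondition `Q` holds afterward. A sketch of the style, in standard textbook notation (not taken from the article):

```
{ P } C { Q }                         -- the general form of a triple
{ Q[E/x] } x := E { Q }               -- Hoare's axiom for assignment
{ x + 1 > 0 } x := x + 1 { x > 0 }    -- an instance: substitute x+1 for x in "x > 0"
```

Rules like these compose: proving a whole program correct means chaining triples for its assignments, conditionals, and loops.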

In the beginning, the results were very promising, as long as everyone worked with ifs, loops, and the like. But when the community turned to procedural programming, everything "collapsed." It turned out that the proposed axiom systems describing procedures, as a rule, led to contradictions.

And in 1980, the article "A Critique of the Foundations of Hoare-Style Programming Logics" was published, which, with the help of counterexamples, put an end to Hoare's axiomatic basis. The contradiction, in this case, is built into the concept of computability itself and is connected with the halting problem for Turing machines.

There is no algorithm that can determine, from a program and its input values, whether the program will loop forever or not. This halting problem leads to inconsistent axioms, and a contradictory axiom system, unfortunately, allows us to prove absolutely any statement, both true and false. If the axiomatic approach had proved itself, it would have been possible to fully automate the verification of programs, but at the moment this is impossible.
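The classic contradiction can be sketched in a few lines of Python-flavored pseudocode (the `halts` oracle here is hypothetical; no such function can exist, which is exactly the point):

```python
# Suppose, for contradiction, halts(f) returned True iff f() eventually returns.
def paradox():
    if halts(paradox):   # hypothetical oracle -- not implementable
        while True:      # if the oracle says "halts", loop forever...
            pass
    # ...and if it says "loops forever", return immediately.
    # Either answer the oracle gives about paradox() is wrong,
    # so halts cannot exist.
```

This diagonal trick is why no verifier can decide termination for arbitrary programs, and why Hoare-style proof systems for rich languages run into inconsistency.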

As a result, the complexity of development grows, the size of programs grows, and the need for metaprogramming, and in particular for artificial intelligence, grows too. But we cannot guarantee that our AI programs will work correctly, because we cannot check them thoroughly.

Published at DZone with permission of Olga Zaytceva.
