
Web Application Security: Fighting Yourself or Finding the Edge of Sanity


Development teams have a lot of balls in the air, of which security is one. One developer discusses how to handle them all.

· Security Zone ·

How secure should a web application be? Well, for many of us web developers, the question doesn't make much sense. "An application must be as secure as possible. The more secure it is, the better." But that is not a definite answer, and it doesn't help to form the security policy of a project. Moreover, sticking to just this single directive ("The more secure it is, the better") may prove to be a disservice. Why? That's what I'm going to discuss in this article.

Security Often Makes Usability Worse

Excessive security checks certainly make an application more annoying. This is mostly true for two parts of an application: authentication and the forgotten-password functionality.

Multistage authentication that includes SMS verification and additional protective fields, apart from a password, makes the user experience a little more secure but less enjoyable. And users certainly won't appreciate your attempts to make their experience more secure if all your service does is let them exchange funny pictures with other users.

Best security practices advise showing as little information as possible in cases of authentication errors, to prevent an intruder from collecting a list of users. According to this advice, if a user went through 33 stages of authentication and made a typo in one field, the best solution would be to show a message like: "Sorry, something went wrong. Please, try again." Gratitude to developers and sincere admiration for their efforts to make the user experience as safe as possible are the emotions that the user is unlikely to experience in that case.

You must fully realize when user experience gets worse, and decide if this is acceptable in your specific situation.

Security Makes Applications Harder to Develop and Support

The more defense mechanisms an application has, the more complicated it is. The time required for creating some parts of the application can increase severalfold to accommodate even a minor security improvement.

A lot of effort can be spent just on making the life of intruders more frustrating, and not on fixing actual security problems. For example, the project may choose to obfuscate method names and parameter names in its REST API.

Frequently, developers spend a lot of time trying to prevent an intruder from harvesting a list of usernames through a login form, a registration form, and/or a forgotten password form.

There are approaches where an app marks a user as an intruder but doesn't reveal it; the intruder's requests are simply ignored.

If a multi-stage authentication process includes a secret question that is unique to every user, we can still show a question for a username that doesn't exist in our records. Moreover, the application can store this username and the question it showed in a session or in the DB, so that it consistently asks for the same information.
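One way to sketch this without any storage at all: derive the fake question deterministically from the username, so a nonexistent user always "has" the same question on repeated attempts. The class, question texts, and user store below are made up for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

// Sketch: always show a secret question, even for usernames that don't exist.
// For unknown users, the question is picked deterministically from a hash of
// the username, so repeated probes always see the same (fake) question.
class SecretQuestionService {
    private static final String[] QUESTIONS = {
        "What was the name of your first pet?",
        "What is your mother's maiden name?",
        "What city were you born in?"
    };

    // Real users and their configured questions (stand-in for a user store).
    private final Map<String, String> realQuestions;

    SecretQuestionService(Map<String, String> realQuestions) {
        this.realQuestions = realQuestions;
    }

    String questionFor(String username) {
        String real = realQuestions.get(username);
        if (real != null) {
            return real;
        }
        try {
            // Deterministic fake: hash the username and use a byte as an index.
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(username.getBytes(StandardCharsets.UTF_8));
            return QUESTIONS[Math.floorMod(digest[0], QUESTIONS.length)];
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Because the fake question depends only on the username, the response for "alice-who-exists" and "mallory-who-doesn't" is indistinguishable in shape and stable across requests, with no session or DB row to maintain.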

There are plenty of other ways to confuse an intruder. But surely they all require development time. And this logic might be quite intricate even for its authors, even if it's well-written and commented. Most importantly, it doesn't actually fix any security issue; it only makes exploiting one harder.

It's not always that simple to separate "a well-designed and truly safe functionality" from "wild mind games with an imaginary hacker." Especially because the fine edge between these two extremes is not absolute and greatly depends on how attractive your application is to potential hackers.

Security Makes Applications Harder to Test

All our security logic must be tested. Whether through unit tests, integration tests, or manual testing, we should choose an appropriate approach for every single security mechanism we have.

We can't just give up testing our defense logic, because bugs tend to appear in our work. And even if we were able to write everything correctly in the first place, there is always a chance that bugs will be added during maintenance, support, and refactoring. Nobody starts a project by writing legacy code. The code becomes legacy over time.

It may not be practical to test all business logic thoroughly, but at the same time, we shouldn't assume that our security mechanisms are perfect, absolute, and error-free.

If security logic is tested manually, then there is the question of how often it must be done. If our application is reasonably complex, there can be dozens, if not hundreds, of places where broken authentication can be hiding. For instance, if an ID parameter in some request is changed, the server may return information that must not be accessible to us. Checking every such case is a lot of work. Should we check it before every major release? Should we assign an individual person to this task? Or should we even have a whole team for this?

These questions are important. Broken authentication can easily be introduced into a project, so we must be vigilant while making even tiny changes to our model and adding new REST methods. There is no simple and universal answer to this problem, but there are approaches that allow dealing with it consistently throughout a project. For instance, in the CUBA Platform, we use roles and access groups, which let us configure which entities are accessible to which users. There is still some work to configure these rules, but the rules themselves are uniform and consistent.

Apart from broken authentication, there are dozens of security problems that should be tested. And, when implementing a new mechanism or logic, we must consider how it will be tested. Things that are not tested tend to break over time. And we not only get problems with our security but also a false sense of confidence that everything is ok.

There are two types of security mechanisms that cause the most trouble: mechanisms that work only on prod environments and mechanisms that represent a second (or third, or fourth) layer of security.

Defense Mechanisms That Work Only on Production

Let's assume that there is a session token cookie, which must have the "secure" flag. But if we use HTTP everywhere in our test environment, that means there are separate configurations for testing and production, and, therefore, we are not exactly testing the product that will be released. During migrations and various changes, the "secure" flag can be lost, and we won't even notice. How do we deal with this? Should we introduce one more environment to serve as pre-production? If so, what part of our functionality should be tested in that environment?
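In a Spring Boot application, for example, the flag lives in configuration, so one overwritten or drifted properties file is all it takes to lose it. A minimal sketch (the property names are Spring Boot's; adapt for other stacks):

```properties
# Force the Secure and HttpOnly flags on the session cookie so it is
# only ever sent over HTTPS and is invisible to page scripts.
server.servlet.session.cookie.secure=true
server.servlet.session.cookie.http-only=true
```

Note that the `secure` flag only has an observable effect over HTTPS, which is exactly why a plain-HTTP staging environment won't catch its loss.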

Multilayered Defense Mechanisms

People experienced in security issues tend to create security logic that can be tested only when other security mechanisms are turned off. It actually makes sense: even if an intruder manages to find a vulnerability in the first layer of our security barrier, he will be stuck on the second. But how is it supposed to be tested? A typical example of this approach is the use of different DB users for different users of the app. Even if our REST API contains broken authentication, the hacker won't be able to edit or delete any information, because the DB user doesn't have the proper permissions for these actions. But, evidently, such configurations tend to fall out of date and break if they are not maintained and tested properly.
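The database side of such a setup can be as simple as a restricted role. A sketch in PostgreSQL syntax, with hypothetical role and database names:

```sql
-- Second line of defense at the database level: the account the read-only
-- part of the app connects with can SELECT, but never INSERT, UPDATE, or
-- DELETE, even if the application layer above it is compromised.
CREATE ROLE app_readonly LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE appdb TO app_readonly;
GRANT USAGE ON SCHEMA public TO app_readonly;
GRANT SELECT ON ALL TABLES IN SCHEMA public TO app_readonly;
```

This is precisely the kind of configuration the paragraph above warns about: it silently stops matching reality when new tables or schemas appear, unless it is re-applied and tested.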

Too Many Security Mechanisms Make Our Applications Less Secure

The more defense checks we have, the more complicated the app is. The more complicated the app is, the higher the probability of making a mistake. The higher the probability of making a mistake, the less secure our application is.

Once again, let's consider a login form. It's quite simple to implement a login form with two fields: username and password. All we need to do is check whether there is a user in the system with the provided name and whether the password was entered correctly. Well, it's also advisable to check that our application doesn't reveal in which field a mistake was made, to prevent an intruder from harvesting usernames, although this practice can be sacrificed in some applications for a more pleasant user experience. Anyway, we also have to implement some kind of brute-force defense mechanism, which, of course, should not contain a fail-open vulnerability. It's also a good idea not to reveal to intruders that we know they are intruders; we can just ignore their requests and let them think that they are continuing to hack us. Another thing to check is that we don't log user passwords. Well, actually, there is a bunch of less important things to consider. But, all in all, a standard login form is a piece of cake, isn't it?
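A minimal sketch of such a brute-force throttle, with the fail-open pitfall made explicit: if the attempt-counter store is unavailable, we deny logins rather than silently allowing them. All names here are illustrative, and the in-memory map stands in for a real store:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of a brute-force throttle that fails CLOSED: when the attempt
// store is down, login is denied rather than allowed unchecked.
class LoginThrottle {
    static final int MAX_ATTEMPTS = 5;
    private final Map<String, Integer> failures = new ConcurrentHashMap<>();
    private volatile boolean storeAvailable = true; // simulates an outage

    boolean isAllowed(String username) {
        if (!storeAvailable) {
            return false; // fail closed: no counter must not mean free logins
        }
        return failures.getOrDefault(username, 0) < MAX_ATTEMPTS;
    }

    void recordFailure(String username) {
        failures.merge(username, 1, Integer::sum);
    }

    void recordSuccess(String username) {
        failures.remove(username); // a successful login resets the counter
    }

    void setStoreAvailable(boolean available) {
        this.storeAvailable = available;
    }
}
```

The `storeAvailable` flag is the interesting part: a throttle that answers "allowed" when its backing store throws is exactly the fail-open vulnerability mentioned above.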

Multistage authentication is a completely different thing. Some kind of token can be sent to an e-mail address or via SMS, or there can be several steps involving entering more and more information. This is all quite complicated. In theory, this approach should diminish the possibility of a user account being hacked, and if the functionality is implemented properly, that is indeed the case. There is still a possibility of being hacked (neither SMS nor e-mail nor anything else gives us a 100% guarantee), but the risk is reduced. However, the authentication logic, which was already quite complex, becomes much more complicated, and the probability of making a mistake increases. A single bug can make our new model less secure than it was as a simple form with two fields.

Moreover, intrusive and inconvenient security measures may force users to store their sensitive data in less secure ways. For example, if in a corporate network there is a requirement to change passwords monthly, then users that don't understand such annoying measures might start to write their passwords on stickers and put them on their screens. "It's totally a fault of users, if they commit such follies," you can object. Well, maybe. But it's definitely your problem too. At the end of the day, isn't the satisfaction of users' needs our final goal as developers?

Got it. So What Are You Suggesting?

I suggest deciding from the start how far we are ready to go to obstruct an intruder. Are we ready to optimize our login form so that the response time to login requests won't reveal whether a user with a given name exists? Are we ready to implement checks so reliable that even a close friend of a victim, with the victim's phone in hand, cannot access the application? Are we ready to complicate development severalfold, inflate the budget, and sacrifice good user experience for the sake of making the intruder's life a little more miserable?
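For concreteness, the response-time trick usually means doing the same amount of work whether or not the username exists: hash a dummy password for unknown users and compare in constant time. A sketch, where SHA-256 stands in for a proper slow password hash and the class and map are illustrative:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Map;

// Sketch: the login path does the same hashing work for unknown usernames
// as for known ones, so timing doesn't reveal whether a user exists.
class TimingSafeLogin {
    private static final byte[] DUMMY_HASH = hash("dummy-password");
    private final Map<String, byte[]> users; // username -> stored password hash

    TimingSafeLogin(Map<String, byte[]> users) {
        this.users = users;
    }

    static byte[] hash(String password) {
        try {
            return MessageDigest.getInstance("SHA-256")
                    .digest(password.getBytes(StandardCharsets.UTF_8));
        } catch (java.security.NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    boolean login(String username, String password) {
        byte[] stored = users.get(username);
        byte[] expected = (stored != null) ? stored : DUMMY_HASH;
        // MessageDigest.isEqual is a constant-time comparison; it runs
        // whether or not the user was found.
        boolean match = MessageDigest.isEqual(expected, hash(password));
        return stored != null && match;
    }
}
```

Note how much ceremony this adds compared to a naive `if (user == null) return false;` check, which is exactly the cost-benefit trade-off the paragraph above asks you to weigh.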

We can endlessly work on security, building new layers of protection, improving monitoring and user behavior analysis, and impeding the gathering of information by bad actors. But we should draw a line that separates the things we must do from the things we must not do. Certainly, as a project evolves, this line can be reconsidered and moved.

In the worst case scenario, a project can spend a lot of resources on building an impenetrable defense against one type of attack, while having an enormous security flaw in some other place.

When making a choice, if we are going to implement some security mechanism or if we are going to build another layer of security, we must consider many things:

  • How easy is it to exploit a vulnerability? Broken authentication can be exploited easily. And it doesn't require any serious technical background for it. Therefore, the problem is important and should be dealt with accordingly.

  • How critical is a vulnerability? If an intruder is able to obtain some sensitive information about other users or, even worse, can edit it, then it's a rather serious problem. If an intruder can collect the IDs of some of the products in our system and cannot use these IDs for anything particularly interesting, then the problem is much less severe.

  • How much more secure will an application be if we implement this feature? If we are talking about additional layers of security (for instance, checking for XSS problems on output when we have already implemented a good mechanism for input sanitization), or we are just trying to make the life of an intruder harder (for example, trying to conceal the fact that we have marked him as a hacker), then the priority of these changes is not high. They might not be implemented at all.

  • How much time will it take?

  • How much will it cost?

  • How much worse will the user experience get?

  • How difficult will it be to maintain and test the feature? A common practice is to never return a 403 code after an attempt to access a restricted resource, but to return a 404 code instead, which makes it harder to collect identifiers of resources. This solution, while it makes it more difficult to get information about the system, complicates testing and the analysis of production errors. It can even prove harmful to the user experience: a user can get a confusing message saying that a resource doesn't exist, even though it does exist but has, for some reason, become inaccessible to them.
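The 404-instead-of-403 practice boils down to a tiny access check. In the sketch below, the class name and the in-memory maps are illustrative stand-ins for a real persistence and permission layer:

```java
import java.util.Map;
import java.util.Set;

// Sketch: answer 404 both when a resource is missing and when the caller
// may not see it, so IDs of existing-but-restricted resources don't leak.
class ResourceEndpoint {
    private final Map<Long, String> resources;     // id -> content
    private final Map<String, Set<Long>> readable; // user -> ids they may read

    ResourceEndpoint(Map<Long, String> resources, Map<String, Set<Long>> readable) {
        this.resources = resources;
        this.readable = readable;
    }

    int statusFor(String user, long id) {
        boolean exists = resources.containsKey(id);
        boolean allowed = readable.getOrDefault(user, Set.of()).contains(id);
        if (exists && allowed) {
            return 200;
        }
        return 404; // deliberately NOT 403 for the exists-but-forbidden case
    }
}
```

The maintenance cost shows up immediately: from the outside (including from your own test suite and support tickets), "missing" and "forbidden" are now indistinguishable.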

Well, surely, in your specific case there may be a need for a multistage authentication mechanism. But you must fully understand the ways it impedes development and makes an application less enjoyable for users.

You Are Justifying a Negligent Approach to Security

Well, not really. There are certainly security-sensitive applications, which will gain from additional security measures. Even if these measures increase expenses and destroy user experience.

And, of course, there are a number of vulnerabilities that should not appear in any application, no matter how small it is. CSRF is a typical example of such a vulnerability. Defending against it doesn't make the user experience worse and doesn't cost a lot. Many server-side frameworks (such as Spring MVC) and front-end frameworks (such as Angular) support CSRF-tokens out-of-the-box. Furthermore, with Spring MVC we can quickly add any required security header: Access-Control-*, Content-Security-Policy, etc.
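Under the hood, that out-of-the-box support boils down to a random per-session token that every state-changing request must echo back. A minimal, framework-free sketch of the idea (this is not Spring's actual implementation, just the mechanism it automates):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Base64;

// Minimal sketch of CSRF-token mechanics: one random token per session,
// embedded in every rendered form and required on every state change.
class CsrfGuard {
    private static final SecureRandom RANDOM = new SecureRandom();
    private final String sessionToken = newToken();

    static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    String tokenForForms() {
        return sessionToken; // rendered into each form as a hidden field
    }

    boolean isValid(String submittedToken) {
        if (submittedToken == null) {
            return false; // no token, no state change
        }
        // Constant-time comparison so the token can't be guessed byte by byte.
        return MessageDigest.isEqual(sessionToken.getBytes(), submittedToken.getBytes());
    }
}
```

A cross-site attacker can make the victim's browser send a request, but cannot read the token out of the victim's page, so the forged request fails validation.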

Broken authentication, XSS, SQL injection, and several other vulnerabilities are not permitted in our applications. Defense against them is easy to grasp and is perfectly explained in a great range of books and articles. We can also add to this list passing sensitive information in URL parameters, storing weakly hashed passwords, and other bad security practices.
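On the "weakly hashed passwords" point, the JDK itself ships a salted, deliberately slow key-derivation function, so there is little excuse for a bare MD5 or SHA-1 digest. A sketch using PBKDF2 (the iteration count is an illustrative choice, not a recommendation):

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.PBEKeySpec;

// Sketch of "not weakly hashed": per-user random salt plus a slow KDF
// (PBKDF2 with HMAC-SHA256, available in the standard JDK).
class PasswordHasher {
    private static final int ITERATIONS = 100_000; // illustrative; tune for your hardware
    private static final int KEY_BITS = 256;

    static byte[] newSalt() {
        byte[] salt = new byte[16];
        new SecureRandom().nextBytes(salt);
        return salt;
    }

    static byte[] hash(char[] password, byte[] salt) {
        try {
            PBEKeySpec spec = new PBEKeySpec(password, salt, ITERATIONS, KEY_BITS);
            return SecretKeyFactory.getInstance("PBKDF2WithHmacSHA256")
                    .generateSecret(spec).getEncoded();
        } catch (java.security.GeneralSecurityException e) {
            throw new IllegalStateException(e);
        }
    }

    static boolean verify(char[] password, byte[] salt, byte[] expected) {
        // Constant-time comparison of the freshly derived hash.
        return MessageDigest.isEqual(hash(password, salt), expected);
    }
}
```

The salt is stored alongside the hash; the slowness is the feature, since it turns an offline brute-force of a leaked database from hours into years.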

Ideally, there should be a manifesto in a project that describes the project's security policy and answers questions such as:

  • What security practices are we following?

  • What is our password policy?

  • What and how often do we test?

This manifesto will differ from project to project. If a program inserts user input into an OS command, the security policy must explain how to do it safely. If users can upload files (such as avatars) to the server, the security policy must enumerate the possible security problems and how to deal with them.

Certainly, it's not an easy task to create and support such a manifesto. But expecting that each member of a team (including QA and support) remembers and sticks to every security practice is rather naive. Moreover, for many vulnerabilities there are several ways to handle them, and if there is no definite policy on the matter, it can happen that in some places developers use one practice (for example, validating input) and, in other places, something completely different (for example, sanitizing output). Even if the code is good, it's still inconsistent, and inconsistency is a perfect breeding ground for bugs, support problems, and false expectations.

For small projects, code review by a technical leader may be enough to avoid the aforementioned problems, even if there is no manifesto.


To summarize:

  • While working on security, we should consider how security-sensitive our application is. Banking applications and applications for sharing funny stories require different approaches.

  • While working on security, we should consider how harmful it will be to the user experience.

  • While working on security, we should consider how much it will complicate the code and make maintenance more difficult.

  • Security mechanisms should be tested.

  • It's advisable to teach team members how to deal with security issues and/or perform a thorough code review for every commit in a project.

  • There are certain vulnerabilities that have to be eliminated in every application: XSS, CSRF, injections (including SQL injection), broken authentication, etc.



