Regular Expression Denial of Service (ReDoS) Attacks: From Exploitation to Prevention


Authors: Michael Hidalgo, Dinis Cruz


When it comes to Web application security, one of the recommendations for writing software that is resilient to attacks is to perform correct input data validation. However, as Mobile applications and APIs (Application Programming Interfaces) proliferate, the number of untrusted sources that data comes from goes up, and a potential attacker can take advantage of the lack of validation to compromise our applications.

Regular expressions provide a versatile mechanism for performing input data validation. Developers use them to validate email addresses, zip codes, phone numbers, and many other tasks that are easily implemented through them.

Unfortunately, most of the time software engineers don't fully understand how regular expressions work in the background, and by choosing a wrong regular expression pattern they can introduce a risk into the application.

In this article we are going to discuss the so-called Regular Expression Denial of Service (ReDoS) vulnerability and how we can identify these problems early in the Software Development Life Cycle (SDLC) by enforcing a culture focused on unit testing.

Hardware features for this article

In order to provide information about execution time, performance, CPU utilization, and other measurements, we are relying on a virtual machine running the Windows 7 32-bit operating system with 5.22 GB of RAM and an Intel(R) Core(TM) i7-3820QM CPU @ 2.70 GHz, using 4 cores.

Understanding the Problem

The OWASP Foundation (2012) defines a Regular Expression Denial of Service attack as follows:

"The Regular expression Denial of Service (ReDoS) is a Denial of Service attack, that exploits the fact that most Regular Expression implementations may reach extreme situations that cause them to work very slowly (exponentially related to input size). An attacker can then cause a program using a Regular Expression to enter these extreme situations and then hang for a very long time."

Although a broad explanation of regular expression engines is out of the scope of this article, it is important to understand that, according to Stubblebine, T. (Regular Expressions Pocket Reference), pattern matching consists of finding a section of text that is described (matched) by a regular expression. Two main rules are used to match results:

  1. The earliest (leftmost) match wins: The regular expression is applied to the input starting at the first character and moving toward the last. As soon as the regular expression engine finds a match, it returns.
  2. Standard quantifiers are greedy: According to Stubblebine, "Quantifiers specify how many times something can be repeated. The standard quantifiers attempt to match as many times as possible. The process of giving up characters and trying less-greedy matches is called backtracking."
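These two rules are easy to see in practice. Here is a quick illustrative sketch in Python (whose re module is one of the backtracking NFA implementations discussed below), showing a greedy quantifier giving characters back so the rest of the pattern can match:

```python
import re

# Greedy quantifiers grab as much as they can, then give characters
# back (backtrack) until the rest of the pattern can match.
m = re.match(r'(\w+)(\d)', 'abc123')
print(m.group(1))  # prints 'abc12': \w+ first grabbed all of 'abc123',
print(m.group(2))  # then backtracked so '\d' could match the final '3'
```

The give-and-take shown here is exactly the backtracking process Stubblebine describes.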

For this article we are focused on a type of regular expression engine called a Nondeterministic Finite Automaton (NFA). These engines usually compare each element of the regex to the input string, keeping track of positions where they chose between two options in the regex. If an option fails, the engine backtracks to the most recently saved position (Stubblebine, T. 2007). It is important to note that this kind of engine is also implemented in .NET, Java, Python, PHP, and Ruby.

This article is focused on C#, and therefore we are relying on the Microsoft .NET Framework System.Text.RegularExpressions classes, which at their heart use an NFA engine.

According to Bryan Sullivan:

"One important side effect of backtracking is that while the regex engine can fairly quickly confirm a positive match (that is, an input string does match a given regex), confirming a negative match (the input string does not match the regex) can take quite a bit longer. In fact, the engine must confirm that none of the possible “paths” through the input string match the regex, which means that all paths have to be tested. With a simple non-grouping regular expression, the time spent to confirm negative matches is not a huge problem."

In order to illustrate the problem, let's use the regular expression (\w+\d+)+C, which performs the following checks:

  1. \w+ matches any word character (a-zA-Z0-9_) one or more times.
  2. \d+ matches a digit (0-9) one or more times.
  3. The group (\w+\d+) is repeated between one and unlimited times, as many times as possible, giving back as needed.
  4. The character C is matched literally (case sensitive).

So matching values are 12C, 1232323232C, and !!!!cD4C, while non-matching values are, for instance, !!!!!C, aaaaaaC, and abababababC.

The following unit test was created to verify both cases.

const string RegExPattern = @"(\w+\d+)+C";

public void TestRegularExpression()
{
    var validInput   = "1234567C";
    var invalidInput = "aaaaaaaC";
    Regex.IsMatch(validInput, RegExPattern).assert_Is_True();
    Regex.IsMatch(invalidInput, RegExPattern).assert_Is_False();
}

Execution time : 6 milliseconds
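As a cross-check, the same behavior can be reproduced with Python's re module (also a backtracking NFA engine, as noted earlier). This is an illustrative sketch, not part of the original C# test suite:

```python
import re

# The same pattern as the C# test above; Python's re module is also a
# backtracking NFA engine, so the behavior carries over.
pattern = re.compile(r'(\w+\d+)+C')

for text in ('12C', '1232323232C', '!!!!cD4C'):
    assert pattern.search(text) is not None  # matching values
for text in ('!!!!!C', 'aaaaaaC', 'abababababC'):
    assert pattern.search(text) is None      # non-matching values
print('all cases behave as expected')
```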

Now that we've verified that our regular expression works well, let's write a new unit test to understand the backtracking problem and the performance effects.

Note that the longer the string, the longer the regular expression engine will take to resolve it. We will generate 10 random strings, starting at a length of 15 characters and incrementing the length until we get to 24 characters (25 with the appended "!"), and then we will look at the execution times.

const string RegExPattern = @"(\w+\d+)+C";

public void IsValidInput()
{
    var sw = new Stopwatch();
    Int16 maxIterations = 25;
    for (var index = 15; index < maxIterations; index++)
    {
        //Generating index random digits using the FluentSharp API
        var input = index.randomNumbers() + "!";
        sw.Restart();
        Regex.IsMatch(input, RegExPattern).assert_False();
        sw.Stop();
    }
}

Now let's take a look at the test results:

Random String             | Character Length | Elapsed Time (ms)
360817709111694!          | 16               | 16
2639383945572745!         | 17               | 23
57994905459869261!        | 18               | 50
327218096525942566!       | 19               | 106
4700367489525396856!      | 20               | 207
24889747040739379138!     | 21               | 394
156014309536784168029!    | 22               | 795
8797112169446577775348!   | 23               | 1595
41494510101927739218368!  | 24               | 3200
112649159593822679584363! | 25               | 6323

By looking at these results we can see that the execution time (the total time to resolve the input text against the regular expression) grows exponentially with the size of the input.

We can also see that each appended character almost doubles the execution time. This is an important finding because it shows how expensive this process is: if we do not have correct input data validation, we can introduce performance issues into our application.
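The exponential growth is not specific to .NET. A small Python sketch (illustrative; the 18- and 12-digit lengths are arbitrary choices kept short so it finishes quickly) shows the same doubling-per-character behavior with the same pattern:

```python
import re
import time

PATTERN = re.compile(r'(\w+\d+)+C')

def reject_time(digits: int) -> float:
    """Time how long the engine takes to reject digits followed by '!'."""
    text = '1' * digits + '!'
    start = time.perf_counter()
    assert PATTERN.search(text) is None  # there is no 'C', so every
    return time.perf_counter() - start   # partition of the digits fails

# Each extra digit roughly doubles the work, so 18 digits cost far
# more than 12 even though the input is only 6 characters longer.
print(reject_time(18) > reject_time(12))  # prints True
```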

A real-life use case and an appeal for a unit testing approach

Now that we have seen the problems we can face by selecting a wrong (evil) regular expression, let's discuss a realistic scenario where we need to validate input data through regular expressions.

We strongly believe that unit testing techniques can not only help us write quality code but can also be used to find vulnerabilities in the code we are writing, by writing unit tests that perform security checks (like input data validation).

A common task in Web applications consists of requesting an email address from the user signing up to our application. From a UX (user experience) perspective, compliant browsers provide friendly error messages when an input that was supposed to be an email address does not match the required format: when an input textbox has the email type set and its value is not a valid email address, the browser itself flags the field.

However, relying on UI validation alone is no longer enough. An attacker can easily perform an HTTP request without using a browser (namely, by using a proxy to capture data in transit) and then send a payload that can compromise our application.

In the following use case, we are using a backend validation for the email address by means of a regular expression. We will not only test that the regular expression validates the input but also see how it behaves when it receives arbitrary input.

We are using this evil regular expression to validate the email: ^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$.

With the following test we are verifying that valid and invalid email formats are correctly processed by the regular expression, which is the functional aspect from a development point of view.

const string EmailRegex = @"^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$";

public void ValidateEmailAddress()
{
    var validEmailAddress   = "michael.hidalgo@owasp.org";
    var invalidEmailAddress = new String[] { "a", "abc.com", "1212", "aa.bb.cc", "aabcr@s" };
    Regex.IsMatch(validEmailAddress, EmailRegex).assert_Is_True();
    //Looping through the invalid email addresses
    foreach (var email in invalidEmailAddress)
        Regex.IsMatch(email, EmailRegex).assert_Is_False();
}

Elapsed time: 6ms.
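For comparison, the same functional check can be sketched in Python, whose re module shares the backtracking NFA design (the inputs are the ones from the C# test above):

```python
import re

# The same email regex as the C# unit test above, reproduced with
# Python's re module (also a backtracking NFA engine).
EMAIL_REGEX = re.compile(
    r'^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$'
)

assert EMAIL_REGEX.match('michael.hidalgo@owasp.org') is not None
for bad in ('a', 'abc.com', '1212', 'aa.bb.cc', 'aabcr@s'):
    assert EMAIL_REGEX.match(bad) is None
print('email regex behaves as the unit test expects')
```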

So both cases are validated correctly. One could state that the two scenarios supported by the unit test are enough to justify selecting this regular expression for our input data validation. However, we can do more extensive testing, as you'll see.

The exploit

So far, the regular expression selected to validate an email address seems to work well; we have added some unit tests that verify valid and invalid inputs.

But how does it behave when we send arbitrary input of variable length? Do we face a denial of service attack?

These kinds of questions can be answered with a unit testing technique like this one:

const string EmailRegex = @"^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$";

public void ValidateEmailAddress()
{
    var maliciousInput = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!";
    var watch = Stopwatch.StartNew();
    Regex.IsMatch(maliciousInput, EmailRegex).assert_Is_False();
    watch.Stop();
    Console.WriteLine("Elapsed time {0}ms", watch.ElapsedMilliseconds);
}

**Elapsed Time : ~23 minutes (1423127 milliseconds).**

The results are disturbing. We can clearly see the performance problem introduced by evaluating the given input: it takes roughly 23 minutes to validate the input given the hardware characteristics described before.
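The blow-up can be reproduced at smaller scale in Python (a sketch with shorter, arbitrary prefix lengths, chosen so it finishes in seconds rather than minutes):

```python
import re
import time

# The "evil" email regex from the test above; the nested quantifier
# ([-.\w]*[0-9a-zA-Z])* is what triggers catastrophic backtracking.
EVIL_EMAIL = re.compile(
    r'^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$'
)

def reject_time(n: int) -> float:
    """Time rejecting n letters followed by '!', which can never match."""
    start = time.perf_counter()
    assert EVIL_EMAIL.match('a' * n + '!') is None
    return time.perf_counter() - start

# An 18-character prefix already costs orders of magnitude more than a
# 12-character one; at 33 characters (the article's input) the same
# machine would need minutes.
print(reject_time(18) > reject_time(12))  # prints True
```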

(The original article included several screenshots showing the CPU utilization while this unit test was running.)

Fuzzing and Unit Testing: A perfect combination of techniques

In the previous unit test we found that a given input string can lead to a denial of service issue in our application. Note that we didn't need an extremely large payload; in our scenario, 34 characters (or even fewer) are enough to illustrate the problem. When using any regular expression, it is advisable to always exercise it with unit tests that cover as many as possible of the inputs a user (who may be a potential attacker) can send.

Here is where we can use Fuzzing.

Tobias Klein, in his book A Bug Hunter's Diary: A Guided Tour Through the Wilds of Software Security, defines fuzzing as:

"A completely different approach to bug hunting is known as fuzzing. Fuzzing is a dynamic-analysis technique that consists of testing an application by providing it with malformed or unexpected input."

Klein then continues:

"It isn't easy to identify the entry points of such complex applications, but complex software often tends to crash while processing malformed input data." (page 5)

Mano Paul, in his book Official (ISC)2 Guide to the CSSLP, states about fuzzing that:

"Also known as fuzz testing or fault injection testing, fuzzing is a brute-force type of testing in which faults (random and pseudo-random input data) are injected into the software and its behavior is observed. It is a test whose results are indicative of the extent and effectiveness of the input validation." (page 336)

Taking the previous definitions into consideration, we are going to implement a new unit test that allows us to generate random input data and test our regular expression.

In this case, we are using the email regular expression ^[\w-\.]{1,}\@([\w]{1,}\.){1,}[a-z]{2,4}$, and by doing exhaustive testing we will see whether we are introducing a denial of service problem.

We want to make sure that the elapsed time to resolve whether each random string matches the regular expression is less than 3 seconds:

const string EmailRegex = @"^[\w-\.]{1,}\@([\w]{1,}\.){1,}[a-z]{2,4}$";
//Number of random strings to generate.
const int maxIterations = 10000;

public void Fuzz_EmailAddress()
{
    //Valid email should return true
    "michael.hidalgo@owasp.org".regEx(EmailRegex).assert_Is_True();
    //Invalid email should return false
    "abce"                     .regEx(EmailRegex).assert_Is_False();
    //Testing maxIterations times
    for (int index = 0; index < maxIterations; index++)
    {
        //Generating a random string
        var fuzzInput = (index * 5).randomString();
        var sw = Stopwatch.StartNew();
        fuzzInput.regEx(EmailRegex);
        sw.Stop();
        //Elapsed time should be less than 3 seconds per input.
        (sw.Elapsed.TotalSeconds < 3).assert_Is_True();
    }
}

Under the hardware features described before, this test passes. Considering that we are using the computation (index * 5), the largest string generated is 49,995 characters long (which is 9999 * 5).

That said, we were able to test a large string against the regular expression, and we confirmed that even though the input value is quite large, the time involved in verifying whether or not it was a valid email was less than 3 seconds.

Additionally, checking the length of the email in the first place would guarantee that a malicious user can't inject a large payload into our application.
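The same fuzzing idea can be sketched in Python. The alphabet, string lengths, and iteration counts below are assumptions standing in for FluentSharp's randomString helper, not part of the original test:

```python
import random
import re
import string
import time

# The safer email regex used above (Python syntax).
EMAIL_REGEX = re.compile(r'^[\w\-.]+@(\w+\.)+[a-z]{2,4}$')

def fuzz_email_regex(iterations=200, max_len=1000, budget=3.0):
    """Throw random strings at the regex; fail if any one is too slow."""
    alphabet = string.ascii_letters + string.digits + '.-@!'
    for _ in range(iterations):
        fuzz_input = ''.join(random.choices(alphabet, k=random.randint(1, max_len)))
        start = time.perf_counter()
        EMAIL_REGEX.match(fuzz_input)
        elapsed = time.perf_counter() - start
        assert elapsed < budget, f'pathological input found: {fuzz_input!r}'
    return True
```

Calling fuzz_email_regex() returns True only if every random input is resolved within the 3-second budget.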

Countermeasures provided in Microsoft .NET 4.5 and later

If you are developing applications in Microsoft .NET 4.5, then you can take advantage of a new overload of the IsMatch method of the Regex class. Starting from .NET 4.5, the IsMatch method provides an overload that allows you to specify a timeout. Note that this overload is not available in .NET 4.0.

This new parameter is called matchTimeout, and according to Microsoft:

"The matchTimeout parameter specifies how long a pattern matching method should try to find a match before it times out. Setting a time-out interval prevents regular expressions that rely on excessive backtracking from appearing to stop responding when they process input that contains near matches. For more information, see Best Practices for Regular Expressions in the .NET Framework and Backtracking in Regular Expressions. If no match is found in that time interval, the method throws a RegexMatchTimeoutException exception. matchTimeout overrides any default time-out value defined for the application domain in which the method executes." (Taken from the Microsoft documentation.)

We've written a new unit test using a regular expression that we know can lead to a denial of service. In this case we'll test an email address that previously had a significant side effect on the performance of the application. We'll then see how we can reduce the impact of this process by setting a timeout.

const string EmailRegexPattern = @"^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$";

public void ValidateEmailAddress() {
    var emailAddress = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa!";
    var watch = Stopwatch.StartNew();
    try {
        //Timeout of 5 seconds
        Regex.IsMatch(emailAddress, EmailRegexPattern, RegexOptions.None, TimeSpan.FromSeconds(5));
    }
    catch (RegexMatchTimeoutException ex) {
        Console.WriteLine("Timed out after {0}ms", watch.ElapsedMilliseconds);
    }
}

Running this test in Visual Studio we can confirm that it passes: the backtracking mechanism takes longer than 5 seconds to resolve the input, so the engine throws a RegexMatchTimeoutException indicating that evaluation exceeded the 5-second budget. Ideally one would expect this process to take less than a second; however, specific conditions or requirements might justify allowing a timeout of several seconds.

Note how this model enables a much-needed defensive programming style in which software engineers make informed decisions about the code they write. In this case we can establish the next steps to take when our method times out, and in that way mitigate a denial of service attack.
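When no timeout API is available (Python's built-in re module, for example, offers none), a complementary defense is to rewrite the pattern so that no nested quantifier remains. The SAFE pattern below is an assumed rewrite for illustration, not taken from the article; it accepts a slightly looser language than the original:

```python
import re
import time

# The evil pattern, and an assumed rewrite with the nested quantifier
# removed (illustrative only).
EVIL = re.compile(r'^([0-9a-zA-Z]([-.\w]*[0-9a-zA-Z])*@([0-9a-zA-Z][-\w]*[0-9a-zA-Z]\.)+[a-zA-Z]{2,9})$')
SAFE = re.compile(r'^[0-9a-zA-Z][-.\w]*@([0-9a-zA-Z][-\w]*\.)+[a-zA-Z]{2,9}$')

# Both accept a normal address...
address = 'michael.hidalgo@owasp.org'
assert EVIL.match(address) and SAFE.match(address)

# ...but the safe pattern rejects the attack string almost instantly,
# because no character can be claimed by two competing quantifiers.
attack = 'a' * 20 + '!'
start = time.perf_counter()
assert SAFE.match(attack) is None
print(time.perf_counter() - start < 0.01)  # prints True
```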

Final thoughts

The saying "no one size fits all" is a cliché, but it holds true here. We cannot tell you whether the regular expressions you are currently using in your applications are vulnerable to this attack. What we can do is show you how to take advantage of unit testing to write secure code.

When we write code, we want to make sure that every single line is covered by a unit test, which at the end of the day guarantees early detection of errors. However, if we combine this exercise with the adoption of tests that also try to attack or compromise the application (and we are not talking about anything fancy), such as sending random strings, using fuzzing techniques, trying unusual combinations of characters, and exceeding the expected length, we will be helping to write software that is resilient to attacks.

As a recommendation, always test your regular expressions with unit tests, make sure that they are resilient to the attack we have covered in this article, and if you are able to identify problematic patterns out there, contribute and report them so that we do not introduce them into the software we write.


1. Cruz, D. (2013). The Email RegEx that (could had) DOSed a site.

2. Hollos, S., Hollos, R. (2013). Finite Automata and Regular Expressions: Problems and Solutions.

3. Kirrage, J., Rathnayake, A., Thielecke, H. Static Analysis for Regular Expression Denial-of-Service Attacks. University of Birmingham, UK.

4. Klein, T. (2011). A Bug Hunter's Diary: A Guided Tour Through the Wilds of Software Security.

5. The OWASP Foundation (2012). Regular expression Denial of Service - ReDoS.

6. Stubblebine, T. (2007). Regular Expression Pocket Reference, Second Edition.

7. Sullivan, B. (2010). Regular Expression Denial of Service Attacks and Defenses.


Opinions expressed by DZone contributors are their own.
