For the past two years (2015 and 2016), Raygun has benchmarked raw Node.js against popular Node.js frameworks, including Hapi, Express.js, Restify, and Koa. This year (2017), we’ve added two more frameworks by popular demand: Sails.js and Adonis.js.
The aim of these performance tests is to help you benchmark popular frameworks so you can see which one best suits your project.
As always, we’ve broken the results down and compared them to last year. We’ve also included instructions on how to reproduce the test.
Node.js performance tests were performed on the Ubuntu subsystem on Windows 10 and on a VM provisioned from Digital Ocean. The tests exercise only the most basic capabilities of the frameworks in question, so the main goal is to show the relative overhead each framework adds to the handling of a request. This is not a test of absolute performance, which will vary greatly depending on the environment and network conditions. Nor does it cover the utility each framework provides, or how that utility enables complex applications to be built.
Node.js Performance 2015
Node.js Performance June 2016
This year, we added one additional framework, the results of which you can see below:
Node.js Performance December 2016
It turned out that Total.js was the fastest of the frameworks tested, at only 15% slower than the raw Node.js HTTP library. The other frameworks (Koa, Restify, and Express) all performed similarly, while Hapi performed the worst. For this update, we introduced a secondary test environment:
Node.js Performance 2017: The Test
The original motivation for these posts was to explore Node.js frameworks as candidates for a lightweight, public-facing ingestion API, with an eye toward Raygun’s own public API.
For this round of testing, as requested by our readers, we added two new frameworks (Sails.js and Adonis.js) to the test. We also used the updated versions of the previously tested frameworks which included a major version bump for Koa to Koa 2.0 and minor updates for some of the other frameworks.
As with the previous tests, we used Apache Bench to fire GET requests at an endpoint in each sample app. Each application was configured to respond with a simple “Hello World!” string. Note that this doesn’t necessarily correlate with real-world behavior; rather, it tests the theoretical maximum number of requests that can be handled.
Apache Bench was configured to make 100 requests concurrently until 50,000 requests had been completed or 20 seconds had elapsed. The request responses per second were then recorded.
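The Apache Bench invocation would look something like the following (the endpoint URL is an assumption; the flags match the description above):

```shell
# -c 100:   100 concurrent connections
# -t 20:    stop after 20 seconds...
# -n 50000: ...or after 50,000 requests, whichever comes first
ab -c 100 -t 20 -n 50000 http://localhost:3000/
```

Apache Bench reports a “Requests per second” figure in its summary output, which is the number recorded for each run.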
Both Apache Bench and the server were hosted within the same environment, thus removing any confounding factors network stability could introduce.
All tests were repeated five times in each environment. The displayed figures refer to the average values produced by the test.
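The averaging step can be sketched as follows (the individual run values below are invented for illustration):

```javascript
// Average the requests/second figures across the five repeated runs
// (these run values are invented for illustration).
const runs = [1790, 1712, 1755, 1748, 1720];
const mean = runs.reduce((sum, r) => sum + r, 0) / runs.length;
console.log(`${mean} requests/second`); // 1745 requests/second
```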
The rough code used in the test is available on GitHub. Depending on the environment, additional packages or permissions may be required to run the tests.
Test Environments
- The Ubuntu subsystem on a Windows 10 PC – 32 GB RAM, i7-4790 CPU.
- A $20 Digital Ocean VM running Ubuntu 16.04 – 2 GB memory, 2 cores.
Ubuntu Subsystem on Windows 10:
- Express: 1745 requests/second
- Hapi: 1094 requests/second
- Raw Node: 2291 requests/second
- Restify: 1759 requests/second
- Koa 2: 1887 requests/second
- Total.js: 2144 requests/second
- Sails: 1554 requests/second
- Adonis: 2177 requests/second
Ubuntu VM (Digital Ocean):
- Express: 2875 requests/second
- Hapi: 688 requests/second
- Raw Node: 5092 requests/second
- Restify: 2380 requests/second
- Koa 2: 3317 requests/second
- Total.js: 3924 requests/second
- Sails: 772 requests/second
- Adonis: 962 requests/second
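Since the relative overhead matters more than the raw numbers, it helps to normalize each framework against raw Node. A quick sketch using the Digital Ocean figures above:

```javascript
// Throughput of each framework as a percentage of raw Node.js,
// using the Digital Ocean VM figures from the table above.
const vm = {
  'Raw Node': 5092,
  Express: 2875,
  Hapi: 688,
  Restify: 2380,
  Koa2: 3317,
  Total: 3924,
  Sails: 772,
  Adonis: 962,
};

const base = vm['Raw Node'];
for (const [name, rps] of Object.entries(vm)) {
  console.log(`${name}: ${((100 * rps) / base).toFixed(1)}% of raw Node`);
}
```

On this environment, Total.js comes in at roughly 77% of raw Node (i.e. within about 25% overhead), while Sails and Adonis sit below 20%.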
What These Results Mean for You
The results of this test shouldn’t be taken as an indication of a framework’s overall merits or flaws, as the test focuses on a single simplistic operation. If you are going to write a high-performance API, however, it pays to know the base performance overhead of each framework; this will better inform your judgment of whether it is suitable.
What the Performance on the Different Systems Means
As the tests were performed in environments that don’t resemble a professionally hosted service, the raw numbers here mean little. The Windows subsystem is mainly useful for conveniently emulating *nix-like functionality within Windows, and a $20 VM is very much at the lower end of the scale for any professional web service. Because of this, the main takeaway should be the relative performance of each framework.
Comparing With Prior Tests
The results were similar to previous runs. There was very little variation between the VM performance in the last test and this one, which makes sense given that Node.js has not undergone any major updates since the last test was run. We still wanted to monitor performance across the frameworks, however, as several of them have had minor updates.
Both of the new frameworks are relatively opinionated and dependency-heavy, shipping with many out-of-the-box features, so it was not surprising that they performed somewhat unfavorably in this micro-benchmark. Both lean towards the creation of larger applications rather than simple API endpoints. Their performance on the subsystem environment was surprisingly good, however. Having access to more memory may have helped both frameworks perform better there than on the relatively resource-light virtual machine.
How to Reproduce the Test
These tests can be replicated by cloning this repository and running the two scripts it contains. The first script should install all the required tooling to run the test, as well as npm and the latest version of Node.js. The second script then runs all the servers and benchmarks in sequence, outputting the results to a file. We highly recommend reviewing the scripts to ensure they will not interfere with any other processes on your machine before running them.
Overall, these tests showed that Total.js remains the fastest framework, with performance within 25% of a raw Node.js service. The newly added frameworks, Sails.js and Adonis.js, ship with a large number of built-in features that might make them unsuitable for an API-focused project without significant modifications to their default behavior.