That raises the question: why is JavaScript so popular and successful? There is no single great answer that I’m aware of. There are many good reasons to use JavaScript today, probably most importantly the great ecosystem that was built around it and the huge amount of resources available out there. But all of this is, to some extent, a consequence rather than a cause. Why did JavaScript become popular in the first place? It was the lingua franca of the web for ages, you might say. But that had been true for a long time while people hated JavaScript with a passion. Looking back, the first big JavaScript popularity boosts happened in the second half of the last decade. Unsurprisingly, this was also the time when JavaScript engines accomplished huge speed-ups on various different workloads, which probably changed the way many people looked at JavaScript.

Back in the day, these speed-ups were measured with what is now called traditional JavaScript benchmarks, starting with Apple’s SunSpider benchmark, the mother of all JavaScript micro-benchmarks, followed by Mozilla’s Kraken benchmark and Google’s V8 benchmark. Later the V8 benchmark was superseded by the Octane benchmark, and Apple released its new JetStream benchmark. These traditional JavaScript benchmarks drove amazing efforts to bring a level of performance to JavaScript that no one would have expected at the beginning of the century. Speed-ups of up to a factor of 1000 were reported, and all of a sudden using `<script>` within a website was no longer a dance with the devil, and doing work client-side was not only possible, but even encouraged.

Figure: Measuring performance, a simplified history of benchmarking JS. Source: Advanced JS performance with V8 and Web Assembly, Chrome Developer Summit 2016, @s3ththompson.


Now in 2016, all (relevant) JavaScript engines have reached a level of performance that is incredible, and web apps are almost as snappy as native apps, or at least can be. The engines ship with sophisticated optimizing compilers that generate short sequences of highly optimized machine code by speculating on the types/shapes that hit certain operations (e.g. property accesses, binary operations, comparisons, calls), based on feedback collected about the types/shapes seen in the past. Most of these optimizations were driven by micro-benchmarks like SunSpider and Kraken, and by static test suites like Octane and JetStream. Thanks to JavaScript-based technologies like asm.js and Emscripten, it is even possible to compile large C++ applications to JavaScript and run them in your web browser without having to download or install anything. For example, you can play AngryBots on the web out of the box, whereas in the past gaming on the web required special plugins like Adobe Flash or Chrome’s PNaCl.
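To give a rough feel for what this speculation means at the JavaScript level, consider the following sketch (the function and shapes are made up for illustration, and the exact heuristics differ per engine): a property access that only ever sees objects of a single shape can be specialized into a guarded offset load, while an access that sees many different shapes forces the engine back onto slower, generic lookup paths.

```js
// Illustrative sketch of shape-based speculation (names are hypothetical).
function getX(o) {
  return o.x; // the engine records which shapes of `o` show up here
}

// Monomorphic case: every call sees the same shape {x, y}, so an optimizing
// compiler can speculate and compile the access down to a cheap shape check
// plus a single offset load.
for (let i = 0; i < 100000; i++) {
  getX({ x: i, y: 0 });
}

// Polymorphic/megamorphic case: many different shapes hit the same access
// site, invalidating the speculation and falling back to generic lookups.
for (let i = 0; i < 100000; i++) {
  getX({ x: i, ['extra' + (i % 16)]: true });
}
```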

The vast majority of these accomplishments were due to the presence of these micro-benchmarks and static performance test suites, and the vital competition that resulted from having these traditional JavaScript benchmarks. You can say what you want about SunSpider, but it’s clear that without SunSpider, JavaScript performance would likely not be where it is today. Okay, so much for the praise… now on to the flip side of the coin: any kind of static performance test, be it a micro-benchmark or a large application macro-benchmark, is doomed to become irrelevant over time! Why? Because the benchmark can only teach you so much before you start gaming it. Once you get above (or below) a certain threshold, the general applicability of optimizations that benefit a particular benchmark decreases exponentially. For example, we built Octane as a proxy for the performance of real-world web applications, and it probably did a fairly good job at that for quite some time, but nowadays the distribution of time in Octane versus the real world is quite different, so optimizing for Octane beyond where it is currently is unlikely to yield significant improvements in the real world (neither for general web browsing nor for Node.js workloads).

Figure: Distribution of time in benchmarks vs. the real world. Source: Real-World JavaScript Performance, BlinkOn 6 conference, @tverwaes.


Since it became more and more obvious that all the traditional benchmarks for measuring JavaScript performance, including the most recent versions of JetStream and Octane, might have outlived their usefulness, we started investigating new ways to measure real-world performance at the beginning of the year, adding a lot of new profiling and tracing hooks to V8 and Chrome. In particular, we added mechanisms to see where exactly we spend time when browsing the web, i.e. whether it’s script execution, garbage collection, compilation, and so on, and the results of these investigations were highly interesting and surprising. As you can see from the slide above, running Octane spends more than 70% of the time executing JavaScript and collecting garbage, while browsing the web you always spend less than 30% of the time actually executing JavaScript, and never more than 5% collecting garbage. Instead, a significant amount of time goes to parsing and compiling, which is not reflected in Octane. So spending a lot of time on optimizing JavaScript execution will boost your score on Octane, but won’t have any positive impact on loading youtube.com. In fact, spending more time on optimizing JavaScript execution might even hurt your real-world performance, since the compiler takes more time, or you need to track additional feedback, eventually adding more time to the Compile, IC and Runtime buckets.
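You can observe a coarse version of this effect without any engine instrumentation. Here is a minimal sketch (this is not how we instrumented V8 and Chrome, and app.js stands in for a hypothetical large application bundle): for a classic script, the time until onload fires already includes fetching, parsing, compiling and running its top-level code, so parse and compile costs land squarely on the page-load path.

```js
// Minimal sketch: the "load" cost of a classic script includes parsing and
// compiling it, not just executing it. "app.js" is a hypothetical bundle.
var start = performance.now();
var script = document.createElement('script');
script.src = 'app.js';
script.onload = function () {
  // onload fires only after app.js has been fetched, parsed, compiled and
  // its top-level code has run, so all of those phases count here.
  console.log('Loading app.js took ' + (performance.now() - start) + ' ms');
};
document.head.appendChild(script);
```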

Speedometer

There’s another set of benchmarks that try to measure overall browser performance, including JavaScript and DOM performance, the most recent addition being the Speedometer benchmark. The benchmark tries to capture real-world performance more realistically by running a simple TodoMVC application implemented with different popular web frameworks (it’s a bit outdated now, but a new version is in the works). The various tests are included in the slide above next to Octane (Angular, Ember, React, Vanilla, Flight and Backbone), and as you can see these seem to be a better proxy for real-world performance at this point in time. A toy version of this kind of workload is sketched below. Note, however, that this data is already six months old at the time of this writing, and things might have changed as we optimized more real-world patterns (for example, we are refactoring the IC system to reduce overhead significantly, and the parser is being redesigned). Also note that while this looks like it’s only relevant in the browser space, we have very strong evidence that traditional peak performance benchmarks are also not a good proxy for real-world Node.js application performance.
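To make concrete what “JavaScript and DOM performance” means here, consider a deliberately simplified, hypothetical Speedometer-style workload (the real benchmark drives complete TodoMVC implementations through their frameworks and measures the full interaction): it exercises the DOM and the framework glue code, not just raw JavaScript execution.

```js
// Hypothetical, heavily simplified Speedometer-style workload: add items to
// a list through the DOM and time how long the whole interaction takes.
// A real Speedometer test does this via a framework (Angular, Ember, ...).
var list = document.createElement('ul');
document.body.appendChild(list);

var start = performance.now();
for (var i = 0; i < 100; i++) {
  var item = document.createElement('li');
  item.textContent = 'Todo item ' + i;
  list.appendChild(item);
}
console.log('Adding 100 todos took ' + (performance.now() - start) + ' ms');
```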

Figure: Speedometer vs. Octane. Source: Real-World JavaScript Performance, BlinkOn 6 conference, @tverwaes.


All of this is probably already known to a wider audience, so I'll use the coming posts to highlight a few concrete examples of why I think it’s not only useful, but crucial for the health of the JavaScript community, to stop paying attention to static peak performance benchmarks above a certain threshold. Come back to this series tomorrow, and let me run you through a couple of examples of how JavaScript engines can and do game benchmarks.

Stay tuned for part two!