The Truth About Traditional JavaScript Benchmarks (Part 4 - Octane)
Wrapping up our series, I'll tell you a bit about Octane. The plan for this series was to highlight a few concrete examples that illustrate why I think it's not only useful but crucial for the health of the JavaScript community to stop paying attention to static peak-performance benchmarks above a certain threshold. I hope I've convinced you not to just take benchmarks at face value, but to really dig down into how they behave in real-world circumstances. And keep in mind that benchmarks which made sense in their time may not (and probably do not) apply to today.
In the last post, we talked about the Kraken benchmark. Now let's wrap things up with Octane.
A Closer Look at Octane
The Octane benchmark is the successor of the V8 benchmark. It was initially announced by Google in mid-2012, and the current version, Octane 2.0, was announced in late 2013. This version contains 15 individual tests, and for two of them (Splay and Mandreel) we measure both throughput and latency. The tests range from Microsoft's TypeScript compiler compiling itself, to raw asm.js performance measured by the zlib test, to a performance test for the RegExp engine, to a ray tracer, to a full 2D physics engine, and so on. See the description for a detailed overview of the individual benchmark line items. All of these line items were carefully chosen to reflect a certain aspect of JavaScript performance that we considered important in 2012 or expected to become important in the near future.
To a large extent, Octane was super successful in achieving its goal of taking JavaScript performance to the next level: it resulted in healthy competition in 2012 and 2013, where great performance achievements were driven by Octane. However, it's almost 2017 now, and the world looks fairly different than it did in 2012; really, really different, actually. Besides the usual and often-cited criticism that most items in Octane are essentially outdated (i.e., ancient versions of TypeScript, zlib being compiled via an ancient version of Emscripten, Mandreel not even being available anymore, etc.), something way more important affects Octane's usefulness.
We saw big web frameworks winning the race on the web, especially heavy frameworks like Ember and AngularJS, that use patterns of JavaScript execution which are not reflected at all by Octane and are often hurt by (our) Octane-specific optimizations. We also saw JavaScript winning on the server and tooling front, which means there are large-scale JavaScript applications that now often run for weeks, if not years, which is also not captured by Octane. As stated in the beginning, we have hard data suggesting that the execution and memory profile of Octane is completely different from what we see on the web daily.
So, let's look into some concrete examples of benchmark gaming that is happening today with Octane, where optimizations are no longer reflected in the real world. Note that even though this might sound a bit negative in retrospect, it's definitely not meant that way! As I've said a couple of times already, Octane is an important chapter in the JavaScript performance story, and it played a very important role. All the optimizations that went into JavaScript engines driven by Octane in the past were added in good faith that Octane was a good proxy for real-world performance! Every age has its benchmark, and for every benchmark there comes a time when you have to let go.
That being said, let's get this show on the road and start by looking at the Box2D test, which is based on Box2DWeb, a popular 2D physics engine, originally written by Erin Catto, ported to JavaScript. Overall, it does a lot of floating-point math and drove a lot of good optimizations in JavaScript engines. However, as it turns out, it contains a bug that can be exploited to game the benchmark a bit (blame it on me: I spotted the bug and added the exploit in this case). There's a function D.prototype.UpdatePairs in the benchmark that looks like this (deminified):
D.prototype.UpdatePairs = function(b) {
  var e = this;
  var f = e.m_pairCount = 0,
      m;
  for (f = 0; f < e.m_moveBuffer.length; ++f) {
    m = e.m_moveBuffer[f];
    var r = e.m_tree.GetFatAABB(m);
    e.m_tree.Query(function(t) {
      if (t == m) return true;
      if (e.m_pairCount == e.m_pairBuffer.length) e.m_pairBuffer[e.m_pairCount] = new O;
      var x = e.m_pairBuffer[e.m_pairCount];
      x.proxyA = t < m ? t : m;
      x.proxyB = t >= m ? t : m;
      ++e.m_pairCount;
      return true
    },
    r)
  }
  for (f = e.m_moveBuffer.length = 0; f < e.m_pairCount;) {
    r = e.m_pairBuffer[f];
    var s = e.m_tree.GetUserData(r.proxyA),
        v = e.m_tree.GetUserData(r.proxyB);
    b(s, v);
    for (++f; f < e.m_pairCount;) {
      s = e.m_pairBuffer[f];
      if (s.proxyA != r.proxyA || s.proxyB != r.proxyB) break;
      ++f
    }
  }
};
Some profiling shows that a lot of time is spent in the innocent-looking inner function passed to e.m_tree.Query in the first loop:
function(t) {
  if (t == m) return true;
  if (e.m_pairCount == e.m_pairBuffer.length) e.m_pairBuffer[e.m_pairCount] = new O;
  var x = e.m_pairBuffer[e.m_pairCount];
  x.proxyA = t < m ? t : m;
  x.proxyB = t >= m ? t : m;
  ++e.m_pairCount;
  return true
}
More precisely, the time is not spent in this function itself, but rather in operations and built-in library functions triggered by it. As it turned out, we spent 4-7% of the overall execution time of the benchmark calling into the Compare runtime function, which implements the general case for the abstract relational comparison.
Almost all the calls to the runtime function came from the CompareIC stub, which is used for the two relational comparisons in the inner function:
x.proxyA = t < m ? t : m;
x.proxyB = t >= m ? t : m;
So, these two innocent-looking lines of code are responsible for 99% of the time spent in this function! How come? Well, as with so many things in JavaScript, the abstract relational comparison is not necessarily intuitive to use properly. In this function, both t and m are always instances of L, which is a central class in this application, but which doesn't override any of the Symbol.toPrimitive, "toString", "valueOf", or Symbol.toStringTag properties that are relevant for the abstract relational comparison. So, what happens if you write t < m is this:
1. Calls ToPrimitive(t, hint Number).
2. Runs OrdinaryToPrimitive(t, "number") since there's no Symbol.toPrimitive.
3. Executes t.valueOf(), which yields t itself since it calls the default Object.prototype.valueOf.
4. Continues with t.toString(), which yields "[object Object]" since the default Object.prototype.toString is being used and no Symbol.toStringTag was found for L.
5. Calls ToPrimitive(m, hint Number).
6. Runs OrdinaryToPrimitive(m, "number") since there's no Symbol.toPrimitive.
7. Executes m.valueOf(), which yields m itself since it calls the default Object.prototype.valueOf.
8. Continues with m.toString(), which yields "[object Object]" since the default Object.prototype.toString is being used and no Symbol.toStringTag was found for L.
9. Does the comparison "[object Object]" < "[object Object]", which yields false.
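You can reproduce the effect outside the benchmark with a tiny standalone snippet (my own illustration, not part of Box2D):

// Two plain objects: both sides of the comparison stringify to
// "[object Object]", so the relational operators produce constants.
const t = {};
const m = {};
console.log(t < m);   // false ("[object Object]" < "[object Object]")
console.log(t >= m);  // true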
It's the same for t >= m, which always produces true. So, the bug here is that using the abstract relational comparison this way just doesn't make sense. And the way to exploit it is to have the compiler constant-fold it, i.e., similar to applying this patch to the benchmark:
--- octane-box2d.js.orig 2016-12-16 07:28:58.442977631 +0100
+++ octane-box2d.js 2016-12-16 07:29:05.615028272 +0100
@@ -2021,8 +2021,8 @@
         if (t == m) return true;
         if (e.m_pairCount == e.m_pairBuffer.length) e.m_pairBuffer[e.m_pairCount] = new O;
         var x = e.m_pairBuffer[e.m_pairCount];
-        x.proxyA = t < m ? t : m;
-        x.proxyB = t >= m ? t : m;
+        x.proxyA = m;
+        x.proxyB = t;
         ++e.m_pairCount;
         return true
     },
Doing so results in a serious speed-up of 13%, by not having to do the comparison and all the property lookups and built-in function calls triggered by it:
$ ~/Projects/v8/out/Release/d8 octane-box2d.js.orig
Score (Box2D): 48063
$ ~/Projects/v8/out/Release/d8 octane-box2d.js
Score (Box2D): 55359
$
So, how did we do that? As it turned out, we already had a mechanism for tracking the shape of objects that are being compared in the CompareIC, the so-called known receiver map tracking (where map is V8 speak for object shape + prototype), but that was limited to abstract and strict equality comparisons. However, I could easily extend the tracking to also collect the feedback for the abstract relational comparison:
$ ~/Projects/v8/out/Release/d8 --trace-ic octane-box2d.js
[...snip...]
[CompareIC in ~+557 at octane-box2d.js:2024 ((UNINITIALIZED+UNINITIALIZED=UNINITIALIZED)->(RECEIVER+RECEIVER=KNOWN_RECEIVER))#LT @ 0x1d5a860493a1]
[CompareIC in ~+649 at octane-box2d.js:2025 ((UNINITIALIZED+UNINITIALIZED=UNINITIALIZED)->(RECEIVER+RECEIVER=KNOWN_RECEIVER))#GTE @ 0x1d5a860496e1]
[...snip...]
$
Here, the CompareIC used in the baseline code tells us that for the LT (less than) and the GTE (greater than or equal) comparisons in the function we're looking at, it had only seen RECEIVERs so far (which is V8 speak for JavaScript objects), and all of these receivers had the same map 0x1d5a860493a1, which corresponds to the map of L instances.
So, in optimized code, we can constant-fold these operations to false and true, respectively, as long as we know that both sides of the comparison are instances with the map 0x1d5a860493a1 and no one has messed with L's prototype chain, i.e., the Symbol.toPrimitive, "valueOf", and "toString" methods are the default ones, and no one has installed a Symbol.toStringTag accessor property. The rest of the story is black voodoo magic in Crankshaft, with a lot of cursing and initially forgetting to check Symbol.toStringTag properly.
In the end, there was a rather huge performance boost on this particular benchmark.
In my defense, back then I was not convinced that this particular behavior would always point to a bug in the original code, so I was even expecting that code in the wild might hit this case fairly often, also because I was assuming that JavaScript developers wouldn't always care about these kinds of potential bugs. However, I was so wrong, and here I stand corrected! I have to admit that this particular optimization is purely a benchmark thing and will not help any real code (unless the code is written to benefit from this optimization, but then you could just as well write true or false directly in your code instead of using an always-constant relational comparison).
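To see why the Symbol.toStringTag check is essential before constant-folding, consider this little sketch of mine (the class name L just mirrors the benchmark; this is not benchmark code):

class L {}
const a = new L();
const b = new L();
console.log(a < b);  // false: both sides stringify to "[object Object]"

// Installing a Symbol.toStringTag accessor changes what
// Object.prototype.toString returns, so the comparison result is no
// longer a constant and the constant-fold would be wrong.
Object.defineProperty(L.prototype, Symbol.toStringTag, {
  get() { return Math.random().toString(); }
});
console.log(a < b);  // now varies from call to call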
You might wonder why we slightly regressed soon after my patch. That was the period when we threw the whole team at implementing ES2015, which was really a dance with the devil to get all the new stuff in (ES2015 is a monster!) without seriously regressing the traditional benchmarks.
Enough said about Box2D. Let's have a look at the Mandreel benchmark. Mandreel was a compiler for compiling C/C++ code to JavaScript. It didn't use the asm.js subset of JavaScript that is used by the more recent Emscripten compiler, and it has been deprecated (and has more or less disappeared from the internet) for roughly three years now. Nevertheless, Octane still has a version of the Bullet physics engine compiled via Mandreel.
An interesting test here is MandreelLatency, which instruments the Mandreel benchmark with frequent time-measurement checkpoints. The idea was that since Mandreel stresses the VM's compiler, this test provides an indication of the latency introduced by the compiler, and long pauses between measurement checkpoints lower the final score. In theory, that sounds very reasonable, and it does indeed make some sense. However, as usual, vendors figured out ways to cheat on this benchmark.
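The instrumentation behind such latency tests is conceptually simple; roughly something like the following sketch of mine (not the actual Octane harness code, and the scoring formula is made up):

// Record the gap between frequent checkpoints; long gaps indicate
// pauses introduced by the compiler (or the GC).
const pauses = [];
let last = Date.now();
function checkpoint() {
  const now = Date.now();
  pauses.push(now - last);
  last = now;
}
// The benchmark calls checkpoint() frequently during the run, and
// the final latency score is derived from the pause distribution,
// penalizing long pauses.
function latencyScore() {
  const worst = Math.max(...pauses);
  return 10000 / Math.max(worst, 1);  // made-up scoring formula
}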
Mandreel contains a huge initialization function global_init, and an incredible amount of time is spent just parsing this function and generating baseline code for it. Engines usually parse the functions in a script multiple times: one so-called pre-parse step to discover the functions inside the script, and then, as a function is invoked for the first time, a full parse step to actually generate baseline code (or bytecode) for it.
This is called lazy parsing in V8 speak. V8 has some heuristics in place to detect functions that are invoked immediately, where pre-parsing is actually a waste of time, but those heuristics don't fire for the global_init function in the Mandreel benchmark, so we'd see an incredibly long pause for pre-parsing + parsing + compiling the big function. So, we added an additional heuristic that also avoids the pre-parsing for this global_init function.
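As a rough illustration of the trade-off (my own sketch, not V8's actual heuristics):

// Pre-parsed when the script is loaded, then fully parsed and
// compiled on the first call: the pre-parse step was wasted work.
function global_init() {
  // ...huge generated body...
}
global_init();

// The classic "IIFE hint": wrapping a function in parentheses is a
// common signal to engines that it will be invoked immediately, so
// they compile it eagerly and skip the pre-parse step.
(function global_init() {
  // ...huge generated body...
})();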
Source: arewefastyet.com.
So, we saw an almost 200% improvement just by detecting global_init and avoiding the expensive pre-parse step. We are somewhat certain that this should not negatively impact real-world use cases, but there's no guarantee that this won't bite you on large functions where pre-parsing would be beneficial (because they aren't immediately executed).
So, let's look into another, slightly less controversial benchmark: the splay.js test, which is meant to be a data-manipulation benchmark that deals with splay trees and exercises the automatic memory management subsystem (aka the garbage collector). It comes bundled with a latency test that instruments the Splay code with frequent measurement checkpoints, where a long pause between checkpoints is an indication of high latency in the garbage collector. This test measures the frequency of latency pauses, classifies them into buckets, and penalizes frequent long pauses with a low score. Sounds great! No GC pauses, no jank. So much for the theory. Let's have a look at the benchmark; here's what's at the core of the whole splay tree business (the relevant functions from splay.js, quoted here for reference):
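function GeneratePayloadTree(depth, tag) {
  if (depth == 0) {
    return {
      array: [0, 1, 2, 3, 4, 5, 6, 7, 8, 9],
      string: 'String for key ' + tag + ' in leaf node'
    };
  } else {
    return {
      left: GeneratePayloadTree(depth - 1, tag),
      right: GeneratePayloadTree(depth - 1, tag)
    };
  }
}

function GenerateKey() {
  // The benchmark framework guarantees that Math.random is
  // deterministic; see base.js.
  return Math.random();
}

function InsertNewNode() {
  // Insert new node with a unique key.
  var key;
  do {
    key = GenerateKey();
  } while (splayTree.find(key) != null);
  var payload = GeneratePayloadTree(kSplayTreePayloadDepth, String(key));
  splayTree.insert(key, payload);
  return key;
}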
This is the core of the splay tree construction, and despite what you might think looking at the full benchmark, this is more or less all that matters for the SplayLatency score. How come? What the benchmark actually does is construct huge splay trees, so that the majority of nodes survive and thus make it to old space. With a generational garbage collector like the one in V8, this is super expensive if a program violates the generational hypothesis, leading to extreme pause times for essentially evacuating everything from new space to old space. Running V8 in the old configuration clearly shows this problem:
$ out/Release/d8 --trace-gc --noallocation_site_pretenuring octane-splay.js
[20872:0x7f26f24c70d0] 10 ms: Scavenge 2.7 (6.0) -> 2.7 (7.0) MB, 1.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 12 ms: Scavenge 2.7 (7.0) -> 2.7 (8.0) MB, 1.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 14 ms: Scavenge 3.7 (8.0) -> 3.6 (10.0) MB, 0.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 18 ms: Scavenge 4.8 (10.5) -> 4.7 (11.0) MB, 2.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 22 ms: Scavenge 5.7 (11.0) -> 5.6 (16.0) MB, 2.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 28 ms: Scavenge 8.7 (16.0) -> 8.6 (17.0) MB, 4.3 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 35 ms: Scavenge 9.6 (17.0) -> 9.6 (28.0) MB, 6.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 49 ms: Scavenge 16.6 (28.5) -> 16.4 (29.0) MB, 8.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 65 ms: Scavenge 17.5 (29.0) -> 17.5 (52.0) MB, 15.3 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 93 ms: Scavenge 32.3 (52.5) -> 32.0 (53.5) MB, 17.6 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 126 ms: Scavenge 33.4 (53.5) -> 33.3 (68.0) MB, 31.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 151 ms: Scavenge 47.9 (68.0) -> 47.6 (69.5) MB, 15.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 183 ms: Scavenge 49.2 (69.5) -> 49.2 (84.0) MB, 30.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 210 ms: Scavenge 63.5 (84.0) -> 62.4 (85.0) MB, 14.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 241 ms: Scavenge 64.7 (85.0) -> 64.6 (99.0) MB, 28.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 268 ms: Scavenge 78.2 (99.0) -> 77.6 (101.0) MB, 16.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 298 ms: Scavenge 80.4 (101.0) -> 80.3 (114.5) MB, 28.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 324 ms: Scavenge 93.5 (114.5) -> 92.9 (117.0) MB, 16.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 354 ms: Scavenge 96.2 (117.0) -> 96.0 (130.0) MB, 27.6 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 383 ms: Scavenge 108.8 (130.0) -> 108.2 (133.0) MB, 16.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 413 ms: Scavenge 111.9 (133.0) -> 111.7 (145.5) MB, 27.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 440 ms: Scavenge 124.1 (145.5) -> 123.5 (149.0) MB, 17.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 473 ms: Scavenge 127.6 (149.0) -> 127.4 (161.0) MB, 29.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 502 ms: Scavenge 139.4 (161.0) -> 138.8 (165.0) MB, 18.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 534 ms: Scavenge 143.3 (165.0) -> 143.1 (176.5) MB, 28.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 561 ms: Scavenge 154.7 (176.5) -> 154.2 (181.0) MB, 19.0 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 594 ms: Scavenge 158.9 (181.0) -> 158.7 (192.0) MB, 29.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 622 ms: Scavenge 170.0 (192.5) -> 169.5 (197.0) MB, 19.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 655 ms: Scavenge 174.6 (197.0) -> 174.3 (208.0) MB, 28.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 683 ms: Scavenge 185.4 (208.0) -> 184.9 (212.5) MB, 19.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 715 ms: Scavenge 190.2 (213.0) -> 190.0 (223.5) MB, 27.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 743 ms: Scavenge 200.7 (223.5) -> 200.3 (228.5) MB, 19.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 774 ms: Scavenge 205.8 (228.5) -> 205.6 (239.0) MB, 27.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 802 ms: Scavenge 216.1 (239.0) -> 215.7 (244.5) MB, 19.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 833 ms: Scavenge 221.4 (244.5) -> 221.2 (254.5) MB, 26.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 861 ms: Scavenge 231.5 (255.0) -> 231.1 (260.5) MB, 19.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 892 ms: Scavenge 237.0 (260.5) -> 236.7 (270.5) MB, 26.3 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 920 ms: Scavenge 246.9 (270.5) -> 246.5 (276.0) MB, 20.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 951 ms: Scavenge 252.6 (276.0) -> 252.3 (286.0) MB, 25.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 979 ms: Scavenge 262.3 (286.0) -> 261.9 (292.0) MB, 20.3 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1014 ms: Scavenge 268.2 (292.0) -> 267.9 (301.5) MB, 29.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1046 ms: Scavenge 277.7 (302.0) -> 277.3 (308.0) MB, 22.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1077 ms: Scavenge 283.8 (308.0) -> 283.5 (317.5) MB, 25.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1105 ms: Scavenge 293.1 (317.5) -> 292.7 (323.5) MB, 20.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1135 ms: Scavenge 299.3 (323.5) -> 299.0 (333.0) MB, 24.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1164 ms: Scavenge 308.6 (333.0) -> 308.1 (339.5) MB, 20.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1194 ms: Scavenge 314.9 (339.5) -> 314.6 (349.0) MB, 25.0 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1222 ms: Scavenge 324.0 (349.0) -> 323.6 (355.5) MB, 21.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1253 ms: Scavenge 330.4 (355.5) -> 330.1 (364.5) MB, 25.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1282 ms: Scavenge 339.4 (364.5) -> 339.0 (371.0) MB, 22.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1315 ms: Scavenge 346.0 (371.0) -> 345.6 (380.0) MB, 25.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1413 ms: Mark-sweep 349.9 (380.0) -> 54.2 (305.0) MB, 5.8 / 0.0 ms (+ 87.5 ms in 73 steps since start of marking, biggest step 8.2 ms, walltime since start of marking 131 ms) finalize incremental marking via stack guard GC in old space requested
[20872:0x7f26f24c70d0] 1457 ms: Scavenge 65.8 (305.0) -> 65.1 (305.0) MB, 31.0 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1489 ms: Scavenge 69.9 (305.0) -> 69.7 (305.0) MB, 27.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1523 ms: Scavenge 80.9 (305.0) -> 80.4 (305.0) MB, 22.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1553 ms: Scavenge 85.5 (305.0) -> 85.3 (305.0) MB, 24.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1581 ms: Scavenge 96.3 (305.0) -> 95.7 (305.0) MB, 18.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1616 ms: Scavenge 101.1 (305.0) -> 100.9 (305.0) MB, 29.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1648 ms: Scavenge 111.6 (305.0) -> 111.1 (305.0) MB, 22.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1678 ms: Scavenge 116.7 (305.0) -> 116.5 (305.0) MB, 25.0 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1709 ms: Scavenge 127.0 (305.0) -> 126.5 (305.0) MB, 20.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1738 ms: Scavenge 132.3 (305.0) -> 132.1 (305.0) MB, 23.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1767 ms: Scavenge 142.4 (305.0) -> 141.9 (305.0) MB, 19.6 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1796 ms: Scavenge 147.9 (305.0) -> 147.7 (305.0) MB, 23.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1825 ms: Scavenge 157.8 (305.0) -> 157.3 (305.0) MB, 19.9 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1853 ms: Scavenge 163.5 (305.0) -> 163.2 (305.0) MB, 22.2 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1881 ms: Scavenge 173.2 (305.0) -> 172.7 (305.0) MB, 19.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1910 ms: Scavenge 179.1 (305.0) -> 178.8 (305.0) MB, 23.0 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1944 ms: Scavenge 188.6 (305.0) -> 188.1 (305.0) MB, 25.1 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 1979 ms: Scavenge 194.7 (305.0) -> 194.4 (305.0) MB, 28.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 2011 ms: Scavenge 204.0 (305.0) -> 203.6 (305.0) MB, 23.4 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 2041 ms: Scavenge 210.2 (305.0) -> 209.9 (305.0) MB, 23.8 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 2074 ms: Scavenge 219.4 (305.0) -> 219.0 (305.0) MB, 24.5 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 2105 ms: Scavenge 225.8 (305.0) -> 225.4 (305.0) MB, 24.7 / 0.0 ms allocation failure
[20872:0x7f26f24c70d0] 2138 ms: Scavenge 234.8 (305.0) -> 234.4 (305.0) MB, 23.1 / 0.0 ms allocation failure
[...snip...]
$
So, the key observation here is that allocating the splay tree nodes in old space directly would avoid essentially all the overhead of copying objects around and reduce the number of minor GC cycles to the bare minimum (thereby reducing the pauses caused by the GC). We came up with a mechanism called allocation site pretenuring that dynamically gathers feedback at allocation sites while the code runs in the baseline tier, to decide whether a certain fraction of the objects allocated there survives; if so, it instruments the optimized code to allocate objects in old space directly, i.e., to pretenure the objects.
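Conceptually, the decision logic is something like this hand-wavy sketch of mine (names and thresholds such as kPretenureRatio are made up for illustration; this is not V8's actual implementation):

// Each allocation site tracks how many of the objects it allocated
// were still alive at the last GC.
function updatePretenuringDecision(site) {
  const kMinSamples = 100;       // assumed minimum sample size
  const kPretenureRatio = 0.85;  // assumed survival-rate cutoff
  if (site.allocated < kMinSamples) return;
  const survivalRate = site.survived / site.allocated;
  // If most objects allocated at this site survive, optimized code
  // should allocate directly in old space (pretenure).
  site.pretenure = survivalRate >= kPretenureRatio;
}

Running with the default configuration, i.e., with allocation site pretenuring enabled, the pause profile looks a lot healthier: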
$ out/Release/d8 --trace-gc octane-splay.js
[20885:0x7ff4d7c220a0] 8 ms: Scavenge 2.7 (6.0) -> 2.6 (7.0) MB, 1.2 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 10 ms: Scavenge 2.7 (7.0) -> 2.7 (8.0) MB, 1.6 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 11 ms: Scavenge 3.6 (8.0) -> 3.6 (10.0) MB, 0.9 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 17 ms: Scavenge 4.8 (10.5) -> 4.7 (11.0) MB, 2.9 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 20 ms: Scavenge 5.6 (11.0) -> 5.6 (16.0) MB, 2.8 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 26 ms: Scavenge 8.7 (16.0) -> 8.6 (17.0) MB, 4.5 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 34 ms: Scavenge 9.6 (17.0) -> 9.5 (28.0) MB, 6.8 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 48 ms: Scavenge 16.6 (28.5) -> 16.4 (29.0) MB, 8.6 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 64 ms: Scavenge 17.5 (29.0) -> 17.5 (52.0) MB, 15.2 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 96 ms: Scavenge 32.3 (52.5) -> 32.0 (53.5) MB, 19.6 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 153 ms: Scavenge 61.3 (81.5) -> 57.4 (93.5) MB, 27.9 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 432 ms: Scavenge 339.3 (364.5) -> 326.6 (364.5) MB, 12.7 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 666 ms: Scavenge 563.7 (592.5) -> 553.3 (595.5) MB, 20.5 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 825 ms: Mark-sweep 603.9 (644.0) -> 96.0 (528.0) MB, 4.0 / 0.0 ms (+ 92.5 ms in 51 steps since start of marking, biggest step 4.6 ms, walltime since start of marking 160 ms) finalize incremental marking via stack guard GC in old space requested
[20885:0x7ff4d7c220a0] 1068 ms: Scavenge 374.8 (528.0) -> 362.6 (528.0) MB, 19.1 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 1304 ms: Mark-sweep 460.1 (528.0) -> 102.5 (444.5) MB, 10.3 / 0.0 ms (+ 117.1 ms in 59 steps since start of marking, biggest step 7.3 ms, walltime since start of marking 200 ms) finalize incremental marking via stack guard GC in old space requested
[20885:0x7ff4d7c220a0] 1587 ms: Scavenge 374.2 (444.5) -> 361.6 (444.5) MB, 13.6 / 0.0 ms allocation failure
[20885:0x7ff4d7c220a0] 1828 ms: Mark-sweep 485.2 (520.0) -> 101.5 (519.5) MB, 3.4 / 0.0 ms (+ 102.8 ms in 58 steps since start of marking, biggest step 4.5 ms, walltime since start of marking 183 ms) finalize incremental marking via stack guard GC in old space requested
[20885:0x7ff4d7c220a0] 2028 ms: Scavenge 371.4 (519.5) -> 358.5 (519.5) MB, 12.1 / 0.0 ms allocation failure
[...snip...]
$
Indeed, that essentially fixed the problem for the SplayLatency benchmark completely and boosted our score by over 250%!
Source: arewefastyet.com.
As mentioned in the SIGPLAN paper, we had good reasons to believe that allocation site pretenuring might be a win for real-world applications, and we were really looking forward to seeing improvements and to extending the mechanism to cover more than just object and array literals. But it didn't take long to realize that allocation site pretenuring can have a pretty serious negative impact on real-world application performance. We actually got a lot of negative press, including a sh*t storm from Ember.js developers and users, not only because of allocation site pretenuring, but that was a big part of the story.
The fundamental problem with allocation site pretenuring, as we learned, is factories, which are very common in applications today (mostly because of frameworks, but also for other reasons). Suppose your object factory is initially used to create the long-living objects that form your object model and your views; this transitions the allocation site in your factory method(s) to the tenured state, and everything allocated from the factory immediately goes to old space. Then, after the initial setup is done, your application starts doing actual work and, as part of that, allocates temporary objects from the same factory. These now start polluting old space, eventually leading to expensive major garbage collection cycles and other negative side effects, like triggering incremental marking way too early.
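Here's a sketch of my own showing the pattern (the makePoint factory is hypothetical, not taken from any framework):

// A single allocation site serves both long-lived and short-lived
// objects, so site-based survival feedback is misleading.
function makePoint(x, y) {
  return { x: x, y: y };  // the one allocation site for all points
}

// Startup: lots of long-lived model objects survive GC after GC, so
// the site gets marked as tenured and future allocations from it go
// straight to old space.
const model = [];
for (let i = 0; i < 100000; ++i) model.push(makePoint(i, i));

// Steady state: short-lived temporaries from the same site now land
// in old space too, polluting it and eventually forcing expensive
// major GC cycles.
function distance(a, b) {
  const d = makePoint(a.x - b.x, a.y - b.y);  // temporary, but pretenured
  return Math.sqrt(d.x * d.x + d.y * d.y);
}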
We started to reconsider the benchmark-driven effort and began looking for real-world-driven solutions instead, which resulted in an effort called Orinoco, with the goal of incrementally improving the garbage collector. Part of that effort is a project called unified heap, which will try to avoid copying objects if almost everything on a page survives. On a high level: if new space is full of live objects, just mark all new-space pages as belonging to old space and create a fresh new space from empty pages. This might not yield the same score on the SplayLatency benchmark, but it's a lot better for real-world use cases, and it automatically adapts to the concrete use case. We are also considering concurrent marking to offload the marking work to a separate thread, thus further reducing the negative impact of incremental marking on both latency and throughput.
Cuteness break!

Breathe.
OK, I think that should be sufficient to underline the point. I could go on pointing to even more examples where Octane-driven improvements turned out to be a bad idea later, and maybe I'll do that another day. But let's stop right here for now.
Conclusion
I hope it's clear by now why benchmarks are generally a good idea but are only useful up to a certain level; once you cross the line of useful competition, you'll start wasting the time of your engineers or even start hurting your real-world performance! If we are serious about performance for the web, we need to start judging browsers by real-world performance, not by their ability to game four-year-old benchmarks. We need to start educating the (tech) press, or, failing that, at least ignore it.
Source: Browser Benchmark Battle October 2016: Chrome vs. Firefox vs. Edge, venturebeat.com.
No one is afraid of competition, but gaming potentially broken benchmarks is not really a useful investment of engineering time. We can do a lot more and take JavaScript to the next level. Let's work on meaningful performance tests that drive competition in areas that matter to end users and developers. Additionally, let's also drive meaningful improvements for server-side and tooling code running in Node.js (either on V8 or ChakraCore)!
One closing comment: don't use traditional JavaScript benchmarks to compare phones. It's really the most useless thing you can do, as JavaScript performance often depends a lot on the software and not necessarily on the hardware, and Chrome ships a new version every six weeks, so whatever you measure in March may already be irrelevant in April. And if there's no way to avoid running something in a browser that assigns a number to a phone, then at least use a recent full-browser benchmark that has at least something to do with what people will actually do with their browsers, i.e., consider the Speedometer benchmark.
Thank you!

Here are parts one, two, and three in case you missed them.
Published at DZone with permission of Benedikt Meurer, DZone MVB. See the original article here.