What Did All This Optimization Give Us?
Sometimes, you need to switch things up to get a better picture. When looking at optimizations, considering raw numbers instead of percentages can be a good idea.
I’ve been writing a lot about performance and optimizations, and mostly I’ve been giving out percentages, because percentages make it easy to compare against the state before the optimizations.
But when you start looking at the raw numbers, you see a whole different picture.
On the left, we have RavenDB 4.0 doing work (import and indexing) over about 4.5 million documents. On the right, you have RavenDB 3.5 doing exactly the same work.
We are tracking allocations here, and this is part of the work we have been doing to measure our relative change in costs. In particular, we focused on the cost of using strings.
A typical application will use about 30% of memory just for strings, and you can see that RavenDB 3.5 (on the right) is no different.
On the other hand, RavenDB 4.0 is using just 2.4% of its memory for strings. But what is even more interesting is to look at the total allocations. RavenDB 3.5 allocated about 300 GB to deal with the workload, while RavenDB 4.0 allocated about 32 GB.
Note that those are allocations, not total memory used, but RavenDB 4.0 comes out ahead on just about every metric. Take a look at those numbers:
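To put the quoted figures in perspective, here is a rough back-of-the-envelope sketch. The numbers are the "about" figures from the text treated as exact, and applying the string share to total allocations (rather than to resident memory) is an approximation on my part:

```python
# Figures quoted in the article (the article says "about" for each of these).
total_35_gb = 300        # total allocations, RavenDB 3.5
total_40_gb = 32         # total allocations, RavenDB 4.0
string_share_35 = 0.30   # ~30% of memory in strings (typical app, and 3.5)
string_share_40 = 0.024  # 2.4% of memory in strings (4.0)

# Overall reduction in allocations.
alloc_reduction = total_35_gb / total_40_gb

# Approximation: assume the string share of memory also holds for allocations.
strings_35_gb = total_35_gb * string_share_35   # ~90 GB of strings
strings_40_gb = total_40_gb * string_share_40   # ~0.77 GB of strings
string_reduction = strings_35_gb / strings_40_gb

print(f"{alloc_reduction:.1f}x fewer total allocations")       # 9.4x
print(f"~{string_reduction:.0f}x fewer string allocations")    # ~117x
```

Under those assumptions, the string-related allocations drop by roughly two orders of magnitude, which is far steeper than the overall ~9x reduction and helps explain the GC behavior described below.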
RavenDB 4.0 is spending less time overall in GC than RavenDB 3.5 spends just on blocking collections.
Amusingly enough, here are the saved profile runs:
What are your thoughts?
Published at DZone with permission of Oren Eini, DZone MVB. See the original article here.