Hunting a Performance Bottleneck
We got a support request from a customer, and this time it was quite a doozie. They were using the FreeDB dataset as a test bed for a lot of experiments, and they ran into very slow indexing speed with it. In fact, the indexing speed they saw was, frankly, utterly unacceptable: it took days to complete. That runs contrary to all of the optimizations we have done in the past few years.
So something there was quite fishy. Luckily, it was fairly simple to reproduce, and the culprit was easily identified: the SQL Replication Bundle. But why? That turned out to have a pretty complex answer.
The FreeDB dataset currently contains over 3.1 million records, and the sample the customer sent us had about 8 indexes, varying in complexity from the trivial to full-text analyzed and map/reduce indexes. We expect such a workload, plus the replication to SQL, to take a while. But it should be a pretty fast process.
What turned out to be the problem was the way the SQL Replication Bundle works. Since we don't know what changes you are going to replicate to SQL, the first thing we do is delete all the data that might have previously been replicated. In practice, it means that we execute something like:
DELETE FROM Disks WHERE DocumentId IN (@p1, @p2, @p3)
INSERT INTO Disks (...) VALUES ( ... )
INSERT INTO Disks (...) VALUES ( ... )
INSERT INTO Disks (...) VALUES ( ... )
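To make the pattern concrete, here is a minimal sketch of delete-then-insert replication, using Python with SQLite as a stand-in for the target database. This is not the actual bundle code; the table and column names are the hypothetical ones from the statement above. The point is that re-replicating a document is idempotent: the DELETE wipes whatever was there before, so the batch can safely be replayed.

```python
import sqlite3

def replicate_batch(con, table, rows):
    """Replicate a batch of documents: one DELETE for the whole batch,
    then one INSERT per document. rows maps document id -> column dict."""
    ids = list(rows)
    placeholders = ", ".join("?" for _ in ids)
    # Delete anything previously replicated for these documents.
    con.execute(f"DELETE FROM {table} WHERE DocumentId IN ({placeholders})", ids)
    # Re-insert the current state of each document.
    for doc_id, cols in rows.items():
        names = ["DocumentId"] + list(cols)
        values = [doc_id] + list(cols.values())
        marks = ", ".join("?" for _ in names)
        con.execute(
            f"INSERT INTO {table} ({', '.join(names)}) VALUES ({marks})", values)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Disks (DocumentId TEXT, Title TEXT)")
replicate_batch(con, "Disks", {"disks/1": {"Title": "Abbey Road"}})
# Replaying the same document does not duplicate it; it replaces the row.
replicate_batch(con, "Disks", {"disks/1": {"Title": "Abbey Road (Remastered)"}})
```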
And over time, this became slower and slower. Now, SQL Replication needs access to a lot of documents (potentially all of them), so we use the same prefetcher technique that we use for indexing. And we also use the same optimizations to decide how much to load.
However, in this case, SQL Replication was slow, and because it shares the same optimization, to the optimizer it looked like we had a very slow index. That calls for reducing the load on the server, to improve responsiveness and reduce overall resource utilization. And that impacted the indexes. In effect, SQL Replication being slow forced us to feed the data into the indexes in small tidbits, and that drastically increased the I/O costs we had to pay.
So the first thing to do was to break them apart. We now have separate optimizer instances for indexes and for SQL Replication (and RavenDB Replication, for that matter), so they cannot impact one another.
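The split can be sketched as follows. This is an illustrative Python sketch, not RavenDB's actual code: each consumer gets its own batch-size optimizer, so a slow consumer can shrink only its own batches, never everyone's. The tuning rule (double when fast, halve when slow) and all the numbers are assumptions for illustration.

```python
class BatchSizeOptimizer:
    """Auto-tunes how many documents to feed a consumer per batch."""
    def __init__(self, initial=1024, minimum=128, maximum=65536):
        self.batch_size = initial
        self.minimum = minimum
        self.maximum = maximum

    def record(self, duration_seconds, target_seconds=1.0):
        # Grow the batch when the consumer keeps up; shrink it when it lags.
        if duration_seconds < target_seconds:
            self.batch_size = min(self.batch_size * 2, self.maximum)
        else:
            self.batch_size = max(self.batch_size // 2, self.minimum)

# One optimizer per consumer: slow SQL Replication no longer starves indexing.
optimizers = {
    "indexing": BatchSizeOptimizer(),
    "sql_replication": BatchSizeOptimizer(),
}
optimizers["sql_replication"].record(duration_seconds=5.0)  # slow batch -> shrink
optimizers["indexing"].record(duration_seconds=0.2)         # fast batch -> grow
```

With a single shared optimizer, that first slow SQL Replication batch would have halved the batch size for indexing too, which is exactly the coupling the fix removes.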
But the root cause was that SQL Replication was slow. And I think you should be able to figure out why from the information outlined above.
As we replicated more and more data into SQL, we increased the table size. And as we increased the table size, statements like our DELETE took more and more time, because SQL Server was doing a lot of table scans.
To be honest, I never really thought about it. RavenDB in the same circumstances would just self-optimize, and things would get faster quickly. SQL Server (or any other relational database) would just be dog slow until you came along and added the appropriate index.
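The effect of that missing index is easy to demonstrate with SQLite's EXPLAIN QUERY PLAN (used here as a stand-in for SQL Server; the table, column, and index names are the hypothetical ones from the example above):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Disks (DocumentId TEXT, Title TEXT)")
con.executemany("INSERT INTO Disks VALUES (?, ?)",
                [(f"disks/{i}", f"album {i}") for i in range(10000)])

def plan(sql):
    # EXPLAIN QUERY PLAN rows are (id, parent, notused, detail).
    return [row[3] for row in con.execute("EXPLAIN QUERY PLAN " + sql)]

delete = "DELETE FROM Disks WHERE DocumentId IN ('disks/1', 'disks/2', 'disks/3')"
plan_before = plan(delete)  # a full table scan, and it gets worse as the table grows

# The fix: index the column the replication DELETE filters on.
con.execute("CREATE INDEX IX_Disks_DocumentId ON Disks (DocumentId)")
plan_after = plan(delete)   # now an index search

print(plan_before)
print(plan_after)
```

Before the index, the plan reports a scan over the whole Disks table for every replicated batch; after it, a search on the index, whose cost stays essentially flat as the table grows.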
Once that was done, our performance was back on track and we could run things speedily both for indexes and for SQL Replication.
Published at DZone with permission of Oren Eini, CEO of RavenDB, DZone MVB. See the original article here.