How to Handle Heavy Workloads
Even with moderate workloads, using Tarantool can be valuable as a way to ensure good latency — in most cases, it’s one millisecond or less.
You have probably heard of Tarantool. It’s a super fast DBMS with an application server inside. Tarantool is an open-source project that has been around for more than eight years. At Mail.Ru Group, we use it in more than half of our services, such as email, cloud, MyWorld, and Mail.Ru Agent. And since it’s open source, we give all of our work on Tarantool back to the community, so our users run the same version of Tarantool as we do.
Tarantool has client libraries for nearly all popular languages, and many of these were partially written by the community, which we immensely appreciate. When we come across a really efficient library, we immediately include it in our public package repositories, as we’re trying hard to deliver the DBMS and libraries right out of the box.
Tarantool is an elegant combination of a DBMS and a cache. A classical DBMS consists of a durable storage system with ACID transactions, a server-side language, tables, primary/secondary indexes, stored procedures, and other features. A cache, on the other hand, is nothing like a DBMS but is lightning fast in terms of throughput and latency. So these are two different worlds, and they converge in Tarantool. And Tarantool’s main purpose is to be the single source of truth for web-scale applications when they need to work with hot data.
Comparison With Classical DBMSes
If you’re using a traditional DBMS, such as Oracle or MySQL, you’re missing some advantageous cache features, namely fast request processing and low latency. Traditional DBMSes are good at many kinds of work, but raw speed isn’t one of them. At the same time, caches have their own cons, like the lack of transactions and stored procedures. As a third option, you can put a cache on top of a DBMS, but you will still face trade-offs: you’ll lose some DBMS features, like ACID transactions, stored procedures, and secondary indexes, and you’ll also lose some cache features, like high write throughput. Plus, new problems will crop up, the most serious being data inconsistency and the “cold start.”
If you aren’t okay with these trade-offs, and you would like a real DBMS and a real cache together in one, you should try Tarantool. It was designed specifically to address these issues.
Tarantool’s in-memory engine stores a database basically in two files: a snapshot file capturing the whole database image at some point in time, and a transaction log file that stores all of the transactions committed after that point. This architecture lets a database “warm up” on startup as quickly as possible. A completely “cold start” means reading the whole snapshot into memory. Tarantool reads from disk at around 100 MB/sec, even on a spinning disk. So if the data set is 100 GB, for example, it will be loaded into memory in 1,000 seconds, which is just under 17 minutes.
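The snapshot-plus-log recovery scheme can be illustrated with a toy sketch in Python. This is not Tarantool’s actual file format or API, just an in-memory model of the idea: load the full snapshot, then replay the transactions committed after it was taken.

```python
# Toy model of snapshot + transaction-log recovery, in the spirit of the
# architecture described above. File formats and names are illustrative,
# not Tarantool's actual ones.
#
# Cold-start time scales with snapshot size: e.g. a 100 GB snapshot read
# at 100 MB/sec takes about 1,000 seconds.

def recover(snapshot: dict, txn_log: list) -> dict:
    """Rebuild the in-memory state: load the snapshot, then replay the log."""
    db = dict(snapshot)  # "cold start": read the whole snapshot into memory
    for op, key, value in txn_log:  # replay transactions committed afterwards
        if op == "upsert":
            db[key] = value
        elif op == "delete":
            db.pop(key, None)
    return db

snapshot = {"user:1": "alice", "user:2": "bob"}
log = [("upsert", "user:3", "carol"), ("delete", "user:2", None)]
state = recover(snapshot, log)
```

After replay, the in-memory state reflects both the snapshot and every transaction committed after it, which is exactly why no separate warm-up phase is needed.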
Conversely, when we tested the cold start time of MySQL and PostgreSQL, the result was much worse. Unlike Tarantool, they begin accepting queries before the database is warmed up, but you still can’t really use a cold database, and the warm-up speed is a couple of orders of magnitude slower than Tarantool’s, at around 1–2 MB/sec. Basically, they require dirty tricks, such as running the “cat” command against database files to warm up an index in advance; otherwise, your database will be warming up for ages. DBAs who administer MySQL know these tricks, and they’re mostly unhappy with them. Tarantool, by contrast, is immediately up and running, with the shortest possible cold start time.
Tarantool has a fantastic memory footprint. Its overhead for storing data is very low. The real size of data on disk or in RAM is usually only a couple of percent more than the size of the raw data itself. The overhead never exceeds 10%, plus the memory used by indexes.
Our Use Cases
At Mail.Ru Group, Tarantool is used for a wide variety of tasks. We have as many as a couple hundred Tarantool deployments. Three of them handle the heaviest workloads — the authentication system, the push notification system, and the advertising system. Let’s talk about each of these in more detail.
Most websites and mobile applications out there use some sort of authentication system — I mean a system that authenticates an end user with a login/password pair, or a session ID, or a token. Mail.Ru Group is no exception here, as it authenticates all web and mobile users. Our system has non-trivial requirements, which could be considered contradictory:
- Heavy workloads. Each page, AJAX request, and API call in our mobile applications uses this system in order to authenticate the users being served.
- Low latency. Users are surprisingly impatient. They want all of their requested information right away. So each call must be handled ASAP.
- High availability. The authentication system must serve every single request; otherwise, a user is 100% likely to get an HTTP error of 500 because we can’t handle a request until the user is authenticated.
- Each request hits a database. Each hit to the authentication system triggers a check of some credentials, which are stored in a database. Moreover, each hit needs to be checked against anti-brute-force and anti-fraud systems, which in turn query the database and add a record to the user’s authentication history (current IP address, geographical location, time, authentication client, etc.). Plus, we need to update the last session/token access time, and we need to update the anti-brute-force/anti-fraud databases with all of the changes that we’ve made. For login/password authentication, we also need to create a session in a session database, i.e. insert a row into a table. So as you can see, there is a lot of work to do, and, let me say it again, this must be done for EACH hit to our web and mobile applications. And this work involves not just elementary read-only SELECT queries but also UPDATE and INSERT operations. On top of that, many hackers are constantly trying to break into our authentication system, which adds extra workload that is quite heavy but absolutely useless to us.
- Fairly large dataset. This system stores a lot of information about each of our users.
- Expiration. Some pieces of the data set need to expire, e.g., sessions and tokens. Each expiration requires an UPDATE transaction.
- Persistence. Every single change must be written securely to disk. We can’t store sessions in Memcached because when it goes down and up again, we’ll lose all of the sessions, forcing our users to remember and enter their logins and passwords again, which will probably make them hate us. It’s the same story with anti-brute force data: it’s our main weapon against hackers and we can’t risk losing it. And of course, we don’t want to lose our database with the hashes of user passwords — that’s one of the worst things that could happen to a website or a mobile application.
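The per-hit work described in the list above can be sketched in a few lines of Python. Plain dicts stand in for real database tables, and every name here (users, sessions, failed_attempts, the 5-attempt threshold) is hypothetical; the point is just to show how many reads and writes a single authentication hit generates.

```python
import hashlib
import time
import uuid

# Illustrative sketch of the per-hit authentication work. Dicts stand in
# for database tables; all names and thresholds are made up.
users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
sessions = {}           # session_id -> {"user", "last_access"}
auth_history = []       # append-only log of authentication attempts
failed_attempts = {}    # ip -> failure count, naive anti-brute-force table

def authenticate(login, password, ip):
    if failed_attempts.get(ip, 0) >= 5:          # anti-brute-force lookup
        return None
    ok = users.get(login) == hashlib.sha256(password.encode()).hexdigest()
    auth_history.append((login, ip, time.time(), ok))  # history INSERT
    if not ok:
        failed_attempts[ip] = failed_attempts.get(ip, 0) + 1  # UPDATE
        return None
    session_id = uuid.uuid4().hex
    sessions[session_id] = {"user": login,       # session INSERT
                            "last_access": time.time()}
    return session_id

def expire_sessions(ttl_seconds):
    """Expiration pass: drop sessions idle longer than the TTL."""
    now = time.time()
    for sid in [s for s, v in sessions.items()
                if now - v["last_access"] > ttl_seconds]:
        del sessions[sid]

sid = authenticate("alice", "s3cret", "203.0.113.7")
```

Even this toy version performs a credentials read, a history insert, an anti-brute-force check, and a session insert per hit, which is why each request translates into several queries and transactions against the database.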
So that’s our authentication system. How do you like it? It has a bunch of very strict requirements. Some of them could be met with just a cache: the ability to withstand heavy workloads, low request latency, and data expiration. Others could be met only with a database: persistence, for example. This implies that the authentication system should combine a cache and a database in a single solution: it has to be as durable as a truck, but as fast as a red sports car (or at least a yellow one!).
Right now we get around 50K queries per second that require login/password authentication. This rate seems relatively low, but there is a lot of work to do on each request, i.e. a number of queries and transactions need to be performed. Tarantool takes care of them all perfectly.
On the other hand, the session/token authentication workload reaches one million queries per second! This is the total workload of the entire Mail.Ru Group portal. And this workload is handled by only 12 servers with Tarantool: four servers with sessions and eight servers with user profiles. Replicas are already included in these numbers! By the way, these servers are far from reaching their maximum CPU capacity. The CPU usage is around 15-20% as of now.
These days there are many users switching from laptops to mobile devices, and they are primarily using applications rather than mobile web interfaces. And of course, mobile devices need push notifications. A push notification is sent when there is an event on the server side, and this event needs to be delivered to the mobile devices of end users.
An interesting fact about the notification delivery process is that the server side does not send messages directly to users’ mobile devices. Rather, it sends them to a special iOS or Android service that then takes care of delivery.
These iOS/Android services need to somehow authenticate users, and this is done via tokens. These tokens need to be stored in a database. Plus, a user can have more than one device and, therefore, there can be many tokens per user. So, there are lots of events on the server side, and the more often you notify your users, the more engaged they are with your application.
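A token store like the one described above is essentially a one-to-many mapping from users to device tokens. Here is a minimal sketch under that assumption; the function names and token formats are invented for illustration.

```python
from collections import defaultdict

# Hypothetical token store: one user can own many device tokens
# (one per device), and tokens are looked up when sending pushes.
device_tokens = defaultdict(set)  # user_id -> set of device tokens

def register(user_id, token):
    """Called when a device obtains a push token from the iOS/Android service."""
    device_tokens[user_id].add(token)

def unregister(user_id, token):
    """Called when a token is revoked or the device is removed."""
    device_tokens[user_id].discard(token)

register("u1", "ios-token-abc")
register("u1", "android-token-xyz")
```

In production this mapping lives in the database, so every register, unregister, and lookup is one more query in the push system’s workload.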
So it is clear that a push notification system generates a heavy workload on an underlying database. Worse still, the heavy workload is generated along with sub-millisecond latency requirements, because you don’t want to slow down your backend and keep it waiting for a database. Fortunately, heavy workloads and small latencies are what Tarantool was created for.
But this is not the only job for Tarantool in the push notification system. What else is there? The short answer is: queues. The long answer follows.
What would you say if your server side had to connect to the iOS/Android APIs directly? I bet you’d say “never!” And I would agree, because those APIs can slow down, or become unavailable or unreachable, and in each of these cases your backend performance would suffer severely. Obviously, you need a queue to serve as intermediate storage for all of the notifications. This queue must be fast, reliable, durable, and replicated. Again, this is Tarantool: it works perfectly as a queue, and here is an interesting article on the topic.
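The queue semantics sketched above can be modeled in a few lines. This is a toy, not Tarantool’s queue implementation: a Python list stands in for the on-disk transaction log, to show why every push and pop is itself a transaction.

```python
# Minimal sketch of a durable notification queue: every push and pop
# mutates the queue state AND appends to a write-ahead log, which is
# why each queue access counts as a transaction. The log here is just
# a list standing in for a real on-disk transaction log.
class DurableQueue:
    def __init__(self):
        self.items = []
        self.wal = []  # stand-in for the on-disk transaction log

    def push(self, notification):
        self.wal.append(("push", notification))  # record the change first
        self.items.append(notification)

    def pop(self):
        if not self.items:
            return None
        notification = self.items.pop(0)
        self.wal.append(("pop", notification))
        return notification

q = DurableQueue()
q.push({"user": "u1", "text": "new mail"})
msg = q.pop()
```

Because the log survives restarts in a real system, a crash between the backend and the iOS/Android APIs loses no notifications; they are simply replayed from the queue.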
Our push notification system at Mail.Ru Group handles 200K queries and transactions per second. By the way, each access to the queue is a transaction because, whether you push or pop, you still need to change the state of the queue and commit all of the changes to disk.
Mail.Ru is a huge web portal, and of course, it has advertisements on the majority of its pages. We have a sophisticated high-performance system that determines which ads to show. The system maintains a lot of information about users, their interests, and other kinds of things, which helps us to understand which ads to show to a specific user on a specific page.
The main challenge of the advertising system is how to handle heavy workloads at millisecond speeds. This system is exposed to even heavier workloads than the authentication system.
As an example, let’s say that we have 10 advertising slots on a page. For each slot, we need to look up many data sources, aggregate the results, determine which advertisement to show — and then actually show it. My next point is obvious, but I want to be clear: ads don’t offer any functionality to the end user, so their existence can’t be an excuse for a slowdown in the main services. Basically, everything needs to be done in a few milliseconds.
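The fan-out pattern described above (many slots, many data sources per slot, aggregate, pick an ad, all within a few milliseconds) can be sketched with asyncio. The data source names and the scoring are entirely made up; the point is the concurrent lookup-and-aggregate shape.

```python
import asyncio

# Hedged sketch of filling several ad slots concurrently. Each slot
# queries a few (mocked) data sources, aggregates the scores, and picks
# the best ad. Source names and scoring are invented for illustration.

async def query_source(source: str, slot: int) -> int:
    await asyncio.sleep(0)             # stands in for a fast database lookup
    return hash((source, slot)) % 100  # fake relevance score

async def fill_slot(slot: int) -> tuple:
    scores = await asyncio.gather(
        *(query_source(src, slot) for src in ("profile", "interests", "context"))
    )
    return slot, max(scores)           # aggregate and pick the winning ad

async def fill_page(num_slots: int):
    # All slots are filled concurrently, so total latency is bounded by
    # the slowest lookup rather than the sum of all lookups.
    return await asyncio.gather(*(fill_slot(s) for s in range(num_slots)))

ads = asyncio.run(fill_page(10))
```

Running the lookups concurrently is what keeps the whole page within a millisecond-scale budget: the latencies overlap instead of adding up.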
Our advertising system runs on one of the biggest Tarantool clusters in the world. Every second, it handles three million read transactions and one million write transactions.
Last but Not Least
Tarantool was born for heavy workloads. And even with moderate workloads, using Tarantool can be valuable as a way to ensure good latency — in most cases, it’s one millisecond or less. Traditional databases can’t do this; sometimes, in order to process a single user request to a traditional database, you may need to do many queries, and as all of the latencies sum up, the total latency per user request reaches a really unpleasant value. In this scenario, Tarantool would be really helpful — the shorter the time for processing one query, the shorter the total request time.
To sum up, Tarantool provides you with high throughput, low latency, and great uptime. It squeezes every last drop of performance out of your servers, yet is a real database with transactions, replication, and stored procedures.
Opinions expressed by DZone contributors are their own.