Can Prepared Statements Improve Your Scalability?
Everybody knows that using prepared statements for your database access greatly improves latency. My point in this blog post is that they can also improve your database's scalability. How? Just read on.
What does a database do with a statement?
Well, each statement takes a toll on the database resources, as there are many tasks to be done with each SQL statement. For example, the database needs to maintain isolation level (read more about the pain that surrounds this issue here). It needs to flush buffers to disk when a commit occurs. It needs to maintain locking. And it also needs to do a very complex task – parse and optimize the SQL statement.
Parsing is a necessity: the database has to turn the SQL text into an internal representation, and string parsing is a CPU-intensive action, especially for a language as complex as SQL.
Optimizing is even more complex than parsing. It involves building the best execution plan for the query, and since a good plan can greatly improve a query's performance, it cannot be skipped.
So if we must parse and we must optimize, why mention it at all? Because both actions are CPU intensive, they have a great impact on the performance of the database and on the number of concurrent operations it can handle – in other words, on database scalability.
What is the database scalability barrier?
A good demonstration of this point is the wonderful post by Yoshinori Matsunobu. You can read it here. It shows just how much time the database spends on SQL parsing, and what happens when you "cut out" the SQL parsing layer and use the InnoDB engine API directly.
In that post, Mr. Matsunobu was able to reach a state where his MySQL server handled more requests per second than memcached.
So I think it is safe to say that SQL parsing and statement optimization have a great effect on database scalability.
What does a database do with a prepared statement?
So what can we do? Must we rip out our beloved ORM code and replace it with direct InnoDB calls? For some applications that may be the right call, but not for all – and for most applications it's simply not feasible.
Welcome to the world of prepared statements. With a prepared statement, the database receives the statement once, parses and optimizes it, and from that moment on just receives parameters and executes the already-parsed, already-optimized statement – a great performance improvement for the client and a much more "relaxed" operation for the database.
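The prepare-once, execute-many pattern is easiest to see in a runnable sketch. Python's sqlite3 module is used here purely as a stand-in (it compiles and caches parameterized statements per connection, much as a server-side prepared statement does; the JDBC equivalent is PreparedStatement); the table and the data are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# The statement text is fixed; only the parameters change on each call.
# sqlite3 caches the compiled (parsed and planned) statement per connection,
# so repeated executions skip the parse/optimize step entirely.
insert_sql = "INSERT INTO users (name) VALUES (?)"
for name in ["alice", "bob", "carol"]:
    conn.execute(insert_sql, (name,))
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]
print(count)  # 3
```

Note that the SQL string itself never changes between executions – that is what lets the database reuse the parsed plan instead of paying the parsing cost on every call.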
Note that prepared statements are handled at the connection level – one more good reason to use a connection-pooling mechanism.
How to do it
If you’re not familiar with prepared statements, I’ve gathered some links that explain how to write them in your programming language.
Note that, for legacy reasons, the MySQL JDBC driver only emulates prepared statements on the client side and does not translate them into server-side prepared statements – so the performance benefits are lost. To enable server-side prepared statements, set the useServerPrepStmts connection parameter. Check here for more info.
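As a sketch, a Connector/J URL with server-side prepared statements enabled might look like the following (the host, port, and schema name are placeholders; cachePrepStmts additionally lets the driver cache prepared statements across uses):

```
jdbc:mysql://localhost:3306/mydb?useServerPrepStmts=true&cachePrepStmts=true
```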
I’m not a Ruby expert, but it looks like ActiveRecord doesn’t support prepared statements, so that’s a big problem with Rails.
Ruby itself, however, does support prepared statements: http://blog.aizatto.com/2007/05/19/connecting-to-mysql-using-ruby/
Prepared statements offer many benefits – in performance, in security, and more. I strongly urge you to use prepared statements whenever you can – you will feel the difference immediately.