This is the second post in the series Getting Started with Drupal, by guest writer Tobias Sjösten. Tobias is a web technician and open source aficionado who specializes in the LAMP stack, with Symfony and Drupal being especially close to his heart. He writes code in Vim, plays MUDs in gnome-terminal and naturally believes a well configured text interface trumps its graphical equivalent any day of the week.
In my last post, we talked about Drupal’s building blocks. You learned how to assemble your site using fundamental concepts such as Entities with Fields and how to string them together in lists with Views. If you were feeling especially brave, I hope you tried building pages with Page manager and Panels.
Going forward, I’ll assume you’ve built your site and are creating a lot of great content for it. I’ll also bet my repository of Vim scripts that your site has become increasingly sluggish as you’ve piled stuff onto it. While in the past you used to be able to load your meager content in the blink of an eye, it now takes a couple of seconds to load your impressive collection of nodes.
We can’t have that. Site speed and performance are important for both visitors and search engine crawlers. Studies from Amazon show that a business loses one percent of revenue for each 0.1s increment in page load time. And studies from Google show that increasing load times from 0.4s to 0.9s decreases traffic and ad revenue.
So slow pages can have a very real impact on your business. Those are some strong incentives for whipping your site into shape, so let’s move on from the why to the how.
Gather Performance Data
The first rule of performance optimization is that you don’t know how your site is performing until you’ve measured it. And if you can’t find the bottlenecks, how can you know what to optimize? Fumbling around in the dark can (and will) lead to wasted time focusing on the wrong sections of your site with little or no significant improvement.
Like any professional craftsperson, you need a good set of tools. The ones I usually use when optimizing performance are XHProf, Siege, YSlow, Google Analytics and of course, New Relic.
XHProf lets you profile your application to see which functions call which other functions and how long they take to do so. One part, written in C as a PHP extension, gathers the data; another part, written in PHP, visualizes that data for you. Some PHP programmers will recognize this functionality from Xdebug, which is even more powerful. However, the whole idea with XHProf is to keep it slim, so that you can use the tool even in production with minimal effect on your performance. This way you can gather reports from very real scenarios, and that is always the best data.
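As a minimal sketch, wrapping a section of code with XHProf looks like this (the xhprof_* functions are the extension’s real API; expensive_operation() is a hypothetical stand-in for your own code):

```php
<?php
// Start profiling, collecting CPU and memory data as well as wall time.
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

// ... the code you want to profile ...
expensive_operation();

// Stop profiling and collect the raw call-graph data.
$xhprof_data = xhprof_disable();

// Persist the run so the bundled xhprof_html UI can render it.
include_once 'xhprof_lib/utils/xhprof_runs.php';
$runs = new XHProfRuns_Default();
$run_id = $runs->save_run($xhprof_data, 'mysite');
```

In Drupal you would typically put the enable call early in index.php or a small module, and the save in a shutdown function, so every request gets profiled.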
It’s one thing if your site feels fast on your development machine. It’s a whole other matter if it keeps humming along after it’s been featured on some big news site and tens of thousands of users suddenly start putting load on it. Siege allows you to simulate just that.
Siege is a command line tool in which you define a number of concurrent users and how many requests to simulate for them. You can give it a file of URLs so that Siege knows which paths in your application to request. If your site is already live, it makes a lot of sense to base this file on real traffic. If 60% of your requests are for your home page, 35% for some article page and 5% for the About Us page, then have Siege use that pattern.
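As a sketch, here is how you might build a URL file weighted to that 60/35/5 split and point Siege at it (example.com and the paths are placeholders for your own site):

```shell
# Build a URL file weighted like real traffic: out of 20 lines,
# 12 hit the home page (60%), 7 an article (35%), 1 the about page (5%).
: > urls.txt
for i in 1 2 3 4 5 6 7 8 9 10 11 12; do
  echo "http://example.com/" >> urls.txt
done
for i in 1 2 3 4 5 6 7; do
  echo "http://example.com/node/42" >> urls.txt
done
echo "http://example.com/about-us" >> urls.txt

# Then lay siege with, say, 25 concurrent users for two minutes:
# siege --concurrent=25 --time=2M --file=urls.txt
```

Siege picks URLs from the file, so repeating a line is a simple way to weight it.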
After you’ve completed a siege, you’ll get a summary of your results. It will tell you how many requests failed, how many succeeded, how much data was delivered, the average response time, the longest response time, etc. Remember to always do this before and after you do performance work. This helps you establish a baseline and see exactly what improvements your incisions have brought.
Your PHP execution time is only part of the equation. An equally important one is frontend performance — how quickly the parts are put together in your browser and how responsive it is to interactions.
With YSlow you can measure how well your website follows the best practices laid down by Yahoo, such as using native HTTP caching and minimizing DNS lookups and requests. There are a bunch of them and I really encourage you to read through the list, but with YSlow you can start with what is immediately relevant to your site.
Running this tool generates a list of various aspects of your frontend performance, complete with a grade from A (best) to F (worst). This gives you a nice ordered list of problematic areas.
As a developer, you often use more powerful machines than your average user has. This is a common problem in web development and can easily lead to a disparity in how your performance is experienced.
One way to measure your real load times is Google Analytics. Since November 2011, Google Analytics has tracked this automatically through its standard service, but you should still tweak the vanilla script some. By default, only 1% of your visitors are sampled. Google itself recommends increasing this if you have fewer than a meager (heh) 100,000 visitors a day.
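With the classic ga.js tracker, bumping the sample rate is a one-line addition to the snippet. _setSiteSpeedSampleRate is the real API method; UA-XXXXX-Y stands in for your own property ID:

```javascript
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-Y']);      // your property ID goes here
_gaq.push(['_setSiteSpeedSampleRate', 100]);   // sample 100% of visits (default is 1)
_gaq.push(['_trackPageview']);
```

The rest of the standard ga.js loader stays as-is; only the extra push changes the sampling.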
Once Google Analytics is enabled, give it a day or two to gather and aggregate data, and then look for your statistics under Content > Site Speed. You’ll easily be able to see which pages are the slowest and how long they take to load on average. This is a great source for building a prioritized list of parts to speed up. But there’s more!
By default you’ll see the pages as the primary dimension. You can switch this to have it list load times for variables like continent, browser, operating system, etc. This means you can see if any browser is struggling with your site or if your pages are loading especially slowly in any part of the world. For my own site I can, for example, see that Eastern Europe has an average load time of 1.16 seconds, while Southern Asia clocks in at 29.05 seconds. Perhaps I should consider a CDN in Asia.
There is also a secondary dimension you can use to further segment your reports. In my opinion this is especially useful if you’re also using custom variables, for example to track logged in users differently from anonymous users. Adding a secondary dimension for this can give you important insights into whether those slow pages are sluggish because of some widget that is exclusive to logged in users.
If that isn’t enough, you can also track custom timing yourself. Just remember to get whatever data you can in there and then sift through it later for intelligence.
Since you are reading this on New Relic’s blog, I probably don’t need to introduce their service to you. But in short, it’s a performance management tool with features like slow page request analysis, error reporting, slow database query analysis, performance warnings, weekly reports, etc.
It’s a great tool, it’s free and I encourage you to go sign up for an account if you haven’t already.
By using the tools above, you should be able to get a good understanding of what parts of your application are slow, why they are slow and hopefully what you can do to improve them. Now comes an important part of software development — being lazy.
You’ll want to go after the lowest hanging fruit first. Start with whatever gives the most bang for your buck. Manually replacing all prints with echos would be an improvement, but a stupid one. Use the intelligence you’ve gathered, find your biggest culprits and start with the easiest solutions. Here are some examples of easy wins for your Drupal site:
Alternative PHP Cache (Otherwise Known as APC)
Every time PHP is executed it usually has to parse your files and compile their content to bytecode. You can easily make your site several times faster just by installing an opcode cache like APC and configuring it correctly.
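What “configuring it correctly” means varies per site, but a starting point in php.ini might look like this (the shared memory size is an assumption; size it to fit your codebase):

```ini
extension=apc.so
apc.enabled=1
; Shared memory for the opcode cache; must fit all your PHP files.
apc.shm_size=64M
; Skip stat() calls on every request once your code is stable.
; (Remember to clear the cache when deploying new code.)
apc.stat=0
```

Watch the APC dashboard (apc.php ships with the extension) for cache fullness and fragmentation, and adjust shm_size accordingly.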
Embed CSS Images
Your CSS files probably contain a couple of design images. Once the CSS has been downloaded and parsed, those images need to be downloaded as well. This can be helped by installing css_emimage.module, which embeds the images into the CSS file itself and thereby decreases the number of HTTP requests.
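The idea, sketched by hand: instead of referencing a separate file, the image bytes are inlined as a base64 data URI (truncated here):

```css
/* Before: one extra HTTP request for the background image. */
.logo { background-image: url(images/logo.png); }

/* After: the image travels inside the stylesheet itself. */
.logo { background-image: url(data:image/png;base64,iVBORw0KGgo...); }
```

The trade-off is a larger stylesheet, so this pays off best for many small images rather than a few large ones.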
With its built-in page cache enabled, Drupal also sends a Vary: Cookie header with those cached pages, meaning they won’t be valid for logged-in users. And that’s alright. We don’t want to serve the same pages to anonymous users as to authenticated ones, as we have something else in store for the latter.
Drupal is a generalist that tries to cater to as many use cases as possible. Therefore, it’s obviously no specialist when it comes to your unique website. Since you know you shouldn’t hack core or contrib modules, you might feel hopeless when you find a slow part of your site that you know you could improve by customizing it for your data.
But there is hope and it’s spelled c-a-c-h-i-n-g. By saving the results of our heavy operations in a cache we can more or less make the pain points obsolete. As luck would have it, Drupal is an expert cache operator!
When Drupal is about to do something heavy, it can check whether that operation has already been done and whether the results are still fresh enough to be usable. If so, a lot of heavy lifting can be avoided, resulting in faster pages.
Of course, this “automatic” caching only happens where it has been enabled; Views and Panels are two such areas. Both are considered fundamental building blocks and will arm you with some very powerful, yet simple to use, caching mechanisms.
For Views, you can cache in two layers; first the results of your database query and then the resulting rendition of that data. Check for Caching under Advanced when you go to edit your view. You can switch that from none to time-based and then configure it for how long you want to cache each layer.
When editing Panels, you also have two layers of cache; one for the overall rendition and another for the individual widgets in the panel. Under Content you will find a link to Display settings, which lets you configure the overall caching. Click the cogwheels of the content widgets to change each of their cache settings.
By default, Views will save a cache per combination of your roles, current language, arguments, etc. This can sometimes be too granular, such as when you want to cache the view just once and then have that served to all users. Panels is a bit better when it comes to granularity. Here you can choose to split the cache on arguments, context or just keep one cache. The latter does, however, still split into an admin/non-admin cache. While it’s even more flexible than Views, there might be cases where this is not enough for you. For example, if your site receives irregular traffic throughout the day (as most local sites do), you could want different caching depending on the time of day.
Thankfully you can create your own cache plugins and apply whatever logic you need. To let the two modules know you have plugins for them, implement for Views hook_views_plugins() and for Panels hook_ctools_plugin_directory(). Examples of how to do this can be found in views_content_cache.module and panels_page_cache.module.
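As a rough sketch of the registration side (mymodule, the plugins/ directory and the plugin names are hypothetical choices of your own), the two hooks look like this:

```php
<?php
/**
 * Implements hook_ctools_plugin_directory().
 *
 * Tells CTools (and thereby Panels) to scan our module's
 * plugins/ directory for cache plugins and the like.
 */
function mymodule_ctools_plugin_directory($module, $plugin) {
  if ($module == 'ctools' && !empty($plugin)) {
    return 'plugins/' . $plugin;
  }
}

/**
 * Implements hook_views_plugins().
 *
 * Registers a custom cache plugin with Views.
 */
function mymodule_views_plugins() {
  return array(
    'cache' => array(
      'mymodule_time_of_day' => array(
        'title' => t('Time of day based cache'),
        'handler' => 'mymodule_plugin_cache_time_of_day',
        'path' => drupal_get_path('module', 'mymodule') . '/plugins',
      ),
    ),
  );
}
```

The handler class then holds the actual logic for deciding cache lifetimes, in this hypothetical case based on the time of day.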
Finally, on the topic of modules and caching, there’s cache invalidation. It’s widely known to be one of the hardest things in computer science, next to naming. In Drupal land, though, we have a lot of our bases covered with cache_actions.module, which supplies actions for clearing select caches. This enables you to build Rules that fire on specific events and clear your Views and Panels caches. For example, you could use it to flush your front page whenever a news headline is posted.
Behind the Scenes
It’s easy enough to configure your Views and Panels to cache their content, but what really happens when you click those buttons? What is the magic that makes everything work? The answer is Drupal’s pluggable cache system.
It begins by generating a cache ID (cid). This is the unique key for the given context and should be the same every time the cached result would be the same. The Panels simple cache plugin, for example, would use panels_simple_cache:1:5 when caching the fifth pane in the first display.
To start working with the cache, you also need a cache bin. Think of bins as buckets for separating cache items. Not for the sake of namespacing them, however; that is what the cid is for. The bin exists so implementations can configure different backends for different cache types. The default bin is “cache”, but Views, for example, uses “cache_views” instead.
With a cid and a cache bin decided, you can call cache_get() to see if the results are already cached. This is where the pluggable part kicks in.
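The classic usage pattern, sketched with a hypothetical mymodule and a stand-in function for the heavy work:

```php
<?php
function mymodule_get_report() {
  $cid = 'mymodule:report';

  // Return the cached copy if one exists and hasn't expired.
  if ($cache = cache_get($cid, 'cache')) {
    return $cache->data;
  }

  // Cache miss: do the heavy lifting...
  $report = mymodule_build_expensive_report();

  // ...and save the result, expiring on the next general cache wipe.
  cache_set($cid, $report, 'cache', CACHE_TEMPORARY);
  return $report;
}
```

Every later call within the cache’s lifetime skips the expensive step entirely.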
Drupal looks up the variable cache_class_cache (or cache_class_cache_views for Views) to determine which class to instantiate for the cache bin. This defaults to DrupalDatabaseCache, which writes to and reads from your database. By running variable_set() to change this, or by setting the value in $conf in your settings.php, you can swap the class for something else, as long as it implements DrupalCacheInterface.
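For example, assuming you have the contrib Memcache module installed under sites/all/modules, pointing the default and Views bins at it from settings.php could look like this:

```php
<?php
// settings.php
// Register the alternative backend and swap out the cache classes.
$conf['cache_backends'][] = 'sites/all/modules/memcache/memcache.inc';
$conf['cache_class_cache'] = 'MemCacheDrupal';
$conf['cache_class_cache_views'] = 'MemCacheDrupal';
```

Any bin you don’t override keeps using the default database cache.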
Once your class is instantiated, its set() method will correspond to cache_set(), its get() to cache_get() and its clear() to cache_clear_all(). How the class chooses to implement these is entirely up to that specific cache engine.
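For illustration, here is a skeleton of such a class (MyCustomCache is a hypothetical name; the method list follows Drupal 7’s DrupalCacheInterface):

```php
<?php
/**
 * Skeleton of a custom cache backend. A sketch only; a real
 * engine must implement every method against its actual storage.
 */
class MyCustomCache implements DrupalCacheInterface {
  function __construct($bin) { /* remember which bin we serve */ }
  function get($cid) { /* return one cached item, or FALSE */ }
  function getMultiple(&$cids) { /* return several items at once */ }
  function set($cid, $data, $expire = CACHE_PERMANENT) { /* store an item */ }
  function clear($cid = NULL, $wildcard = FALSE) { /* invalidate items */ }
  function isEmpty() { /* TRUE if the bin holds nothing */ }
}
```

Point a cache_class_* variable at the class and Drupal will route that bin’s traffic through it.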
A famous saying in Drupal is: “there’s a module for that” and of course this rings true in the case of caching as well. You can download and start using a custom cache implementation for APC, Memcache, Redis, etc.
Drupal performance is largely achieved with clever caching, but there are other ways to shave even more milliseconds off your page loads. I won’t go into exact detail, but will leave that as something you can search for on Google.
Tune Apache
You are probably running Apache with the Prefork MPM, but there are others, like Worker and Event. With each there are also opportunities to tweak your configuration to perform better for your specific environment.
Boost Page Caching
Instead of Drupal’s native functionality you can use boost.module to cache your pages. This module will generate files from the saved pages, which your web server can then serve without loading a single line of PHP.
Switch to Nginx
Replacing Apache altogether could be an even better option, as that would remove the overhead of loading the PHP engine for static files, like those generated by Boost. One popular alternative is Nginx, used together with PHP-FPM.
Tune MySQL
Since your database is probably the bottleneck of your application, it makes sense to tune it some. Using the mysqltuner.pl tool makes this much easier. Consider it the YSlow of databases.
Reverse Proxy with Varnish
By placing a reverse proxy in front of your website, you can configure it to cache pages even faster than what Drupal’s built-in mechanism can deliver. Using Varnish as your reverse proxy has the added benefit of supporting Edge Side Includes. This means you can cache entire pages but still keep the dynamic parts. The Drupal module for this is still under development but it’s definitely worth looking into.
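To give a taste, turning on ESI processing in Varnish 3 takes only a few lines of VCL (a sketch; a production configuration needs far more care about what to cache and for whom):

```vcl
sub vcl_fetch {
  # Let Varnish process <esi:include> tags in pages from this backend,
  # so cached pages can still pull in fresh, per-user fragments.
  if (req.url ~ "^/") {
    set beresp.do_esi = true;
  }
}
```

The page itself is then served from Varnish’s cache while each ESI include is fetched (or cached) on its own schedule.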
I hope I’ve helped you find some tools you didn’t know of before. Remember that performance work is a continuous process. You need to keep monitoring your application’s performance in a production environment. Measure with real, full data and don’t optimize until you know exactly what needs to be worked on. Start with the low hanging fruit and work your way up. Let’s keep teh Internets snappy!