Memcache 102


Memcache is a cache server that keeps data in RAM for your convenience, serving items by key at maximum speed while you're handling requests inside a web application.

How it works

PHP has a mature and stable PECL memcache extension:

sudo pecl install memcache

and Memcache itself takes the form of a daemon, memcached, that must be running on the web server. PHP processes can connect to that daemon over a standard port:

$memcache = new Memcache();
$memcache->connect('localhost', 11211);

and insert values in the cache:

$memcache->set('key', 'value', 0, 3600); // flags = 0, expire after 3600 seconds

Roughly, here's how to read a value from the cache when it exists, and to compute and insert it when the key is not populated yet:

// return the cached value when the key is present...
if ($value = $memcache->get('sum_1+1')) {
  return $value;
} else {
  // ...otherwise compute it, cache it for an hour, and return it
  $value = 1 + 1;
  $memcache->set('sum_1+1', $value, 0, 3600);
  return $value;
}

Tips: installation

How difficult can it be to use a cache? It's just a key-value store. However, there are some gotchas that derive from Memcache's implementation and are specific to this server.

You have to install one Memcache instance for each web server, on its own localhost. Since Memcache serves data from the main memory of the machine, it makes little sense to have a centralized cache machine: you'd have to make network calls to reach it, adding orders of magnitude to the access time (unless your hardware architecture guarantees a very fast network and your web servers have limited RAM).

You'll have to install Memcache locally too, both the daemon and the PHP extension. Your development machine can sustain the very small overhead; most of the pain is in keeping the extension and the server on matching versions. That said, Memcache has seemed very stable to me, whereas I would never trust different versions of the Mongo extension between development and production (and CI should always match production anyway).

Tips: operations

You can implement your cache so that if Memcache is down, the freshly calculated value is returned anyway. My colleagues wrapped the Memcache object into one of our own and added this fallback logic to make the process as transparent as possible; you can't do this with databases, since you depend on what is stored on the other side, but when it's possible to skip the cache as a last resort, why not?
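
Here is a minimal sketch of such a wrapper, assuming the connection is attempted at bootstrap time; the class and method names (CacheFallback, remember) are invented for illustration:

// A cache wrapper that falls back to computing the value when Memcache is unavailable.
class CacheFallback
{
    private $memcache;

    public function __construct(Memcache $memcache = null)
    {
        $this->memcache = $memcache;
    }

    public function remember($key, callable $compute, $expire = 3600)
    {
        // if the connection failed at bootstrap time, skip the cache entirely
        if ($this->memcache === null) {
            return $compute();
        }
        $value = $this->memcache->get($key);
        if ($value !== false) {
            return $value;
        }
        $value = $compute();
        // set() returning false (e.g. daemon down) is not fatal:
        // we still have the freshly computed value to return
        $this->memcache->set($key, $value, 0, $expire);
        return $value;
    }
}

// usage
$sum = $cache->remember('sum_1+1', function () { return 1 + 1; });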

In your build and deployment scripts (you automate, right?) you should clean the server of cached items, mainly to avoid the risk that a change in key or value formats leaves old entries to clash with the new code (think of moving from storing a string to storing a Value Object and you'll run into a lot of Fatal Errors like this one):

Fatal error: Call to a member function method() on a non-object in Command line code on line 1

While databases have to be migrated, with caches it's easier to just throw away the old entries and regenerate them. Stopping and starting memcached is very fast (after all, it just has to abandon its memory segment, not write anything to disk).
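
A minimal cleanup sketch you could call from a deployment script, assuming the daemon listens on localhost; flush() invalidates every stored item, which for this purpose is equivalent to restarting the daemon:

// deployment-time cleanup: throw away every cached item
$memcache = new Memcache();
if (@$memcache->connect('localhost', 11211)) {
    $memcache->flush();
    $memcache->close();
}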

Optionally, depending on your traffic and on how much you depend on the cache, you can perform some cache warmup by hitting your servers right after the deployment with fake HTTP requests and transactions. That way real users will always get a nice load time thanks to already cached items, instead of one of them being the first, unfortunate visitor.
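
A rough warmup sketch, assuming allow_url_fopen is enabled and that these URLs stand in for the pages that matter for your traffic:

// hit a handful of representative URLs right after deployment
// so the first real visitors find the cache already populated
$urls = array(
    'http://localhost/',
    'http://localhost/products',
    'http://localhost/search?q=popular+term',
);
foreach ($urls as $url) {
    // we only care about the side effect of populating the cache,
    // so the response body is discarded
    @file_get_contents($url);
}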

Memcache allocates memory in chunks of several megabytes, so don't be disturbed to see a few entries allocate 7 MB. We were startled by this after inserting a single User-Agent string into the cache, but the figure in fact stayed constant after the items grew to several thousand. PHP itself allocates memory in bigger chunks than the single variable, through the Zend Memory Manager: the rationale is that a few large malloc() calls to the OS have a lower overhead than one at each new variable to store, and the free RAM that gets occupied is only a few megabytes anyway.
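
If you want to see how much memory memcached has actually claimed, instead of guessing from the number of items, you can inspect its stats (a small sketch, assuming the daemon is on localhost):

// ask the daemon for its own bookkeeping
$memcache = new Memcache();
$memcache->connect('localhost', 11211);
$stats = $memcache->getStats();
printf("items: %d, bytes in use: %d, max bytes: %d\n",
    $stats['curr_items'], $stats['bytes'], $stats['limit_maxbytes']);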

Tips: keys and values

Memcache can store objects via serialization. A database connection can't be stored, of course, but immutable Value Objects are a nice fit for this model as long as they do not hold external references, which is the usual case.
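
A sketch of caching an immutable Value Object (the Money class here is just an example): the extension serializes it on set() and unserializes it on get(), provided the class is defined when the value is read back:

// an immutable Value Object with no external references
class Money
{
    private $amount;
    private $currency;

    public function __construct($amount, $currency)
    {
        $this->amount = $amount;
        $this->currency = $currency;
    }

    public function amount() { return $this->amount; }
}

$memcache = new Memcache();
$memcache->connect('localhost', 11211);
$memcache->set('price_42', new Money(1999, 'EUR'), 0, 3600);
$price = $memcache->get('price_42'); // a Money instance again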

You shouldn't store bare scalars such as booleans, because Memcache's get() returns false when it cannot find a key. It's therefore impossible to distinguish a cache miss from a cache hit where the stored value is false: store Value Objects instead.
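
To see the ambiguity, and one way around it with a tiny wrapper object (the FeatureFlag class is hypothetical):

$memcache = new Memcache();
$memcache->connect('localhost', 11211);

// storing a bare boolean: a hit on false looks exactly like a miss
$memcache->set('feature_enabled', false, 0, 3600);
var_dump($memcache->get('feature_enabled')); // bool(false): hit or miss?

// wrapping the flag in a small object removes the ambiguity
class FeatureFlag
{
    public $enabled;
    public function __construct($enabled) { $this->enabled = $enabled; }
}

$memcache->set('feature_enabled', new FeatureFlag(false), 0, 3600);
$flag = $memcache->get('feature_enabled');
if ($flag !== false) {
    // unambiguously a cache hit; the actual value is $flag->enabled
}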

Keys have a weight too: each cache item is independent from the others and contains both the complete key and the serialized value. When storing lots of small values this becomes important, because the largest part of the item can become the key itself: you may want to optimize the keys you're using a bit, while keeping them unique to avoid painful collisions.
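
One possible approach, sketched here, is to hash the long descriptive part of the key so the stored key stays short while remaining unique for practical purposes (cacheKey is a helper invented for this example):

// keep the readable namespace, hash the long identifier
function cacheKey($namespace, $identifier)
{
    return $namespace . '_' . md5($identifier);
}

$key = cacheKey('ua', 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 ...');
// e.g. "ua_3c59dc048e8850243be8079a5c74d079" instead of the full User-Agent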
