Long-running PHP processes: external resources
PHP has multiple SAPIs: for instance, the web-based ones, which run processes inside a web server such as Apache, and the CLI-based one, which runs processes as independent entities with no time limit.
When you start using the CLI SAPI, you may be tempted to set up long-running processes that never terminate; for example, workers picking jobs from a queue or from Gearman. These processes look like this:
```php
setup();

while (true) {
    $job = pick();
    $job->execute();
}
```
It's not trivial to correctly implement these kinds of processes, as they immediately negate the advantages of a shared-nothing architecture like PHP's. For example, when a process crashes for any reason, you will have to resort to a supervisor process that respawns it (such as Supervisor itself).
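As a concrete example of such supervision, a minimal supervisord program entry might look like the following sketch (the program name and script path are hypothetical):

```ini
; Hypothetical supervisord entry: respawn the worker whenever it exits.
[program:queue-worker]
command=php /path/to/worker.php
autostart=true
autorestart=true
; give the worker a few seconds to finish the current job on stop
stopwaitsecs=10
```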
Today I'm going to write about one of the issues that comes up with long-running processes: permanent access to external resources.
The process architecture
Since each of our workers needs to execute multiple jobs, we usually don't want them to terminate when a particular job raises an error. For example, we may be sending notifications to external web services and get HTTP-related exceptions for unreachable hosts.
When zooming in, our worker looks more like this:
```php
while (true) {
    $job = pick();
    try {
        $job->execute();
    } catch (Exception $e) {
        // manage the error
    }
}
```
What to do in the catch block is application- and job-dependent: we may schedule the job for a later retry, or declare it failed. The only certain things are that 1) we don't want our worker process to terminate over a single error and 2) we want the option to do something instead of letting the exception bubble up to the PHP log.
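As a sketch of such a retry-or-fail decision (the attempt limit is an assumption, and the worker loop would schedule the job accordingly):

```php
<?php
// Decide what to do with a failed job: retry a few times, then give up.
// Returns "retry" or "fail"; the catch block would act on the result.
function decideOnFailure(int $attempts, Exception $e): string
{
    $maxAttempts = 3; // hypothetical application policy
    return ($attempts < $maxAttempts) ? 'retry' : 'fail';
}
```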
External resources
However, PHP processes usually set up links to external resources in their bootstrap:
- Opening database connections for MySQL or MongoDB.
- Opening file descriptors to stream some form of output.
- Setting up a connection to Memcache.
- Opening a connection to the queue or to the job server to get something to do.
Even when these resources are created lazily, we can pretty much assume that after a few jobs a worker will have had to create them (if the worker never needs them, congratulations: you can skip the rest of this article).
Thus these resources usually live as long as the process; if you happen to restart your MySQL instance (or another external server that the process is connected to), you will start getting errors like:
[18-Dec-2013 18:30:38 Europe/Rome] DB error: MySQL server has gone away in SELECT * FROM ...
If we catch these exceptions, our workers will start to fail on every job they execute, possibly marking them all as failed very quickly, since the first query of each job will hit the exception. This is very dangerous behavior: when you are forced to restart daemons, the last thing you want is to have to remember to restart your workers too; and even if you do, the workers will keep failing for several seconds, which can mean hundreds of lost jobs. We can definitely do better than this.
Deadly exceptions
An initial solution for us was to set up a periodic restart of the workers via a cron job, which was inefficient for several reasons:
- Even if you trap SIGTERM and similar signals sent to workers for their termination, process managers like Supervisor may just kill them without notice, leaving some jobs in an intermediate state where they have started executing. I guess this is why you should never trust software entities with "Manager" in the name.
- If you restart with a frequency of X minutes, in the worst case a worker can run for X minutes without a MySQL connection (or a similar requirement), with obvious results.
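Trapping termination signals so that a worker exits between jobs rather than mid-job can be sketched as follows. This assumes the pcntl (and, for testing, posix) extensions; `pick()` is a hypothetical function returning the next job:

```php
<?php
// Install a SIGTERM handler that only raises a flag; the worker checks
// the flag between jobs, so the current job always finishes cleanly.
$shouldStop = false;

pcntl_signal(SIGTERM, function () use (&$shouldStop) {
    $shouldStop = true; // finish the current job, then exit
});

// The worker loop, checking the flag between jobs:
function runWorker(callable $pick)
{
    global $shouldStop;
    while (!$shouldStop) {
        $job = $pick();
        $job->execute();
        pcntl_signal_dispatch(); // deliver any pending SIGTERM between jobs
    }
}
```

Note that this only helps when the process manager sends a catchable signal first; a hard SIGKILL cannot be trapped.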
Recently we introduced the concept of DeadlyException to avoid the periodical restart.
```php
while (true) {
    $job = pick();
    try {
        $job->execute();
    } catch (DeadlyException $e) {
        break;
    } catch (Exception $e) {
        // manage the error
    }
}
```
DeadlyException can be a class or an interface, whichever you're most comfortable with. It's usually part of the workers' package; so, following the dependency inversion principle, it could be extracted as an interface that both production code and your workers depend upon.
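Both variants can be sketched in a few lines (the names here are one possible choice, not a fixed API):

```php
<?php
// Variant 1: a concrete class. Extending RuntimeException keeps the
// standard message/code/previous API, so the original driver exception
// can travel along as the "previous" one for logging.
class DeadlyException extends RuntimeException
{
}

// Variant 2: following dependency inversion, a marker interface that
// exception classes in production code can implement.
interface DeadlyExceptionInterface
{
}
```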
When a query fails, we now check the error inside the driver and:
```php
throw new DeadlyException("Serious error with MySQL connection or query", -1, $e); // $e may be a PDOException
```
Since we cannot change the type of the exceptions thrown by the driver, we have to throw a new one that can be recognized by the long-running processes. This also gives us the flexibility of terminating only when MongoExceptions happen during certain operations and not for every one of them, if that's your use case.
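For MySQL via PDO, the check inside the driver wrapper might look like this sketch. The error codes tested are MySQL's 2006 ("server has gone away") and 2013 ("lost connection"); the function names are hypothetical:

```php
<?php
class DeadlyException extends RuntimeException {}

// A connection-level MySQL error should kill the worker; an ordinary
// query error should only fail the current job.
function isDeadlyMysqlError(PDOException $e): bool
{
    // errorInfo[1] carries the driver-specific error code.
    $driverCode = $e->errorInfo[1] ?? null;
    return in_array($driverCode, [2006, 2013], true);
}

function runQuery(PDO $pdo, string $sql)
{
    try {
        return $pdo->query($sql);
    } catch (PDOException $e) {
        if (isDeadlyMysqlError($e)) {
            throw new DeadlyException("Serious error with MySQL connection or query", -1, $e);
        }
        throw $e; // ordinary query error: fail the job, not the worker
    }
}
```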
An alternative is to define deadly exceptions as a list of classes, which lets you use an already established hierarchy (possibly not under your control):
```php
$deadlyExceptions = ['PDOException', 'MongoException'];
```
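The worker's catch block can then match any caught exception against that list, which a small helper like this sketch illustrates:

```php
<?php
// Treat an exception as deadly if its class (or a parent class) appears
// in the configured list, so existing hierarchies can be reused
// without wrapping each throw site.
function isDeadly(Exception $e, array $deadlyClasses): bool
{
    foreach ($deadlyClasses as $class) {
        if ($e instanceof $class) {
            return true;
        }
    }
    return false;
}
```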
It can get quite tricky to consider only some MongoExceptions deadly, however. That use case is best dealt with in your own PHP driver wrapping the standard one, as the latter already contains most of the information about MongoDB (substitute MongoDB with any external daemon and the reasoning still holds).