
Delivering web software on a VPS


Much of my time is spent as a developer of web-related software. But recently I've found it impossible to avoid being drawn into some quite complex hosting issues - for handling my own sites and those of clients. Experts in internet-related software will be familiar with the problems I have found; many software developers will be less so. This article is intended to help those who, like me, approach hosting with less than comprehensive knowledge of the area.

Shared hosting has been familiar for some time, and often resource limits on processing or memory have not been particularly apparent to the user of the service. Lately, the use of a Virtual Private Server (VPS) has rapidly gained in popularity. Prices have fallen dramatically, virtualization is trendy and there are some perceived advantages. However, the available resources are rather more tightly controlled than in most shared hosting. Although there are other considerations, my focus here is on questions of resource management in a VPS.

Before we go into the detailed issues, though, let us consider the gains to be made through use of a VPS. One gain is said to be having root access to the server, or at least to the virtual server that is being rented. This is obviously a mixed blessing. It does mean that many aspects of the server can be tuned to suit the purpose for which it is being used, and the user of the VPS is free to make changes without considering anyone else. The downside is that you need to know what you are doing in order to benefit from this, and a good number of people buying a VPS have little or no idea where to start. Significant time and effort needs to be spent to see any gain.

Another gain, and the one that most attracted me, is that a VPS should be relatively protected from resource overloads caused by other people on the same server. On the basis of my personal experience, this advantage is only partly realised. The more extreme incidents where a server almost grinds to a halt, dragging down every site, seem to be avoided. On the other hand, there are still significant variations in performance that appear to be caused by factors outside my own VPS.

Let me explain how I came face to face with some quite complex resource issues in VPS management. My reseller account for shared hosting was working quite well, providing for both my own sites and a few client sites. The provider was a US based host, which suited me quite well as US hosts seem to be able to provide good value for money as well as being a suitable location for material that is being delivered globally. But the issue of performance troughs caused by other sites on the same server was irking me. The host had apparently done quite a lot of work to improve SQL that was the cause of the problems, but there were still concerns. So I started looking around.

After a few experiments with shared schemes, a VPS started to look attractive. As I am based in the UK, it seemed worth looking to see if a European host could provide a good service. Following a brief selection process, I signed up with a UK host for a managed VPS at the lowest level that was claimed by the host to be suitable for use with cPanel. I wanted cPanel for the benefit of clients whose site hosting was my responsibility. It was not long before I started to experience problems: services were repeatedly failing.

Contacting technical support produced little improvement, only claims that I was using too much memory. It was suggested that a higher level of hosting plan was needed, involving higher cost. This seemed to me unreasonable, since there were no active sites loaded at the time, and the VPS had yet to do any useful work. This was the first indication that hosts were selling VPS packages that were inherently unstable.

The technical support was so negative that the simplest solution seemed to be to ditch the account and claim on the 30-day money back guarantee (it actually took 3 months and many complaints to get the money back!). I talked to an American friend and decided to take my hosting to the US once more. Oddly, the VPS chosen this time had a nominally lower memory specification, but it seemed to work somewhat better. I had loaded up a number of sites and got them active before problems started.

But then the problems began again. The whole VPS seemed to be down for significant periods, and I was told that too much memory was being used, and that a higher level of hosting plan was needed. Sounds familiar? I was still resistant to the suggestion, not only on grounds of cost, but also on principle. If someone sells me something, I like it to do what it is claimed to do for the price offered. It was time to start getting technical.

Let's review the position. Although I talked earlier about a VPS constraining resources, it turns out that there is usually only one critical resource that matters. Memory. Unless your sites are doing exceptionally heavy processing, or the server is grossly overloaded or under-specified, the server will have plenty of processing power to handle a reasonable VPS load. Although hosting plans specify disk space limits, these are rapidly becoming academic. Disk space is now so readily available that the host I am using has decided to make it free - if you run out, you simply have to ask for more (provided it is actually being used for hosting, not for something like an archive). And most people can easily buy a plan that has ample bandwidth for their needs. But for a VPS, memory is often a critical issue.

Now the obvious first step is to ask what tools are available for monitoring memory usage. The short answer is that, for a VPS, practically none are normally supplied. My host offered a script that would provide a spot figure for the memory currently in use. In terms of analysing where the memory is being used, that is useless, and in the overall context of VPS memory management, it is of very limited use. The only readily available way to break down memory use is to run the "top" utility that lists running processes and watch to see which processes have large memory use.

This is a hit-and-miss process, and could be greatly improved by a monitor that stored the breakdown between processes to produce averages and trends; I have not had time to write such a thing. If anyone does create an open source program of that kind, please let me know! There is an obvious need for easy-to-use tools in this area, given the large number of VPS plans being sold and the almost total absence of good memory monitoring tools.
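As a stopgap, the averaging idea can be sketched in a few lines of shell. The snippet below samples the resident memory (RSS) of every process a few times and prints per-command averages; the sample count and interval are arbitrary choices, and it is a starting point rather than a finished monitor.

```shell
# Minimal per-process memory averager, assuming a Linux VPS with the
# usual ps/awk tools. Prints average resident memory per command in MB,
# largest first.
sample_mem() {
    samples=$1 interval=$2
    tmp=$(mktemp)
    i=0
    while [ "$i" -lt "$samples" ]; do
        ps -eo comm=,rss= >> "$tmp"    # command name, resident set size in KB
        i=$((i + 1))
        [ "$i" -lt "$samples" ] && sleep "$interval"
    done
    # Average each command's RSS over the samples and convert KB to MB.
    awk -v n="$samples" '{ sum[$1] += $2 }
        END { for (c in sum) printf "%-20s %8.1f\n", c, sum[c] / n / 1024 }' "$tmp" |
        sort -k2 -rn
    rm -f "$tmp"
}

# Show the ten largest average memory users over three samples a second apart.
sample_mem 3 1 | head -10
```

A real monitor would also record the results over time to reveal trends, which is exactly the part that is missing from the readily available tools.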

To observe the general trends and to monitor any incidents, I have found the "loadavg" software from http://www.labradordata.ca/home/37 extremely valuable. It provides graphs of incoming and outgoing traffic, server load, and also the key memory parameters. The standard version generates quite a few warnings and notices, and I can offer a corrected version to anyone who needs it.

Although I cannot offer tools to analyse the situation, I can offer some general conclusions from my own experiences.

Mail ought to place very little load on a server, but it can run away with a lot of memory, particularly for handling anti-virus and anti-spam processes. Fortunately, I had a solution for this. All my mail handling had been moved to a different hosting provider, mainly to take advantage of a managed Postini anti-spam service. It was cheaper and easier to buy the whole hosting service than to buy Postini accounts. Simply moving the mail elsewhere did not reduce the load on my VPS; the mail services were still consuming large amounts of memory. After a fair amount of pressure on my host, and largely by making changes myself, the VPS configuration was modified to remove most of the mail load. The only services left were those required to allow web sites to send mail.

After that, MySQL is likely to be a major user of memory. There are many configuration variables available to control how MySQL operates, and in a VPS you have the freedom to tweak them to your heart's content. On the plus side, this can significantly improve performance; on the minus side, you have to be careful about how much memory is consumed in the process.

There do seem to be considerable difficulties in carrying out effective MySQL tuning. Please correct me if I am wrong, but my impression so far is that many utilities that purport to interpret the run time statistics from MySQL and make recommendations for improvement operate in much too simple a fashion. It is easy enough to look at isolated aspects of database operation and suggest that some buffer should be larger. However, the various mechanisms in MySQL interact with one another, and problems are not always as simple as they appear. Nor do they really deal with the issue that extra memory is quite costly, and so the goal may not be simply maximising MySQL performance, but may instead be getting the best performance that can be achieved within memory constraints.
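To illustrate why the memory budget matters more than any single buffer, here is the usual back-of-envelope worst case: global buffers plus per-connection buffers multiplied by max_connections. The figures below are invented placeholders, not tuning advice; substitute the values from your own my.cnf.

```shell
# Worst-case MySQL memory estimate in MB. All figures are illustrative
# stand-ins for values read from my.cnf, not recommendations.
total=$(awk 'BEGIN {
    key_buffer_size    = 16      # MyISAM index cache (global)
    innodb_buffer_pool = 64      # InnoDB data/index cache (global)
    query_cache_size   = 16      # query cache (global)
    sort_buffer_size   = 2       # allocated per connection as needed
    read_buffer_size   = 0.25    # per connection
    join_buffer_size   = 0.25    # per connection
    max_connections    = 50
    global  = key_buffer_size + innodb_buffer_pool + query_cache_size
    per_con = sort_buffer_size + read_buffer_size + join_buffer_size
    printf "%.0f", global + max_connections * per_con
}')
echo "worst-case MySQL memory: ${total} MB"
```

If the worst case exceeds the memory you can rely on, reducing max_connections is often more effective than shrinking any single buffer - which is precisely the kind of trade-off the simple advisory utilities tend to ignore.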

One service that I have not yet been able to control is the DNS, which is normally the BIND program, running as the process "named". Even when it has little data to manage, it seems that BIND allocates a substantial amount of memory.
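BIND can at least be asked to be less greedy. Assuming BIND 9, a named.conf fragment along these lines caps the resolver cache; the 16M figure is an arbitrary example, and whether recursion can be disabled depends on whether anything on the VPS relies on the server as a resolver.

```conf
options {
    // Cap the resolver cache (an illustrative figure, not a recommendation).
    max-cache-size 16M;
    // A web-serving VPS rarely needs to act as a recursive resolver.
    recursion no;
};
```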

But I haven't explained what was meant earlier by saying that an off-the-shelf VPS is quite likely to be inherently unstable. It took me a while to figure out how memory was controlled for a typical VPS that is running under Virtuozzo, so I will try to summarise it here to save others some trouble.

The terminology is pretty confusing, and in my experience, many technical support people at hosting companies do not properly understand the workings of memory controls. Much VPS hosting is offered with two figures quoted for memory: a guaranteed level with a common basic figure of 256 MB, and a burstable level, quite often 1024 MB. Few people seem clear what these numbers mean.

In a Linux system, there is a distinction between memory that has been allocated and memory that has actually been used. The system tracks these separately, and Virtuozzo applies constraints to them in separate ways. There are several Virtuozzo configuration variables, and they are confusing because some of them simply have static constraints associated with them, while others also have a current value that measures the VPS use of memory. And there are actually two distinct guarantees relating to memory, although they are often (and unreasonably) set to the same value. Just to confuse matters further, all the Virtuozzo variables work in units of 4 KB blocks, so you have to do some arithmetic to get megabytes.
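The arithmetic is easy to script. Inside a Virtuozzo VPS the counters live in /proc/user_beancounters; since that file only exists inside a container, the snippet below uses an invented sample line in the same format (resource, held, maxheld, barrier, limit, failure count) and converts the 4 KB pages to megabytes.

```shell
# Convert Virtuozzo beancounter page counts to megabytes: MB = pages * 4 / 1024.
# The line below is an invented example in /proc/user_beancounters format,
# so the snippet runs anywhere.
line="privvmpages 180000 200000 262144 262144 0"
summary=$(printf '%s\n' "$line" | awk '{
    printf "%s: held %.0f MB, barrier %.0f MB, limit %.0f MB, failures %d",
           $1, $2 * 4 / 1024, $4 * 4 / 1024, $5 * 4 / 1024, $6
}')
echo "$summary"
# On a real VPS, roughly the same conversion applied to the live file:
#   awk '/pages/ { printf "%s %.0f MB\n", $1, $4 * 4 / 1024 }' /proc/user_beancounters
```

A non-zero failure count against any of the page variables is the tell-tale sign that allocation requests have been refused.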

Virtual memory (memory allocated, whether used or not) is measured by privvmpages. This variable has a barrier, and it is the barrier on virtual memory that is usually described as the burstable limit. Normally, there is one barrier that will result in warning alerts being generated, and a slightly higher limit at which requests to allocate memory will always be refused.

Note that you are very unlikely ever to use memory up to the burstable limit, since the level of allocated memory is normally substantially higher than the level of used memory. Used memory is monitored by the Virtuozzo variable oomguarpages. This is another confusing factor, since the primary function of oomguarpages is to carry a guarantee, but we will return to that in a moment.

If a server is provided with a lot of memory in relation to the number of installed VPS, then you could think about your own VPS simply in terms of allocated memory, which would be allowed to run up to the specified barrier, the warning level for which equates to the burstable memory quoted in sales material. But to get good hardware utilization, hosts will not provide so much memory, and then the configuration of guarantees comes into play.

One of the guarantees is straightforward, the other is not. There is a Virtuozzo variable called vmguarpages which is the figure up to which a request to allocate memory is guaranteed to be met. Remember, this is allocated memory, not used memory. If you have a guarantee of 256 MB, then you do not have a guarantee of being able to use 256 MB, only a guarantee of being able to allocate 256 MB. Because of the way many software processes work, the memory actually used is likely to be significantly lower than the allocated memory.

When memory is requested beyond the level of vmguarpages, it will be allocated if it is available, but it may be refused. So, at any point beyond the guarantee, an allocation request may be refused. A process that has a memory allocation refused will usually fail. The second guarantee is more convoluted: the guarantee on oomguarpages relates to memory actually used, but what the guarantee says is that provided your actual memory usage is within the oomguarpages level, none of your processes will be terminated if the server is running out of memory. Conversely, if actual memory usage is above the oomguarpages guarantee, memory may actually be claimed back, with a near certainty of the relevant process failing.
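The rules in the last two paragraphs can be summarised as a small decision procedure. In the sketch below, all figures are in MB: "allocated" corresponds to privvmpages, "used" to oomguarpages, and the limits mirror the vmguarpages guarantee, the oomguarpages guarantee, and the privvmpages barrier (the burstable limit). The sample numbers are invented.

```shell
# Classify a VPS memory position against the Virtuozzo limits, all in MB.
classify() {
    allocated=$1 used=$2 vmguar=$3 oomguar=$4 barrier=$5
    if [ "$allocated" -ge "$barrier" ]; then
        echo "allocation: over the burstable barrier - new requests refused"
    elif [ "$allocated" -gt "$vmguar" ]; then
        echo "allocation: beyond vmguarpages - requests may be refused"
    else
        echo "allocation: within vmguarpages - requests guaranteed"
    fi
    if [ "$used" -gt "$oomguar" ]; then
        echo "usage: beyond oomguarpages - processes may be killed under pressure"
    else
        echo "usage: within oomguarpages - safe from out-of-memory kills"
    fi
}

# Invented figures mimicking a freshly provisioned VPS: 400 MB allocated,
# 180 MB used, both guarantees set to 256 MB, burstable barrier 1024 MB.
classify 400 180 256 256 1024
```

With these figures the VPS is safe from out-of-memory kills yet every fresh allocation is at risk, which is exactly the unstable state described below.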

It is now possible to see why it is common for a VPS to be inherently unstable. As delivered, and before any web sites or mail boxes have been added, many VPS plans are running with actual memory usage within the oomguarpages guarantee, but the allocated memory well outside the vmguarpages guarantee (with both guarantees often being the same figure, that figure being quoted in sales material as guaranteed memory for the plan). The consequence is that every request to allocate memory is at risk, and therefore processes may fail at any time. No process will be terminated to grab back memory, but any new request has a possibility of failure. How often failures occur will depend on the provisioning of the whole server. It seems a fair assumption that the VPS that I ditched was less generously provisioned than the one I am currently using.

Another point is important in relation to VPS offerings. Absolutely any failure that can be linked to memory is likely to provoke a response from technical support that tells you to buy a higher plan. But if the failures are resulting from running into the limit on allocated memory (the privvmpages barrier) then the critical factor is the "burstable" limit. Often, plans with different so-called guaranteed levels have the same burstable limit, so upgrading the plan will not solve this particular problem.

It would make sense to configure a VPS with a higher figure for the guarantee on allocated memory (vmguarpages) while leaving the oomguarpages guarantee referring to used memory unchanged. However, I have seen little sign of this being done in practice, and it would require hosts to be quite careful in their provisioning.

Partly because of its complexity, there is a trend towards hosts replacing this memory management scheme with something simpler, although it is likely to be some time before this becomes universal. The scheme known as "SLM" simply controls allocated memory, which removes the uncertainty that exists in the grey area between the guaranteed and burstable limits. Comparing plans at similar prices, one would expect a higher SLM level than "guaranteed" level, although possibly not as high as the burstable limit.

Well, I never intended to get involved in all this detail, but found that I could not effectively manage a VPS to control both its reliability and its cost without doing so. So I hope that describing my experiences and the technical issues will help others to travel the same path more quickly and less painfully.

Original Author

Original article written by Martin Brampton, author of PHP 5 CMS Framework Development.

