Crouching Supervisor, Hidden File Descriptor Setting
Learn about RabbitMQ load balancing problems, and how these issues were eventually solved.
Here's an interesting and extremely infuriating problem our team faced last month. We were in the process of launching replacement HAProxy instances used to load balance connections to the nodes in our RabbitMQ cluster. We had done this many times before and set all the usual per-user limit settings to ensure enough file descriptors would be allocated for the HAProxy process. While creating this new role, we also decided to use Supervisor to supervise the HAProxy process, since we had observed in an older release that HAProxy didn't automatically restart when it crashed (which in itself is a rarity).
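As a sketch, the Supervisor program definition looked something like the following (the paths and option values here are illustrative, not our exact configuration):

```ini
; /etc/supervisor/conf.d/haproxy.conf (illustrative)
[program:haproxy]
; -db keeps HAProxy in the foreground so Supervisor can track it
command=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
autostart=true
; restart HAProxy automatically if it crashes
autorestart=true
; SIGUSR1 triggers HAProxy's soft stop, letting in-flight connections finish
stopsignal=USR1
```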
Everything looked solid, and we began throwing some traffic at the new balancer. Eventually, we discovered something had gone horribly wrong! Tons of "connection refused" errors began showing up, and the behavior was exactly what you would expect if file descriptors weren't being allocated correctly. Sure enough, a quick look at the process limits revealed that the maximum number of open file descriptors was set to the very low value of 1024. We directed traffic back to the old balancer and began the investigation. How could this be? All of the settings were correct, so why was it being set to 1024?
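A quick way to see the limits a running process actually ended up with is to read them out of `/proc` (using `self` here for demonstration; substitute the HAProxy PID in practice):

```shell
# Show the effective fd limits of a running process. /proc/self/limits
# reports on the current shell; use /proc/<haproxy-pid>/limits instead.
grep "Max open files" /proc/self/limits
```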
Supervisor was the one new variable in the mix, so I decided to start combing through the Supervisor documentation, scanning for the number 1024 to see what might be tied to it. Sure enough, I came across the minfds setting. Let's take a look at what the Supervisor documentation has to say about it:
"The minimum number of file descriptors that must be available before supervisord will start successfully. A call to setrlimit will be made to attempt to raise the soft and hard limits of the supervisord process to satisfy minfds. The hard limit may only be raised if supervisord is run as root. supervisord uses file descriptors liberally, and will enter a failure mode when one cannot be obtained from the OS, so it's useful to be able to specify a minimum value to ensure it doesn't run out of them during execution. This option is particularly useful on Solaris, which has a low per-process fd limit by default.

Default: 1024"
Well, that doesn't make much sense... If I'm reading this correctly, it's simply saying that the number specified is the minimum that should be available, right? The devil, as they say, is in the details. If we look at the documentation on setrlimit, we'll see clearly that the call actually sets the limits outright, with no regard for their current values. The call is essentially going to set the max open files limit to whatever value minfds is defined to in Supervisor. Sure enough, as an experiment, I set minfds in Supervisor's configuration to a higher number, and after restarting Supervisor, the number of open file descriptors allocated to the HAProxy process was greatly increased and reflected what minfds was set to.
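To see why, here is a minimal Python sketch of the syscall behavior in question (supervisord is itself written in Python and uses the resource module; the minfds value below is just Supervisor's documented default):

```python
import resource

# Supervisor's documented default for minfds.
minfds = 1024

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"before: soft={soft}, hard={hard}")

# setrlimit() assigns the limits outright -- it is a "set", not a
# "raise to at least this value". Any child process (e.g. HAProxy
# spawned by supervisord) inherits the result.
new_soft = minfds if hard == resource.RLIM_INFINITY else min(minfds, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (new_soft, hard))

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"after:  soft={soft}, hard={hard}")
```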
In the end, this pain also turned out to be unnecessary. While we had used Supervisor because it was "what we know well," it turned out that the newer distribution we were releasing on already managed services via systemd, which by default was also configured to respawn the service on failure.
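For comparison, systemd gives you both the respawn behavior and the fd limit declaratively, with no PAM or limits.conf involvement; a unit might contain something like this (an illustrative fragment, not our actual unit file):

```ini
# /etc/systemd/system/haproxy.service (illustrative fragment)
[Service]
ExecStart=/usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -db
# Respawn HAProxy if it exits abnormally
Restart=on-failure
# fd limit applied directly to the service; no PAM/limits.conf in the path
LimitNOFILE=100000
```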
Hopefully, this story will prevent a similar trail of sorrow for others who encounter the same situation!
tl;dr: If you're having Supervisor supervise an application that is sensitive to the maximum number of open file descriptors, you'll want to ensure minfds is set to match!
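Concretely, that means something like the following in supervisord.conf (the number here is illustrative; match it to what your application actually needs):

```ini
; supervisord.conf (illustrative)
[supervisord]
; supervisord sets its own fd limit to this value via setrlimit, and
; every supervised process (such as HAProxy) inherits it
minfds=100000
```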
Published at DZone with permission of James Carr, DZone MVB. See the original article here.