7 Obstacles to Overcome When Deploying Ruby-on-Rails to Production
Ruby on Rails isn't new, but to many devs, it's unfamiliar territory. If you're considering using it, here are a few obstacles to keep in mind.
I've been using Ruby on Rails for more than 10 years now. When it first came out, deploying a Rails application to production (from configuring your server to making all the pieces work together) was a huge challenge.
Things have definitely improved since then, yet even after 10 years, it's still not as straightforward as it could be. This article takes a closer look at why that is and how we can overcome some of these challenges:
Reason #1: Cloud Servers Aren't Dedicated Servers
When deploying to a data center or to the public cloud using a traditional approach, servers are considered the lifeblood of the product and are meant to live on for years. This was the approach used by many when deploying Rails applications 10 years ago. We've begun to move towards more dynamic infrastructure made up of elastic resources that can grow and shrink on demand. However, our tools haven't changed with it.
Tools such as Capistrano are optimized for a static infrastructure. They need to know the IPs or hostnames of target servers, as well as the role(s) they play within your infrastructure. They're not designed to support a changing infrastructure landscape. While plugins can help address this issue, they can be difficult to configure and often require heavy customization to fit into your deployment strategy.
In short, the tools built for a traditional deployment model don't translate well to a cloud-native architecture, resulting in more effort required to script and patch ourselves into a deployment solution.
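As a sketch of the problem, a typical Capistrano stage file pins exact hostnames and roles ahead of time (the hostnames and repository below are placeholders, not from any real deployment):

```ruby
# config/deploy/production.rb -- hostnames are placeholders.
# Capistrano needs every target server spelled out in advance; if an
# autoscaling event replaces web2, this file is immediately stale.
server "web1.example.com", user: "deploy", roles: %w[app web]
server "web2.example.com", user: "deploy", roles: %w[app web]
server "db1.example.com",  user: "deploy", roles: %w[db], primary: true

set :application, "myapp"
set :repo_url, "git@example.com:myorg/myapp.git"
```

This static list is exactly what breaks down when instances come and go on demand.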
Reason #2: Server Configuration and Management is Time-Consuming
Read any Rails deployment how-to and you'll usually see the following steps required to prepare a server to run Rails in production:
- Provision a server instance.
- Update the server instance with the latest version of all packages.
- Install the software required, such as database drivers, a front-end web server, libraries required for JSON and/or XML support, image thumbnail libraries, and whatever else you may need.
- Configure your software, front-end web server, database, and other services and verify you have everything wired up correctly.
- Install server monitoring tools to ensure your processes are restarted if they crash.
- Monitor server CPU, network usage, disk, and inbound/outbound network traffic to ensure servers aren't overwhelmed or experiencing an outage.
- Monitor popular sites for announcements regarding kernel issues and package exploits.
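To make the process-monitoring step above concrete, here is a hedged sketch of the liveness check any supervisor performs before deciding to restart a crashed Rails worker. `Process.kill(0, pid)` checks for a process's existence without sending a real signal; the function name is illustrative, not from any particular monitoring tool:

```ruby
# Minimal liveness check of the kind a process monitor performs
# before deciding whether to restart a crashed worker process.
def process_alive?(pid)
  Process.kill(0, pid)  # signal 0: existence check only, nothing is sent
  true
rescue Errno::ESRCH     # no such process: it has crashed or exited
  false
rescue Errno::EPERM     # process exists but belongs to another user
  true
end
```

A real monitor (systemd, monit, and similar tools) loops over its managed pids and respawns anything that fails a check like this.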
That's a lot to do before you even begin to install your Rails app on a production server. A common solution is to move to a server configuration management tool such as Chef or Puppet. Khash previously outlined why scripting tools are not the best solution:
Even if you and your team are extremely disciplined and keep your scripts clean and up-to-date, there's always a high chance of scripts not working the next time you run them. Not because your scripts are broken, but because of all the external dependencies you cannot control: Linux kernel updates from your cloud provider, apt repository changes, or new library releases with loose dependency definitions are just a few of the things nobody warns you about so you can prepare. They always get you at the worst possible time: when there's a fire.
Most developers have minimal experience with managing a Linux server in production. Developers who are experts in managing production infrastructure likely don't want to spend their time putting it all together. Larger teams that can afford to have operations specialists on hand may be constrained by their availability when a new app needs to be added to the deployment process. This work takes valuable time away from delivering customer-facing functionality. We documented some of these costs in the past, though, in hindsight, I would argue that setup time is going to be more than five hours unless your team has experience with the tools and is scripting the exact same deployment approach used in the past.
If you decide to build and manage your own Linux servers for your Rails-based deployments, be sure to allocate enough time to configure, instrument, optimize, and automate your server infrastructure. And be prepared to maintain it over time: monitoring for emerging exploits, installing OS patches, and upgrading critical package dependencies as issues are found.
Reason #3: The Rails Asset Pipeline
Most developers find that once their servers are prepared and their application is up-and-running, it's only the start. The next step is to debug the asset compilation process, and perhaps change how their deployment scripts operate, to make everything fit with the way the Asset Pipeline should work in production.
Here are some tips for getting your Rails app deployed with the Asset Pipeline:
- Disable compiling-on-demand for production. For development, compiling-on-demand is acceptable, as you'll likely need to make modifications to assets as you develop.
- During deployment, compile your assets on one server and then make them available to your front-end web server(s). Deployment will take longer, but your customers will not incur the time penalty of waiting for each asset to be compiled on the first request to the new version of your app.
- For better performance, use AssetSync to push your assets to a CDN or file storage service such as Amazon S3. This will push your assets closer to your customers and take the load off of your app servers.
You can read more detail about the Rails Asset Pipeline, including how it works and how to customize it, in the Asset Pipeline Rails Guide.
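The first and third tips above translate into a few settings in `config/environments/production.rb`. This is a sketch only; the exact flag names vary across Rails versions, and the CDN host is a placeholder:

```ruby
# config/environments/production.rb (flag names vary by Rails version)

# Tip 1: never compile assets on demand in production. Assets should be
# precompiled during deployment with:
#   RAILS_ENV=production bundle exec rake assets:precompile
config.assets.compile = false

# Serve the fingerprinted (digested) files produced by precompilation,
# so browsers can cache them aggressively.
config.assets.digest = true

# Tip 3: if you push assets to a CDN or S3 with a tool like AssetSync,
# point asset URLs at that host (placeholder shown here).
config.action_controller.asset_host = "https://cdn.example.com"
```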
Reason #4: Maintaining a Reverse Proxy Configuration
Most Rails deployments involve a reverse proxy, such as nginx, to serve static asset requests and route Rails-related requests to one or more backend Rails processes. When deploying to a single server, this isn't difficult to manage. But as more servers are added and removed, keeping your nginx configuration up-to-date can require time and effort to automate.
There are two common solutions to simplify the deployment process when using a reverse proxy:
- Place load balancers in front of your Rails servers, confining each reverse proxy to routing requests to the local Rails processes on its own instance. This works well if you're comfortable deploying your own load balancers and configuring round-robin DNS to rotate between them.
- Place two or more reverse proxies in front of all Rails servers, then use a server configuration management tool such as Chef or Puppet to push updated reverse proxy configuration files as servers are added and removed from your infrastructure.
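The second approach amounts to regenerating the reverse proxy's upstream configuration whenever the server list changes. A minimal sketch of that templating step, assuming nginx as the proxy (the IPs and port are placeholders):

```ruby
# Render an nginx upstream block from the current list of Rails servers.
# A config-management tool (Chef, Puppet, etc.) would write this file to
# the proxy host and reload nginx whenever servers are added or removed.
def nginx_upstream(name, servers, port: 3000)
  lines = servers.map { |host| "  server #{host}:#{port};" }
  "upstream #{name} {\n#{lines.join("\n")}\n}\n"
end

puts nginx_upstream("rails_app", ["10.0.0.11", "10.0.0.12"])
# upstream rails_app {
#   server 10.0.0.11:3000;
#   server 10.0.0.12:3000;
# }
```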
Finally, remember that getting the most performance out of your reverse proxy also requires some fine-tuning of configuration settings, based on your infrastructure choices. Budget time for this step as well, otherwise your reverse proxies may become a bottleneck under load.
Reason #5: Security is Hard
If standing up a Rails stack in production is hard, securing it is even harder. From configuring firewalls to restrict incoming traffic, to staying on top of the latest system and service patches, to preventing unauthorized access with intrusion detection tools and IP blacklisting, there are a variety of tasks involved. Skip these critical steps to save a little setup time, and you'll be paying the price later.
There are a variety of guides available on web security, including a 7-step guide from Digital Ocean and a whitepaper from AWS on applying proper security practices to your infrastructure. After reading these guides, you'll likely come away with a deeper appreciation of what it takes to secure your cloud infrastructure.
Reason #6: High Availability is Important
The deployment process doesn't stop once you have your app up-and-running. You'll also need to support pushing updates to all of your servers. In the early days of your app, you can take your application offline to perform upgrades of the software and operating system. As your customer base grows, this isn't an option. You'll need to support rolling upgrades or stack-swapping to migrate to new versions of your app, while the previous version continues to serve incoming requests.
There are several approaches that can reduce the downtime for an application, including:
- Removing servers from load balancers one-at-a-time, performing the deployment steps, then re-adding the server back to your load balancer instances (sometimes called a rolling update or "serialized deployment")
- Deploying updates to multiple servers at a time (sometimes called "parallelized deployment"). Note that this only works if the underlying services support zero downtime deployments, to prevent having limited or no available servers during the deployment window
- Swapping auto-scale groups running the older version of the application with server groups containing the latest version
- Swapping entire infrastructure stacks, including all necessary infrastructure components, for a fresh stack that's running the latest version of the application (sometimes called "blue-green deployment", "stack swapping", or "immutable stacks")
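The first approach, a serialized (rolling) deployment, can be sketched as a simple loop. `lb` and the `deploy_to` block below are illustrative stand-ins for whatever API your load balancer and deployment tooling actually expose:

```ruby
# Serialized (rolling) deployment: take one server out of rotation,
# upgrade it, then put it back before moving on. At any moment, all but
# one server keep serving traffic.
def rolling_deploy(lb, servers, &deploy_to)
  servers.each do |server|
    lb.remove(server)      # drain: stop routing new requests here
    deploy_to.call(server) # run the actual deployment steps
    lb.add(server)         # re-add once the new version is healthy
  end
end
```

In practice, you'd also health-check the server after deployment and abort the rollout on failure, so a bad release never reaches the whole fleet.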
Not all approaches may be the best fit for the application and the team process, but every approach requires investing time and testing to make sure your deployment scripts are reliable. Refer to our article to learn more about deployment strategies and options for high availability deployments.
Reason #7: Every Cloud Vendor is Different
So far, we've assumed your deployment process is designed for one cloud vendor. This is often the case whether you use a vendor-specific deployment solution such as AWS Elastic Beanstalk or a platform-as-a-service such as Heroku (which itself runs on AWS). However, supporting multiple cloud vendors becomes an important factor in withstanding regional or vendor-wide outages. Assuming you can "just move" your application to another vendor should something go wrong is a bad idea.
Taking a multi-vendor approach to deployment isn't easy. It requires diligence in monitoring and maintaining scripts across two or more vendors. Even Cloud 66 can experience a cloud vendor bug occasionally. Be prepared to invest time to ensure you can replicate your deployment process across at least two cloud vendors, along with maintaining support over the life of your application.
How Can Rails Deployments Be Simplified?
After deploying numerous Rails apps across bare metal, dedicated cloud servers, and now elastic cloud infrastructure, the best solution I've found is to offload the work to a third party. Teams are freed from scripting a deployment process that must be production-ready, secure, highly available, and able to support multiple cloud vendors. Here are just some of the features you can benefit from with such a service:
- No scripting required: just configure your application and you're ready to deploy across any number of servers.
- Multi-vendor cloud support, so you aren't locked into a single cloud vendor during a major outage or when you need to change providers.
- A pre-installed firewall that ensures only the proper traffic reaches your servers, while preventing common attack vectors such as brute-force attacks.
- Automatic OS-level patching to handle emerging exploits.
- A managed database option with automated, verified backups, so you can have confidence the restoration process will succeed.
- Managed front-end nginx servers that reverse proxy to all server instances and are updated as servers are added or removed.
- Support for Docker-based deployments for microservice-based projects.
Published at DZone with permission of James Higginbotham, DZone MVB.