This blog post dissects the deployment options available for Rails applications in 2024, focusing on the choice between native and containerized approaches. It weighs the advantages and drawbacks of each method and explores the hosting options available for both. Let’s dive right in!
First of all, let’s establish a few definitions. What exactly do native and containerized deployments encompass? What sets them apart, and which platforms and solutions are generally available?
Native deployment involves shipping your Rails app directly to the server (which might be a VPS or even bare metal). This typically includes:
- setting up the Ruby runtime environment. This can be simplified by using a version manager such as rbenv or asdf,
- for modern Rails apps that are not using the importmap approach, you will also need a Node or Bun runtime,
- optionally, when running everything on the same machine, you will also have to set up your database server, Redis server, and other operational dependencies. "Infrastructure as Code" tooling such as Ansible or Puppet can streamline this,
- the actual deployment takes place by copying your Rails app code to the server over a secure connection and starting it. Historically, Capistrano was the go-to solution for this, although nowadays it can also be managed in your CI/CD pipeline.
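To make the last step more concrete, here is a minimal sketch of a Capistrano `config/deploy.rb`. The application name, repository URL, and paths are placeholders, not taken from any real project:

```ruby
# config/deploy.rb -- minimal Capistrano setup (illustrative values)
lock "~> 3.18"

set :application, "myapp"                          # placeholder app name
set :repo_url,    "git@example.com:me/myapp.git"   # placeholder repository
set :deploy_to,   "/var/www/myapp"

# Keep shared state (logs, pids, uploads, credentials) out of each release
append :linked_dirs,  "log", "tmp/pids", "tmp/cache", "storage"
append :linked_files, "config/master.key"
```

The target servers themselves would be declared per stage, e.g. in `config/deploy/production.rb`, and `cap production deploy` then pushes the code over SSH and restarts the app.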
It is important to note that Cloud 66 offers a comprehensive native deployment solution that facilitates each of the steps above.
Nowadays, a lot of application deployment happens in (Docker) containers. Let's go through a few options, in order of increasing complexity:
- Kamal is a deployment tool recently released by 37signals that helps you manage containerized deployments on your own private servers. It is especially interesting for apps of smaller scale, both in terms of application code and operational requirements. It comes with a couple of drawbacks as well; for example, you need to provide your own container registry.
- Platforms as a Service (PaaS) like Fly.io and Render.com encapsulate some of the complexity behind deploying containers. Commonly, they expose an interface for your convenience, often in the form of a command line interface (CLI). This typically encompasses setting up the appropriate runtimes, supplying DevOps resources such as container registries, and so on.
- Full-fledged container cluster management solutions like Kubernetes. Essentially, this lets you control every aspect of a containerized deployment, but it entails a lot of (Dev)Ops complexity. You will need a dedicated Kubernetes cluster, which can either be provided by a cloud provider such as AWS or Azure, or managed by your own ops team. Cloud 66 helps you provision and manage your Kubernetes clusters and related infrastructure (load balancers, firewalls, etc.) on a vast number of providers. Additionally, it supplies a build pipeline in the form of BuildGrid to create and store your Docker images.
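As an illustration of the simplest option above, a minimal Kamal `config/deploy.yml` might look like the following. The service name, image, server address, and registry user are placeholders:

```yaml
# config/deploy.yml -- minimal Kamal configuration (illustrative values)
service: myapp
image: myuser/myapp        # placeholder image on your container registry

servers:
  - 192.168.0.1            # placeholder server address

registry:
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, never committed

env:
  secret:
    - RAILS_MASTER_KEY
```

With this in place, `kamal deploy` builds the image, pushes it to the registry, and swaps in the new container on each listed server.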
Impact on Development and Production Environments
When considering whether to deploy your application natively or via containers, one of the first things to assess is how it affects your workflow. Let's look at this next.
Native: Developing with a native database, Redis instance, etc. is typically lighter on the developer machine’s resources (especially memory) because there is no additional abstraction layer. That's why it is especially recommended for small teams or single developers, where provisioning the developer machines can be done manually.
Containerized: The main advantage here is having an exact copy of the production setup at your disposal. This minimizes the error surface when deploying and is especially favorable when provisioning development setups for large teams or organizations.
Native: This generally results in a better, more direct use of server resources due to the absence of a virtualization layer. Such setups lean towards vertical scaling, as scaling horizontally would entail cloning the entire server setup.
Containerized: Containers are abstracted pieces of “infrastructure as code”. They are therefore better suited for rolling out on an arbitrary number of instances, i.e. horizontal scaling.
Rails Deployment Solutions and Tooling
Native and containerized deployments exhibit different strengths and weaknesses when it comes to flexibility and scalability. Let's explore a few of them.
- Customization: private ("root") servers can be completely tailored to your needs. Keep in mind, though, that this can entail a significant ops overhead you need to address.
- Vendor Lock-In: Native deployments are typically less portable to other providers. This can be mitigated by using Infrastructure as Code tools like Puppet or Ansible, but that introduces another dependency to maintain.
- Quick Tweaks: A native setup allows for direct intervention, but this can lead to configuration drift: the gradual divergence of a server from its fixed configuration, with changes that have no representation in a version control repository. Such manual tweaks tend to go unnoted and cannot be reproduced in a failover scenario.
- Infrastructure Agnostic: The very definition of containers implies that they can be easily moved between providers.
- Infrastructure as Code: container definitions ("Dockerfiles") can be treated like any other piece of code, i.e. versioned, reviewed, etc.
- Separation of Concerns: In large enterprises, the organizational divide between development and ops teams can be significant and communication difficult. Declaratively encoding your infrastructure needs helps clarify responsibilities and communicate requirements between the teams.
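To illustrate the "Infrastructure as Code" point, here is a deliberately simplified Dockerfile for a Rails app. It is a sketch only; the Ruby version is an assumption, and the Dockerfile that `rails new` generates is considerably more complete (multi-stage builds, non-root user, etc.):

```dockerfile
# Simplified Rails Dockerfile (illustrative; `rails new` generates a fuller one)
FROM ruby:3.3-slim

# Build tools for gems with native extensions
RUN apt-get update -qq && apt-get install -y build-essential && \
    rm -rf /var/lib/apt/lists/*

WORKDIR /rails

# Install gems first so this layer is cached between code changes
COPY Gemfile Gemfile.lock ./
RUN bundle install

COPY . .

EXPOSE 3000
CMD ["bundle", "exec", "rails", "server", "-b", "0.0.0.0"]
```

Because this file lives in the repository, every change to the runtime environment is versioned and reviewed like any other code change.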
- Vertical scaling is generally the best strategy, but it is limited by the sizes of servers your provider can furnish. For a lot of small apps this will still be sufficient if you plan carefully.
- Horizontal scaling requires cloning the server setup, which is either labor-intensive or requires special tooling. Ansible, Chef, or Puppet can help, but you will still need a manual load balancing setup. Luckily, by taking over the communication with your cloud provider account, Cloud 66 helps streamline the definition and buildout of the required infrastructure.
- Auto scaling is usually not possible, at least not without additional tooling. Cloud 66 offers a way to add or remove resources based on your server load.
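As a sketch of how such tooling reduces the cloning effort, an Ansible playbook can apply the same setup to every host in an `app_servers` group; scaling out then means adding a hostname to the inventory and re-running the playbook. The group name and package list here are illustrative assumptions:

```yaml
# site.yml -- applied identically to every app server (illustrative)
- hosts: app_servers        # new servers join this inventory group when scaling out
  become: true
  tasks:
    - name: Install system packages needed to build Ruby
      apt:
        name: [build-essential, libssl-dev, libyaml-dev]
        state: present
        update_cache: true
```

The load balancer in front of these servers still needs to be configured separately, which is the part Cloud 66 automates for you.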
- Horizontal scaling can be easily achieved to provide for high availability and redundancy. Typical container orchestration setups allow you to specify the number of instances to run.
- Auto scaling is often provided by PaaS offerings based on load metrics and is highly configurable.
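In Kubernetes, for example, the instance count is a single declarative field on a Deployment, and a HorizontalPodAutoscaler can adjust it based on load. The resource name and thresholds below are placeholders:

```yaml
# HorizontalPodAutoscaler -- scales a "myapp" Deployment on CPU load (illustrative)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp            # placeholder Deployment name
  minReplicas: 2           # redundancy floor for high availability
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add replicas above 80% average CPU
```

The cluster then adds or removes replicas automatically, with no change to the application code or image.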
Repository Structure - Mono-Repo vs. Multi-Repo
When deploying Rails applications, the repository structure plays a central role in determining the deployment workflow. As we look ahead to 2024, the choice between a mono-repo and a multi-repo structure has implications that extend to the deployment strategy, be it native or containerized. In this section, we will examine the advantages and challenges associated with each repository configuration and how they relate to deployment practices.
Mono-repo: Unified Codebase
The most obvious advantage of a mono-repo structure lies in simplified dependency management. Libraries are handled centrally, reducing setup overhead and keeping the maintenance burden small. More directly related to deployment strategies, this form of code organization allows for a unified CI/CD pipeline. Keep in mind, though, that this approach will generally lead to slower build times for everyone, since the entire codebase is built and tested together. Regarding collaboration, a mono-repo promotes visibility of code changes across the team. On the other hand, it can also lead to tighter coupling if you don't pay attention.
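As an illustrative sketch of such a unified pipeline (the workflow name, trigger, and test command are assumptions, not taken from any particular project), a single GitHub Actions workflow can build and test the whole mono-repo on every push:

```yaml
# .github/workflows/ci.yml -- one pipeline for the entire mono-repo (illustrative)
name: CI
on: [push]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          bundler-cache: true   # runs bundle install and caches the gems
      - run: bundle exec rails test
```

Every change, no matter which part of the app it touches, runs through this one pipeline, which is exactly why build times grow with the codebase.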
Multi-repo: Decentralized Codebase
In contrast, a multi-repo approach involves splitting the codebase into multiple repositories. Typically these contain individual (micro)services or, in the case of a Rails monolith, engines. The advantages lie in more focused development as well as decoupled deployments. Build and test times are generally faster, but changes that span the entire application require careful scheduling to avoid integration issues. Dependencies can be managed more loosely, but maintenance has to be done for each repository separately.
Impact on Deployment Strategy
Concerning deployment strategy, a rule of thumb is that mono-repos are better suited to native deployment than multi-repos. Because their components may have clashing dependencies, multi-repos benefit more from isolated containers.
Summary
This blog post examines the deployment options for Rails applications in 2024, focusing on the pros and cons of native versus containerized approaches. Native deployment is direct and can be automated with tools like Cloud 66, offering a hands-on operational setup. Containerized deployment, ranging from self-managed options like Kamal to full orchestration with Kubernetes, provides an infrastructure-agnostic solution with a focus on scalability and ease of provisioning.
In development, native setups are resource-light and recommended for small teams, while containerized environments mirror production closely, minimizing deployment errors—ideal for larger teams.
Native deployments use server resources efficiently but may lead to vendor lock-in, while containerized deployments are portable and support infrastructure as code practices.
In terms of scalability, native deployments favor vertical scaling, while containerized options excel in horizontal scaling and often feature auto-scaling capabilities. The choice between a mono-repo or multi-repo structure impacts deployment, with mono-repos aligning with native deployments due to unified codebases, and multi-repos benefiting from the isolation provided by containers.
In summary, the choice between native and containerized deployment in 2024 will largely depend on the specific needs of the development team, the application, and its scalability requirements, with each approach offering distinct advantages.