Blue/Green Deployment: The Two House Trick for Stress-Free Releases

Kelley Schultz
Dec 6th 25

You know that feeling right before you deploy? The mix of excitement, dread, and the quiet hope that production behaves this time? Yeah — we’ve all been there. That’s why we’re big fans of blue-green deployment. It’s one of those DevOps patterns that sounds fancy but is actually just good engineering hygiene — and it can save your morning, your afternoon, your evening, or, let’s be honest, your late night.

Quick summary…What exactly is blue-green deployment?

At its core, blue-green deployment is a way to release new versions of your app without downtime. Basically, you run two identical environments simultaneously:

  • Blue → your current, live app (users see this)
  • Green → your new version, running quietly in parallel

When the new version is ready, you just switch traffic from Blue to Green. If anything goes wrong? Flip it back. It’s like having a magical undo button for deployments. Another way to look at blue-green deployments is to imagine your app as a house: you live in the Blue house, but you’re renovating the Green one. Once Green is ready — new floors, fresh paint, no bugs — you move in. If the plumbing explodes, you move back to Blue and fix it there. No downtime. No panic. No Slack channel called “#emergency-deploy.”

Now, this may be obvious…but here are a few reasons developers love blue-green deployments:

  • Zero Downtime: Deploy while users keep working — no more “back soon” messages.
  • Instant Rollback: If production misbehaves, switch back to Blue faster than you can type rollback.
  • Safer Testing: You can test the new version under real-world conditions before flipping the switch.
  • More Confidence: You can deploy more often — because you’re not risking everything on one fragile push.

Why blue-green deployments are tricky

Sounds too good to be true? Yeah, well, here’s where it can get tricky. Blue-green sounds simple until you’re juggling the following:

  • Two sets of servers (or containers)
  • Networking and load balancers
  • Environment configs, secrets, and databases
  • Keeping both environments healthy and synced

Sound like a lot? Well, yeah, it can be when you’re trying to set it up manually. If you want a manual instruction guide, with line-by-line code, to implement blue-green deployments, skip to the manual instruction set at the end of this post. It takes some time to test out…but it’s no less effective. Fair warning: the guide is seven pages long. But if you want blue-green deployments available at the click of a button…click here for your free trial account of Cloud 66. Yes, it’s that easy.

Why do I really need Cloud 66?

At Cloud 66, we’ve built deployment pipelines that handle this orchestration for you. You can set up blue-green deployments directly within your stack:

  • Spin up a Green environment automatically
  • Run your new version side-by-side with Blue
  • Test it…confirm it’s healthy
  • And then flip traffic over safely — with built-in rollback if needed

You get all the benefits of blue-green, minus the script wrangling and DNS anxiety. We have been there…can you tell?

A Few Tips from Experience

  • Keep databases backward compatible. Blue and Green might briefly share the same DB, so your migrations should be backward-compatible.
  • Automate health checks. Don’t trust manual testing for production cutovers (see the sketch after this list).
  • Monitor both environments. Tools like Cloud 66 let you view logs and metrics from both sides before switching.
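For example, a deployment script can refuse to cut over until the new environment has passed its health checks several times in a row. A minimal sketch in bash, assuming the /health endpoint and Puma socket path used in the manual guide later in this post:

#!/usr/bin/env bash
# Gate the cutover on consecutive health-check passes (sketch).
set -euo pipefail

SOCKET="/var/www/myapp/green/tmp/sockets/puma.sock"
REQUIRED_PASSES=5
passes=0

for attempt in $(seq 1 30); do
  if curl -fsS --unix-socket "$SOCKET" http://localhost/health > /dev/null; then
    passes=$((passes + 1))
    if [ "$passes" -ge "$REQUIRED_PASSES" ]; then
      echo "Green is healthy; safe to switch."
      exit 0
    fi
  else
    passes=0   # require consecutive passes, not one lucky response
  fi
  sleep 2
done

echo "Green never became healthy; aborting cutover." >&2
exit 1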

Wrapping It Up

Blue-green deployment isn’t magic — it’s just smart engineering. But when paired with the right DevOps platform, it feels a little magical. It turns deployment from “please don’t crash” to “flip, test, done.” And that’s the kind of confidence every developer deserves. So if you’re thinking about implementing blue-green deployment — or just want a safer way to ship — try us out…we can help you get there faster (without reading a seven-page guide).

Manual Instruction Set: Blue-Green Deployments on a Single VM (Rails + NGINX)

1. Directory Structure

On the VM, define a directory layout like:
- /var/www/myapp/
  - blue/
  - green/
  - shared/             # db.yml, secrets, pids, logs, sockets
  - workers/            # background worker codebase (optional)

Blue/Green paths

- /var/www/myapp/blue → One full Rails release
- /var/www/myapp/green → The other release

Only one of these is active at any time.
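If you’re provisioning the VM by hand, something like this creates the skeleton (a sketch; the deploy user and the exact shared/ subdirectories are assumptions):

# Create the blue/green layout (sketch).
sudo mkdir -p /var/www/myapp/{blue,green,workers}
sudo mkdir -p /var/www/myapp/shared/{config,log,pids,sockets}
sudo chown -R deploy:deploy /var/www/myapp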

Why separate workers/? Because we don’t want blue–green switching to restart or break background workers. Workers always point to a static code directory, e.g.:

/var/www/myapp/workers/current

…and you deploy worker code separately and safely (ideally following your own worker release workflow).

2. NGINX Setup

NGINX will proxy to a socket or port inside the active color. Example directory for Puma sockets:

/var/www/myapp/blue/tmp/sockets/puma.sock
/var/www/myapp/green/tmp/sockets/puma.sock

Example NGINX config (key part)

upstream rails_app {
    server unix:/var/www/myapp/current/tmp/sockets/puma.sock fail_timeout=0;
}

server {
    listen 80;
    server_name myapp.com;

    root /var/www/myapp/current/public;

    try_files $uri/index.html $uri @app;

    location @app {
        proxy_pass http://rails_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

The trick: The symlink /var/www/myapp/current is the only thing that changes during a deployment. It points to either:

  • current → blue/
  • current → green/

Switching this symlink plus a graceful reload of NGINX triggers the blue/green cutover without affecting workers.

3. Systemd Services

Web service per color: define two services:

puma-blue.service

[Unit]
Description=Puma (Blue)
After=network.target

[Service]
User=deploy
WorkingDirectory=/var/www/myapp/blue
Environment="RAILS_ENV=production"
ExecStart=/usr/bin/bundle exec puma -C config/puma.rb
Restart=always

[Install]
WantedBy=multi-user.target

puma-green.service is identical except for the green path. Both services can exist simultaneously, but only the active one should be running (see the sketch below).
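In practice that means enabling only the color that should come back after a reboot, for example (a sketch; adjust to however you manage services):

# Blue is live, green is idle: run and enable only the active color (sketch).
sudo systemctl enable --now puma-blue
sudo systemctl disable --now puma-green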
Worker service (one only): workers.service (abbreviated; give it [Unit] and [Install] sections like puma-blue.service):

[Service]
User=deploy
WorkingDirectory=/var/www/myapp/workers/current
ExecStart=/usr/bin/bundle exec sidekiq -e production
Restart=always

Workers do not depend on current/blue/green.
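Deploying worker code is then its own small step: put the new release next to the old one, flip the workers/current symlink, and restart the service. A sketch, assuming a releases/ directory under workers/ and a local ./build/ source, neither of which the layout above mandates:

# Release worker code independently of the blue/green web cutover (sketch).
release="/var/www/myapp/workers/releases/$(date +%Y%m%d%H%M%S)"
mkdir -p "$release"
rsync -a ./build/ "$release/"            # ./build/ is a placeholder for your built release
ln -sfn "$release" /var/www/myapp/workers/current
sudo systemctl restart workers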

4. Deployment Workflow

Step 1 — Build the new release locally or in CI:

  • Install gems
  • Precompile assets
  • Run tests
  • Bundle the release (tarball or rsync) for deployment (a build sketch follows below).
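A minimal version of that build step might look like this (a sketch; the tarball name and test runner are assumptions, and your CI system will have its own conventions):

#!/usr/bin/env bash
# Build and package a release in CI (sketch).
set -euo pipefail

bundle install --deployment --without development test
RAILS_ENV=production bundle exec rails assets:precompile
bundle exec rspec                                    # or whatever test suite you run

tar czf myapp-release.tar.gz --exclude=".git" .      # ship this tarball to the VM, or rsync the directory instead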
5. Deploy the New Release to the Inactive Color

Example logic: determine the active color from the symlink:

active=$(basename "$(readlink /var/www/myapp/current)")        # "blue" or "green"
if [ "$active" = "blue" ]; then inactive=green; else inactive=blue; fi

Deploy into the inactive path:

rsync -a <built-release>/ /var/www/myapp/green/     # copy the new code into the inactive slot
cd /var/www/myapp/green && bundle install --deployment
RAILS_ENV=production bundle exec rails db:migrate   # see db notes below

Important: Database migrations

You have two options:

  • Option A — Zero-downtime-safe migrations only. If your org enforces safe migrations (add columns, backfill, then switch code), you can run migrations before the switch.
  • Option B — Separate migration step. Run migrations out-of-band before switching; workers continue running the old code, so ensure backward compatibility (see the sketch below).
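For Option B, the out-of-band run can happen from the inactive slot before you flip anything (a sketch; it assumes your Rails version ships the db:abort_if_pending_migrations task):

# Option B: migrate from the inactive (green) release ahead of the cutover (sketch).
cd /var/www/myapp/green
RAILS_ENV=production bundle exec rails db:migrate
RAILS_ENV=production bundle exec rails db:abort_if_pending_migrations   # fail loudly if anything is still pending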
6. Start Puma for the Inactive Color

Example:
sudo systemctl start puma-green

Then test it:

curl --unix-socket /var/www/myapp/green/tmp/sockets/puma.sock http://localhost/health

or use a port if preferred.

7. Blue–Green Cutover

When the inactive slot is healthy, switch over:

Step 7.1 — Update the current symlink

ln -sfn /var/www/myapp/green /var/www/myapp/current

Step 7.2 — Reload NGINX

sudo service nginx reload

This is effectively instant and doesn’t interrupt workers, because:

  • Workers read from /var/www/myapp/workers/current (unchanged)
  • NGINX only switches which socket it proxies to

8. Stop the Old Color

Optional but recommended:
sudo systemctl stop puma-blue

Once confirmed stable, you may also prune old assets/logs.

9. Rollback Strategy

Rollback is simply:

ln -sfn /var/www/myapp/blue /var/www/myapp/current
sudo service nginx reload

And restart the old puma service if needed:

sudo systemctl start puma-blue

Workers remain unaffected.
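
Putting the whole workflow together, a single deploy script might look roughly like this (a sketch that assumes the layout, service names, and /health endpoint from this guide; the release source path is a placeholder):

#!/usr/bin/env bash
# One-shot blue/green deploy on a single VM (sketch).
set -euo pipefail

APP=/var/www/myapp
RELEASE_SRC=./build                              # placeholder for your built release

active=$(basename "$(readlink "$APP/current")")
[ "$active" = "blue" ] && inactive=green || inactive=blue

# 1. Ship the new release into the inactive slot and prepare it
rsync -a "$RELEASE_SRC/" "$APP/$inactive/"
(cd "$APP/$inactive" && bundle install --deployment && RAILS_ENV=production bundle exec rails db:migrate)

# 2. Boot the inactive color and make sure it answers
sudo systemctl start "puma-$inactive"
sleep 3                                          # or use a retry loop like the health-check sketch earlier
curl -fsS --unix-socket "$APP/$inactive/tmp/sockets/puma.sock" http://localhost/health > /dev/null

# 3. Cut over, then retire the old color
ln -sfn "$APP/$inactive" "$APP/current"
sudo service nginx reload
sudo systemctl stop "puma-$active"
echo "Deployed to $inactive."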

10. Optional Enhancements

  • Health checks
    • Implement a /health or /up endpoint in Rails.
  • Warmup boots
    • Hit endpoints or load caches before flipping (see the sketch after this list).
  • Shared cache considerations
    • Redis session store
    • Redis cache store
    • Avoid code-incompatible caches.
  • Log rotation
    • Logs can live in shared/log.
  • Version-sensitive Sidekiq queues
    • If your worker code must match the web version, deploy workers separately and use Sidekiq’s rolling queue drain.
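
As an example of a warmup, hit a few representative endpoints on the green socket before flipping traffic (a sketch; the paths are placeholders):

# Warm the green slot's caches before the cutover (sketch).
for path in / /login /dashboard; do
  curl -fsS --unix-socket /var/www/myapp/green/tmp/sockets/puma.sock "http://localhost$path" > /dev/null
done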

Final Architecture Summary

  • Web Layer (Blue/Green)

    • Two independent Rails deployments
    • Two possible Puma services
    • NGINX points at /current which switches instantly
    • Only one color receives traffic
  • Worker Layer (Stable)

    • Runs from a non-blue-green directory
    • Not restarted during blue–green cutover
    • Continues processing unaffected

Try Cloud 66 for free. No credit card required.