
9 Critical Decisions for Running Docker in Production

James Higginbotham
Mar 30th 2016 (updated Jan 7th 2021)
Security


You've got your Rails or Rack-based Ruby app built. It's even running in Docker on your laptop, and other developers on your team have it up and running as well. Everything looks good, so it's time to ship it.

Wait! Not so fast! Transitioning to a Docker production environment isn't quite as easy as it sounds. There's more to it than just shipping your locally built container image into a production environment.

Let's examine the 9 most critical decisions you'll face before you can securely deploy your Dockerized Rails and Rack-based Ruby apps into production:

Critical Decision #1: Image Management

While setting up a Dockerfile and docker-compose.yml for building images in your development environment is fairly straightforward, you should create a consistent process for building Docker images. This eliminates concerns about differences in local environments and avoids depending on a development laptop as your only means of building new images. It also enables a continuous deployment pipeline that goes from code commit to Docker image without manual intervention from your development team.

Unless you plan to release your Docker images to the world, you'll also need a private image registry. While Docker provides a registry you can deploy and manage yourself, you may not want to take this on, as failures can halt your deployment pipeline. You'll need to select an option that keeps your private Docker images secure while still making them accessible to your build and deployment processes.
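
To make this concrete, here's a minimal sketch of what a build-oriented Compose file might look like when images are tagged against a private registry. The registry address and application name (registry.example.com, myapp) are placeholders for your own:

```yaml
# docker-compose.yml (build sketch; registry.example.com and myapp are placeholders)
version: "3"
services:
  web:
    build: .
    # Tagging the image against your private registry lets a CI job run
    # `docker-compose build` and `docker-compose push` instead of relying
    # on a developer laptop to produce production images.
    image: registry.example.com/myapp:latest
```

Whatever pipeline you choose, the goal is the same: every image that reaches production is built, tagged, and pushed by the same repeatable process.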

Critical Decision #2: Selecting a Cloud Provider

Once you have a Docker image, you need to deploy it to a Docker host. Many cloud providers now support the deployment of Docker containers. Since most charge for the resources used rather than the container instances, it's important to check the pricing details to avoid sticker shock.

Be aware that the container deployment process varies across cloud providers, which can make it harder to change providers later. If you want to avoid lock-in or run on more than one cloud, you'll need to build in support for multiple providers yourself (or find a solution that does it for you).

Critical Decision #3: Network Access and Security Patching

Running containers in a local development environment poses little security risk: all processes run on a single host, isolated from the network intrusions and other attack vectors common to production servers.

Development network settings are left fairly open to make troubleshooting easy. In a production environment, they require more consideration. Certain containers should not be reachable by public traffic at all and should only be accessible to other containers on the same private network. Traffic must be monitored, and brute-force login attempts and other attack vectors identified and dealt with appropriately.
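
As a rough illustration, Docker Compose lets you mark a network as internal so containers like your database are never exposed to public traffic; the service and network names below are hypothetical:

```yaml
version: "3"
services:
  web:
    image: registry.example.com/myapp:latest
    ports:
      - "80:3000"       # only the web tier is published to the outside world
    networks:
      - frontend
      - backend
  db:
    image: postgres:13
    networks:
      - backend         # no published ports; reachable only by other containers
networks:
  frontend: {}
  backend:
    internal: true      # an internal network has no route to or from the outside world
```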

You'll also need to track security patches as they're announced, then determine if your hosts and containers are all secure or need to have the patch installed.

Moving your containers to production requires thought about network access and keeping your containers and Docker hosts patched. Don't overlook this critical step for your production environment.

Critical Decision #4: Load Balancing across Containers and Hosts

Once we move from a single-container service to multiple containers across one or more hosts, we'll need load balancers to distribute incoming requests. Using tools such as nginx or HAProxy is a common approach. The difficulty lies in keeping their configuration up to date as containers are created and destroyed, and as new Docker hosts are added for additional capacity. Factor in time to address this need through tooling or scripting.
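
As a starting point, the sketch below runs an nginx container in front of the application containers on a single host. The nginx.conf and service names are placeholders, and that upstream configuration is exactly the piece you'll need tooling to regenerate as containers come and go:

```yaml
version: "3"
services:
  lb:
    image: nginx:stable
    ports:
      - "80:80"
    volumes:
      # proxies to the web service; must be regenerated or reloaded when containers change
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
  web:
    image: registry.example.com/myapp:latest
    expose:
      - "3000"
```

Scaling the web service adds containers, but nginx won't pick them up until its configuration is regenerated or reloaded, which is exactly why this decision deserves dedicated tooling or scripting.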

Note that, unless you plan to take your current deployment offline while you upgrade, you'll need to support multiple running versions at the same time. Your load balancing strategy needs to take this into account, to prevent dropping connections or routing traffic to the wrong version.

For more information on selecting a load balancing strategy, read our article on server scaling techniques.

Critical Decision #5: The Deployment Process

Many developers assume the tools they use in a development context will work in production. This isn't the case. Docker Compose configurations will vary considerably from development to production. From volume bindings to port bindings and network configurations, wiring up your containers will change. Complexity will grow as you move to a multi-host environment. You'll also have additional containers not commonly found in development, such as log aggregators, external databases, and HA message brokers (just to name a few).

Coordinating the differences in environment settings requires considerable scripting effort. It won't be as simple as running docker-compose up the way it is in development. Plan enough time to work out these details as you move from a simple, single-container application to a complex set of container images, each with multiple instances that need to be wired up to load balancers to distribute workloads. As your application matures and traffic increases, you'll need rolling upgrades or blue-green deployment strategies to prevent site outages.
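
One common pattern, sketched below with placeholder values, is to keep a production override file alongside your base Compose file and layer it on with docker-compose -f docker-compose.yml -f docker-compose.prod.yml up -d:

```yaml
# docker-compose.prod.yml (hypothetical production override)
version: "3"
services:
  web:
    # In development you might bind-mount the source tree; in production the code
    # is baked into a versioned image, so no volume bindings appear here.
    image: registry.example.com/myapp:1.4.2
    env_file:
      - production.env   # credentials and endpoints differ per environment
    ports:
      - "3000"           # publish to an ephemeral host port; the load balancer routes to it
    restart: always
```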

Not familiar with effective deployment strategies? Check out Deployment strategies for cloud native applications to learn more.

Critical Decision #6: Service Discovery

As the number of containers grows, so will the management overhead of registering them for consumption by your application. There are a variety of tools to manage this process, most of which require integration and configuration within your Docker production environment. Cloud 66 has found a simple way to manage service registries by using an internal DNS server.

Whatever you select, be sure to keep your service registrations in sync with your container instances, and factor in a load balancing strategy when containers are spread across multiple Docker hosts. Doing so ensures your application can be coded against a general service name (e.g. myservice.mycluster.local) that routes each request to a specific container instance able to serve it.
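
For a single-host Compose setup, Docker's embedded DNS already gives you a simple version of this: service names and network aliases resolve to container IPs. The sketch below uses hypothetical names (reusing the myservice.mycluster.local example) to show an alias the application can code against:

```yaml
version: "3"
services:
  myservice:
    image: redis:6-alpine
    networks:
      backend:
        aliases:
          - myservice.mycluster.local   # stable name the application codes against
  web:
    image: registry.example.com/myapp:latest
    environment:
      CACHE_URL: redis://myservice.mycluster.local:6379
    networks:
      - backend
networks:
  backend: {}
```

Once containers span multiple hosts, you'll need a service registry or DNS-based approach that stays in sync with container instances in the same way.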

Critical Decision #7: Log Management

Using Docker Compose in development makes log viewing trivial and enables rapid troubleshooting. When dealing with multiple container instances across any number of hosts, it becomes more difficult to track down issues.

Distributed logging collects and aggregates log entries from all of your containers onto one or more central log servers. Your production infrastructure will require support for log aggregation across containers, and you'll also need to factor in how you plan to view and search these logs to support troubleshooting.
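
A minimal way to get logs off the host is Docker's logging drivers, configurable per service in Compose. This sketch assumes you have a syslog-compatible collector; the address below is a placeholder:

```yaml
version: "3"
services:
  web:
    image: registry.example.com/myapp:latest
    logging:
      driver: syslog
      options:
        syslog-address: "tcp://logs.example.com:514"   # your central log collector
        tag: "myapp-web"                               # makes entries searchable per service
```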

Critical Decision #8: Container Monitoring

Monitoring your containers in production is essential. From Docker hosts to containers, you need to know the health of each service and the entire system. Selecting the right tools and monitoring strategies will ensure you minimize the impact of outages and maximize your host resources, resulting in happier customers.
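
At the container level, a health check is a good first step: it lets the Docker host itself report whether each service is healthy. The sketch below assumes your app exposes a /healthz endpoint and that curl is available inside the image, both of which are assumptions rather than givens:

```yaml
version: "3"
services:
  web:
    image: registry.example.com/myapp:latest
    healthcheck:
      # hypothetical endpoint; assumes curl is present in the image
      test: ["CMD", "curl", "-f", "http://localhost:3000/healthz"]
      interval: 30s
      timeout: 5s
      retries: 3
```

docker ps will then flag unhealthy containers, but you'll still want system-wide monitoring layered on top of it.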

Not sure what kind of monitoring strategy you may need? We have a monitoring strategy guide that can help you out.

Critical Decision #9: Database Management

In a development environment, databases can be hosted in-container without worrying about I/O performance. Production environments cannot tolerate poor performance, especially if we want to deliver a great customer experience. Scaling the database to handle increased I/O on demand, along with high availability and a reliable backup/restore strategy, is key to running a modern web app or mobile API. The strategy you select for your production environment will directly affect your customers' experience.
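
A common approach, sketched here with placeholder names, is to keep the database out of the container entirely and point the app at a managed or dedicated database host through its environment:

```yaml
version: "3"
services:
  web:
    image: registry.example.com/myapp:latest
    environment:
      # db.example.com stands in for a managed database or dedicated database host;
      # the password is supplied by the environment, not stored in the Compose file.
      DATABASE_URL: postgres://myapp:${DB_PASSWORD}@db.example.com:5432/myapp_production
```

If you do run the database in a container, at minimum put its data directory on a named volume backed by fast storage and test your backup and restore path regularly.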

Read our article on scaling production databases to learn more.

Do I Really Need To Make These Decisions Immediately?

Yes, most likely. Unless you're deploying a trivial application or API with little traffic, each of these decisions will be critical to product success. Seems like quite a bit of responsibility, doesn't it?

Developers need to remember that Docker is a tool, not a full-blown cloud native architecture solution. It offers some amazing capabilities, and I'm very happy to have Docker as part of my architecture. But it requires the same effort to maintain a production Docker deployment as any other cloud-based solution (and perhaps even more).

Using Cloud 66 to manage my containers in production removed the need for endless pontification over these considerations. By customizing a few configuration files, I can move my environment from development to production and gain the advantages of their platform: built-in logging, secure network access management, monitoring, continuous delivery, patch management, and their Database-as-a-Service when I need it. Try it out for yourself and see how easy deploying and managing your Docker production environment can become.


Also check out our blog about the '8 Components You Need to Run Containers in Production'.

