Planning Your Docker-Based Microservice Stack

James Higginbotham
Apr 7th 2016 · Updated Jun 6th 2022

Docker continues to grow in popularity, primarily because of its ease of packaging and distributing code across any environment. Combined with a microservices architecture, Docker empowers developers to build software in smaller, more modular components that can be combined to create solutions to complex problems.

Moving to a microservices architecture requires up-front planning; without it, the architecture can become fragile and devolve into a distributed monolith. From service design to deployment and monitoring, there are several considerations for deploying a microservices architecture using Docker. Building on our earlier post, "9 Critical Decisions for Running Docker in Production", let's look at how to plan a successful microservices architecture on Docker.

Design for one service per container

As new microservices are created, it may be tempting to optimize the deployment model by packing multiple services into each container. However, services cannot be scaled independently of one another when they share a container instance.

Planning tip #1: Adopt a one-service-per-container approach to support scaling your microservices based on demand, without the additional memory and CPU overhead that multi-service containers require.
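
As a minimal sketch, here is what that looks like in a Compose file (the service and image names are hypothetical): each microservice gets its own image and container, so each can be scaled on its own.

```yaml
version: "3.8"
services:
  products:
    image: myapp/products:1.0   # hypothetical image for the products service
    ports:
      - "8080"                  # published to an ephemeral host port
  orders:
    image: myapp/orders:1.0     # hypothetical image for the orders service
    ports:
      - "8080"
```

With this layout, `docker compose up --scale products=4 --scale orders=2` scales each service independently of the other.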

Implement a service discovery solution

To achieve scale and high availability, microservices will need to be distributed across multiple Docker hosts. Hard-coding your app with the hostnames or IP addresses of containers running your service isn't an option. Instead, you will need to manage the service discovery between your code, which needs to locate a service by name, and your Docker infrastructure that's managing one or more container instances.

Planning tip #2: Select a service discovery strategy that is kept up-to-date automatically as you scale container instances up and down. Options include ZooKeeper, Consul, etcd, Eureka, or an internal DNS strategy.
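
As a minimal sketch of the internal DNS approach (service names are hypothetical): containers attached to the same user-defined network resolve one another by service name through Docker's embedded DNS, so no hostnames or IP addresses are hard-coded.

```yaml
version: "3.8"
services:
  web:
    image: myapp/web:1.0        # hypothetical front-end service
    environment:
      # The app reaches the products service by name; Docker's DNS
      # resolves "products" to a container, wherever it runs.
      PRODUCTS_URL: http://products:8080
    networks:
      - backend
  products:
    image: myapp/products:1.0   # hypothetical back-end service
    networks:
      - backend
networks:
  backend:
```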

Distribute shared assets via CDN

For most web applications, there will be shared assets, such as images, stylesheets, and JavaScript libraries. If you use a framework such as Ruby on Rails, the asset pipeline can be a useful tool for managing this process. However, it is difficult to manage in a containerized production environment: assets generated by the pipeline inside a single container can't easily be shared across containers. That leaves two options: have each container instance generate and serve its own assets (not ideal), or push assets to a single, shared location.

Planning tip #3: Utilize CDNs for serving your static and dynamically generated assets. Doing so will improve browsing performance and offload the work from the container instances that handle incoming API requests. It will also simplify your container infrastructure, since you won't need to find a way to share assets across container instances.
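
As an illustration, one common pattern is to inject the CDN hostname through configuration so the app emits CDN-backed asset URLs (the variable name and CDN URL below are hypothetical; in Rails, this value would typically feed config.action_controller.asset_host):

```yaml
version: "3.8"
services:
  web:
    image: myapp/web:1.0        # hypothetical app image
    environment:
      # Hypothetical variable the app reads to prefix asset URLs,
      # e.g. https://cdn.example.com/assets/application.css
      ASSET_HOST: https://cdn.example.com
```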

Externalize, monitor, and manage your microservices

As with a typical cloud-native architecture, incoming HTTP requests need to be balanced across server instances. However, most cloud-based load balancers only route to servers, not to the individual containers running on them. By installing a reverse proxy in front of your HTTP-based microservices, incoming requests can be distributed across any number of container instances spanning multiple Docker hosts.

Beyond load balancing, your HTTP-based microservices will most likely require authentication, authorization, and rate limiting. Services exposed to mobile or public/partner developers will also require spike arrest to prevent DoS attacks, along with routing from a single external URL structure to internal microservices (e.g. http://api.example.com/products/ -> http://products.myapp.local/).

Planning tip #4: Install a reverse proxy and/or API management platform. There are a variety of commercial and free, open-source options, including 3scale, Apigee, Kong, or a customized nginx.
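
A minimal sketch of the reverse-proxy layout using nginx (the image tags, routes, and nginx.conf contents are assumptions for illustration): the proxy is the only service publishing a host port, and its configuration maps external paths such as /products/ onto the internal service names.

```yaml
version: "3.8"
services:
  proxy:
    image: nginx:stable
    ports:
      - "80:80"                 # single public entry point
    volumes:
      # Hypothetical config mapping /products/ -> http://products:8080, etc.
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
  products:
    image: myapp/products:1.0   # hypothetical; not published to the host
  orders:
    image: myapp/orders:1.0     # hypothetical; not published to the host
```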

Learn more about API management platforms from our resource page.

Deploy databases outside of containers

Unlike a traditional cloud server with a network-based block storage device, containers have an isolated filesystem separate from the host. Data inside a container is lost once the container is destroyed. Additionally, we cannot depend on containers remaining on the same host for long. Therefore, mounting the host filesystem as an external volume won't guarantee data longevity for a production database without a considerable amount of work. We need a better plan for the databases in our stack to keep our data safe and performant.

Planning tip #5: Set up and deploy your databases outside of your containers. Use a Database-as-a-Service to remove the need to manage your own instances, or roll your own managed database solution outside of containers. The only exception is when your microservice has read-only data that can be packaged into the container at image creation time.
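
A minimal sketch, assuming a hypothetical Database-as-a-Service endpoint: the container receives only a connection string, so no database state lives inside the container and instances can be destroyed or rescheduled freely.

```yaml
version: "3.8"
services:
  products:
    image: myapp/products:1.0   # hypothetical service image
    environment:
      # Hypothetical external endpoint; the database itself runs
      # outside the container platform (e.g. a managed Postgres).
      DATABASE_URL: postgres://app:secret@db.example.com:5432/products
```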

Want to accelerate your deployment process?

Cloud 66 offers many of the features described above, such as container hosting, service discovery using DNS, reverse proxies, and database-as-a-service. You can learn more about these services by viewing the details of Cloud 66 for Docker. Once you have had a chance to see how it works, sign up for Cloud 66 and start deploying containers to your favorite cloud provider.

What's next?

In an upcoming post, we'll outline the steps to build out a complete microservices architecture on Cloud 66, based on our step-by-step guide to deploying REST APIs using Ruby and Sinatra as a starting point.


Try Cloud 66 for Free. No credit card required.