
8 Components You Need to Run Containers in Production

Khash Sajadi
Dec 12th 2017, updated Jul 6th 2022


Docker, Compose, Kubernetes, Swarm, CNI, Prometheus, Helm, Containerd, Linkerd, Istio, Envoy, CoreDNS, Notary, Fluentd, rkt… just some of the names you keep hearing in the container or micro-services space, even before you start naming vendors and their commercial products.

If you’ve been observing containers for long enough (and that’s about two years or so in this industry) you will also notice the “product churn”: products that rise up, make a great deal of noise and grab everyone’s attention, only to fade away into the background of shiny new open source projects or commercial products, “exciting” or “disruptive” partnerships between giants of the IT industry, and so on.

While vendors are trying to predict the next big change in the world of micro-services, many IT professionals are still trying to understand what these new technologies are and how they can be useful to their businesses or careers.

I’ve had many conversations about this with customers, partners and colleagues at Cloud 66. In this post, I’m going to try to simplify things for those who are looking to embark on this journey, by breaking it down into digestible bites.

Components

What components do we need to choose for our container infrastructure? Here is a list:

1. Engine

The container engine is the part that runs a process (i.e. your application) inside a Linux container. While Docker is the main contender in this space, there are other container engines available to choose from, like rkt from CoreOS.

Speaking of Docker, at this point it is important to clarify the difference between the container engine and the container image. While the container engine is responsible for running processes inside a Linux container, the container image is the format that defines the container’s contents and attributes. This is like having an Excel file on your disk (the .xls file) and choosing what to edit it with (MS Excel, Open Office or Google Sheets): the former is the container image and the latter is the container engine. Docker is the combination of both: the container engine (Docker Engine) and the container image format (built from a Dockerfile).

Technically, all container engines should or will comply with the Open Container Initiative (OCI) specifications, so if the rest of your infrastructure components are OCI-compatible you should be able to swap engines in and out based on security, isolation or performance requirements. Realistically, however, Docker Engine is, for now, by far the best supported in terms of tooling and community.

Top Options:

  1. Docker
  2. rkt
  3. containerd

2. Orchestration

Containers are great, but making use of them beyond a “hello-world” app requires a good deal of herding. Now that you have your application stored as a container image and know which engine to use, you need to choose the right orchestration engine for it. An orchestration engine is software that runs containers across a set of servers. Servers (usually called nodes in this context) are the workhorses and containers are the workload. The orchestrator decides which task should run where, and ensures it runs correctly and completely.

The orchestrator makes those decisions based on node availability, application requirements (like being closer to a customer), hardware needs (fast disks or GPUs) and other factors. As you can imagine, orchestrators are complex and critical parts of your infrastructure, and they also need to be flexible enough to work with different workloads and their requirements.

Here your options are mainly Docker Swarm, Mesos and Kubernetes. There are many articles comparing these options, but by 2017 it had become clear that Kubernetes had won the orchestrator battle and is the best option both technically and commercially.
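
To make that concrete, here is a minimal sketch of the kind of declarative workload definition you hand to Kubernetes; the names, image and resource figures are placeholders, not a production-ready manifest:

```yaml
# deployment.yml - minimal sketch; names, image and resource figures are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # ask the orchestrator to keep 3 copies running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example/web:1.0        # the container image built from your code
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m                 # scheduling hints the orchestrator uses
              memory: 256Mi
```

You declare the desired state (three replicas of this image, with these resource requests) and the orchestrator decides which nodes to place them on and keeps them running.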

Top Options:

  1. Kubernetes
  2. Docker Swarm
  3. Mesos

3. Network

While container networking is not a standalone function in the way orchestration is, it is a critical part of the container infrastructure you will be building. The container network is usually deeply integrated with your orchestration component, although all of the main options below are compatible with Kubernetes. When choosing a container networking solution, it is best to pick one that is fully compatible with the Container Network Interface (CNI) standard.
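
As an illustration of why the choice matters: a CNI plugin that supports network policies (Calico does, for example) lets you declare which containers are allowed to talk to each other. Here is a sketch of a Kubernetes NetworkPolicy, with placeholder labels and ports:

```yaml
# networkpolicy.yml - illustrative sketch; labels and ports are placeholders
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-to-api
spec:
  podSelector:
    matchLabels:
      app: api                   # applies to pods labelled app=api
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web           # only pods labelled app=web may connect
      ports:
        - protocol: TCP
          port: 8080
```

Whether a policy like this is actually enforced depends on the network plugin you pick, which is one more reason to stay CNI-compatible.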

Top Options:

  1. Calico
  2. Flannel
  3. WeaveNet

4. Storage

While most containers can work with local, ephemeral storage just fine, some workloads require persistent storage, which is usually provided by two components: block storage and a container storage driver. Almost all storage providers now offer container storage drivers, and your choice here also depends heavily on your choice of infrastructure provider.
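
As a rough sketch of how this looks in Kubernetes, persistent storage is usually requested through a PersistentVolumeClaim, which the storage driver satisfies with block storage from your provider; the storage class name below is a placeholder that depends on your driver:

```yaml
# pvc.yml - minimal sketch; the storage class name depends on your provider and driver
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce              # a single node may mount the volume read-write
  storageClassName: fast-ssd     # placeholder; exposed by your storage driver
  resources:
    requests:
      storage: 10Gi
```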

Top Options:

  1. StorageOS
  2. Rook
  3. Portworx

5. Monitoring

As individual nodes (servers) become less important and more easily replaceable in a container-based infrastructure, monitoring the health of the cluster, the containers and the individual sub-systems deployed on it becomes both more important and more challenging, because the environment is much more dynamic. This means newer, container-centric monitoring solutions are usually better suited to this kind of infrastructure than traditional monitoring systems that focus on servers and the network at the host level.
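
As a rough sketch of what “container-centric” means in practice, Prometheus can discover its scrape targets from the Kubernetes API instead of a fixed list of hosts. A fragment of prometheus.yml, assuming Prometheus runs inside the cluster:

```yaml
# prometheus.yml (fragment) - illustrative; assumes Prometheus runs inside the cluster
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod                # discover targets from the Kubernetes API
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"            # only scrape pods annotated prometheus.io/scrape: "true"
```

As pods come and go, the target list updates itself; there is no static inventory to maintain.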

Top Options:

  1. Prometheus

6. Logging

Logging is a critical part of any software system, but even more so in a highly distributed design like micro-services. In a container-based, micro-services world, tracing each call and action through the different parts of the system (from the web load balancer to the database) via unique and consistent markers is critical to resolving production issues quickly.
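
For example, a log collector like Fluentd is typically deployed as a Kubernetes DaemonSet so that one instance runs on every node and picks up container logs from the host before shipping them to a central store. A trimmed-down sketch; the image tag and mount paths are illustrative:

```yaml
# fluentd-daemonset.yml - trimmed sketch; image tag and mounts are illustrative
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
        - name: fluentd
          image: fluent/fluentd:v1.14-1   # illustrative tag; pin a current one in production
          volumeMounts:
            - name: varlog
              mountPath: /var/log         # container logs live under the host's /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```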

Top Options:

  1. Fluentd
  2. ELK

7. Service Mesh

This is an optional component. A service mesh acts as an intermediate layer in front of the services running on the cluster, regulating access to them based on configuration and events. Service meshes are relatively new in the container world, and the viable options will mature as they integrate more deeply with orchestration engines.
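
As an illustration of the kind of regulation a mesh provides, Istio (one of the options below) lets you split traffic between two versions of a service declaratively. The host and subset names here are placeholders, and the subsets would be defined in a matching DestinationRule, omitted for brevity:

```yaml
# virtualservice.yml - illustrative Istio example; host and subset names are placeholders
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews                    # the in-cluster service this rule applies to
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90             # send 90% of traffic to version v1
        - destination:
            host: reviews
            subset: v2
          weight: 10             # canary 10% of traffic to version v2
```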

Top Options:

  1. Linkerd
  2. Istio
  3. Conduit

8. Container Deployment Pipeline

While not a single component, a Container Deployment Pipeline (CDP for short) is needed to build production-ready container images and deliver them to your infrastructure. This might seem like a part that’s outside the scope of your infrastructure, but it really isn’t. Containers bring Devs and Ops closer together, and because containers increase the complexity of the infrastructure that delivers your application, your delivery pipeline now needs to do much more than just build container images from your code and run user-defined unit tests.

In general, a CDP solution should keep the configuration that delivers your containers consistent with your code. For example, Kubernetes uses configuration files (usually in yaml format) to determine which services should run and how. These configuration files need to be maintained alongside your code from the commit stage, while supporting a degree of flexibility when the Kubernetes cluster runs on different infrastructure (for example around load balancing or storage configuration, depending on the underlying provider) or in different environments (dev vs QA vs staging vs production). This flexibility, however, should be limited, to avoid major infrastructure changes (like changes to network settings, or the removal of essential configuration items or secrets) slipping into production unnoticed.
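
For example, a Service manifest that lives in the repository alongside the code might only need to vary per environment in how the load balancer is provisioned, while everything else stays locked down. A minimal sketch with placeholder names and an assumed AWS annotation:

```yaml
# service.yml - minimal sketch; names and the provider annotation are placeholders
apiVersion: v1
kind: Service
metadata:
  name: web
  annotations:
    # provider-specific settings like this are the kind of limited, per-environment
    # flexibility a CDP should allow while keeping the rest of the manifest fixed
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
spec:
  type: LoadBalancer             # might be NodePort in dev, LoadBalancer in production
  selector:
    app: web
  ports:
    - port: 80
      targetPort: 8080
```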

Here is a short breakdown of what a CDP should provide:

  1. Support for configuration of the running environment in lock-step with code.
  2. Facilitate flexibility around configuration parameters based on the available infrastructure components.
  3. Support for auditability and traceability of code from git to the running containers in production.
  4. Enable workflow-based delivery of application services to the container infrastructure, including support for manual and automated steps.
  5. Support for external and internal security checks on container images for known vulnerabilities.

Currently, your options are either to (a) use a complex set of home-brewed scripts on top of your CI pipeline and put manual processes in place to ensure infrastructure changes are reviewed and their effects understood, or (b) use a tool like Spinnaker, which was built for managing deployment workflows on non-container-based infrastructure, and adapt it to your needs.

At Cloud 66, we moved our own stack to Kubernetes in early 2016 and built Skycap as a code-to-Kubernetes CDP solution for our own production needs. As a business that runs on Kubernetes in production, we see first-hand the challenges of delivering applications to Kubernetes. And since last June, we’ve made Skycap available to our customers as well, as a hosted or self-hosted/on-prem CDP solution.

Summary

There are relatively clear winners in most of the categories of components you need to consider for running containers in production. What’s more, this will accelerate as leading IaaS companies like AWS, Google and Azure provide Kubernetes as a service, with many of these components offered as services on top of the more conventional primitives of compute, network and storage. However, there is no such leading solution in the field of CDP and deployment pipelines in general, and that’s an area to watch. We would, of course, be delighted if you joined our happy customers in using Skycap today.

