
Containers are just a starting point

Kasia Hoffman
Aug 3rd 2017, updated Jul 6th 2022


Containers, Docker, and Kubernetes have been around for four years now. Some people are even starting to refer to this technology as mature! I would strongly argue, however, that the implementation of container infrastructure is still very much in its growth stage.

This blog post focuses on the other parts of container infrastructure: everything but the containers themselves. They are just a starting point.

Containers Recap

What is Docker?

Docker revolutionized software technology in 2013 by simplifying the existing concept of microservices. It is best described as a tool that packages an application and its dependencies into a virtual container.
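To make the packaging idea concrete, here is a minimal sketch using the Docker SDK for Python (the `docker` package). The image tag `myapp` and the port numbers are made up for the example, and it assumes a Dockerfile in the current directory; it is an illustration, not a production setup.

```python
# Minimal sketch: package an app into an image and run it as a container.
# Assumes a Dockerfile in the current directory and the `docker` Python SDK.
import docker

client = docker.from_env()  # talk to the local Docker daemon

# Build an image from ./Dockerfile; "myapp:latest" is a hypothetical tag.
image, build_logs = client.images.build(path=".", tag="myapp:latest")

# Run the image as a container, mapping container port 5000 to host port 8080.
container = client.containers.run(
    "myapp:latest",
    detach=True,
    ports={"5000/tcp": 8080},
)

print(container.status)   # e.g. "created" / "running"
print(container.logs())   # application output captured by Docker
container.stop()
```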

Read the conversation between a web developer and a container consultant to get a better idea of how containers are implemented.

What is Kubernetes?

Kubernetes is an open source tool developed by Google to manage containerized apps in a clustered environment. It was essentially created to manage Docker workloads and to address the disconnect between how applications are built and how modern, clustered infrastructure is designed.
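As a rough illustration of what "managing containerized apps in a cluster" looks like in practice, the sketch below uses the official Kubernetes Python client to list the Deployments in a namespace and scale one of them. The deployment name `web` and the namespace `default` are assumptions for the example.

```python
# Minimal sketch: inspect and scale workloads with the Kubernetes Python client.
# Assumes a working kubeconfig and a Deployment called "web" in "default".
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
apps = client.AppsV1Api()

# List the Deployments the cluster is currently managing.
for dep in apps.list_namespaced_deployment("default").items:
    print(dep.metadata.name, dep.spec.replicas, dep.status.ready_replicas)

# Ask Kubernetes for more replicas; the scheduler places the extra pods
# across the cluster and keeps them at the desired count.
apps.patch_namespaced_deployment(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```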

So which one should you use, Docker or Kubernetes?

In my opinion, use both in your production environment. Kubernetes orchestration complements Docker, and the two work together very well.

Kubernetes is the current market leader and has a large community for support. It is an opinionated tool, which is great because developers stay on the same page and follow the same guidelines, although it gives them less flexibility. It has huge potential to become the standard in the container industry.

On the other hand, Docker Swarm (aka Docker Compose++) is really designed with development in mind, so it is easier to debug in a development environment, but it is not as powerful in production.

Also, it is worth remembering that all of the technologies above are open source, and it is important to understand the true value of open source. Read more on the open source containerization conundrum.

Infrastructure beside the containers

Getting from 'hello world' to running your containerized application is a long, steep learning curve. Once you master containers and learn your way around the gotchas out there (for example, those 10 Tips for Docker Compose Hosting in Production), you are ready to move to the next level: deploying and maintaining your app in production.

So, what do developers have to consider to run their containerized infrastructure? The decisions made in your container strategy will impact your customers, positively or negatively.

Database

In a development environment, you can host your database in containers without worrying about I/O performance. There is a lot more to consider in production.

You need to think about database storage components, backups, and replication. To run a modern web app or mobile API, the database must scale to handle increased I/O on demand, together with high availability and a reliable backup/restore strategy.
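As one small illustration of the backup side of that strategy, here is a sketch of a scheduled dump job, assuming a PostgreSQL database. The host, database name, user, and output directory are placeholders, and a real setup would also ship dumps off the host and regularly test restores.

```python
# Minimal sketch: nightly logical backup of a PostgreSQL database.
# Host, database, and credentials are placeholders; authentication is assumed
# to come from PGPASSWORD or ~/.pgpass, and restores should be tested too.
import datetime
import subprocess

def dump_database(host: str, db: str, user: str, out_dir: str = "/backups") -> str:
    stamp = datetime.datetime.utcnow().strftime("%Y%m%dT%H%M%SZ")
    out_file = f"{out_dir}/{db}-{stamp}.sql.gz"
    # pg_dump writes a logical backup; compress it on the way to disk.
    dump = subprocess.Popen(
        ["pg_dump", "-h", host, "-U", user, db],
        stdout=subprocess.PIPE,
    )
    with open(out_file, "wb") as f:
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=f, check=True)
    dump.wait()
    if dump.returncode != 0:
        raise RuntimeError("pg_dump failed")
    return out_file

if __name__ == "__main__":
    print(dump_database("db.internal.example", "appdb", "backup_user"))
```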

Cloud Provider

Choose the right cloud provider for you, whether that is bare metal, public cloud, or hybrid cloud. Investigate whether your cloud provider can offer high availability within a region.

Read our blog post about the speed of VM creation and SSH access across popular cloud providers (last updated in June 2016).

Lastly, look for flexible solutions, as the container deployment process differs across cloud providers and you don't want to end up locked in to one provider.

Deployment Workflow Management

The aim is to deploy without any downtime (a blue-green deployment). This can be achieved by starting a cluster of replicas running the new version while the old replicas are still serving live requests, then switching traffic over once the new version is healthy.
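One common way to implement that switch on Kubernetes is to run the "green" replicas alongside the "blue" ones and then repoint the Service selector once the new version is ready. The sketch below assumes a Service named `web` whose pods carry `app` and `version` labels; the names and labels are illustrative, not a prescribed setup.

```python
# Minimal sketch of a blue-green cut-over on Kubernetes: the new ("green")
# replicas are already running; flipping the Service selector moves live
# traffic away from the old ("blue") replicas in one step, with no downtime.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

def switch_traffic(service: str, namespace: str, new_version: str) -> None:
    # Repoint the Service at pods labelled with the new version.
    core.patch_namespaced_service(
        name=service,
        namespace=namespace,
        body={"spec": {"selector": {"app": service, "version": new_version}}},
    )

# Old replicas keep serving until this call; afterwards they can be scaled down.
switch_traffic("web", "default", "green")
```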

It is important to choose the right set of tools, as deployment management tools vary considerably between development and production environments, as mentioned above in the Docker Compose case. Moving to a multi-host environment increases the complexity, so think ahead about the details of moving from a simple single-container application to a complex set of container images, each running multiple instances behind a load balancer that distributes workloads.

Load Balancer & Service Discovery

Moving from a single-container service to multiple containers across one or more hosts requires a load balancer to distribute incoming requests.

To deliver a seamless end-user experience, container apps need to communicate with each other and be deployable on any server or container cluster.

Tools like nginx or HAProxy are popular choices for a microservices load balancer. The trick is to keep their configuration up to date, bearing in mind the different service versions that need to run at the same time.
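As a very small sketch of what "keeping the configuration up to date" can look like, the snippet below regenerates an nginx upstream block from the current list of container endpoints and reloads nginx. The endpoint list, file path, and upstream name are assumptions for the example; real setups often use a template tool or the orchestrator's own ingress instead.

```python
# Minimal sketch: regenerate an nginx upstream from current container endpoints
# and reload nginx so new and old service versions keep receiving traffic.
# The endpoint list and file path are placeholders for this illustration.
import subprocess

def render_upstream(name: str, endpoints: list[str]) -> str:
    servers = "\n".join(f"    server {ep};" for ep in endpoints)
    return f"upstream {name} {{\n{servers}\n}}\n"

def update_load_balancer(endpoints: list[str]) -> None:
    conf = render_upstream("web_backend", endpoints)
    with open("/etc/nginx/conf.d/web_backend.conf", "w") as f:
        f.write(conf)
    # Validate and gracefully reload nginx so in-flight requests are not dropped.
    subprocess.run(["nginx", "-t"], check=True)
    subprocess.run(["nginx", "-s", "reload"], check=True)

update_load_balancer(["10.0.0.11:8080", "10.0.0.12:8080"])
```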

Developers also face networking challenges around service discovery, which contributes to a slower container adoption process. There are a number of management tools available; at Cloud 66 we have simplified this by using an internal DNS server.
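To illustrate the DNS approach to service discovery: each service gets a stable name on an internal DNS server, and containers simply resolve that name to find the instances currently backing it. The service name `api.service.local` below is made up for the example.

```python
# Minimal sketch: DNS-based service discovery. A stable internal name
# resolves to the addresses of the containers currently backing the service.
# "api.service.local" is a made-up name for this illustration.
import socket

def discover(service_name: str, port: int) -> list[str]:
    infos = socket.getaddrinfo(service_name, port, proto=socket.IPPROTO_TCP)
    # Each entry's sockaddr is (address, port, ...); collect unique addresses.
    return sorted({info[4][0] for info in infos})

print(discover("api.service.local", 8080))
```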

Security

Containers cannot be treated as small virtual machines, as they are 'snippets of code with a shared kernel that run independently of one another'. Rapid adoption of containers requires a new security strategy. In production, many container processes run on a single host and must be isolated from the risk of network intrusion and various attack vectors.

Make sure you apply firewall rules to your container stacks and protect them against denial-of-service and brute-force attacks. Also, tools like Habitus.io, a build flow tool for Docker, can help you manage secrets and remove the layers that contain them during the build process.
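As a rough sketch of host-level firewall rules for a container stack, the snippet below applies a basic allow-list with iptables: established traffic, web ports, and rate-limited SSH are allowed, everything else is dropped. The ports and rate are placeholders, it assumes root privileges, and cloud or orchestrator firewalls would normally layer on top of rules like these.

```python
# Minimal sketch: basic allow-list firewall for a container host via iptables.
# Ports and limits are illustrative; requires root to run.
import subprocess

def iptables(*rule: str) -> None:
    subprocess.run(["iptables", *rule], check=True)

# Always allow loopback and already-established connections.
iptables("-A", "INPUT", "-i", "lo", "-j", "ACCEPT")
iptables("-A", "INPUT", "-m", "conntrack", "--ctstate", "ESTABLISHED,RELATED",
         "-j", "ACCEPT")

# Allow the ports the stack actually serves.
iptables("-A", "INPUT", "-p", "tcp", "--dport", "443", "-j", "ACCEPT")
iptables("-A", "INPUT", "-p", "tcp", "--dport", "80", "-j", "ACCEPT")

# Rate-limit new SSH connections to slow down brute-force attempts.
iptables("-A", "INPUT", "-p", "tcp", "--dport", "22",
         "-m", "conntrack", "--ctstate", "NEW",
         "-m", "limit", "--limit", "5/min", "-j", "ACCEPT")

# Drop everything else by default.
iptables("-P", "INPUT", "DROP")
```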

Monitoring & Logs

Investigate the available options for full-stack container monitoring to ensure that users can perform the necessary functions of the application. Check that current and future loads don't cause slow performance or outages, and bear in mind troubleshooting and error handling.

Additionally, set up log management to collect and aggregate log entries from one or more log servers. Think about how you will view and search your logs to support troubleshooting.
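As a tiny sketch of the monitoring side, the snippet below polls an application health endpoint, measures latency, and emits a structured log line that a log aggregator can collect. The URL, interval, and threshold are placeholders for the example.

```python
# Minimal sketch: poll a health endpoint, measure latency, and emit a
# structured log line for the log shipper to collect. URL and threshold
# are placeholders for this illustration.
import json
import time
import urllib.request

def check(url: str, slow_ms: float = 500.0) -> dict:
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            status = resp.status
    except Exception:
        status = None  # unreachable or errored
    latency_ms = (time.monotonic() - start) * 1000
    record = {
        "ts": time.time(),
        "url": url,
        "status": status,
        "latency_ms": round(latency_ms, 1),
        "slow": latency_ms > slow_ms,
    }
    print(json.dumps(record))  # stdout is collected by the log shipper
    return record

while True:
    check("http://web.internal.example/healthz")
    time.sleep(30)
```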

What does Cloud 66 actually do?

Cloud 66 does all the above! Our complete solution looks after the entire infrastructure from code to production for you.

Start with Skycap, our container-native CI/CD product, which addresses many of the challenges faced by development teams adopting microservices and container-based applications.

Skycap is fully integrated with Maestro for an end-to-end container-based stack deployment.

Maestro is our full-stack application management solution, backed by Kubernetes (Container Stack v2). Maestro builds, deploys, and maintains your applications on any cloud or server, and provides native database components, backups and replication, team and organization access control, log management, firewall and active security monitoring of infrastructure, monitoring, auditing, and much more.



Also check out our blog post about the '8 Components You Need to Run Containers in Production'.


Try Cloud 66 for Free, No credit card required