Today we are announcing Docker support for Cloud 66.
To tell you the truth, we don’t really care about containers; we care about what they can do for our customers. Here is why:
This is a fictional conversation between Steve, a web developer, and John, a “container consultant”. All characters appearing in this work are fictitious. Any resemblance to real persons, living or dead, is purely coincidental.
Steve (web developer): So we have a problem: we want to make sure what we run on dev is actually what is going to run on production. This usually causes tension between our Devs and Ops as they constantly throw the problem over the fence.
John (container consultant): I know all about that issue. There are many ways to address it, but I feel you can really benefit from using containers. Containers draw a clear boundary between Devs (inside the container) and Ops (outside the container). They also carry all the needed runtime dependencies in a single component, so you can be sure what runs on dev is going to run on prod. As a side benefit, you can enjoy microservices and all their benefits too.
Steve: Awesome! So how do we go about using containers?
John: It’s simple. You need to build your code into containers. You write a Dockerfile, drop it in your source code and run a build command to end up with an image. Done!
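The build step John describes might look like this; the Dockerfile below is a sketch for a hypothetical Node.js app, with all names purely illustrative:

```shell
# Write a minimal Dockerfile (hypothetical Node.js app; names illustrative)
cat > Dockerfile <<'EOF'
# Base image carries the runtime dependency
FROM node:0.12
# Bake the application code and its dependencies into the image
COPY . /app
WORKDIR /app
RUN npm install
# What the container runs when started
CMD ["node", "server.js"]
EOF

# Building the image requires a Docker daemon:
# docker build -t myapp:1.0 .
```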
Steve: Cool. So I need to have a build server and a bunch of scripts to check out my latest code, run the build command, and then I have images.
John: Yes. Now that you have the images, you need to push them to a repository, so they are versioned and accessible from your servers.
Steve: Got it. So I need a repository.
John: Yes. You can get one from Docker Hub or a couple of other companies, or you can build it yourself. It’s very simple really: the Docker registry is open source and can run as a Docker container itself. You just need to put nginx in front of it to control access, and you’re done.
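A self-hosted registry along those lines could be sketched like this, assuming the open-source `registry` image and basic auth at the nginx layer; the hostname, paths and ports are all illustrative:

```shell
# The registry itself runs as a container (requires a Docker daemon):
# docker run -d --name registry -p 127.0.0.1:5000:5000 registry

# nginx in front of it to control access (illustrative hostname and paths):
cat > registry.conf <<'EOF'
server {
    listen 443 ssl;
    server_name registry.example.com;
    ssl_certificate     /etc/nginx/certs/registry.crt;
    ssl_certificate_key /etc/nginx/certs/registry.key;

    location / {
        auth_basic           "Registry";
        auth_basic_user_file /etc/nginx/htpasswd;
        proxy_pass           http://127.0.0.1:5000;
        proxy_set_header     Host $host;
    }
}
EOF

# Once the registry is up, the build server can tag and push images:
# docker tag myapp:1.0 registry.example.com/myapp:1.0
# docker push registry.example.com/myapp:1.0
```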
Steve: OK. I can do that. Now how do I roll out the images to my servers?
John: Your servers only need Linux and Docker on them. Once you have that, you can run a run command on the server; it will pull the image from your repository and run it. Done.
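That pull-and-run step, scripted per server, might look like this; the registry host, image name and ports are illustrative:

```shell
# Per-server deploy script (sketch; requires a Docker daemon on the server)
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
IMAGE=registry.example.com/myapp:1.0
docker pull "$IMAGE"                    # fetch the versioned image
docker rm -f myapp 2>/dev/null || true  # replace any previous container
docker run -d --name myapp -p 80:3000 "$IMAGE"
EOF
chmod +x deploy.sh
```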
Steve: That sounds great. So I need to write a simple script to connect to my servers and do this.
John: Yes. Or you can use open-source tools like Fleet to do that for you. They are simple to use: you configure them and they take care of it.
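For illustration, a Fleet unit for such a container might look like this, assuming a CoreOS cluster with fleet installed; the unit name, image and ports are hypothetical:

```shell
# A fleet unit file is a systemd unit plus an optional [X-Fleet] section
cat > myapp.service <<'EOF'
[Unit]
Description=myapp container
After=docker.service
Requires=docker.service

[Service]
# Clean up any leftover container, then run the image from the registry
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --name myapp -p 80:3000 registry.example.com/myapp:1.0
ExecStop=/usr/bin/docker stop myapp

[X-Fleet]
# Don't schedule two copies of this unit on the same machine
Conflicts=myapp*.service
EOF

# Submitted to the cluster with:
# fleetctl start myapp.service
```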
Steve: But how about redeploys? Like draining traffic from the old containers, or graceful shutdown of the worker containers? How can I orchestrate that?
John: Good question. For that you need a scheduler; Kubernetes and Mesos are some examples. They are open source and you can configure and set them up yourself. They usually have a central controlling server and agents that run the operations on the servers.
Steve: I see. So, so far I need a build server, a repository and a scheduler. Anything else?
John: Well, your containers usually serve web traffic, right? You need to put nginx in front of them to distribute the traffic. It’s simple really: you can easily do that with an nginx upstream configuration and boom, done! Oh, one thing: you most probably need a service discovery tool for that, something like etcd or Consul. They make sure nginx can find the latest containers, and that containers can find each other. You know, microservices and all that cool stuff.
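The hand-written version of that nginx upstream looks like this; a static list of backends is exactly the piece a discovery tool like etcd or Consul would keep updated for you (addresses are illustrative):

```shell
cat > upstream.conf <<'EOF'
# Backend containers; a discovery tool would rewrite this list as
# containers come and go (addresses are illustrative)
upstream myapp {
    server 10.0.0.11:3000;
    server 10.0.0.12:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://myapp;
    }
}
EOF
```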
Steve: Got it!
John: Now all that’s left is network and storage. For network, you have several options (don’t you just love the flexibility of it all?!). One is to restrict access at the nginx level to your servers only. Another is the ambassador pattern, where a dedicated container proxies traffic between the other containers.
Steve: So we have this server with two network cards for redundancy and all that, and that’s now reduced to a single Linux process?
John: Well, you can see it that way, but not everyone has the same redundancy requirements. Besides, you can install monitoring on each container and control them so that the network containers come up first, and the other containers are bound to the network properly after a crash or restart. Trivial technicalities, really.
Steve: OK. You said something about storage?
John: Oh yes! Since containers can move all over your cluster, if you need to run a DB or store files locally you’re likely not to find them where you left them. But again, you are spoiled for choice: one option is to run a storage service like NFS or Gluster. Another is to use an open-source project like Flocker that syncs the data around with the containers. You need to configure them and make sure all the servers have the same storage setup, otherwise the sync is going to be one way, if you know what I mean!
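The shared-storage option can be as simple as mounting the same network mount into the container on every host; the image and paths below are illustrative:

```shell
# Sketch: keep a database's data on a mount that every server shares
# (e.g. NFS or Gluster mounted at /mnt/shared on all hosts)
cat > run-db.sh <<'EOF'
#!/bin/sh
# Requires /mnt/shared to be the same network mount on every server
docker run -d --name db \
  -v /mnt/shared/pgdata:/var/lib/postgresql/data \
  postgres
EOF
chmod +x run-db.sh
```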
Steve: I see. OK. So let me just make sure I got it right: I have a problem with my app that can be solved with containers, and I can see that. But for that to happen I need build servers, image repositories, service discovery, a network layer, a storage service, a scheduler and a ton of scripts to stick them all together. Right?
John: When you put it that way, then I guess yes. But you’re losing the bigger picture here…
What was that all about?!
It seems everyone is trying to squeeze Docker into a press release somewhere. Yesterday I saw perhaps the most ridiculous press release from a company that has little to do with containers, but they felt obliged to self-promote amidst the storm raging between Docker and Rocket.
For us, containers can solve some big problems for our customers: multi-tenancy apps (sorry, microservices), Dev/Ops separation, ensuring what runs on their dev will run on production, and portability.
We want to help our customers solve those problems but not by replacing them with 10 more problems from a very dynamic and ever-changing set of tools that are out there.
We think Cloud 66 for Docker is the best all-in-one solution that lets you benefit from running your app in containers without having to deal with all the ancillary systems they require.
Cloud 66 Docker support includes:
- BuildGrid: a powerful set of hosted build servers to build your images based on your Dockerfiles.
- Image repository: hosted image repository for the built images.
- Automatic Docker deployment: an advanced scheduler to roll out containers across your servers and take care of load balancing, port forwarding, container lifecycle management and graceful draining and shutdown of workers.
- Discovery: a hosted, dynamic DNS server that returns the correct IP address of the requested server.
- Container network: a managed and monitored network across all containers in a cluster.
This is all on top of the features we have been providing our customers for two years: production-ready databases with backup and replication, server monitoring and access control, firewall management, brute-force security control and much more.
We feel we can only add value by focusing on the real problems our customers have. Anything less will be a partial solution at best and PR-ware at worst.
If you think a complete solution like this can help you be a better engineering team or build greater apps, we would love to hear your feedback.
Try Cloud 66 Docker deployments for free today.