
Getting the most out of Cloud Servers without additional Scripting

James Higginbotham
Aug 10th 2016 · Updated Nov 28th 2017
Ruby on Rails


We all want to get the most out of our cloud servers: to stretch our monthly infrastructure spend as far as possible and to see every vCPU core on our instances fully utilized. Let's look at some strategies for doing this, and at how Cloud 66 can help you get the most out of your cloud spend without writing and maintaining complex automation scripts.

Strategy #1: Make it easy to scale your app

Your approach to deploying applications has an impact on how easy it is (or isn't) to scale your app. While you can run your application on the same server as your database in the early days, don't stick with this setup as traffic grows. Your database competes with your app for the memory and CPU it needs to service queries, dragging down overall performance. It also creates a single point of failure (SPoF), since you're relying on one server for everything in your stack.

Instead, separate your database from your application using a service such as Amazon RDS, a third-party database-as-a-service, or Cloud 66's managed database service. This lets your web app use as much of the available CPU as possible, and lets you scale the number of app server instances up or down based on anticipated traffic. Plus, you offload database management to a third party, freeing you to focus on app development and customer support.
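For a Rails app this is mostly a configuration change. A minimal sketch, assuming PostgreSQL and placeholder environment variables standing in for whatever endpoint and credentials your provider gives you:

```yaml
# config/database.yml — hypothetical values; substitute the endpoint and
# credentials supplied by RDS, your DBaaS provider, or Cloud 66.
production:
  adapter: postgresql
  host: <%= ENV["DATABASE_HOST"] %>          # an external endpoint, not localhost
  database: myapp_production
  username: <%= ENV["DATABASE_USER"] %>
  password: <%= ENV["DATABASE_PASSWORD"] %>
  pool: 5
```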

Strategy #2: Push everything to the background

The performance of your application depends on its ability to accept an incoming HTTP request, process it, and send back a response as fast as possible. Anything that doesn't contribute to this flow is extra work that hurts performance and forces you to add servers under peak load. By moving as much work as possible into the background, you free your web app to handle the critical request/response flow without spending CPU and memory on work that can be deferred.

A common approach to offloading work is distributed messaging, as detailed in our Realscale article. Solutions such as Sidekiq make this easy for Rails developers: write a background worker class that handles the processing, then run it on one or more worker processes.
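As an illustration (the worker, model, and mailer names here are hypothetical), a Sidekiq worker is just a plain Ruby class that includes Sidekiq::Worker and implements perform:

```ruby
# app/workers/receipt_worker.rb — a sketch of a background worker
class ReceiptWorker
  include Sidekiq::Worker

  # Runs on a worker process, off the request/response path.
  def perform(order_id)
    order = Order.find(order_id)
    ReceiptMailer.receipt_email(order).deliver_now
  end
end

# Enqueue from a controller instead of doing the work inline:
#   ReceiptWorker.perform_async(order.id)
```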

Cloud 66 makes this easy by letting applications define a Procfile covering both the web app process and the background processes deployed alongside it, so a single server can handle web requests and background jobs.
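A sketch of such a Procfile might look like the following; the process names are up to you, and the commands assume Unicorn and Sidekiq, which your stack may or may not use:

```
# Procfile — example only; adjust to your Rack server and worker library
web: bundle exec unicorn -c config/unicorn.rb -E production
worker: bundle exec sidekiq -e production
```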

For background work that is CPU- and/or memory-intensive, you also have the option of deploying dedicated process servers rather than sharing servers with your web app. By separating your web app and background processes onto different servers, you can pick instance sizes that fit each role, e.g. larger instances for your app servers and smaller instances for background processes.

Strategy #3: Make use of all your vCPUs

You may not realize it, but not all web frameworks use your CPUs in the same way. For example, when deploying a Rails app you have several Rack server options, and if you stick with the default you may not use all of the vCPUs your cloud provider gives you, leaving only one or a few of them busy.

Choose a Rack server that uses all of your vCPUs by launching multiple processes. A common recommendation is Unicorn, which forks multiple worker processes so each one can use an available vCPU to handle an incoming request. This maximizes the use of each app server, rather than deploying multiple servers that each use only 25% of their available vCPUs.
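For instance, a minimal Unicorn configuration might look like the sketch below; the values are illustrative, and the usual rule of thumb is to tune worker_processes to roughly the number of vCPUs on your instance:

```ruby
# config/unicorn.rb — illustrative values
worker_processes 4        # roughly one worker per vCPU
timeout 30
preload_app true          # load the app once in the master, then fork workers

before_fork do |server, worker|
  # Disconnect in the master so each forked worker opens its own DB connection
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```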

Strategy #4: Select the right strategy to deploy at scale

Once you start to scale your application, your deployment process takes longer as each server is upgraded, so you'll need to adjust your deployment strategy to handle the additional servers. One option to speed things up is a parallel deployment strategy, where deployments run on multiple servers at once rather than serially, server by server. This does have an impact on how you migrate your database, so be sure to read our help page on taking advantage of parallel deployments before selecting this option.

Strategy #5: Use redeployment hooks to avoid deploy scripting

Scripting your deployment process often requires a deep understanding of server deployment concerns, such as those outlined above, as well as tools like Capistrano, Chef, Puppet, or Ansible. All of this scripting work takes valuable time away from building and supporting your application. Your team also has to keep up with changes to those tools, often discovering a broken script in the middle of a critical deployment. We outlined many of these issues in a recent article, "7 Obstacles to Overcome When Deploying Ruby on Rails to Production".

Cloud 66 redeployment hooks let you deploy your application to one or more servers without any of that scripting: a new deploy is triggered every time code is merged into a specific branch of your Git repository (e.g. master). You can even connect the Cloud 66 deployment process to an automated test solution, such as TravisCI, to ensure that only tested code reaches production.
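As a sketch of the CI side, a TravisCI build could ping the hook once the test suite passes; the hook URL below is a placeholder environment variable standing in for the real redeployment hook URL you copy from your stack's settings in the Cloud 66 dashboard:

```yaml
# .travis.yml (excerpt) — the hook URL is a placeholder; only fires on master
after_success:
  - test "$TRAVIS_BRANCH" = "master" && curl -X POST "$CLOUD66_REDEPLOY_HOOK_URL"
```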


Try Cloud 66 for free. No credit card required.