From data to collaboration and logic, web APIs power most mobile applications. If like me, you prefer to use the right framework for the right job, Rails may seem a little much for a REST-based API. Thankfully, the Sinatra micro-framework is a great choice for your next mobile API or REST-based microservice.
In a previous article, we detailed the steps required to deploy a Ruby on Rails application using Cloud 66 for Docker. But how do we deploy a Ruby-based REST API built with Sinatra? Just like Rails, Sinatra is built on top of Rack, a common interface that handles much of the HTTP plumbing for web applications. The steps are similar, but we do have to handle a few differences between the two frameworks.
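To make the Rack relationship concrete, here is a minimal sketch of a bare Rack application (the class name is illustrative): any object that responds to call and returns a status, headers, and body triple is a Rack app, and Sinatra routes ultimately produce exactly this shape.

```ruby
# A minimal Rack application: any object with a #call(env) method that
# returns [status, headers, body]. Sinatra (and Rails) sit on top of
# this same interface, adding routing, helpers, and other conveniences.
class HelloApp
  def call(env)
    [200, { 'Content-Type' => 'text/plain' }, ["Hello from Rack\n"]]
  end
end

# Because it is just a method call, we can exercise it without a server:
status, headers, body = HelloApp.new.call({})
```

This is why frameworks built on Rack can share middleware and servers such as Puma: they all speak the same call interface underneath.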
This article will step you through the process of creating your API using Sinatra, testing it out locally using Docker Compose, and deploying your API to your choice of cloud vendor using Cloud 66. Along the way, we'll touch on a few helpful tips and even a Rubygem that you may find useful when building REST APIs in Ruby.
Building a REST API with Sinatra
The first step in our journey is to create the API with Sinatra. For those not familiar with Sinatra, it's a DSL that makes it easy to compose web applications. Sinatra is more barebones than Rails, making it a great choice for building an API in Ruby. I've used it for several production APIs using Ruby MRI, as well as JRuby when I need even more performance.
For this walkthrough, we'll build a simple Products API that's able to list available products and create new ones. It will use MongoDB as its data store, though you can swap in the data store of your choice.
For those more familiar with web API design, I've chosen to use the HAL hypermedia format for response payloads. To generate these easily, I'm using the handy Roar gem. Roar offers a variety of choices for generating response payloads and parsing request payloads, including plain JSON, XML, JSON-HAL, and JSON-API.
Sinatra's simple DSL allows for developing APIs in a single source file. We'll use this approach for the walkthrough. However, for discussion purposes, I'll split the sections up. You can view the full source code on my GitHub project if you prefer to view the code all at once. Feel free to fork the project and try it out yourself, add new endpoints, or use it as a starting point for your own idea.
Set up Bundler and the Database
First, we need to set up Bundler and require the gems we need, then configure our connection to MongoDB so that Mongoid can map our objects to Mongo documents:
# Encoding: utf-8
require 'rubygems'
require 'bundler'
Bundler.require
require 'sinatra'
require 'mongoid'
require 'roar/json/hal'
require 'rack/conneg'
configure do
  Mongoid.load!("config/mongoid.yml", settings.environment)
  set :server, :puma # default to Puma for performance
end
Define our Product model
We then define our Mongoid model and a Roar representer that will serialize our response:
class Product
  include Mongoid::Document
  include Mongoid::Timestamps

  field :name, type: String
end

module ProductRepresenter
  include Roar::JSON::HAL

  property :name
  property :created_at, writeable: false

  link :self do
    "/products/#{id}"
  end
end
Implementing the API
The source code for the API itself is pretty simple, as it returns a list of the products (most recent first) and can create a new product with a name:
get '/products/?' do
  products = Product.all.order_by(:created_at => 'desc')
  ProductRepresenter.for_collection.prepare(products).to_json
end

post '/products/?' do
  name = params[:name]
  if name.nil? || name.empty?
    halt 400, { :message => "name field cannot be empty" }.to_json
  end

  product = Product.new(:name => name)
  if product.save
    [201, product.extend(ProductRepresenter).to_json]
  else
    [500, { :message => "Failed to save product" }.to_json]
  end
end
We use Roar's representers to serialize a collection as a JSON array, or to generate a JSON representation of a single resource directly.
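If you're curious what the representer produces under the hood, here is a hand-rolled sketch of the HAL document for a single product, using only the standard json library (the helper name and the id value are illustrative, not part of Roar):

```ruby
require 'json'

# Builds a HAL-style document by hand: the resource's own fields at the
# top level, plus a _links section with a self href, mirroring what the
# ProductRepresenter above generates for us.
def product_to_hal(id:, name:, created_at:)
  {
    name: name,
    created_at: created_at,
    _links: { self: { href: "/products/#{id}" } }
  }.to_json
end

doc = product_to_hal(id: '56e1bdfbbe8a7c0a48d64a2c',
                     name: 'My Product',
                     created_at: '2016-03-10T12:33:31.089-06:00')
```

Roar saves us from writing this boilerplate by hand for every model and format.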
As you probably realize, our API could do a lot more than just list and create, but for this example we'll keep it simple.
Configuring Mongoid with Sinatra
And finally, we need to set up the Mongoid configuration file:
development:
  clients:
    default:
      database: products
      hosts: ["127.0.0.1:27017"]
In this config file, we assume MongoDB is running locally. A little later in this article, we'll try it out in a local Docker instance using Docker Compose.
Now all we need is a Gemfile and we can run our API locally:
source 'https://rubygems.org'
# Sinatra
gem 'sinatra'
gem 'puma'
gem 'rack-conneg'
# Mongo + Mongoid
gem 'mongo', '~> 2.1'
gem 'mongoid'
gem 'bson_ext'
# Roar/Representable
gem 'roar'
gem 'multi_json'
Trying Sinatra Locally
So far, we have the following files:
- products_service.rb - the Sinatra API source
- config/mongoid.yml - the configuration that tells Mongoid where our MongoDB lives
- Gemfile - the rubygems we will install using Bundler
Next, let's try running it locally (this assumes you have MongoDB installed and running on your machine):
bundle install
ruby products_service.rb
In another console, you can list the available products (currently empty):
curl -X GET http://127.0.0.1:4567/products
You should receive a 200 OK response code with an empty payload:
[]
To add a new product:
curl -X POST http://127.0.0.1:4567/products -F "name=My Product"
This should result in a 201 CREATED response code along with the details on the newly created resource:
{
"name": "My Product",
"created_at": "2016-03-10T12:33:31.089-06:00",
"_links": {
"self": {
"href": "/products/56e1bdfbbe8a7c0a48d64a2c"
}
}
}
Now, fetch the list of products again and you'll see your new product included:
curl -X GET http://127.0.0.1:4567/products
This time you may see something like this:
[
{
"name": "My Product",
"created_at": "2016-03-10T12:33:31.089-06:00",
"_links": {
"self": {
"href": "/products/56e1bdfbbe8a7c0a48d64a2c"
}
}
}
]
That's it! Next, let's see how we can use Docker on our local development machine instead, to ensure we have everything working before pushing it to the cloud using Cloud 66.
Dockerize and run your API locally
For this portion of the walkthrough, you'll need to have Docker installed. Detailed instructions can be found in Andreas' previous post on 'How to deploy Rails with Docker'.
Assuming you already have Docker installed on your local development machine, let's set up a Dockerfile to tell Docker how to build a container image for our Products API:
FROM ruby:2.2
RUN apt-get update -qq && apt-get install -y build-essential
ENV APP_HOME /app
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
ADD Gemfile* $APP_HOME/
RUN bundle install
ADD . $APP_HOME
This is a simple, barebones Dockerfile based on the official Ruby 2.2 image, which is itself Debian-based. It ensures the apt package manager is updated and installs the core build tools necessary for compiling any native extensions your Rubygems may need. It then uses Bundler to install our rubygems and adds all of the files for our API into the container image.
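One optional refinement, not part of the original project: a .dockerignore file keeps local artifacts out of the build context, which speeds up the ADD step and keeps the image smaller. A minimal sketch might look like:

```
.git
log/
tmp/
*.log
```

Docker reads this file from the same directory as the Dockerfile and skips the listed paths when sending the build context to the daemon.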
Finally, let's set up a Docker Compose configuration file that will launch our API and a MongoDB instance in two separate containers:
---
products:
  build: .
  command: bundle exec ruby products_service.rb
  ports:
    - "4567:4567"
  links:
    - mongodb
    - mongodb:mongodb.cloud66.local
  environment:
    - RAILS_ENV=production
    - RACK_ENV=production
mongodb:
  image: mongo
We now have two additional files in our project:
Dockerfile
docker-compose.yml
To ensure that our API uses the MongoDB instance running in our container, we need to adjust our config/mongoid.yml configuration just a bit:
development:
  clients:
    default:
      database: products
      hosts: ["mongodb:27017"]
Notice that the hostname we use for our MongoDB host is the same name as the link defined in our docker-compose.yml file, above. By linking our products container to the mongodb container, we can use the container name in our database configuration without knowing which internal IP address Docker assigned to the MongoDB container.
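If you'd rather not edit the config when switching between local and Docker runs, one approach (assuming your Mongoid version renders mongoid.yml through ERB, as recent versions do) is to read the host from an environment variable with a local fallback:

```yaml
development:
  clients:
    default:
      database: products
      hosts: ["<%= ENV.fetch('MONGO_HOST', '127.0.0.1:27017') %>"]
```

You would then set MONGO_HOST=mongodb:27017 in the environment section of docker-compose.yml; the variable name here is my own choice, not a Mongoid convention.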
We can now use Docker Compose to build our images:
docker-compose build
Followed by running both containers:
docker-compose up
You should see a few log lines from MongoDB that I will omit, followed by the following from Sinatra and Puma:
mongodb_1 | 2016-03-11T22:09:08.906+0000 I NETWORK [initandlisten] waiting for connections on port 27017
products_1 | == Sinatra (v1.4.7) has taken the stage on 4567 for production with backup from Puma
products_1 | Puma starting in single mode...
products_1 | * Version 3.1.0 (ruby 2.2.4-p230), codename: El Niño Winter Wonderland
products_1 | * Min threads: 0, max threads: 16
products_1 | * Environment: production
products_1 | * Listening on tcp://0.0.0.0:4567
products_1 | Use Ctrl-C to stop
Just like when we run it locally, our API will be available on port 4567.
Note: If you are still running the API server in another console, it will already occupy port 4567 and will need to be shut down prior to running docker-compose up. The same goes for MongoDB, so you may need to stop any locally running MongoDB instance on your development machine.
If you run the same curl command from above:
curl -X GET http://127.0.0.1:4567/products
You will see the following log output from Docker on the console:
products_1 | 172.17.42.1 - - [11/Mar/2016:22:09:36 +0000] "GET /products HTTP/1.1" 200 136 0.0065
Note that you will not have any results yet, as this MongoDB instance is different from the locally running one. You can add a new product and re-execute the curl command above to see an actual response:
curl -X POST http://127.0.0.1:4567/products -F "name=My Product"
When you are done running Docker Compose, you can run the following command to shut down both containers:
docker-compose stop
Preparing to deploy your Dockerized API using Cloud 66
Once you've been able to Dockerize your application on your local machine, the next step is to deploy it to your favorite cloud vendor using Cloud 66. To do this, we first have to add a Rakefile that Cloud 66 will execute on each deploy, and prepare a service.yml that we'll use to define our Docker stack on Cloud 66.
Since we're using MongoDB, we don't have to create a database schema ahead of time as we would for SQL databases such as PostgreSQL. By default, Cloud 66 expects to execute two rake tasks that are common for Ruby and Rails-based deployments: db:migrate and db:seed. Since we may need these for our API in the future, I often just create empty rake tasks in a Rakefile to start:
namespace :db do
  task :seed do
  end

  task :migrate do
  end
end
The final preparation step is to prepare a service.yml definition that Cloud 66 uses to build your Docker stack, including all containers. This file is used in place of docker-compose.yml when deploying to Cloud 66, as it supports additional configuration options specific to a Cloud 66 deployment. More details on how it works and the options available can be found in the article titled "Docker service configuration".

For our API, we'll use the following service.yml:
---
services:
  products:
    git_url: git@github.com:launchany/microservices-sinatra-products.git
    git_branch: master
    command: bundle exec ruby products_service.rb
    build_root: .
    ports:
      - container: 4567
        http: 80
        https: 443
    env_vars:
      RAILS_ENV: production
      RACK_ENV: production
databases:
  - mongodb
Be sure to set the git_url to your own repository, or feel free to use my public GitHub repo shown above for this example. You can also fork the repo and customize or extend it as you wish.
This service.yml does a few things:
- Sets the git branch to use (master)
- Defines the command to run our service - in this case, we use Bundler to run our Sinatra app directly
- Maps the ports for this service for external availability. For Sinatra, we require port 4567 to be externalized to ports 80 and 443 (if we decide to add TLS support in the future)
- Defines the environment variables to pass when the command is run. For Sinatra, we use RACK_ENV rather than RAILS_ENV, but I left both in this example for those more familiar with Rails than Sinatra
- Defines our hosted database, in this case MongoDB. Please note that databases run on the host server rather than in a container for performance. You may also opt to use your own database instance rather than hosting it with Cloud 66
If you prefer, you can use the Cloud 66 Starter command-line tool to help you get started. The tool will examine your application and generate appropriate Dockerfile, docker-compose.yml, and service.yml files. For details on using Cloud 66 Starter on OS X, read Andreas' fantastic post on deploying Rails with Docker, where he covers the tool in depth.
Deploy your Dockerized API using Cloud 66
With all of your files configured properly, committed and pushed to your git repository, it's time to deploy your stack to Cloud 66.
- Log in or sign up for Cloud 66
- Create a New Stack, selecting the 'Docker Stack' option
- Give your stack a name and select an environment
- Switch to the advanced tab and paste the contents of the service.yml file (generated using Starter or by hand)
- Click the button to go to the next step
- Select your deployment target and cloud provider as normal, and choose if you want to deploy your databases locally, on a dedicated server or to use an external server.
For this example, I named the service 'products', used my products service Github repository, selected AWS, and decided to deploy the database locally on the host.
Note: If this is your first time setting up Cloud 66, you'll need to register the SSH key provided by Cloud 66 with your git repository service. This will allow Cloud 66 access to your repository (e.g. Github) for deployments, otherwise you will experience a deployment error.
Once completed, Cloud 66 will provision a server from your cloud vendor, build your API container image, deploy it to your server, and wire everything up. No Chef or Puppet scripting required. You can then use the IP or hostname of your new deployment to access your API, just as you did above.
Tips for debugging your Cloud 66 Docker Stack
Along the way, I encountered a few issues. Some were the result of skipping the Cloud 66 Starter tool, as I prefer to understand the details before reaching for automation tools. Others were the result of misconfiguration. Here are some tips to help you with your first deployment:
- An early issue was the error message "Could not create custom git repo". I thought it indicated that Cloud 66 couldn't access my git repository; in my case, however, it meant that the image build process couldn't find the db:seed and db:migrate tasks in my Rakefile (or that the Rakefile was missing). Hopefully these kinds of deploy issues will get better error messages in the future
- Be sure to use the git:// URL format for the git URL. If you mistype it, the initial code analysis may still succeed while the deploy fails (the analysis step seems to be more resilient to URL formatting issues than the deployment process)
- If your Docker image is created successfully but your application encounters a startup error, read the LiveLogs article to better understand how logs may be viewed across your containers from within the Cloud 66 dashboard. If no logs are visible, you may need to SSH into your server
- Unlike Heroku, Cloud 66 Docker stacks do not automatically generate database config files. The steps above for creating a config/mongoid.yml config file are required for Rails or Sinatra. If you are using ActiveRecord, you may need a config/database.yml or the appropriate configuration file for your object-database mapper. Example config files are available for a variety of mappers to help get you started
What else can Cloud 66 do to manage my production containers?
This workflow has provided a concise introduction to deploying a Ruby-based API to Cloud 66 using their managed container services. Here are some other key features that I have found important for managing APIs in production with Docker on Cloud 66:
- Continuous deployment support - Since Cloud 66 provides a build grid for creating new images, it can automate the full deployment process whenever you push new code to the git branch for your environment, using redeployment hooks. This saved me lots of scripting effort
- Selective container deployment - I can choose to redeploy specific services within my stack, allowing me to manually or automatically deploy new versions of my services without requiring all services to be deployed at once (and without the heavy scripting required to make this happen easily)
- Parallel deployment - Since Cloud 66 manages the internal network infrastructure, I can push new deployments in parallel to an existing stack, without worrying about dropping requests. Incoming requests already in progress are completed while new traffic is directed to the updated stack
- Multi-cloud failover - While many cloud providers can provide high availability within a region, Cloud 66 supports app failover to new regions or even completely different cloud vendors
- Internal DNS - The elastic DNS service automatically assigns internal DNS names for databases and container services and is deployment-aware. This makes it easy to integrate services without worrying about referencing the wrong version of a service during or after a new service deployment
As I'm moving my own applications towards a microservice architecture, there is a lot to factor into the operations side. I'm looking forward to using these and other features to boost my microservice migration, while removing the scripting effort required to make it all work. I'll share more on how this all comes together in an upcoming article, and look forward to your comments.