Two new awesome features

Today I am happy to announce two new features: Individual Linux Users and the RabbitMQ AddOn.

Individual Linux Users

Until now, a single shared Linux user and SSH key (the ones used by the cloud provider) were used for deployments and for all SSH access by members of your team.

Starting from today, all new stacks will have an individual and unique user for every member of your team.

This means that if someone leaves your team, their access to your servers will be revoked automatically without your intervention. And any new addition to your team will automatically be provisioned on all of the applicable servers based on the privileges you assign them.

This feature is transparent to all users and will be picked up by the toolbelt automatically without any change. If you want to use it directly with your own SSH terminal, downloading the SSH key from the server page will give you your own individual SSH key, not the shared one.
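For example, once you have downloaded your individual key (the key file name, user and server address here are placeholders for your own):

$ ssh -i ~/.ssh/my_stack_key my_user@54.1.2.3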

RabbitMQ AddOn

We just added RabbitMQ to the list of our addons. You can now install a managed instance of RabbitMQ on your stack with a click of a button.

Until now, RabbitMQ was installed only if it was used by your Rails, Sinatra or Padrino application. Now you can add it as a standalone part of your infrastructure after the stack has been created.

Enjoy!

Connecting Stripe to Xero

Stripe is our payment provider. We love the simplicity of Stripe and the convenience that it brings. We are also big fans of Xero, our accounting software. As a company with a presence in both the US and the UK, we needed a way to connect Stripe to Xero to allow us to calculate the correct taxes and make our lives easier when it comes to reconciling our revenue with our bank statements.

Taxes, taxes, taxes

Every time you charge a customer in Stripe it creates a unique payment in the system. This payment is linked to a single customer. Stripe then pays the total of all the payments (minus their fees) to your bank account under a single transaction.

Imagine the following scenario:

 Customer    | Country | Payment
-------------|---------|---------
ACME Inc     | US      | $10.00  
Blimey Ltd   | UK      | $9.00  
Ze Corp GmbH | Germany | $8.00  

2 days after these payments are taken, you will have $27 paid into your account. This is all good unless:

  1. You are an EU company and need to charge VAT to EU customers but not to non-EU ones, or
  2. You are a US company and need to charge different tax rates for different states.

If you are like us and use Xero, you can set up different accounts for different VAT or tax rates. For example, you can have the following income accounts:

 Account | Description  
---------|--------------
 4000    | Sales UK     
 4001    | Sales EU     
 4002    | Sales non-EU 

or even more detailed accounts, splitting sales and support (as they are taxed differently).

The big question here is: how do I split that $27 between these accounts?

When you file your VAT return (or US sales tax) you should split that amount like this:

 Account | Description  | Amount 
---------|--------------|--------
 4000    | Sales UK     | $9.00  
 4001    | Sales EU     | $8.00  
 4002    | Sales non-EU | $10.00 

You definitely don't want to do this manually every 2 days!

APIs to the rescue

We built a connector that connects Stripe to Xero and splits the payments into the correct accounts.

Step 1 - Create a Xero developer account

Go to the Xero developer resources page and get started. Xero gives you a demo account as well, so you don't mess up your real accounts.

Step 2 - The connector

require 'stripe'  
require 'xero_gateway'

# methods here are called only by Stripe
class PaymentHookController < ApplicationController

    XERO_ACCOUNTS = { :no => 4004, :eu => 4002, :uk => 4000 }

    # called by transfer.paid event of stripe
    # NOTE: for now it only works with product revenue. Support revenue needs to be identified and invoiced differently
    # NOTE: This will require xero private key to be available in the home dir
    def transfer_paid
        raise ActiveRecord::RecordNotFound unless custom_authenticated

        if params[:type] != 'transfer.paid'
            # we are going to accept this as valid but not follow through on our side. this way Stripe will not retry
            render :json => { :ok => true, :message => 'not a transfer.paid event' }
            return
        end

        Stripe.api_key = Configuration.stripe_private

        Rails.logger.info("Received Stripe transfer.paid callback with payload #{params.to_json}")

        line_items = []
        # get the list of transactions within this transfer by calling back;
        # for some reason the payload doesn't have all of them
        transaction = Stripe::Transfer.retrieve(params[:data][:object][:id]).transactions.all(:count => 100)
        tx = transaction[:data]
        tx.each do |item|
            # find the transaction
            begin
                # we don't need to file Stripe charges
                next if item[:type] != 'charge'

                # get the charge
                charge = Stripe::Charge.retrieve(item[:id])
                # get the customer
                account = Account.find_by_stripe_customer_id(charge[:card][:customer])

                if account.nil?
                    Rails.logger.error("Account not found for #{charge[:card][:customer]}")
                    next
                else
                    Rails.logger.debug("Adding line item for charged of customer #{account.id}")
                end

                line_items << { 
                    :description => "Usage charge for customer #{account.id} CH:#{charge[:id]} TX:#{params[:data][:object][:id]}", 
                    :account_code => XERO_ACCOUNTS[account.vat_category], 
                    :unit_amount => item[:net].to_f / 100.00
                }
            rescue => e
                Rails.logger.error("Failed to retrieve Stripe charge #{item[:id]} due to #{e}")
            end
        end

        if line_items.empty?
            Rails.logger.error("No line items were created")
            render :json => { :ok => false, :message => 'No line items created' }, :status => 400
            return
        end

        Rails.logger.info("Creating invoice in Xero")
        begin
            gateway = XeroGateway::PrivateApp.new(Configuration.xero_key, Configuration.xero_key, File.join(Dir.home, 'xero.pem'))
            # create a xero invoice
            invoice = gateway.build_invoice({
                :invoice_type => "ACCREC",
                :due_date => Time.now.utc,
                :reference => "Stripe Invoice for Transfer #{params[:data][:object][:id]}",
                :line_amount_types => "Inclusive"
            })
            invoice.contact.name = 'Stripe'

            line_items.each do |item|
                invoice.line_items << XeroGateway::LineItem.new(
                :description => item[:description],
                :account_code => item[:account_code],
                :unit_amount => item[:unit_amount])
            end

            invoice.create
        rescue => e
            Rails.logger.error "Failed to create invoice due to #{e}"
            render :json => { :ok => false, :message => e.message}, :status => 500
            return
        end

        render :json => { :ok => true, :message => 'Done' }
    end
end  

Notes

  • custom_authenticated is a method not defined here. It returns true if the call can be authenticated. You can use your preferred way to make sure the call is made by Stripe.
  • You need to make sure your Xero API pem key is accessible by the code (under the user's home directory in this sample).

What's happening?

The connector is simple. It gets hit with a POST payload (webhook) by Stripe. It then parses the payload and looks up, in our database, each of the customers behind every payment that made up the transfer, to determine their VAT situation (or US sales tax one, as an example). It then uses the Xero API to file each payment under the right account number.

Step 3 - Push it up

This is a Rails controller example for the connector. You can put it in your app, or split it into a smaller app hosted on Heroku or your own server with Cloud 66.

Step 4 - Hook it up to Stripe

Now that the connector is live, you can add a webhook to your Stripe account. The webhook will be hit every time Stripe transfers money to your account, and the connector will split the amount into its constituent parts and file them against the correct Xero accounts.
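Before pointing Stripe at it, you can sanity-check the connector's routing and authentication by simulating the webhook yourself; the endpoint path and transfer ID below are made up for illustration, and the real payload will come from Stripe:

$ curl -X POST https://yourapp.com/payment_hook/transfer_paid -H "Content-Type: application/json" -d '{"type": "transfer.paid", "data": {"object": {"id": "tr_00000000000000"}}}'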

Awesome!

Measuring Customer Happiness

There was a time when I could send a personal email to every single new customer who had signed up for Cloud 66 and talk to them about their business, code and other cool things. It was great to know our customers personally, know what they do and what they are looking to get out of Cloud 66. I still know many of our customers by first name and try to catch up with them on the phone or Skype every now and then.

But we have grown so much over the past 12 months that I cannot keep up with our growth and get in touch with every customer personally.

The question I ask myself these days is: How can we continue that personal touch at scale?

There are great tools out there to help build better customer relationships: from email-based automation tools like Customer.io, to integrated CRM solutions like Intercom, or support ticket systems like HelpScout.

We use those tools to build more personal welcome emails and drip campaigns, or to enrich our understanding of our customers by bringing in external sources of information like Facebook, Twitter or Github.

But that doesn’t answer the biggest question we keep asking ourselves: Are our customers happy?

If a customer is paying you, then they are either happy or don’t have other options.

It is rare to have a product that is the only option for a customer (excluding nasty businesses that lock customers in by legal or sometimes illegal tricks). So how would you know if your customer will still pay you next month? How can you measure customer happiness when you cannot get in touch with every single one and use your human judgement?

Happiness Metric

Doing a simple search online shows a lot of “Customer Satisfaction Metrics” articles. Most of those are useless. They rely on surveys, “intentions”, “perceptions” or “expectations”.

No one gives honest and accurate answers to a long survey about how they feel about a company. No one can measure intentions, perceptions or expectations. These are mostly ways for consultants to come in, run workshops, hold meetings and generate long reports that no one reads.

We tried building a reproducible, analytical way to measure customer happiness. We are software engineers, not human behaviour experts, so this is not going to be a scientific exercise or a definitive answer to the customer happiness question. We are still proving our theories and improving our algorithms, but we think it's worth sharing with others.

Building a Happiness Score

The first step is to identify actions customers perform on the product that show engagement. The more engaged customers are with a product, the more likely they are to be happy. We used "last deployment date" as one of our measurable engagement metrics. Based on that, we can calculate a last deployment score like this:

    def last_deployment_score
        # days since last deployment
        x = DateTime.now - last_deployment_at
        # decay: 1 right after deployment and almost 0 after 15 days
        return Math.exp(-x/5)
    end

Plotted over time, it looks like this:

last deployment with decay

Older customers are not necessarily happier customers (although it might be the case for the majority of no-lock-in businesses). However, your older customers are more likely to tell you when they are unhappy with the product than very new ones. This makes the account life score another candidate to measure:

    def account_life_score
        x = DateTime.now - created_at
        return (20*x)/(15*x+300)
    end

The account life score increases quickly for the first 60 days of the account's life and then flattens out as the account gets older: at 30 days it is 0.8, at 60 days it reaches 1.0, and from there it slowly approaches 4/3.

account life score

If you have high churn, you can change this to decrease the score after a while.

Another important factor showing engagement for most SaaS businesses is "time since last login". A last_sign_in_score can look like this:

    def last_sign_in_score
        # days since last signin
        x = DateTime.now - last_sign_in_at
        # decay: 1 right after signin and almost 0 after 35 days
        return Math.exp(-x/10)
    end

Which would look like this:

last sign in with decay

In this example if a customer hasn't logged in for 35 days, their user happiness score will drop to almost zero.

Overall Happiness Score

A collection of scores for specific metrics can help you identify issues with customers before you lose them. However, an overall happiness score is always useful to bring things to your attention.

We did that by simply assigning a weight to each category and computing a weighted average.

An example could look like this:

    def happiness_score
        return (account_score * 0.7) + (team_score * 0.3)
    end
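Here account_score and team_score are themselves combinations one level down (team_score covering your team-level metrics). As a minimal sketch, assuming made-up weights that you would tune to your own product, account_score could combine the per-metric scores above:

    def account_score
        # hypothetical weights; they add up to 1
        (last_deployment_score * 0.5) +
          (last_sign_in_score * 0.3) +
          (account_life_score * 0.2)
    end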

Normal Ranges

The formulae here are experimental and we are always fine-tuning them based on other interactions we have with our customers. Some of these formulae can get complex, so commenting your code is a good idea!

You will start to see the "normal ranges" for each score category as you use them more. This is a good place to use rolling averages and standard deviations to monitor sudden changes (like when a customer's usage pattern suddenly changes with no warning and you need to reach out to them).
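As a minimal sketch, assuming an arbitrary two-standard-deviation threshold, you could flag a sudden drop in a series of recent scores like this:

    # true if the latest score is more than 2 standard deviations
    # below the average of the recent scores
    def sudden_change?(scores)
        mean = scores.inject(:+) / scores.size.to_f
        variance = scores.inject(0.0) { |sum, s| sum + (s - mean)**2 } / scores.size
        scores.last < mean - (2 * Math.sqrt(variance))
    end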

Customer Support Interactions

When we started this project, we thought measuring customer support interactions (support tickets, online chats, etc) could be a good way to measure customer happiness. This is true in the sense that customers care about your product if they are engaging in a support conversation. But this rarely means they are happy. Think of the number of times you sent a support ticket to Facebook or Google. Or when a bug in the product requires a lot of customer interaction. Neither case is an indicator of your happiness or unhappiness with the product.

That's why I don't think using customer support interactions on their own, without any manually entered metadata, is going to be useful to the overall score.

Cloud 66 API v3 is here

Our new API is now released and ready to build awesome apps against!

The new API has pagination, asynchronous calls and much more.

The best part? Personal Access Tokens.

Personal Access Tokens

Our API, like many others, is RESTful and uses OAuth for authentication. We use OAuth 2.0 instead of 1.0, which is much easier for clients to use, but it can still be very painful, especially when used with command-line tools like cURL.

That's why we introduced Personal Access Tokens.

A Personal Access Token is a simple way to issue a revokable, secure and scoped token for your API clients. In a normal OAuth 2 scenario, you need to create a new app and do the whole OAuth 2 dance to get a token you can use as an Authorization Bearer header with your HTTP requests. With a Personal Access Token, all you need to do is issue a new token, select the "scopes" you would like it to have access to, and use it as the authorization bearer.

Create a new token

Here is how:

  • First, head to "Authorized Applications" under "Account" and click on Create a new Personal Access Token.
  • Give your Personal Access Token a memorable name and select the "scopes" you would like it to have. For more information on scopes, see the API v3 documentation.

New Personal Access Token

  • You can see the token by clicking on the name after it's created.

Use your token

Let's say you would like a cURL command that lists all of your stacks. This action requires the "public" scope. Once you have a Personal Access Token with the public scope, you can simply use cURL to get the list of your stacks:

$ curl https://app.cloud66.com/api/3/stacks.json -H "Authorization: Bearer 2ea2032cf6264d50aedb832fbba114788b687eb496ad4ae58226e5adc2d07561"

Replace the long token in this example with your own Personal Access Token.

Revoke your tokens

You shouldn't use Personal Access Tokens for applications written to work with Cloud 66. Personal Access Tokens are purely there for personal use and should be treated like passwords.

You can revoke the Personal Access Tokens by deleting them from your account.

Customer spotlight: Zibbet

Zibbet image

As a team of developers, we're always intrigued to learn about how people use our service. We try our best to reach out to our customers and see how best we can help make their lives easier. We recently got in touch with the great guys at Zibbet to learn more about how they use Cloud 66.

For those who don't know, Zibbet is a marketplace for awesome handmade and vintage products. They take a firm stance against resellers on the site, which sets them apart from a marketplace like Etsy. When Etsy changed their policy to allow resellers last year, many merchants jumped ship to Zibbet - so many, in fact, that Zibbet's servers struggled to stay up! They previously hosted on Heroku, and recently migrated over to Cloud 66. We spoke with Pavel Kotlyar, the person in charge of Zibbet tech, to find out more.

How did you originally host Zibbet?

Heroku, 5PX (5 passengers per dyno) for web, 1PX for worker + http://hirefire.io to auto scale in case of unexpected traffic + Heroku Postgres Baku.

What was your reasoning for moving to Cloud 66?

Our application was a brand new rebuild of our old website, but unfortunately we did not have enough time before the release to optimise it well enough in terms of memory consumption. We also had quite a lot of data to manage, so 6GB RAM on Heroku with 5 processes per dyno was not good for us and we often got errors on Heroku related to memory overflow during peak traffic. We can’t afford to have 1 process per dyno and pay for 25 PXes… So the main reason: we needed more powerful servers with more RAM, but at the same time we needed the same simplicity of server operation, setup, scale and management. We don’t have a DevOps or server admin in the team and our developers really like to focus on application development rather than on server management and maintenance.

How did the move affect your monthly hosting bill?

It was a pleasant surprise - we’ve got much more powerful servers with DigitalOcean and Cloud 66, and our monthly cost is literally 50% cheaper than it was on Heroku with Heroku Postgres.

How did you go about migrating to Cloud 66?

We had some small issues related to Procfile settings - for example, Redis should not be there because it runs automatically with Sidekiq, and Passenger is already installed in the stack. But overall it was very easy, and the support was very helpful, as was the documentation. The migration of the database, thanks to the Cloud 66 documentation, was smooth and easy as well; we migrated a 7GB database from Heroku and imported it to our Postgres server in 15 minutes following the steps from the docs. So the downtime was about 20 minutes, including DNS changes.

Did you experience any issues with the move?

We had very few issues; as mentioned, it was the Procfile settings and some issues related to our CI deploy configuration settings. I was able to run our Heroku configuration very quickly.

Is there anything else you'd like to add?

We are happy customers, so far so good. We don’t even need support anymore, since everything is working just fine. A few days ago we decided to rebuild our search with Elasticsearch, and with Cloud 66 we can set up an Elasticsearch server with one click - this is awesome!


We're really happy to be working with the great people at Zibbet, and always love to hear from our customers. If you're working on something cool and would like us to share it, get in touch with us!

Announcing OS level security monitoring

The recent security vulnerabilities found in Bash (Shellshock) have again caused sleepless nights for many developers and sysadmins. We had similar issues last time, when Heartbleed was found to be affecting many servers.

Reacting to a situation like this usually consists of three steps:

  1. Checking to see if you are affected
  2. Finding a way to fix the issue
  3. Rolling out the fix with minimum disruption

Am I affected?

This is the first step. It usually involves searching the net, reading forums and threads to find the most reliable way to check if a server or application is vulnerable to a specific threat.

How to fix it?

Once it is known that a server is vulnerable to a security issue, the next step is to find a fix. This usually involves finding the right patch for the OS, the fixed version of a component or gem.

How to roll it out?

Now that we know how to fix the issue, we need to find out how we can roll it out as quickly as possible with as little disruption to our customers.

How can Cloud 66 help?

Everyone who deploys a Rails, Sinatra or Padrino stack on Cloud 66 benefits from automatic OS updates, as well as the option to roll out fixes for more severe issues during deploy with the "Apply Security Upgrades" option on the deploy menu.

Our StackScore also keeps monitoring your overall infrastructure setup, as well as some parts of the app, for known security issues.

OS level security monitoring

We are pleased to announce that from today we are also regularly checking all Cloud 66 deployed servers for known security issues at the OS level (like Shellshock) and will reflect the results in your Security StackScore. You will get an email when there is an issue, and another when automatic security upgrades fix the problem. So no more worrying about a possible vulnerability, or being unsure whether a server was left behind.

OS level security monitoring is available to all Cloud 66 customers from today for free!

Stay secure and keep rocking!

Building bullet proof applications on public cloud

Remember Hurricane Sandy?

We all remember when Hurricane Sandy took AWS and many other US East Coast data centres offline. Those were really painful and expensive days for many of us. Luckily, disasters like Sandy happen rarely. However, we have all seen how upgrades, hardware faults or network issues can take our sites and mobile backends offline.

Making Web Apps bullet proof

Imagine you had your site hosted on AWS US East when it was taken offline. Having a standby copy of it on DigitalOcean US West Coast could have really helped you.

Let's see how we can make that happen. There are three parts to achieving this goal:

  • Code
  • Data
  • Traffic

Code

To build a standby stack, you need to deploy your code to a new set of servers. This can be tricky, since your deploy scripts can be tied to a single cloud provider or a fixed set of IPs. Most probably you also have deploy scripts that deploy your code to existing servers, but don't build new servers for you.

Being able to deploy your code to a new set of servers is the most critical part of recovering from downtime. The problem is, in most cases you don't know if the whole thing works end-to-end until it is too late.

The key to this step is to build immutable infrastructure.

Using tools like Chef can help by allowing you to automate a full stack build and deployment. You need to make sure the scripts can run on multiple cloud providers and are kept up to date as your application evolves.
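For example, with the knife-ec2 plugin, building and bootstrapping a brand new server is a single command (the AMI ID, flavor and run list here are placeholders):

$ knife ec2 server create -I ami-12345678 --flavor m3.medium -r 'role[web]'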

Cloud 66 can also help you with this part as it runs on all major cloud providers and doesn't need any update to your scripts. It only uses your code to determine what needs to be deployed. Clone your stack and you're there!

Data

Data is always tricky to move. Big databases can take hours to move, and you can't afford to be down for hours. The best strategy here is to set up DB replication to keep your data warm in your standby stack. Setting up and monitoring DB replication can also be tricky: how do you know your DB replication is working and your replica is not out of sync when you need it?

There are guides around on how to set up replication for MySQL, PostgreSQL and MongoDB, as well as Redis.
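If you manage replication yourself, make sure you also monitor it. On MySQL, for example, a manual spot check looks like this, with Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master being the fields to watch:

$ mysql -e "SHOW SLAVE STATUS\G"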

Alternatively, you can use Cloud 66 database replication across two stacks. With cross-stack DB replication, Cloud 66 replicates your MySQL, PostgreSQL, MongoDB or Redis databases with a couple of clicks. But we don't stop there: we also monitor your replications and let you know if the replica is out of sync with the master before it's too late.

Traffic

Now that you have a working stack with production data standing by, all you need is to switch your traffic to the new stack.

The best practices here are:

  • Always have a load balancer so you don't need to change your DNS in most cases.
  • 24 hours before switching over your traffic, reduce your DNS record TTL to 5 minutes (you can verify the change with dig, as shown below).
  • On the day of the move, switch the DNS record and increase the TTL back to 24 hours.
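You can verify that the lower TTL has propagated with dig (the domain here is a placeholder); the second column of each answer line is the remaining TTL in seconds:

$ dig +noall +answer www.myawesomeapp.com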

The issue with this approach is that when there is an emergency you don't have 24 hours.

This is where Cloud 66 ElasticAddress can help. ElasticAddress is a fast-response DNS server with 100% uptime that quickly diverts your traffic from one stack to another. All you need to do is point your domain to your ElasticAddress. ElasticAddress automatically tracks your web servers, so if you add a load balancer it will update without your intervention.

Summary

Creating high availability applications takes a lot of engineering and even more planning. Not only that, your operations always have to reflect the changes on the development side of things to keep high availability an objective.

Ensuring that your automation scripts are always up to date and work with multiple cloud providers, that your data is always replicated, and that your traffic can be quickly and easily switched over is the best place to start when planning for emergencies and achieving high availability.

Cross Origin Resource Sharing (CORS) Blocked for Cloudfront in Rails

Using a CDN like AWS Cloudfront helps speed up delivery of static assets to your visitors and reduce the load on your servers. Setting up Cloudfront for your Rails apps is very simple, thanks to gems like asset_sync that work nicely with Rails’ asset pipeline compilation process and S3.

One issue, however, can sometimes be tricky to solve: CORS blocking.

What is happening?

If your assets are served by a CDN like Cloudfront, they can come from a domain like sdf73n7ssa.cloudfront.net while your app is served from www.myawesomeapp.com. This triggers CORS blocking in browsers, which exists to stop malicious websites from fetching nasty resources while you browse a seemingly nice website.

The most common form of this issue is with fonts, when you get something like this:

Font from origin 'https://sdf73n7ssa.cloudfront.net' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://www.myawesomeapp.com' is therefore not allowed access.  

How to fix it?

When it comes to AWS Cloudfront, the most commonly suggested method is to allow CORS origins on the Cloudfront side. This method involves writing XML configuration code for AWS and uploading it on the S3 side.
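That configuration looks something like this - a minimal sketch, with the origin to be replaced by your own domain:

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.myawesomeapp.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>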

There is a better and easier way!

I personally prefer using my DNS subdomains to solve this problem. If my CDN is behind cdn.myawesomeapp.com instead of sdf73n7ssa.cloudfront.net, then browsers are not going to freak out and block the assets as cross-domain security problems.

To point your subdomain at your AWS Cloudfront domain, go to the AWS Cloudfront control panel, select your Cloudfront distribution and enter your CDN subdomain into the Alternate Domain Names (CNAMEs) field. Something like cdn.myawesomeapp.com will do.

Now you can go to your DNS provider (like AWS Route 53) and create a CNAME for cdn.myawesomeapp.com pointing to sdf73n7ssa.cloudfront.net.

You can test your new CDN domain by making sure assets served from it are coming from Cloudfront:

curl -I http://cdn.myawesomeapp.com/assets/image.png  

should return something like this

HTTP/1.1 200 OK  
Content-Type: image/png  
Content-Length: 10414  
Connection: keep-alive  
Date: Mon, 22 Sep 2014 10:06:41 GMT  
Last-Modified: Sun, 06 Jan 2013 16:37:19 GMT  
ETag: "1c4bef3752c306b9c14a05b4a19d7d79"  
Accept-Ranges: bytes  
Server: AmazonS3  
Age: 1599  
X-Cache: Hit from cloudfront  
Via: 1.1 a3c44e1caa58818cd22903047dc0faf4.cloudfront.net (CloudFront)  
X-Amz-Cf-Id: sEbH-vV6deQra_YQa144RxtwhuJaWSrq-tpdiFxWdUbDbR2DnhoIrQ==  

But what about SSL?

This method works for non-SSL traffic, but in most cases we use a scheme-less (protocol-relative) asset URL like //cdn.myawesomeapp.com for our resources, so both http://cdn.myawesomeapp.com and https://cdn.myawesomeapp.com need to work. Using a custom domain for your Cloudfront CDN will break the SSL (https) version. You can use a dedicated SSL-enabled CDN from Cloudfront, but that’s usually very expensive.

Luckily, AWS supports SSL Server Name Indication (SNI).

To use it you need to upload your SSL certificate to AWS first. Unfortunately there is no UI for this on the AWS side yet, so you will need to install the AWS command line tool first (that’s easy). Once you have your AWS command line tool installed and configured with the AWS keys, you can upload your SSL certificate:

aws iam upload-server-certificate --server-certificate-name my_certificate_name --certificate-body file://my_cert.crt --private-key file://my_key.key --certificate-chain file://intermediate_cert.pem --path /cloudfront/my_cert_name_here/  

NOTE: Note the file:// prefixes and the trailing / in the --path value.

Once the SSL certificate is uploaded, you can head back to your CDN distribution on AWS Cloudfront and select “Custom SSL Certificate (stored in AWS IAM)” and “Only Clients that Support Server Name Indication (SNI)” options.

Now you should be able to see your assets served from the CDN over both HTTP and HTTPS. Test it with cURL:

curl -I https://cdn.myawesomeapp.com/assets/image.png  

You can now safely change config.action_controller.asset_host in your production.rb to //cdn.myawesomeapp.com.
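For example:

# config/environments/production.rb
config.action_controller.asset_host = '//cdn.myawesomeapp.com'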

Sign up attribution for a Rails app

“I know I waste half of my advertising dollars...I just wish I knew which half.” said Henry Procter.

Knowing where your site's signups come from is a good way to know whether you are spending your advertising dollars on the right target. But how would you do that?

Google Analytics is the most common tool used for that. The problem is that Google Analytics can become very complicated very quickly. More importantly, linking a campaign to a single signup is something that should happen in the app during the signup process. This will mark each individual user with the campaign they came from.

Mark your campaigns

UTM parameters in URLs were created to link campaigns to visits. They originated with Urchin (the product that became Google Analytics), but they are not limited to Google Analytics and are supported by almost any visitor tracking utility out there.

To start with, make sure all the links coming to your website use UTM parameters. You can use the Google Analytics URL Builder, or simply add utm_source and utm_campaign to the URLs pointing at your site. The result will look something like this:

http://www.mysite.com?utm_source=test&utm_campaign=promo

Capture the UTM into a cookie

Not everyone who visits the site is going to sign up immediately, but they might do so later, so we need a way to capture that.

To achieve that, we are going to capture the UTM parameters into a cookie that expires in a month. That way, if the visitor signs up within a month, we know where they came from.

In Rails, we can do that with a filter:

before_filter :capture_utm

private

def capture_utm
    unless cookies[:utm]
      # keep the attribution cookie around for 30 days
      cookies[:utm] = { :value => utm.to_json, :expires => 30.days.from_now }
    end
end

def utm  
    {
        :utm_source => params[:utm_source],
        :utm_campaign => params[:utm_campaign],
        :utm_medium => params[:utm_medium],
        :utm_term => params[:utm_term],
        :utm_content => params[:utm_content]
    }
end  

Read the cookie before signup

Linking the UTM parameters with the new user is easy. If you are using Devise to manage your users, you can customise the signup controller to capture the value:

class CustomRegistrationsController < Devise::RegistrationsController

    def create
        super

        begin
            utm = cookies[:utm]
            if utm
                resource.utm_params = utm
                resource.save
            end
        rescue => exc
            Rails.logger.error "Error reading utm cookie due to #{exc}"
        end
    end
end

Make sure you add this line to your routes.rb:

devise_for :users, :controllers => { :registrations => "custom_registrations" }

utm_params is a text field we added to the User class in a migration. It stores the JSON-serialised version of the UTM parameters in the database next to each new signup.
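A minimal migration for that looks like this (assuming your model is User):

class AddUtmParamsToUsers < ActiveRecord::Migration
    def change
        add_column :users, :utm_params, :text
    end
end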

You might want to avoid adding the values if they are empty (like when the signup came from no campaign), but I prefer having them there all the time to simplify reading and deserialising them during analysis.

What if you have multiple subdomains?

If you are like us, you don't have just one website. We have a blog (this one), a help website and our community website. Our main website (www.cloud66.com) is also a separate app.

All of our apps are hosted on cloud66.com subdomains: blog.cloud66.com, help.cloud66.com and community.cloud66.com

This means we can use cookies that work across all of our websites. In Rails, you can do this by adding a domain to the cookie options.

This will change the capture_utm method above to this one:

def capture_utm
    unless cookies[:utm]
      # the leading dot makes the cookie available on all subdomains
      cookies[:utm] = { :value => utm.to_json, :expires => 30.days.from_now, :domain => ".mydomain.com" }
    end
end

Note the . before the domain name. Using this method, the cookie will be sent to the main app (where the signup happens) regardless of where the first visit was.

How about CDN?

We also use AWS Cloudfront as a CDN for our main website. This speeds up requests and reduces the load on our servers. We use custom origin servers for our Cloudfront distribution, but the default settings are not going to work with the solution above.

This is because the Cloudfront CDN by default ignores query strings and doesn't forward cookies to the origin server. Solving this is simple!

Log in to your AWS Cloudfront dashboard and select your Cloudfront distribution. Under Behaviors, select Edit and change these two settings:

Forward Query Strings to yes and Forward Cookies to all

Done!

Ubuntu 14.04, VPC support and T2 instance types

Today I am pleased to announce Cloud 66 support for AWS Virtual Private Cloud (VPC). VPC allows AWS customers to build private networks complete with private IP ranges, routing tables, internet gateways and multi-location VPC connections.

AWS has been encouraging VPC use among their customers by adding support for new instance types only on VPC (like the developer-friendly T2 instances) and slowly phasing out EC2 Classic (the non-VPC version) by building a Default VPC for all customers automatically.

T2 instance types are particularly important as they have a very development-friendly price structure: since most development stacks are not in constant use, T2 instances can be an economical choice. They accrue CPU “credits” during idle times, which you can then spend in bursts of usage. These instance types are only available in a VPC.

To support T2 instances we had to support VPC and upgrade our guaranteed supported OS version from Ubuntu 12.04 to 14.04. I am pleased to announce that all of these are now available to all of our customers:

  • All new instances fired up with Cloud 66 will be configured with Ubuntu 14.04.
  • You have a choice of VPC or Classic when using AWS stacks.
  • T2 instances are supported in all EC2 regions and availability zones.

We hope you like these new features. As always we would love to hear your feedback and thoughts.