Customer spotlight: Zibbet

Zibbet image

As a team of developers, we're always intrigued to learn how people use our service. We try our best to reach out to our customers and see how we can best make their lives easier. We recently got in touch with the great people at Zibbet to learn more about how they use Cloud 66.

For those who don't know, Zibbet is a marketplace for awesome handmade and vintage products. They take a firm stance against resellers on the site, which sets them apart from a marketplace like Etsy. When Etsy changed its policy to allow resellers last year, many merchants jumped ship to Zibbet - so many, in fact, that Zibbet's servers struggled to stay up! They previously hosted on Heroku and recently migrated over to Cloud 66. We spoke with Pavel Kotlyar, the person in charge of Zibbet's tech, to find out more.

How did you originally host Zibbet?

Heroku: 5 PX dynos (5 Passenger processes per dyno) for web and 1 PX for workers, plus http://hirefire.io to auto-scale in case of unexpected traffic, and Heroku Postgres Baku.

What was your reasoning for moving to Cloud 66?

Our application was a brand new rebuild of our old website, but unfortunately we did not have enough time before the release to optimise its memory consumption well enough. We also had quite a lot of data to manage, so 6GB of RAM on Heroku with 5 processes per dyno was not good for us, and we often got memory-related errors on Heroku during peak traffic. We couldn't afford to run 1 process per dyno and pay for 25 PXes… So the main reason was that we needed more powerful servers with more RAM, but at the same time we needed the same simplicity of server operation, setup, scaling and management. We don't have a DevOps engineer or server admin on the team, and our developers really like to focus on application development rather than on server management and maintenance.

How did the move affect your monthly hosting bill?

It was a pleasant surprise - we got much more powerful servers with DigitalOcean and Cloud 66, and our monthly cost is literally 50% lower than it was on Heroku with Heroku Postgres.

How did you go about migrating to Cloud 66?

We had some small issues related to Procfile settings - for example, Redis should not be there, because it runs automatically with Sidekiq, and Passenger is already installed in the stack. But overall it was very easy, and the support was very helpful, as was the documentation. Thanks to the Cloud 66 documentation, migrating the database was smooth and easy as well: we exported a 7GB database from Heroku and imported it into our Postgres server in 15 minutes following the steps from the docs. So the downtime was about 20 minutes, including DNS changes.

Did you experience any issues with the move?

We had very few issues - as mentioned, they were around Procfile settings and some of our CI deploy configuration settings. I was able to get our Heroku configuration running very quickly.

Is there anything else you'd like to add?

We are happy customers - so far so good. We don't even need support anymore, since everything is working just fine. A few days ago we decided to rebuild our search with Elasticsearch, and with Cloud 66 we can set up an Elasticsearch server with one click. This is awesome!


We're really happy to be working with the great people at Zibbet, and always love to hear from our customers. If you're working on something cool and would like us to share it, get in touch with us!

Announcing OS-level security monitoring

The recent security vulnerabilities found in Bash (Shellshock) have again caused sleepless nights for many developers and sysadmins. We had similar issues when Heartbleed was found to affect many servers.

Reacting to a situation like this usually consists of three steps:

  1. Checking to see if you are affected
  2. Finding a way to fix the issue
  3. Rolling out the fix with minimum disruption

Am I affected?

This is the first step. It usually involves searching the net and reading forums and threads to find the most reliable way to check whether a server or application is vulnerable to a specific threat.
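For Shellshock, for example, the widely shared test was a Bash one-liner you could run on each server:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If Bash is vulnerable, the word vulnerable is printed before the test message; a patched Bash prints only the test message (possibly with a warning about the ignored function definition).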

How to fix it?

Once it is known that a server is vulnerable to a security issue, the next step is to find a fix. This usually involves finding the right patch for the OS, or the fixed version of a component or gem.
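On Ubuntu, for instance, the Shellshock fix came down to upgrading a single package once the patched version reached the repositories:

sudo apt-get update && sudo apt-get install --only-upgrade bash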

How to roll it out?

Now that we know how to fix the issue, we need to find a way to roll it out as quickly as possible, with as little disruption to our customers as possible.

How can Cloud 66 help?

Everyone who deploys a Rails, Sinatra or Padrino stack on Cloud 66 benefits from automatic OS updates, as well as the option to roll out fixes for more severe issues during deployment with the "Apply Security Upgrades" option on the deploy menu.

Our StackScore also keeps monitoring your overall infrastructure setup as well as some parts of the app for known security issues.

OS-level security monitoring

We are pleased to announce that, from today, we are also regularly checking all Cloud 66 deployed servers for known security issues at the OS level (like Shellshock) and will reflect the results in your Security StackScore. You will get an email when there is an issue, and another when automatic security upgrades fix the problem. So no more worrying about possible vulnerabilities, or being unsure whether a server has been left behind.

OS-level security monitoring is available to all Cloud 66 customers from today, for free!

Stay secure and keep rocking!

Building bulletproof applications on the public cloud

Remember Hurricane Sandy?

We all remember when Hurricane Sandy took AWS and many other US East Coast data centres offline. Those were really painful and expensive days for many of us. Luckily, disasters like Sandy are rare. However, we have all seen how upgrades, hardware faults or network issues can take our sites and mobile backends offline.

Making web apps bulletproof

Imagine you had your site hosted on AWS US East when it was taken offline. Having a standby copy of it on DigitalOcean on the US West Coast could have really helped you.

Let's see how we can make that happen. There are three parts to achieving this goal:

  • Code
  • Data
  • Traffic

Code

To build a standby stack you need to deploy your code to a new set of servers. This can be tricky, since your deploy scripts may be tied to a single cloud provider or a fixed set of IPs. Most probably you also have deploy scripts that deploy your code to existing servers, but don't build new servers for you.

Being able to deploy your code to a new set of servers is the most critical part of recovering from downtime. The problem is, in most cases you don't know if the whole thing works end-to-end until it is too late.

The key to this step is to build immutable infrastructure.

Using tools like Chef can help by allowing you to automate a full stack build and deployment. You need to make sure the scripts can run on multiple cloud providers and are kept up to date as your application evolves.
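As a minimal sketch of the idea, a Chef recipe declares the desired state of a server rather than a series of manual steps, which is what makes rebuilding from scratch repeatable (a real stack recipe would also cover your app, databases and configuration):

# Illustrative Chef recipe: make sure nginx is installed and running
package "nginx"

service "nginx" do
  action [:enable, :start]
end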

Cloud 66 can also help you with this part, as it runs on all major cloud providers and doesn't need any updates to your scripts. It only uses your code to determine what needs to be deployed. Clone your stack and you're there!

Data

Data is always tricky to move. Big databases can take hours to move, and you can't afford to be down for hours. The best strategy here is to set up DB replication to keep your data warm in your standby stack. Setting up and monitoring DB replication can also be tricky: how do you know your replication is working, and that your replica is not out of sync when you need it?

There are guides around on how to set up replication for MySQL, PostgreSQL and MongoDB, as well as Redis.
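As a sketch of what the monitoring side can look like for PostgreSQL (assuming a streaming replica and the pg gem; the connection details here are hypothetical), you can periodically ask the standby how far behind it is:

require "pg"

# Connect to the standby (replica) database server
replica = PG.connect(:host => "replica.example.com",
                     :dbname => "myapp_production",
                     :user => "monitor",
                     :password => ENV["REPLICA_DB_PASSWORD"])

# On a PostgreSQL standby this approximates the replication delay in seconds
# (time since the last replayed transaction was committed on the master)
row = replica.exec("SELECT EXTRACT(EPOCH FROM now() - pg_last_xact_replay_timestamp()) AS lag").first

lag = row["lag"].to_f
puts "Replica lag: #{lag.round(1)}s"
warn "Replica is falling behind the master!" if lag > 300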

Alternatively, you can use Cloud 66 database replication across two stacks. With cross-stack DB replication, Cloud 66 replicates your MySQL, PostgreSQL, MongoDB or Redis databases with a couple of clicks. But we don't stop there: we also monitor your replications and let you know if the replica is out of sync with the master, before it's too late.

Traffic

Now that you have a working stack with production data standing by, all you need is to switch your traffic to the new stack.

The best practices here are:

  • Always have a load balancer so you don't need to change your DNS in most cases.
  • 24 hours before switching over your traffic, reduce your DNS record TTL to 5 minutes.
  • On the day of the move, switch the DNS record and increase the TTL back to 24 hours.
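In zone-file terms (with hypothetical names), those last two steps look something like this:

; 24 hours ahead: still pointing at the old stack, but with a 5 minute TTL
www   300     IN  CNAME  lb.old-stack.example.com.

; On the day: switch the target and raise the TTL back to 24 hours
www   86400   IN  CNAME  lb.new-stack.example.com.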

The issue with this approach is that when there is an emergency you don't have 24 hours.

This is where Cloud 66 ElasticAddress can help. ElasticAddress is a fast-response DNS server with 100% uptime that quickly diverts your traffic from one stack to another. All you need to do is point your domain to your ElasticAddress. ElasticAddress automatically tracks your web servers, so if you add a load balancer it will update without your intervention.

Summary

Creating highly available applications takes a lot of engineering and even more planning. Not only that, your operations have to keep reflecting changes on the development side of things for high availability to remain achievable.

Ensuring your automation scripts are always up to date and work with multiple cloud providers, your data is always replicated, and your traffic can be quickly and easily switched over is the best place to start when planning for emergencies and achieving high availability.

Cross-Origin Resource Sharing (CORS) Blocked for CloudFront in Rails

Using a CDN like AWS CloudFront helps speed up the delivery of static assets to your visitors and reduces the load on your servers. Setting up CloudFront for your Rails apps is very simple, thanks to gems like asset_sync that work nicely with Rails' asset pipeline compilation process and S3.

One issue, however, can sometimes be tricky to solve: CORS blocks.

What is happening?

If your assets are served by a CDN like CloudFront, they can be served from a domain like sdf73n7ssa.cloudfront.net while your app is served from www.myawesomeapp.com. This triggers CORS blocking in browsers, which exists to stop malicious websites from fetching nasty resources while you browse a seemingly nice website.

The most common form of this issue is with fonts, when you get something like this:

Font from origin 'https://sdf73n7ssa.cloudfront.net' has been blocked from loading by Cross-Origin Resource Sharing policy: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://www.myawesomeapp.com' is therefore not allowed access.  

How to fix it?

When it comes to AWS CloudFront, the most commonly suggested method is to allow CORS origins on the CloudFront side. This method involves writing an XML configuration document for AWS and uploading it on the S3 side.
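For reference, that configuration is an XML document attached to the S3 bucket, along these lines (with the origin adjusted to your own domain):

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>https://www.myawesomeapp.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>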

There is a better and easier way!

I personally prefer using my own DNS subdomains to solve this problem. If my CDN is behind cdn.myawesomeapp.com instead of sdf73n7ssa.cloudfront.net, browsers are not going to freak out and block the assets as cross-domain security problems.

To point your subdomain to your AWS CloudFront domain, go to the AWS CloudFront control panel, select your CloudFront distribution and enter your CDN subdomain into the Alternate Domain Names (CNAMEs) field. Something like cdn.myawesomeapp.com will do.

Now you can go to your DNS provider (like AWS Route 53) and create a CNAME for cdn.myawesomeapp.com pointing to sdf73n7ssa.cloudfront.net.

You can test your new CDN domain by making sure assets served from it are coming from CloudFront.

curl -I http://cdn.myawesomeapp.com/assets/image.png  

should return something like this

HTTP/1.1 200 OK  
Content-Type: image/png  
Content-Length: 10414  
Connection: keep-alive  
Date: Mon, 22 Sep 2014 10:06:41 GMT  
Last-Modified: Sun, 06 Jan 2013 16:37:19 GMT  
ETag: "1c4bef3752c306b9c14a05b4a19d7d79"  
Accept-Ranges: bytes  
Server: AmazonS3  
Age: 1599  
X-Cache: Hit from cloudfront  
Via: 1.1 a3c44e1caa58818cd22903047dc0faf4.cloudfront.net (CloudFront)  
X-Amz-Cf-Id: sEbH-vV6deQra_YQa144RxtwhuJaWSrq-tpdiFxWdUbDbR2DnhoIrQ==  

But what about SSL?

This method works for non-SSL traffic, but in most cases we use a scheme-less (protocol-relative) asset URL like //cdn.myawesomeapp.com for our resources, so that both http://cdn.myawesomeapp.com and https://cdn.myawesomeapp.com work. Using a custom domain for your CloudFront CDN will break the SSL (https) version. You can use a dedicated-IP SSL-enabled CDN from CloudFront, but that's usually very expensive.

Luckily, AWS supports SSL Server Name Indication (SNI).

To use it, you need to upload your SSL certificate to AWS first. Unfortunately there is no UI for this on the AWS side yet, so you will need to install the AWS command line tool first (that's easy). Once you have the AWS command line tool installed and configured with your AWS keys, you can upload your SSL certificate:

aws iam upload-server-certificate --server-certificate-name my_certificate_name --certificate-body file://my_cert.crt --private-key file://my_key.key --certificate-chain file://intermediate_cert.pem --path /cloudfront/my_cert_name_here/  

NOTE: Note the file:// prefixes and the trailing / in the --path value.

Once the SSL certificate is uploaded, you can head back to your CDN distribution on AWS CloudFront and select the "Custom SSL Certificate (stored in AWS IAM)" and "Only Clients that Support Server Name Indication (SNI)" options.

Now you should be able to see your assets served from the CDN over both HTTP and HTTPS. Test it with curl:

curl -I https://cdn.myawesomeapp.com/assets/image.png  

You can now safely change config.action_controller.asset_host in your production.rb to //cdn.myawesomeapp.com.
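In other words, something like this:

# config/environments/production.rb
Rails.application.configure do
  # Protocol-relative URL, so assets load over both http and https
  config.action_controller.asset_host = "//cdn.myawesomeapp.com"
end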

Sign up attribution for a Rails app

“I know I waste half of my advertising dollars... I just wish I knew which half,” as Henry Procter is said to have remarked.

Knowing where your site's signups come from is a good way to tell whether you are spending your advertising dollars on the right audience. But how would you do that?

Google Analytics is the most common tool for this. The problem is that Google Analytics can become very complicated very quickly. More importantly, linking a campaign to a single signup is something that should happen in the app, during the signup process. This marks each individual user with the campaign they came from.

Mark your campaigns

UTM parameters in URLs, popularised by Google Analytics, were created to link campaigns to visits. They are not limited to Google Analytics, and are supported by almost every visitor tracking utility out there.

To start, make sure all the links coming to your website use UTM parameters. You can use the Google Analytics URL Builder, or simply add utm_source and utm_campaign to the URLs pointing at your site. The result will look something like this:

http://www.mysite.com?utm_source=test&utm_campaign=promo

Capture the UTM into a cookie

Not everyone who visits the site is going to sign up immediately; some will do so later, so we need a way to capture that.

To achieve this, we capture the UTM parameters into a cookie that expires in a month. That way, if the visitor signs up any time within a month, we know where they came from.

In Rails, we can do that with a before filter:

before_filter :capture_utm

private

def capture_utm
    unless cookies[:utm]
      # Keep the cookie for a month, so a later signup can still be attributed
      cookies[:utm] = { :value => utm.to_json, :expires => 1.month.from_now }
    end
end

def utm  
    {
        :utm_source => params[:utm_source],
        :utm_campaign => params[:utm_campaign],
        :utm_medium => params[:utm_medium],
        :utm_term => params[:utm_term],
        :utm_content => params[:utm_content]
    }
end  

Read the cookie before signup

Linking the UTM parameters to the new user is easy. If you are using Devise to manage your users, you can customise the registrations controller to capture the value:

class CustomRegistrationsController < Devise::RegistrationsController

    def create
        super

        # Attach the captured UTM parameters to the newly created user
        begin
            utm = cookies[:utm]
            if utm
                resource.utm_params = utm
                resource.save
            end
        rescue => exc
            Rails.logger.error "Error reading utm cookie due to #{exc}"
        end
    end
end

Make sure you add this line to your routes.rb:

devise_for :users, :controllers => { :registrations => "custom_registrations" }

utm_params is a text field we added to the User class in a migration. It stores the JSON-serialised version of the UTM parameters in the database next to each new signup.
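The migration itself is a one-liner, something like:

class AddUtmParamsToUsers < ActiveRecord::Migration
  def change
    add_column :users, :utm_params, :text
  end
end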

You might want to avoid adding the values when they are empty (like when the signup came from no campaign), but I prefer having them there all the time to simplify reading and deserialising them during analysis.

What if you have multiple subdomains?

If you are like us, you don't have just one website. We have a blog (this one), a help website and a community website. Our main website (www.cloud66.com) is also a separate app.

All of our apps are hosted on cloud66.com subdomains: blog.cloud66.com, help.cloud66.com and community.cloud66.com.

This means we can use cookies that work across all of our websites. In Rails, you can do this by adding a domain to the cookie.

This will change the capture_utm method above to this one:

def capture_utm
    unless cookies[:utm]
      cookies[:utm] = { :value => utm.to_json, :expires => 1.month.from_now, :domain => ".mydomain.com" }
    end
end

Note the . before the domain name. With this in place, the cookie will be sent to the main app (where the signup happens) regardless of which site the first visit landed on.

What about the CDN?

We also use AWS CloudFront as a CDN for our main website. This speeds up requests and reduces the load on our servers. We use custom origin servers for our CloudFront distribution, but the default settings are not going to work with the solution above.

This is because, by default, the CloudFront CDN ignores query strings and doesn't forward cookies to the origin server. Solving this is simple!

Log in to your AWS CloudFront dashboard and select your CloudFront distribution. Under Behaviors, select Edit and change these two settings:

Set Forward Query Strings to Yes and Forward Cookies to All.

Done!

Ubuntu 14.04, VPC support and T2 instance types

Today I am pleased to announce Cloud 66 support for AWS Virtual Private Cloud (VPC). VPC allows AWS customers to build private networks, complete with private IP ranges, routing tables, internet gateways and multi-location VPC connections.

AWS has been encouraging VPC use among its customers by supporting new instance types only on VPC (like the developer-friendly T2 instances) and slowly phasing out EC2-Classic (the non-VPC version) by automatically building a default VPC for all customers.

T2 instance types are particularly important, as they have a very development-friendly price structure: since most development stacks are not constantly in use, T2 instances can be an economical choice. They accrue CPU "credits" during idle times, which you can then spend during bursts of usage. These instance types are only available in a VPC.

To support T2 instances we had to support VPC, and we also upgraded our guaranteed supported OS version from Ubuntu 12.04 to 14.04. I am pleased to announce that all of these are now available to all of our customers:

  • All new instances fired up with Cloud 66 will be configured with Ubuntu 14.04.
  • You have a choice of VPC or Classic when using AWS stacks.
  • T2 instances are supported in all EC2 regions and availability zones.

We hope you like these new features. As always we would love to hear your feedback and thoughts.

Say hello to your new dashboard!

Today we are very happy to announce the launch of our new look!

The Cloud 66 user interface has had a facelift to make it more intuitive to use, and most importantly, the dashboard now has a totally new look and feel.

Cloud 66 Dashboard

The new features include quicker organisation switching, stack cards on the dashboard, and a super-fast search filter, as well as links to the web head of each stack.

We have also moved the context menu items out and put them on the right-hand side of the page, to make it easier to navigate to your favourite features.

Cloud 66 Navigation

We hope you like the new UI and dashboard improvements. Let us know what you think and how we can make it even more awesome!

Introducing ElasticAddress

Today we are excited to tell you about ElasticAddress, the latest tool in your operations toolbox to help you build bulletproof, highly available, immutable infrastructure.

What is ElasticAddress?

ElasticAddress is a quick-response, highly available, automatic network traffic switch from Cloud 66.

It makes it very easy to switch traffic from one stack to another without the need to change your DNS records.

How does it work?

ElasticAddress is like a clever DNS record that always points to the web endpoint of your stack. If you have one server, it points to that server; if you add a load balancer, it automatically points to the load balancer.

It is also like a DNS record with two destinations: a primary address and a standby address. You can use two stacks as the destinations for an ElasticAddress and switch traffic between them within 5 minutes.

How do I use ElasticAddress?

To use ElasticAddress, log in to your Cloud 66 account and click on ElasticAddress under the Account menu.

There, click on the 'Add an ElasticAddress' button.

Now you can select a primary and, optionally, a backup (standby) stack for this ElasticAddress. This will create an ElasticAddress that points to the primary stack.

You can visit the ElasticAddress and see your stack. An ElasticAddress looks something like this: 123-456-789.cloud66.net

You can now point your domain's CNAME record (like www.myawesomeapp.com) at this ElasticAddress.
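You can verify the record once it is in place (the values here are hypothetical):

dig +short CNAME www.myawesomeapp.com

This should return your ElasticAddress, e.g. 123-456-789.cloud66.net.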

Switching traffic between stacks

Now, if you need to switch your traffic between the primary and backup stacks, all you need to do is click on the relevant stack of the ElasticAddress, and your traffic will be switched within 5 minutes!

What can I use ElasticAddress for?

We built ElasticAddress to make it very easy to build highly available web applications and APIs across multiple geographical locations or data centres.

By combining easy deployment of your code through Cloud 66, cross-stack database replication and ElasticAddress, you can build web applications that run across different cloud providers, data centres and geographical locations, with high availability at a fraction of the cost usually associated with other disaster recovery solutions.

DigitalOcean API Changes

Recently DigitalOcean released version 2 of their API into public beta.

Currently we only support version 1 of their API, and will switch over to the new version when it is finalised.

In the meantime, please make sure to use the API v1 keys to connect your Cloud 66 account to DigitalOcean.

Cloud 66 DigitalOcean API

If you need any help or support, please contact support@cloud66.com

Introducing Custom Servers

At Cloud 66 we strive to make deployment and management of Ruby stacks as easy as possible. This means we deploy the web servers, database servers and backend process servers in your stack.

These could include MySQL, PostgreSQL, Redis, MongoDB or Elasticsearch for your databases, and Nginx with Passenger or Unicorn for your web and application servers.

But what if your app uses a different type of server, one that's not supported by Cloud 66? To take care of that, you can now fire up Custom Servers in your stacks.

Custom Server

A custom server is a server that has all the basic Cloud 66 features: it shows up under your stack; it is monitored and kept up to date by Cloud 66; it is secured by firewall management and ActiveProtect; and you can use the toolbelt to SSH into it, just like your web and database servers.
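For example, assuming a custom server named backend on a stack called mystack, SSHing into it with the cx toolbelt would look something like this:

cx ssh -s mystack backend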

Beyond that, we leave the customisation of a Custom Server to you. You can automate it with deploy hooks and Manifest files.

As an example, you can use Custom Servers to host the non-Ruby parts of your app, or to deploy middleware like ActiveMQ for your stack.