Hey Cloud 66 users! Continuing our series of 'how-to' content, today I'll be providing tips for how you can easily migrate from Heroku to Cloud 66.
We've frequently spoken on this blog about the false economies presented by public PaaS platforms, making a case for why they're not the greatest option for devs looking for flexibility and affordability.
To complete your migration, there are three key components that we'll be addressing: your code, your data and your traffic. Before we get into that, however, you'll first need to calculate your server requirements. We recommend deploying similar server resources on Cloud 66 -- either 512 MB, 1 GB, 2.5 GB or 14 GB of memory, depending on what you're currently running on Heroku. We also encourage the use of a separate database server in production environments.
For practical purposes, I'll assume you've already created a Cloud 66 account. So let's get started:
1. Your Code
Begin by providing Cloud 66 with your Git repository URL so that your code can be analyzed. If your repository is private, you'll need to add the SSH key provided to your Git account and use the SSH version of the Git URL. I covered how to do this in my previous article here.
Once you've clicked Analyze, Cloud 66 will examine your code to determine its dependencies. When you click Deploy stack, Cloud 66 will create the required servers in your cloud account and configure them from scratch: updating packages, securing the firewall, installing Nginx, and more. Once this is complete, your code will be running on servers in your own cloud, ready for use.
2. Your Data
Once your code is deployed, you'll want to migrate your database across. The process differs slightly for PostgreSQL and MySQL databases:

PostgreSQL

From your Heroku toolbelt, create a database backup URL for your Postgres database.
Next, visit your stack detail page and click the Import Heroku data link. Paste the URL provided by the toolbelt into the field, and click Import Heroku data.
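The capture-and-URL step above can be sketched as a small shell function. The `pg:backups` subcommands and the app name `my-heroku-app` are assumptions -- older toolbelt releases used `pgbackups:capture` / `pgbackups:url` instead, so check your version:

```shell
#!/bin/sh
# Hypothetical sketch: capture a fresh Heroku Postgres backup and print its
# signed download URL (this is the URL you paste into Import Heroku data).
backup_url() {
  app="$1"
  if ! command -v heroku >/dev/null 2>&1; then
    echo "heroku toolbelt not found on PATH" >&2
    return 1
  fi
  heroku pg:backups:capture --app "$app" >&2  # take a fresh backup
  heroku pg:backups:url --app "$app"          # print the signed URL
}
# usage: backup_url my-heroku-app
```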
MySQL

Start by dumping your existing database. It's worth referring to the ClearDB documentation to help you steer away from common problems.
$ mysqldump -u [username] -p[password] [dbname] > backup.sql
Once you have a mysqldump file, we recommend using the Cloud 66 toolbelt to upload it to your stack's database server. Remember to replace the [ ] fields below with your real values:
$ cx upload -s "[stack_name]" --server [database_server_name] backup.sql /tmp/backup.sql
Next, use the toolbelt to SSH to your server:
$ cx ssh -s "[stack_name]" [server_first_name]
Finally, use the command below to import your backup into the database. You can find the generated username, password and database name by visiting your stack detail page and clicking into your database server (example: MySQL server). Note that there is no space between -p and the password:
$ mysql -u [generated_user_name] -p[generated_password] [database_name] < /tmp/backup.sql
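Before running the import, it's worth sanity-checking that the uploaded dump actually reached the server intact. A minimal sketch, in which `DB_USER`, `DB_PASS` and `DB_NAME` are placeholders for your generated credentials:

```shell
#!/bin/sh
# Hypothetical sketch: refuse to import a missing or empty dump file.
check_dump() {
  dump="$1"
  if [ ! -s "$dump" ]; then
    echo "dump file '$dump' is missing or empty" >&2
    return 1
  fi
  echo "dump looks ok: $(wc -c < "$dump") bytes"
}
# usage, on the database server:
#   check_dump /tmp/backup.sql && \
#     mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" < /tmp/backup.sql
```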
3. Your Traffic
Once you're ready to serve traffic from your Cloud 66 stack, access your DNS provider's settings and update your CNAME records to point to your new servers.
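If your DNS provider lets you edit records directly, the change looks something like the zone-file fragment below. The stack endpoint `mystack.c66.me` is a placeholder -- use the address shown on your stack detail page:

```text
; hypothetical zone-file fragment
www   IN  CNAME   mystack.c66.me.
```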
Based on the feedback from some of our customers who've gone through this process, I've also included some useful pointers they've suggested we include in our documentation:
Web server and Procfile
By default, Cloud 66 will deploy your stack with Phusion Passenger, but you can also choose a custom web server like Unicorn. On Heroku you may have used a web entry in your Procfile to do this; Cloud 66 ignores that entry to avoid compatibility issues.
To run a custom web server, we require a custom_web entry instead. It's important to set this before analyzing your stack, to avoid building the stack with Passenger.
You can also use the Procfile to define other background jobs.
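Putting that together, a Procfile for a Cloud 66 stack might look like the sketch below. The Unicorn and Sidekiq commands are illustrative assumptions -- substitute whatever your app actually runs:

```text
# custom_web replaces the Heroku-style web entry on Cloud 66
custom_web: bundle exec unicorn -c config/unicorn.rb -E $RACK_ENV
worker: bundle exec sidekiq -C config/sidekiq.yml
```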
Heroku restarts all dynos after 24 hours of uptime, which can conceal memory leaks in your application. When you migrate to Cloud 66, these leaks become noticeable because we don't restart your workers (other than during a deployment), so a leak has time to grow. A temporary workaround is to re-create Heroku's restart behavior, for example with this script:
for OUTPUT in $(pgrep -f sidekiq); do kill -TERM $OUTPUT; done
This will send a TERM signal to any Sidekiq workers, giving them 10 seconds (by default) to finish gracefully. Any workers that don’t finish within this time period are forcefully terminated and their messages are sent back to Redis for future processing. You can customize this script to fit your needs, and add it to your stack as a shell add-in.
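A slightly more defensive sketch of the same idea: send TERM, allow a grace period, then force-kill any stragglers. The process pattern and grace period here are illustrative, not part of the original script:

```shell
#!/bin/sh
# Hypothetical sketch: TERM matching processes, wait, then KILL survivors.
graceful_kill() {
  pattern="$1"
  grace="${2:-10}"
  # Ask matching workers to stop gracefully.
  for pid in $(pgrep -f "$pattern" || true); do
    [ "$pid" = "$$" ] && continue  # never signal this shell itself
    kill -TERM "$pid" 2>/dev/null || true
  done
  sleep "$grace"
  # Force-kill anything still running after the grace period.
  for pid in $(pgrep -f "$pattern" || true); do
    [ "$pid" = "$$" ] && continue
    kill -KILL "$pid" 2>/dev/null || true
  done
}
# usage: graceful_kill sidekiq 10
```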
Note that this is a temporary solution, and we recommend that you use a server monitoring solution, like New Relic, to identify the source of your leak.
Asset Pipeline Compilation
If you haven’t compiled assets locally, Heroku will attempt to run the assets:precompile task during slug compilation. Cloud 66 allows you to specify whether or not to run this during deployment.
That's it, folks! If you're looking to build a business case for moving off Heroku and need more specifics, you can refer to this case study from Playlist.com, who did just that.
And don't forget, you can find answers to all sorts of queries in our Help pages, and similar content to this on our Community site. And if you haven't already, make sure you join the Cloud 66 Slack community to get involved.