
Autoscaling, Nginx Metrics and New Pricing

Khash Sajadi
Jul 11th 23


OK, there is a lot to unpack, so let's jump right in.

First, Autoscaling

Today we’re rolling out Autoscaling! Autoscaling has consistently been the most requested feature for a very long time and I am excited to announce its general availability today!

This has become kind of a rule for us: the easier it is to explain a feature, the more difficult it is to implement it. I don’t think I need to say much about what Autoscaling is, so I’m going to quickly tell you how you can use it and how it works.

It all starts with metrics. Our metrics collection subsystem has been in production for months now. We collect basic metrics (CPU, Memory, Network and Disk) at 30-second granularity and store them for at least 7 days. You can see them on each server or server group.

Autoscaling uses the metrics generated by this subsystem and runs regular checks, based on a number of aggregation algorithms, to determine the direction of scaling decisions.

To set up an autoscaling rule, all you need to do is tell the system what your desired metric level is. For example, an autoscaling rule can be defined as “keep the CPU level at around 40%”. This rule will scale up your web servers when CPU goes above 40% and scale them down when it drops below 40%. You can have one rule per metric, and each rule works independently of the others.

To start, we are enabling three types of rules: CPU, Memory and a new metric: Nginx Response Time. These rules calculate the rolling average of the given metric across a server group and act on that server group (scale up or down).
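To make that concrete, here is a minimal sketch in Python of how a rule like “keep the CPU level at around 40%” could be evaluated against a group's rolling average. It is only an illustration of the idea, not Cloud 66's actual implementation; the function name, the dead band and the sample window are assumptions made for the example.

```python
from statistics import mean

def scaling_decision(samples, target, tolerance=0.1):
    """Decide a scaling direction for one server group.

    samples:   recent metric readings (e.g. CPU %) taken at 30-second
               intervals across the server group
    target:    desired metric level, e.g. 40 for "keep CPU at around 40%"
    tolerance: dead band around the target to avoid flapping (an assumption
               for this sketch; the real service may handle this differently)
    """
    rolling_average = mean(samples)
    if rolling_average > target * (1 + tolerance):
        return "scale_up"      # sustained load above the desired level
    if rolling_average < target * (1 - tolerance):
        return "scale_down"    # sustained load below the desired level
    return "hold"              # close enough to the target: do nothing

# Example: ten 30-second CPU samples averaged across a web server group
print(scaling_decision([55, 61, 58, 63, 57, 60, 59, 62, 58, 61], target=40))
# -> "scale_up"
```

In practice the platform runs these checks for you; the point is simply that each rule compares one aggregated metric against one target and returns a direction.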

New Metrics: Nginx Response Time and Connections

Today, we are also rolling out new traffic (Nginx) metrics. You can now see charts of Nginx response time and concurrent connections for your web servers and across your web server groups. These metrics are particularly valuable because they closely reflect your end users' real experience of your application.

Combined with autoscaling, you can define rules to scale your servers up and down based on basic VM metrics (CPU, Memory and Network) as well as your Nginx metrics (add more servers when the response time goes up and scale down when it is low).

These rules are defined per server group and work on the aggregate metric of the group. For example, you can define a rule to add new web servers if the average response time across all of your web servers goes above 100ms.

To determine the type of server to fire up, we use the last server on the stack as a template, so there is no need to create templates or abstract objects just to use autoscaling.
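As a rough sketch of how a per-group rule and the “last server as a template” behaviour fit together, the Python below checks a group's average Nginx response time against a 100ms threshold and clones the last server's specification for the new instance. The data structure and field names are hypothetical, invented for this example; they are not Cloud 66's API.

```python
from statistics import mean

# Hypothetical description of a web server group (not Cloud 66's API)
group = {
    "servers": [
        {"name": "web-1", "size": "2gb", "region": "lon1"},
        {"name": "web-2", "size": "4gb", "region": "lon1"},  # last server
    ],
    # Recent average Nginx response times (ms) sampled across the group
    "response_time_ms": [96, 112, 130, 125, 118],
}

def maybe_add_server(group, threshold_ms=100):
    """Add a server when the group's average response time exceeds the rule's threshold.

    The new server copies the last server in the group, mirroring the
    "last server as a template" behaviour described above.
    """
    if mean(group["response_time_ms"]) <= threshold_ms:
        return None
    template = group["servers"][-1]
    new_server = {**template, "name": f"web-{len(group['servers']) + 1}"}
    group["servers"].append(new_server)
    return new_server

print(maybe_add_server(group))
# -> {'name': 'web-3', 'size': '4gb', 'region': 'lon1'}
```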

New Pricing

Since the beginning of Cloud 66, with a very brief exception a few years back, we have used unit-based pricing: a price per server per month. This pricing model is very flexible and grows with you, but it can also cause issues later on. Unpredictable costs from month to month are the top complaint about unit pricing. Unit pricing might also make you think twice before scaling up your servers, to avoid a cost increase or having to explain to your boss why this month's bill is higher than last month's.

With our new pricing we wanted to make the cost of using Cloud 66 as predictable as possible. That's why we rolled out tier-based pricing to new customers last month, and today we are expanding the program to anyone who wants to participate.

Our new pricing works like that of many other SaaS companies: different tiers at different monthly prices. You can see them here: cloud66.com/pricing

Here is the important part: if you are an existing customer and are enjoying our unit-based pricing, you can stay on your current plan for as long as you want. If you decide to switch to our new pricing, you will pay less per server, and the price is even lower if you pay annually. This means that if you choose to move to our new pricing, your bills will go down.

To allow us to keep unit prices the same for existing customers and pay for the new features we add, we are going to roll out some new features only on our new pricing plans.

Availability

The new Nginx metrics are available to all of our customers. Depending on your application's age, you might need to apply an Application Update to get your Nginx metrics flowing. If you have more than one server in your web server group, this will have no impact on your traffic. If you only have one server on your stack, the update will cause a few seconds of downtime, as it requires an Nginx restart.

Autoscaling is available to all our new customers and all existing customers on any of our new pricing plans.

Running the autoscaling infrastructure is not cheap. Apart from the CapEx of developing the feature, the OpEx mostly comes from the metrics infrastructure. Collecting millions of data points from each server at 30-second intervals, storing and aggregating them, and running regular boundary checks against autoscaling rules all come at a cost. We had to choose between increasing our unit price for everyone, making autoscaling a paid add-on, or limiting it to our new pricing. We think limiting it to our new pricing is the best option: you get to use autoscaling and pay less per server.

We hope you like these changes: cheaper per-server pricing, Nginx metrics and autoscaling. On top of this foundation, we will roll out more features, and we cannot wait to share them with you!

To learn more about Autoscaling, please take a look at our docs.

