There was a time when I could send a personal email to every single new customer who had signed up for Cloud 66 and talk to them about their business, code and other cool things. It was great to know our customers personally, know what they do and what they are looking to get out of Cloud 66. I still know many of our customers by first name and try to catch up with them on the phone or Skype every now and then.
But we have grown so much over the past 12 months that I cannot keep up with our growth and get in touch with every customer personally.
The question I ask myself these days is: How can we continue that personal touch at scale?
There are great tools out there to help build better customer relationships: email-based automation tools like Customer.io, integrated CRM solutions like Intercom, and support ticket systems like HelpScout.
We use those tools to build more personal welcome emails and drip campaigns, and to enrich our understanding of our customers by bringing in external sources of information like Facebook, Twitter or GitHub.
But that doesn’t answer the biggest question we keep asking ourselves: Are our customers happy?
If a customer is paying you, then they are either happy or don’t have other options.
It is rare to have a product that is the only option for a customer (excluding nasty businesses that lock customers in by legal or sometimes illegal tricks). So how would you know if your customer is going to still pay you the next month? How can you measure customer happiness when you cannot get in touch with every single one and use your human judgement?
Doing a simple search online shows a lot of “Customer Satisfaction Metrics” articles. Most of those are useless. They rely on surveys, “intentions”, “perceptions” or “expectations”.
No one gives honest and accurate answers to a long survey about how they feel about a company. No one can measure intentions, perceptions or expectations. These are mostly ways for consultants to come in, run workshops and hold meetings and generate long reports for no one to read.
We tried building a reproducible, analytical way to measure customer happiness. We are software engineers, not human behaviour experts, so this is not going to be a scientific exercise or a definitive answer to the customer happiness question. We are still proving our theories and improving our algorithms, but we think it's worth sharing with others.
Building a Happiness Score
The first step is to identify actions that customers perform on the product that show engagement. The more engaged customers are with a product, the more likely they are to be happy. We used "last deployment date" as one of our measurable engagement metrics. Based on that, we can calculate a last deployment score like this:
# days since last deployment
x = (DateTime.now - last_deployment_at).to_f
# decay: 1 right after deployment and almost 0 after 15 days
# (an exponential decay; the 5-day constant is an illustrative assumption)
last_deployment_score = Math.exp(-x / 5.0)
Plotted, the score starts at 1 right after a deployment and decays towards zero over roughly 15 days.
Older customers are not necessarily happier customers (although it might be the case for the majority of no-lock-in businesses). Your older customers are more likely to tell you if they are unhappy with the product than very new ones. This makes the account life score another candidate to measure:
x = (DateTime.now - created_at).to_f # days since the account was created
account_life_score = 1 - Math.exp(-x / 20.0) # saturating growth; the 20-day constant is an illustrative assumption
The account life score increases quickly for the first 60 days of the account's life and then flattens as the account gets older.
If you have high churn, you can change this to decrease the score after a while.
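One way to sketch that churn-aware variant: let the score rise for the first 60 days as before, then slowly decay afterwards, so very old accounts in a high-churn business score lower again. All constants here are illustrative assumptions, not our production values.

```ruby
require "date"

# Account-life score that rises for ~60 days, then slowly decays.
# The 20-day growth, 60-day plateau and 180-day decay constants
# are illustrative assumptions.
def account_life_score(created_at, now = DateTime.now)
  x = (now - created_at).to_f # account age in days
  growth = 1 - Math.exp(-x / 20.0)             # ~0.95 by day 60
  decay  = Math.exp(-[x - 60, 0].max / 180.0)  # only kicks in after day 60
  growth * decay
end
```

With these constants, a 60-day-old account scores about 0.95, while an account approaching two years old drifts back down towards zero.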
Another important factor showing engagement for most SaaS businesses is "time since last login". A last_sign_in_score can look like this:
# days since last sign-in
x = (DateTime.now - last_sign_in_at).to_f
# decay: 1 right after sign-in and almost 0 after 15 days
# (same exponential shape; the 5-day constant is an illustrative assumption)
last_sign_in_score = Math.exp(-x / 5.0)
Plotted, this follows the same decay curve: 1 right after a sign-in, falling towards zero over about 15 days.
In this example, if a customer hasn't logged in for 35 days, their happiness score for this metric will have dropped to almost zero.
Overall Happiness Score
A collection of scores for specific metrics can help you identify issues with customers before you lose them. However, an overall happiness score is always useful to bring things to your attention.
We did that by simply assigning a weight to each category and averaging the results.
An example could be like this:
overall_score = (account_score * 0.7) + (team_score * 0.3)
The formulae here are experimental and we are always fine-tuning them based on other interactions we have with our customers. Some of these formulae can get complex, so commenting your code well is a good tip!
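Putting the individual metrics together, an account-level score might look like the sketch below. The weights, decay constants and helper names are illustrative assumptions, not our actual production formulae.

```ruby
require "date"

# Illustrative weights; tune them against real churn data.
WEIGHTS = { deployment: 0.5, sign_in: 0.3, account_life: 0.2 }

# Compute each engagement metric's score (all decay/growth
# constants here are assumptions for illustration).
def metric_scores(last_deployment_at, last_sign_in_at, created_at, now = DateTime.now)
  {
    deployment:   Math.exp(-(now - last_deployment_at).to_f / 5.0),
    sign_in:      Math.exp(-(now - last_sign_in_at).to_f / 5.0),
    account_life: 1 - Math.exp(-(now - created_at).to_f / 20.0)
  }
end

# Weighted average of the metric scores, between 0 and 1.
def account_score(scores)
  scores.sum { |metric, value| value * WEIGHTS[metric] }
end
```

Because the weights sum to 1, the overall score stays in the same 0-to-1 range as the individual metrics, which makes it easy to compare across accounts.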
You will start to see the "normal ranges" for each score category as you use them more. This is a good place to use rolling averages and standard deviations as a way to monitor sudden change (like when a customer usage pattern suddenly changes with no warning and you need to reach out to them).
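A minimal sketch of that kind of monitoring: flag an account when its latest score falls more than a chosen number of standard deviations below the rolling mean of its recent history. The window size and threshold below are assumptions to illustrate the idea.

```ruby
# True if the latest score is more than `threshold` standard
# deviations below the rolling mean of the previous `window`
# scores. Window and threshold values are illustrative.
def sudden_drop?(scores, window: 14, threshold: 2.0)
  history = scores[-(window + 1)...-1] # the window just before the latest score
  return false if history.nil? || history.size < window
  mean     = history.sum / history.size.to_f
  variance = history.sum { |s| (s - mean)**2 } / history.size
  stddev   = Math.sqrt(variance)
  return false if stddev.zero?
  scores.last < mean - threshold * stddev
end
```

Run against a daily score series per customer, this only fires on a drop that is abnormal relative to that customer's own "normal range", rather than against a global cutoff.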
Customer Support Interactions
When we started this project, we thought measuring customer support interactions (support tickets, online chats, etc) could be a good way to measure customer happiness. This is true in the sense that customers care about your product if they are engaging in a support conversation. But this rarely means they are happy. Think of the number of times you sent a support ticket to Facebook or Google. Or when a bug in the product requires a lot of customer interaction. Neither case is an indicator of your happiness or unhappiness with the product.
That's why I don't think using customer support interactions on their own, without any manually entered metadata, is going to be useful to the overall score.