Overview

Recently, there's been a renewed focus on monitoring and understanding company (or product) growth, especially when it comes to SaaS products. Werner Vogels mentioned something quite similar in a blog post: "People often ask me if developing for the cloud is any different from developing on-premises software. It really is." I couldn't agree more; it's awesome for understanding products, how users are using them, and what you can do to improve them.

I have a technique that I like to use to understand how the products that I am working on are doing. The technique is simple: take some of the fundamental SaaS company metrics and apply them to a particular product or even feature. This is a great tool for product managers to understand how their features are adopted and what typical customer behavior looks like with a given feature, and it helps product leadership set standards for what a product should be achieving.

Typical SaaS Metrics

These techniques came about because of a simple project. SaaS companies track a number of metrics like ARR, LTV, ACV, CAC, ARPA, Net Revenue Retention, and so on. You can read more about these in a variety of cheatsheets that cover the typical SaaS metrics (ChartMogul has one, as does Chaotic Flow). While these metrics are great and should absolutely be tracked, they do have a number of shortcomings.

These typical metrics (a) are lagging indicators and (b) aren't (always) actionable for individual contributors. An individual PM or data scientist working on a feature isn't going to be able to move the CAC number directly - even if it needs moving. Moreover, an individual PM may not have a P&L to track and may not have detailed financial information to review.

Typical Growth Metrics

That's typically where growth metrics come in. You measure PMs or growth initiatives against high-level numbers like MAU, DAU, downloads, and so on. These metrics are often short-sighted. MAU or raw action counts can be informative, but the problem is simple: they demonstrate growth without driving it. They're vanity metrics.

First Round Review had a great post on this topic titled I'm Sorry, But Those are Vanity Metrics. The article points out the shortcomings of various metrics and makes the case for more deeply understanding customers, customer behavior, and usage outliers (hint: call them and ask!).

SaaS Metrics for PMs and Growth Leads

At this point, if you're a PM or someone in charge of growth, you can see what brought me here. SaaS metrics are too high level and may not apply to an individual improvement, while growth metrics don't go to sufficient depth. That's when I realized I could adapt some typical SaaS metrics to apply to just about any feature I might want to understand, and use them to see whether or not that feature is effectively growing.

Feature Drop-off (Churn)

This metric is great because it shows, in no uncertain terms, how many of your customers or users try a given feature and then stop using it.

In short, you'll be asking,

at time t, a group of users starts using a given feature; after n days, how many of those users are still using that feature?

I'll typically do this on a cohort basis, such as: "given all the customers that started using a given feature in August, how many of them are still using that feature in December?"
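To make that concrete, here's a minimal sketch in pandas. The events table, its column names (user_id, used_at), and the dates are all hypothetical; the only assumption is that you can get one row per user per use of the feature.

```python
import pandas as pd

# Hypothetical feature-usage log: one row per user per use of the feature.
events = pd.DataFrame({
    "user_id": [1, 1, 2, 2, 3, 4, 4, 5],
    "used_at": pd.to_datetime([
        "2023-08-03", "2023-12-10",  # user 1: started in August, still active in December
        "2023-08-15", "2023-09-01",  # user 2: started in August, dropped off
        "2023-08-20",                # user 3: started in August, dropped off
        "2023-09-05", "2023-12-02",  # user 4: started in September (not in this cohort)
        "2023-12-20",                # user 5: started in December (not in this cohort)
    ]),
})

# The August cohort: users whose *first* use of the feature fell in August.
first_use = events.groupby("user_id")["used_at"].min()
august_cohort = first_use[first_use.dt.to_period("M") == pd.Period("2023-08", freq="M")].index

# Of that cohort, who used the feature at all in December?
december_users = events.loc[
    events["used_at"].dt.to_period("M") == pd.Period("2023-12", freq="M"), "user_id"
].unique()
still_active = august_cohort.intersection(december_users)

retention = len(still_active) / len(august_cohort)
print(f"August cohort: {len(august_cohort)} users, "
      f"{len(still_active)} still active in December ({retention:.0%})")
```

The complement of that percentage is the feature's drop-off (churn) for the cohort.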

Here's why this is such a great metric. By cohorting, you can see whether changes you make to customer onboarding, marketing, and so on are improving your adoption, and whether those changes correspond to people stopping using the feature as well. Once you're looking at this information regularly and across a number of products, you can start setting adoption benchmarks.
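One way to operationalize those cohort comparisons (again just a sketch, reusing the hypothetical events table from above) is a retention matrix: one row per starting cohort, one column per month since first use. If an onboarding or marketing change helps, later cohorts' rows should hold up better than earlier ones.

```python
# Cohort retention matrix, built from the same hypothetical events table.
events["cohort"] = events.groupby("user_id")["used_at"].transform("min").dt.to_period("M")
events["period"] = events["used_at"].dt.to_period("M")
events["months_out"] = (
    (events["period"].dt.year - events["cohort"].dt.year) * 12
    + (events["period"].dt.month - events["cohort"].dt.month)
)

# Distinct users active in each cohort at each month since first use...
counts = (
    events.groupby(["cohort", "months_out"])["user_id"]
          .nunique()
          .unstack(fill_value=0)
)

# ...normalized by cohort size (month 0 includes everyone in the cohort).
retention_matrix = counts.div(counts[0], axis=0)
print(retention_matrix)  # rows: starting month, columns: months since first use
```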

What I like about this metric is that it allows you to compare arbitrary features that have nothing to do with one another and still make sense of the result. You can also weight it by the customer type that you're hoping to target, the revenue generated from those customers or use cases, and so on.
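As a rough illustration of the weighting idea, here the user count is swapped for a sum of ARR, reusing august_cohort and still_active from the first sketch. The accounts table, its arr and segment columns, and the numbers are all hypothetical.

```python
# Hypothetical account attributes: ARR and segment for each user.
accounts = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5],
    "arr":     [50_000, 5_000, 5_000, 20_000, 8_000],
    "segment": ["enterprise", "smb", "smb", "mid-market", "smb"],
})

# Revenue-weighted retention: what share of the August cohort's ARR is still
# using the feature in December?
cohort_arr = accounts.loc[accounts["user_id"].isin(august_cohort), "arr"].sum()
retained_arr = accounts.loc[accounts["user_id"].isin(still_active), "arr"].sum()
print(f"Revenue-weighted retention: {retained_arr / cohort_arr:.0%}")

# Or restrict the same calculation to a target segment, e.g. only SMB accounts.
smb_ids = set(accounts.loc[accounts["segment"] == "smb", "user_id"])
smb_retention = len(set(still_active) & smb_ids) / len(set(august_cohort) & smb_ids)
print(f"SMB-only retention: {smb_retention:.0%}")
```

In the toy data, user-count retention is 33% while revenue-weighted retention is 83%, because the one retained user happens to be the largest account; that's exactly the kind of nuance the weighting is meant to surface.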

Risks

The fundamental risk is that, used in isolation, this metric might lead you to de-prioritize features that are actually important to customers. In short, it could get weaponized to make the case for not working on a particular feature or not continuing to build one out. While I haven't seen this happen, one could imagine it.

Conclusion

There are a number of other metrics that I will write about over the coming weeks. Choosing key KPIs or metrics to track is essential for monitoring the progress of anything, not just software. I hope that you find the Feature Drop-off metric useful and are able to apply it to the products that you work on.