Here’s a situation you might be familiar with.
Let’s say that you track a metric named ‘Marketing Qualified Leads’, or MQLs. This is a common metric at many B2B companies, typically defined as the leads that your marketing team qualifies as a ‘good fit’ and hands over to your sales organisation.
You keep track of this metric every week (perhaps even on an XmR chart!). One day, you see the MQL metric spike up. “Oh good!” you say, and assume that something wonderful has happened.
But some time passes, and you notice that your sales metrics don’t show a commensurate increase. Even after accounting for some real-world lag, you find this a little peculiar.
Eventually, you ask someone from the marketing team: “hey, why was there a sudden spike in MQLs on the 20th of February, earlier this year?” And the marketing person replies, rather sheepishly: “Oh, we changed the way we counted MQLs. It used to be that we would just count the leads that John would manually pull from our various bespoke customer-interest sign-up forms to pass to the sales team. But eventually we added every SEO-inbound lead captured through Hubspot to our reported MQL count, because Mary thought it would be a better reflection of our numbers … especially since our salespeople have been picking leads directly from Hubspot for about a year now.”
“Huh,” you say. “So there’s actually no change in ‘real’ MQLs that the salespeople are contacting?”
“No.”
“Which explains why the sales numbers don’t seem to have budged much, beyond routine variation?”
“Yes.”
You find yourself distrusting your numbers just a wee bit more.
Here’s another scenario, which you might also be familiar with.
Let’s say that you run a short growth project to improve customer onboarding. You define what counts as an ‘engaged customer’, count the number of customers you onboard successfully each week, make some change to your onboarding flow, and then measure the difference in newly engaged customers after your modification.
You declare the project a mild success, archive your spreadsheets and PowerPoint presentations, and move on to the next thing.
A year later you come back to your onboarding project. Your boss wants you to revisit the modifications you made to the onboarding flow … to see if you can test an alternative flow against the current one. She thinks there’s a higher chance of success with the alternative flow.
There’s only one problem: you can’t remember how you defined an ‘engaged customer’ in the previous project. Was it ‘three out of six steps completed in the onboarding flow in the first week after sign-up’? Or was it ‘two logins a week for the first three weeks’?
Enough time has passed that you can’t find all the throwaway notes and materials that you created for this research project.
As a result, you can no longer compare the lift in engaged users given your boss’s new user flow to the lift attributable to your old modifications … because you can’t for the life of you remember how the old metrics were defined!
Fix This … Before It Becomes a Problem
What’s the lesson here? The lesson is somewhat obvious: tracking how your metrics are measured matters!
In truth, scenarios like these are very common.
If you’re reading this, you might believe that becoming data-driven is a worthwhile goal. Heck, you might even think that using XmR charts is going to help you get most of the way there.
But working with data is tricky because of little gotchas like these: if you don’t deal with these issues now, you might waste months of work when you circle back later.
The answer to this problem is clear: define your metrics, store the definitions someplace centrally available, and have people update them whenever they change what a metric means. You may use a spreadsheet. Or you may use a fancy tool. It doesn’t matter; you just want to save your definitions somewhere central.
Which then raises the question: what format should you store these metric definitions in? Is there a standard way to define any kind of metric you care about?
The good news is that yes: there is. There’s a very simple, universal way to define metrics that will work for any kind of metric under the sun. Don’t make the mistake that most people make — use this format from the very beginning.
A Universal Method to Define Metrics
In the years after World War II, the statistician W. Edwards Deming taught business operators that “an operational definition is a procedure agreed upon for translation of a concept into measurement of some kind.”
What he meant was that things like ‘Marketing Qualified Leads’ or ‘engaged users’ are concepts that need to be defined in a particular way for them to be useful in a business context.
The ‘particular way’ is what Deming called an ‘Operational Definition’, or an OD. An OD consists of three parts:
- The criterion: what is it that you’re measuring?
- The test procedure: how do you intend to measure it?
- The decision rule: when should you include something in the count?
This seems like a bit much for a simple metric definition. But if you think about it, the three components of an OD tell you everything you need to know to reliably reproduce a metric.
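If you do end up storing these somewhere more structured than a shared spreadsheet, the three parts map directly onto a tiny record. Here’s a minimal sketch in Python; the class and field names are just an illustration, not a prescribed format:

```python
from dataclasses import dataclass

@dataclass
class OperationalDefinition:
    """One entry in a central store of metric definitions."""
    criterion: str      # what is being measured
    procedure: str      # how, by whom, and how often it is measured
    decision_rule: str  # when something is included in the count
```

A row in a spreadsheet with these three columns works just as well.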
Here are a couple of examples:
- Criterion: Marketing Qualified Leads
- Procedure: Every Monday, all SEO-sourced leads from Hubspot are pulled into a spreadsheet and counted.
- Decision Rule: Only SEO leads created from the previous Monday (00:00 UTC) through the end of the previous Sunday (23:59 UTC) are counted.
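For illustration, here’s what that weekly pull might look like if the spreadsheet step were scripted. This is a sketch only: the file name and the `source` / `created_at` columns are assumptions, not Hubspot’s actual export format.

```python
import pandas as pd

# Hypothetical weekly export of leads; assume timestamps are in UTC.
leads = pd.read_csv("hubspot_seo_leads.csv", parse_dates=["created_at"])

# The reporting window is the seven days before the most recent Monday,
# i.e. Monday 00:00 UTC through Sunday 23:59 UTC of the previous week.
now = pd.Timestamp.now(tz="UTC").tz_localize(None)
this_monday = (now - pd.Timedelta(days=now.weekday())).normalize()
last_monday = this_monday - pd.Timedelta(days=7)

# Decision rule: SEO-sourced leads created within the previous week.
mqls = leads[
    (leads["source"] == "SEO")
    & (leads["created_at"] >= last_monday)
    & (leads["created_at"] < this_monday)
]
print(f"MQLs for the week starting {last_monday:%Y-%m-%d}: {len(mqls)}")
```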
If you keep this metric definition in a central location, you could always ask the marketing team “is this still true?”, or create a process that requires them to update the definition every time they change what the metric means.
Here’s another example:
- Criterion: Activated engaged users
- Procedure: Every Monday, an SQL query is run that counts all new customers with at least five (>= 5) app interactions over the previous week.
- Decision Rule: The query is limited to customers whose created_at falls between 00:00 UTC Monday and 23:59 UTC Sunday of the previous week. An app interaction is any analytics event with the tag ‘activity’.
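The weekly query itself might look roughly like the sketch below (wrapped in Python, as it might live in a scheduled job). The table and column names are assumptions about the schema; only the date window, the ‘activity’ tag, and the five-interaction threshold come from the definition itself.

```python
from textwrap import dedent

# A sketch of the weekly query. Table and column names (customers, events,
# customer_id, created_at, tag) are assumed for illustration.
ACTIVATED_ENGAGED_USERS_SQL = dedent("""
    SELECT COUNT(*) AS activated_engaged_users
    FROM (
        SELECT c.id
        FROM customers c
        JOIN events e ON e.customer_id = c.id
        WHERE c.created_at BETWEEN :week_start AND :week_end  -- 00:00 UTC Mon .. 23:59 UTC Sun
          AND e.created_at BETWEEN :week_start AND :week_end
          AND e.tag = 'activity'                              -- an "app interaction" per the OD
        GROUP BY c.id
        HAVING COUNT(*) >= 5                                  -- at least five interactions
    ) AS activated
""")
```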
Both of these metric definitions are ODs for a digital product: measures that can be instrumented easily with software alone. But ODs are useful for more manual counts as well.
Suppose you are rolling out a new sales pitch across your entire sales force. Your sales leaders want to know that the sales training has worked, and that the new sales pitch is being delivered accurately across dozens of sales calls each week. You decide to define a new metric for this, named ‘Sales Pitch Adoption’. The OD for ‘Sales Pitch Adoption’ is as follows:
- Criterion: Sales Pitch Adoption
- Procedure: Some 80% of sales calls are done virtually over Zoom (the rest are in person) and are recorded using sales recording software. Of those recorded calls, five recordings are randomly picked every Wednesday for each region and evaluated as pass or fail by that region’s sales manager. Sales Pitch Adoption is the % of passing calls among all sampled calls across the entire sales org.
- Decision Rule: The sales manager will pass or fail the recording based on a simple scoring rubric: a) that the sales call proceeds in the order: pitch then questions then demo, b) that the question segment of the call is no longer than 10 minutes, and c) that the salesperson successfully establishes a next step for the prospect at the end of the call.
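The judgement calls stay with each sales manager, but the rubric and the final percentage are mechanical enough to write down. Here’s a minimal sketch; the class and field names are illustrative, not part of the OD.

```python
from dataclasses import dataclass

@dataclass
class SampledCall:
    """One randomly sampled recording, as scored by a regional sales manager."""
    segments_in_order: bool      # (a) pitch, then questions, then demo
    question_minutes: float      # (b) length of the question segment
    next_step_established: bool  # (c) a next step was agreed at the end of the call

def passes_rubric(call: SampledCall) -> bool:
    # Decision rule: all three checks must hold for a call to count as a pass.
    return (
        call.segments_in_order
        and call.question_minutes <= 10
        and call.next_step_established
    )

def sales_pitch_adoption(sampled_calls: list[SampledCall]) -> float:
    # Criterion: the % of passing calls across all sampled calls in the org.
    passes = sum(passes_rubric(c) for c in sampled_calls)
    return 100 * passes / len(sampled_calls)
```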
Sure, certain bits of this measure rely somewhat on the sales manager’s individual judgement (such as whether the elements of the call came in the right order), but that’s still acceptable. The point is that it is very clear what the OD for Sales Pitch Adoption is! This means there’s going to be no confusion about how the metric is to be measured, and all the sales managers will know how to calculate the metric for their given regions.
If you define your metrics using an Operational Definition, you will avoid many of the pitfalls outlined at the start of this article. It’s not very hard: an OD consists of just three things. We recommend it to anyone about to set out on the path to becoming data-driven.