Model accuracy is less important than you think

The scenario is this: you are a data scientist supporting a marketing manager who wants to prevent customers from switching to your competitor. She is quite savvy and has a reliable technique that is excellent at convincing customers not to switch (e.g. an unexpected discount on their bill), but it costs a certain amount each time it is used. She needs your expertise in identifying a list of customers she should apply this technique to. A good list, she tells you, should pick only customers for whom the outcome increases profit (revenues less expenditure) for your company.

Clearly the objective is to split the customer base into two groups (‘high-switch-risk’ and ‘not-high-switch-risk’) as well as possible, given the underlying uncertainty. But here’s the rub: the size of the high-switch-risk group has not been specified. It would be easy to generate a list with very high certainty that its customers really are high-switch-risk; however, that list would probably number only a handful of customers, because to boost accuracy we would include only the customers we are extremely certain about, and it would miss many genuinely high-switch-risk customers.

Therefore, we need to increase the size of the list, and hence reduce its accuracy. To decide when to stop growing the list, remembering that the objective is to maximise profit, we ask ourselves the following questions:

  • False Positive Cost: What is the cost of incorrectly identifying a customer as high risk of switching (i.e. wasted marketing cost)?
  • False Negative Cost: What is the cost of failing to identify a high-risk customer (i.e. lost revenue)?
  • Cost Ratio: How many false positive costs add up to one false negative cost?

Ask yourself these questions whenever you score a model for quality. Remember that, depending on the context, sometimes having fewer false positives matters more than overall accuracy, and sometimes having fewer false negatives matters more. The sketch below shows how these costs translate into a decision rule.
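To make this concrete, here is a minimal sketch in Python of how the two costs determine a probability cutoff, and hence the length of the list. The function name and the cost figures are my own assumptions, chosen purely for illustration.

```python
# Minimal sketch: turn false positive / false negative costs into a
# targeting rule. All figures are made up for illustration.

def target_threshold(fp_cost: float, fn_cost: float) -> float:
    """Switch probability above which targeting a customer has a lower
    expected cost than ignoring them: target if p > C_FP / (C_FP + C_FN)."""
    return fp_cost / (fp_cost + fn_cost)

# Example: a retention offer costs 20 (wasted if the customer was never
# going to switch), while losing a customer costs 200 in future revenue.
fp_cost, fn_cost = 20.0, 200.0
print(f"Target every customer with switch probability > "
      f"{target_threshold(fp_cost, fn_cost):.2f}")
# -> 0.09: a cheap intervention plus expensive losses means a long list.
```

As the intervention gets more expensive relative to the revenue at risk, the cutoff rises and the list shrinks.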

The ability to execute upon an insight (‘actionability’) is nearly always the most important metric for judging model success. Models exist to help us create more desirable outcomes, so they should be scored on their ability to enable exactly that. Here the objective is to maximise profitability, and from that perspective some insights are more valuable than others.
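One way to express this, sketched below with invented numbers, is to score each potential intervention by its expected incremental profit rather than by model accuracy. The probabilities, customer values and the function itself are assumptions made up for illustration, not figures from a real campaign.

```python
# Hedged illustration: rank customers by the expected profit of intervening,
# not by how confidently the model classifies them. All numbers are invented.

def expected_profit(p_switch: float, p_saved_if_targeted: float,
                    customer_value: float, intervention_cost: float) -> float:
    """Expected profit of targeting one customer: the revenue we expect to
    rescue, minus the cost of the intervention."""
    return p_switch * p_saved_if_targeted * customer_value - intervention_cost

# A customer with a 40% switch risk, a 50% chance the offer keeps them,
# worth 300 in future revenue, offered a discount costing 25:
print(expected_profit(0.40, 0.50, 300.0, 25.0))  # 35.0  -> worth targeting
print(expected_profit(0.05, 0.50, 300.0, 25.0))  # -17.5 -> not worth it
```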

I’ve summarised below a number of stylized scenarios to illustrate the concept. In a future post, I plan to dive into this in more detail, including implementing a predictive solution in R (if you have any good multivariate datasets I could use for this, please do point me towards them).

Case A
Fraud Investigation (limited number of investigations)
An investigation by an IRS or HMRC agent costs a lot; wrong predictions are expensive!
– False Positive is very expensive
– False Negative is inexpensive
– List should have few customers, and be really accurate

Case B
Mail Package Identification
Opening a package doesn’t take much time; letting €mm worth of contraband into a country is not good!
– False Positive is inexpensive
– False Negative is expensive
– List should contain many packages, and hence be less accurate

Case C
Churn Prediction, where intervention is inexpensive (e.g. ‘cinema tickets’) and losing customers is expensive
– False Positive is inexpensive
– False Negative is expensive
– List should have many customers, and hence be less accurate

Case D
Churn Prediction, where intervention is expensive (e.g. ‘big discount on bill’) and losing customers is expensive
– False Positive is expensive
– False Negative is expensive
– List should be midsized, with reasonable accuracy and a reasonable number of false negatives
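Feeding stylized cost ratios for the four cases through the same threshold rule shows how they translate into list sizes. The ratios below are invented; only their relative ordering matters.

```python
# Illustrative only: invented false positive / false negative costs for
# Cases A-D, run through the same threshold rule as above.
cases = {
    "A: fraud investigation":    (1000.0, 50.0),   # FP very expensive
    "B: package identification": (1.0,    500.0),  # FN very expensive
    "C: churn, cheap offer":     (5.0,    200.0),  # FP cheap, FN expensive
    "D: churn, big discount":    (80.0,   200.0),  # both expensive
}

for name, (fp_cost, fn_cost) in cases.items():
    threshold = fp_cost / (fp_cost + fn_cost)
    print(f"{name}: target above p = {threshold:.2f}")
# Higher cutoffs (Case A) mean short, highly accurate lists;
# lower cutoffs (Cases B and C) mean long, less accurate lists.
```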
