How should we measure financial performance? For decades, this question was rarely asked, because the answer seemed obvious.
It was obvious because we’d all been schooled and drilled to use one tool: variance analysis, where the actual for a period is compared to a fixed target.
This, so the argument goes, gives us a clear and unambiguous answer to the main question we need to answer: is this performance good or bad? It allows us to dispense opprobrium and praise – not to mention bonuses – with confidence.
What’s more, we can break those variances down into their constituent parts and work out the contribution of volume, price and mix, to answer the second big ‘performance question’: what caused this variance in performance?
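To make that decomposition concrete, here is a minimal sketch of one common price/volume/mix convention (the two-product portfolio and all the figures are invented for illustration, and other equally valid conventions exist):

```python
# Hypothetical budget and actuals for two products, A and B
budget = {"A": {"qty": 100, "price": 10.0}, "B": {"qty": 50, "price": 20.0}}
actual = {"A": {"qty": 120, "price": 9.5}, "B": {"qty": 40, "price": 21.0}}

budget_rev = sum(p["qty"] * p["price"] for p in budget.values())
actual_rev = sum(p["qty"] * p["price"] for p in actual.values())
total_variance = actual_rev - budget_rev

budget_qty = sum(p["qty"] for p in budget.values())
actual_qty = sum(p["qty"] for p in actual.values())
budget_avg_price = budget_rev / budget_qty

# Volume: extra total units, valued at the budgeted average price
volume_var = (actual_qty - budget_qty) * budget_avg_price

# Mix: shift between products, valued at each product's budget price
mix_var = sum(
    (actual[k]["qty"] - actual_qty * budget[k]["qty"] / budget_qty)
    * budget[k]["price"]
    for k in budget
)

# Price: actual units sold, at actual versus budget price
price_var = sum(
    actual[k]["qty"] * (actual[k]["price"] - budget[k]["price"])
    for k in actual
)

# The three parts reconcile back to the total variance
assert abs(volume_var + mix_var + price_var - total_variance) < 1e-9
```

In this made-up example the total variance is a modest −20, but it decomposes into a large favourable volume effect almost exactly offset by an adverse mix effect — which is precisely the kind of story a single headline number hides.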
As we made these black-and-white declarations, we heard protests that “the budget was wrong”. In our self-righteousness, we dismissed this as a lame excuse for poor performance.
Here’s the first piece of bad news: those people were right.
Politics, self-interest and guesswork
In a volatile business environment, especially – and my goodness, have we got one of those at the moment – any fixed target is, at best, a guess. Even worse, most targets are the product of negotiation, so politics and self-interest come into play. The more detailed we get, the more guesswork and politics are involved, and the more right our critics become.
Deep down, we all know this, but we pretend not to because the inevitable conclusion of admitting it to ourselves is uncomfortable. But it gets worse.
So targets are false friends, but we can’t rely on the other side of the variance equation either – the ‘actual’ data.
Noise in the data
This isn’t because the data is ‘wrong’ (although of course it can be). As any scientist or statistician will tell you, every data point in every walk of life is infected by noise: the product of a potentially infinite number of random events that, to a greater or lesser extent, blur our picture of reality.
The more voluminous and granular our data sets, the bigger the problem of noise infestation becomes – and the more meaningless any comparison between a single data point and anything else gets. It means that ‘Big Data’ is more of a problem than an opportunity when the only club in the golf bag is a crude comparison.
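The noise problem can be made concrete with a toy simulation (every number here is invented): even when underlying performance exactly hits the target, random month-to-month noise still produces a stream of ‘favourable’ and ‘adverse’ variances that mean nothing at all.

```python
import random

random.seed(42)

true_level = 1000.0  # hypothetical stable underlying performance
target = 1000.0      # the target happens to be exactly right
noise_sd = 50.0      # assumed random month-to-month noise

# Twelve months of 'actuals': the true level plus random noise
months = [random.gauss(true_level, noise_sd) for _ in range(12)]
variances = [m - target for m in months]

favourable = sum(v > 0 for v in variances)
adverse = sum(v <= 0 for v in variances)

# Some months look 'good' and some look 'bad', yet underlying
# performance never changed - the variances are pure noise.
```

Run this with different seeds and the split between ‘good’ and ‘bad’ months shuffles arbitrarily, which is the point: a single-period variance tells you nothing without some sense of the noise around it.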
The Emperor’s new clothes
Here is the unvarnished, unspoken truth of the matter: you cannot make any sound judgement about performance by comparing a politically tainted target with a data point infected by an unknown amount of noise. Our written commentaries are mostly an exercise in elegant post-rationalisation, with little predictive power.
At best, this is a waste of everyone’s time and effort. At worst, our customers might start taking action based on a fairy tale. As things stand, in many companies, the best advice you could give leaders is to ignore the variances and make your own mind up based on the data, which is a sad state of affairs.
The Emperor has no clothes – as a profession, we need to feel the chill around our nether regions before we muster the energy and wit to do anything about it.