I can’t remember exactly when, perhaps around the end of the 1990s, the retailers that my old company sold to started sharing their EPOS (electronic point of sale) data with us. Think of it – every day (or even more frequently) we had records of how much we had sold and in which store. How useful would it be to be able to track the sales of our products on a daily basis?
We could spot emerging trends almost instantly and work out how well our promotions, advertising and product innovations were working. With the retailers’ support, we could carry out trials in limited areas to find out what worked and what didn’t before pouring huge sums of money in. But what actually happened?
Every day the files landed with a big electronic thud on the company’s servers and just sat there, neglected and unloved. Although the potential benefits were obvious, no one had any idea how to extract meaning from such a huge volume of fast-changing, noisy and messy data. So we just ignored it.
Just having data is of little value – that is where data analytics comes in. To convert data into information, we need to structure it and then look for patterns in it.
In theory, these patterns represent information that we can do something with. Unfortunately, it is not quite that simple.
The first problem: noise
The more granular and unstructured data is, the more noise it contains. And as anyone who has ever looked up at a cloud and seen a rabbit (or Elvis) knows, it is possible for us to detect patterns in noise. As it turns out, computers are prone to this as well.
I can’t express this any better than Nate Silver did in his book The Signal and the Noise: The Art and Science of Prediction. Silver is famous for his ability to analyse complex real-world processes, like elections, and make stunningly accurate predictions based on the application of mathematical techniques, so he is no Luddite.
“This is why our predictions may be more prone to failure in the era of Big Data. As there is an exponential increase in the amount of available information, there is likewise an exponential increase in the number of hypotheses to investigate. For instance, the U.S. government now publishes data on about 45,000 economic statistics.
“If you want to test for relationships between all combinations of two pairs of these statistics— is there a causal relationship between the bank prime loan rate and the unemployment rate in Alabama? — that gives you literally one billion hypotheses to test. But the number of meaningful relationships in the data—those that speak to causality rather than correlation and testify to how the world really works—is orders of magnitude smaller.
“Nor is it likely to be increasing at nearly so fast a rate as the information itself; there isn’t any more truth in the world than there was before the Internet or the printing press. Most of the data is just noise, as most of the universe is filled with empty space.”
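Silver’s ‘one billion’ figure is simply the number of distinct pairs you can draw from 45,000 statistics. A quick back-of-the-envelope check (a sketch, not anything from Silver’s book) confirms it:

```python
import math

# The number of published economic statistics Silver cites.
n_statistics = 45_000

# Each hypothesis pairs two distinct statistics, so the count is
# "n choose 2" -- the number of unordered pairs.
n_hypotheses = math.comb(n_statistics, 2)

print(f"{n_hypotheses:,}")  # 1,012,477,500 -- just over one billion
```

One billion candidate relationships, against a number of genuine causal relationships that is, as Silver says, orders of magnitude smaller.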
The second problem: a lack of context
Silver refers to the complexity that is an inevitable consequence of scale – the unimaginable number of mathematical combinations. But, even if it were not complex, it is easy to demonstrate why the real world cannot be understood through simple mathematical association alone.
For example, a computer might notice a correlation between the sales of men’s shorts and ice cream, but it cannot know whether the ice cream sales cause the sale of shorts (perhaps because the ice cream drips on bare legs rather than on fabric) or vice versa.
It takes a human being to spot that both are caused by something else altogether, which might not appear in the data set at all – temperature. The difficulty of spotting causal patterns in data is complicated further when we have to factor in the time dimension as well. Something that we observe now might be the result of actions taken one month, one quarter or even a year ago, and the key event might not have been captured as ‘data’ at all – like ‘that’s when we started using cute animals in our TV adverts’.
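The shorts-and-ice-cream trap is easy to reproduce. In this hypothetical toy model (the numbers are invented for illustration), temperature drives both sales series, neither causes the other, and yet the two correlate strongly:

```python
import random

random.seed(0)

# Invented daily temperatures for a year (degrees Celsius).
temps = [random.uniform(5, 35) for _ in range(365)]

# Both sales series are driven by temperature plus independent noise;
# neither has any causal effect on the other.
ice_cream_sales = [2.0 * t + random.gauss(0, 5) for t in temps]
shorts_sales = [1.5 * t + random.gauss(0, 5) for t in temps]

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# A strong positive correlation, entirely due to the hidden common cause.
print(round(corr(ice_cream_sales, shorts_sales), 2))
```

The correlation comes out close to 0.9 – but drop the temperature column from the data set, as in the example above, and no amount of number-crunching will reveal why.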
Arguably we only know for sure that a relationship is causal rather than purely a correlation through action – when we do something and get the response we expect. Analysis alone simply provides us with a plausible hypothesis to test. I could go on, but you should have got the message by now.
This technological stuff might be great for letting you know that ‘people who order this item also bought…’ but, because this ‘insight’ is the product of a relatively simple correlation, I wouldn’t use it to do anything complicated, like choosing what meal to cook for my in-laws this evening.
This is an extract from Steve Morlidge’s new book, Present Sense.