“Facts are stubborn things, but statistics are pliable.” – Mark Twain

In these challenging and uncertain times our propensity to follow the latest news stories seems heightened, with a focus on new and unfamiliar terms. One such term is the ‘R number’, the effective reproduction number of COVID-19. The key level of the R number is 1 – if it is higher than 1 we run the risk of exponential growth of the disease; below 1, the virus declines through time. By way of example, an R of 3.5 means 100 people with COVID-19 would infect 350, who in turn would infect 1,225, and so on. The BBC has illustrated the impact of different levels of the R number from a base of 1,000 cases.
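To make the arithmetic concrete, here is a minimal sketch of that sort of illustration (my own, not the BBC's). It assumes, simplistically, that each generation of cases produces R times as many new cases, ignoring immunity, interventions and the timing of infections.

```python
# A minimal sketch of new cases per generation under a fixed R,
# assuming each generation simply produces R times as many new cases
# - ignoring immunity, interventions and the timing of infections.

def cases_by_generation(r, base_cases=1000, generations=5):
    """Return the number of new cases in each successive generation."""
    cases = [float(base_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * r)
    return cases

for r in (0.6, 0.9, 1.0, 1.5, 3.5):
    trajectory = ", ".join(f"{c:,.0f}" for c in cases_by_generation(r))
    print(f"R = {r}: {trajectory}")
```

Even this toy version shows the knife-edge around 1: at R = 0.9 the 1,000 cases dwindle, while at R = 1.5 they more than septuple within five generations.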

In a recent news article, the presenter explained the likely impact on the R value under different scenarios, such as the opening of schools or the gradual movement back to work. I was struck by the confidence assumed in the scenarios and the precision of the expectations. As someone who knows nothing of epidemiology, I found it somewhat comforting that the experts are able to model their expectations with such precision across a range of scenarios. Unfortunately, that comfort was dashed when the presenter went on to say that the R number in the UK was somewhere between 0.6 and 0.9. That seemed quite a wide margin for an analysis that had previously appeared so certain.

This led me to do some fairly basic research into the R number, and to the realisation that, as with so many statistical measures, it is the context of the analysis that matters when judging the value of the output. There are different ways of calculating R. In the absence of accurate and timely data, the number is estimated using a combination of sample data and modelling techniques. There is a lag between the collection of the data and the output. The data also comes from across the country, when ideally we need to know local levels of infection.
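A deliberately naive sketch – emphatically not a real epidemiological model – can show why the estimate arrives as a range rather than a point. Here R is treated simply as the ratio of cases between successive generations, and the case data is repeatedly resampled with noise to mimic imperfect surveys; every number below is invented for illustration.

```python
# A deliberately naive illustration - not a real epidemiological model -
# of why R is quoted as a range: estimate R as the ratio of cases between
# successive generations, from repeatedly resampled noisy case data.

import random

random.seed(42)

true_r = 0.75
true_cases = [1000.0]
for _ in range(6):
    true_cases.append(true_cases[-1] * true_r)

estimates = []
for _ in range(200):  # each pass mimics one noisy survey of case numbers
    sampled = [c * random.uniform(0.8, 1.2) for c in true_cases]
    ratios = [sampled[i + 1] / sampled[i] for i in range(len(sampled) - 1)]
    estimates.append(sum(ratios) / len(ratios))

print(f"True R: {true_r}")
print(f"Estimated R range: {min(estimates):.2f} to {max(estimates):.2f}")
```

Even with a fixed underlying R, modest sampling noise spreads the estimates across a band – a crude analogue of the 0.6 to 0.9 range the presenter quoted.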

So why am I referencing the R number in a blog about investment? Because I see a parallel with what I believe to be a deep flaw in our industry – the obsession with statistical numbers in isolation and, worse still, the making of decisions on the basis of those numbers alone.

We obviously need a basis for assessing the risk and return of potential opportunities, but monitoring those opportunities against benchmarks and over short periods of time beguiles us into a sense of certainty about the performance of those investments that, in all likelihood, is not there. We see this in many forms in the institutional investment industry, from the reliance on models for our asset allocation decisions, through to the construction of benchmarks and the use of statistical measures to assess the degree of variation from those benchmarks.
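Take one such measure as a concrete instance: tracking error, the standard deviation of a portfolio's returns relative to its benchmark, reduces the whole relationship to a single figure. The sketch below uses entirely invented monthly returns, purely for illustration.

```python
# Illustrative sketch only: tracking error, a common statistical measure
# of variation from a benchmark, computed from a short series of
# entirely invented monthly returns.

import statistics

portfolio = [0.021, -0.034, 0.015, 0.042, -0.011, 0.008]  # hypothetical returns
benchmark = [0.018, -0.029, 0.019, 0.035, -0.015, 0.004]  # hypothetical benchmark

active = [p - b for p, b in zip(portfolio, benchmark)]  # excess return each month
tracking_error = statistics.stdev(active)  # sample standard deviation

print(f"Monthly tracking error: {tracking_error:.4f}")
# Six data points yield a precise-looking figure, yet say little about
# how the portfolio will actually behave over a meaningful horizon.
```

The point is not that the calculation is wrong, but that six months of data produces a number that looks far more authoritative than it is.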

I should emphasise that I am not against the use of these measures and techniques per se. Rather, mindful of Peter Drucker's dictum that ‘what gets measured gets managed’, I fear that we place too great a reliance on the data. As my actuarial colleagues of many years past used to say ahead of presenting asset/liability model results to trustees, it is important to use the data as a tool to help guide the decision – not to treat it as some absolute truth.

It is in times such as these, when we face significant uncertainty about economics, financial markets and, indeed, the fabric of society itself, that there is ever more pressure to act. Whilst that may be the case, I would urge that we avoid the knee-jerk reaction, take pause and consider the basis of the data in front of us; ponder the context and offer challenge. If we do all of this in conjunction with the data, our outcomes are more likely to be fruitful and enduring.
