Taking Stock

When Britain's official number-crunchers revealed last week that the economy did a double-dip off the three-metre board in the first quarter, most people took the unexpected bad news of yet another recession at face value. The currency stumbled, markets grew more jittery than usual and critics quickly blamed aggressive, ill-timed government austerity.

Prime Minister David Cameron pronounced the numbers "very, very disappointing," blamed the euro zone's own spiralling woes and declared: "What is absolutely essential is we take every step we can to help our economy out of recession."

But perplexed economy watchers found the negative GDP number unconvincing – and not only because it was based on about 40 per cent of the data that will ultimately be collected. Other economic surveys and soundings have been signalling at least some sputtering growth.

"I'm not saying all the data is dodgy. But my impression was that the economy had a reasonably good January, had a bit of a relapse in February but then had a reasonably good March," says Howard Archer, IHS Global Insight's estimable chief British economist. "Over all, I thought the economy grew modestly."

It's yet another example of how seemingly definitive numbers can amount to a lot less than meets the eye in the cloudy world of finance and economics. They can – and frequently do – prompt lousy decisions by governments, businesses and investors who fail to explore what the data actually mean.

"It's amazing how people take numbers for granted," says Philip Green, co-author of a new book, misLeading Indicators, that explains why it's essential to delve deeply into indicators to understand what they actually indicate, what conclusions can be safely drawn from them and whether they have any value at all.

"There are so many misleading indicators out there," says Mr. Green, a statistics expert whose Mississauga-based consulting firm, Greenbridge Management, helps businesses separate the statistical wheat from the chaff. "I'm just trying to give people the tools that will help them be a little more informed."

What constitutes a misleading indicator? "It's one from which you make an unreasonable or even a wrong inference," he says flatly.

Most of us know what it means when someone says the temperature is minus 20 degrees. But with GDP, inflation, stock market averages and a host of other economic and financial gauges, "you can't really know what the numbers mean unless you understand the definition."

So when faced with something like preliminary British GDP numbers, people have to probe beyond the headlines rather than jump on the initial reading. They may decide from anecdotal research that the situation is even worse than the numbers indicate. The stats gatherers will revise the first-quarter figure next month, when they factor in expenditures, and probably again the month after that. During the Great Recession of 2008-09, the plunge from peak to trough was originally reported at 5.5 per cent over six quarters; revised data showed the economy nosedived 6.5 per cent in five quarters.
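The peak-to-trough arithmetic itself is simple; what a revision changes is the series underneath it. A minimal sketch in Python, using illustrative placeholder figures rather than the real official data (only the headline percentages are chosen to match the reported revisions), shows how a revision can deepen the decline and shorten the slide at the same time:

```python
# Peak-to-trough decline in a quarterly GDP index. The figures are
# illustrative placeholders, not the actual statistics-agency series.
initial = [100.0, 99.2, 97.8, 96.6, 95.6, 94.8, 94.5]  # trough six quarters after the peak
revised = [100.0, 98.6, 96.9, 95.3, 94.1, 93.5]        # deeper trough, five quarters after the peak

def peak_to_trough(series):
    peak = max(series)
    trough = min(series[series.index(peak):])  # lowest point after the peak
    return (trough - peak) / peak

print(f"Initial estimate: {peak_to_trough(initial):.1%}")  # -5.5%
print(f"Revised:          {peak_to_trough(revised):.1%}")  # -6.5%
```

The headline number, in other words, is only as settled as the data points feeding it.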

Mr. Green and co-author George Gabor, a retired professor of statistics at Dalhousie University, take particular delight in skewering the risk models used by most banks and thoroughly discredited during the Great Financial Meltdown of 2008.

Banks, including the Wall Street houses that blew billions on bad bubble bets, thought they were protected from extreme losses by something called value at risk (VaR). The idea was to estimate the most a bank could lose on its trading floor on any given day, a threshold that should be breached only 1 per cent of the time. But their losses shot way past what they thought remotely possible, because of a critical mistake.
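For readers curious about the mechanics, the simplest textbook version is historical-simulation VaR, which just reads a percentile off past returns. The sketch below assumes that approach and uses simulated data; actual bank models were more elaborate, but the premise Mr. Green attacks is the same:

```python
import numpy as np

def one_day_var(returns, confidence=0.99):
    """Historical-simulation VaR: the loss threshold that past daily
    returns breached only (1 - confidence) of the time."""
    # The 1st percentile of the return distribution is the cutoff
    # below which only 1 per cent of observed days fall.
    cutoff = np.percentile(returns, (1 - confidence) * 100)
    return -cutoff  # quote VaR as a positive loss figure

# Illustration with simulated daily returns (hypothetical data,
# not any actual trading book).
rng = np.random.default_rng(seed=1)
daily_returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

print(f"1-day 99% VaR: {one_day_var(daily_returns):.2%} of portfolio value")
```

The catch is baked into the percentile call: the number is computed entirely from past data, so it is silent about days when the market's structure changes and the past stops being a guide.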

"It was based on the incredibly faulty premise that risk can be measured," Mr. Green says. "So many people act as if it can. The idea is contemptuous of the laws of physics."

What the risk managers should have been worried about as soon as losses far exceeded expectations was the probability that changes had occurred that affected the fundamental structure of the market. "That probability is not one you can measure. You assign it based on what you know about what is going on in the markets," Mr. Green says. "You can't measure probabilities from data."

Mr. Green is sympathetic toward bond-rating agencies, which came under intense fire for the high ratings that helped Wall Street peddle risky mortgage-backed securities to unsuspecting pension funds and the like. Leaving aside the issue of possible conflicts of interest, he argues that the ratings "were based on assumptions they had about the probability of defaults. What actually happens doesn't mean your probability assignment was wrong. People think it does, because they think risk is being measured."

Meanwhile, despite all that has happened, investors still put too much faith in numbers. To separate the useful from the misleading, he suggests people take the time to understand what goes into a particular indicator, how it is defined and whether reasonable inferences can be drawn from it. Then, once it meets those conditions, comes the crucial but too often ignored final question: Does it correspond to reality?
