15 February 2018

How statistics distort policy


Macroeconomic statistics shape our world. Just witness markets’ reaction to the higher-than-expected US inflation numbers reported this week.

Often, these data have economic and political consequences of their own. Sir John Cowperthwaite, the self-effacing civil servant widely credited with Hong Kong's triumphant emergence as a free-market powerhouse, famously banned the collection of aggregate economic statistics. He feared that they might encourage overbearing bureaucrats: “If I let them […], they’ll want to use them for planning.”

The rise of macroeconomic data is, as Cowperthwaite warned, intimately bound up with the mixed economy. Gross Domestic Product, perhaps the most closely watched measure of an economy’s health, became widely used only after World War II. National income, a precursor of GDP, had been standardised by Simon Kuznets in the 1930s, as part of the New Dealers’ attempt to gauge the impact of the Great Depression and the countervailing effect of their own interventionist policies.

The Consumer Price Index and related inflation measures rose to prominence only when systematic rises in the price level came to the attention of economists. Monetary aggregates, in turn, were brought into focus by Milton Friedman and others as they pointed to the tight empirical relationship between money creation by central banks and accelerating inflation. Macro aggregates are both cause and consequence of the philosophy which grants the government a large role in economic management.

The state’s performance as it anticipates and reacts to those statistics has hardly been flattering. Centrally planned economies, which took the notion of statistics-powered intervention to the extreme, fell by the wayside. Even middle-of-the-road “demand management”, as practised for instance in Britain in the 1960s and 1970s, proved disappointing as politicians invariably prioritised the short term. A correlation emerged between the business cycle and the electoral cycle.

Nor are public officials averse to the creative accounting which they (rightly) decry so much in the private sector. Consider Argentina, where the former Peronist President Cristina Kirchner pressured the statistics office to report inflation consistently below outturn. Nicolás Maduro, the Venezuelan satrap, has altogether dispensed with this fiction by simply banning official inflation figures.

The use of macro statistics for public policy is thus technically questionable and politically suspect. But lately, a new problem has surfaced to challenge government statisticians: their numbers seem to be doing an increasingly poor job of reflecting the true performance of economies.

It isn’t a new concern. Economists such as Harvard’s Martin Feldstein have for some years been warning that the failure of statistics bodies to fully account for quality improvements and new products leads them to overestimate inflation and underestimate real output growth. Thus the economic gains in terms of lives saved and improved by healthcare innovations are ignored in CPI and GDP calculations. Similarly, advances in the environmental quality, safety and performance of cars over the past 40 years only partially show up in national income accounts.

This sort of consistently one-sided mismeasurement should be of great concern to policymakers, for two reasons. First, to the extent that statistical outturns shape policy (eg, the central bank’s interest-rate target, annual wage increases as per collective bargaining arrangements, rail fares, and so on), overestimation leads to inaccurate adjustments, creating an inflationary bias that in turn tends to raise the trend of the price level. Secondly, because macroeconomic statistics are the economic data to which journalists and the public are most exposed, false figures will shape a false economic narrative.

Consider the malaise that has gripped many Western economies since the financial crisis. Whilst employment is up and the stock market, at least until recently, has registered record highs, there are worries that the good times of the pre-crisis period may never return because labour productivity and real wages are growing more slowly than they used to. But labour productivity is the ratio of GDP to hours worked, whereas wages have to be adjusted for inflation to give a useful measure of real income growth. If we are underestimating increases in output which come from innovation and better quality, and in turn overestimating cost inflation, both recorded productivity and wages will continue to disappoint, even though well-being is in fact increasing.
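The arithmetic behind this point can be sketched briefly. The figures below are hypothetical, chosen only to show the mechanism: if measured inflation overstates true inflation, the same nominal pay rise yields a much smaller recorded real-wage gain than the actual one.

```python
# Illustrative sketch with hypothetical numbers: an upward bias in
# measured inflation depresses recorded real-wage growth.

nominal_wage_growth = 0.030  # 3.0% nominal pay rise (assumed)
measured_inflation = 0.025   # 2.5% inflation as reported (assumed)
true_inflation = 0.015       # 1.5% if quality gains were fully captured (assumed)

# Real growth is the nominal rate deflated by the relevant price index.
recorded_real = (1 + nominal_wage_growth) / (1 + measured_inflation) - 1
actual_real = (1 + nominal_wage_growth) / (1 + true_inflation) - 1

print(f"recorded real wage growth: {recorded_real:.2%}")  # ~0.49%
print(f"actual real wage growth:   {actual_real:.2%}")    # ~1.48%
```

On these assumed numbers, a one-percentage-point overstatement of inflation makes real wages appear to grow at a third of their actual pace.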

There is as yet no consensus on the extent to which mismeasurement explains bad post-crisis macroeconomic outcomes. But we can be certain that it is a problem, and one likely to worsen over time, for two reasons. The first is that a one-sided bias is cumulative, meaning that mismeasurement will cause a growing gap between recorded and actual outcomes. The second is that the portion of the economy vulnerable to mismeasurement is expanding: the cycle of innovation is accelerating, the share of services in developed economies is rising, and more of output growth now comes from quality improvements rather than increased production of the same goods.
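The cumulative point can be made concrete with a stylised compounding exercise. The growth rate and bias below are assumptions for illustration, not estimates: even a modest annual understatement, applied year after year, opens a sizeable wedge between the recorded and the actual level of output.

```python
# Sketch with hypothetical numbers: a constant one-sided measurement bias
# compounds, so the gap between recorded and actual output widens every year.

true_growth = 0.025  # actual annual real growth (assumed)
bias = 0.005         # annual understatement from mismeasurement (assumed)

actual_level = 1.0
recorded_level = 1.0
for year in range(20):
    actual_level *= 1 + true_growth
    recorded_level *= 1 + true_growth - bias

gap = actual_level / recorded_level - 1
print(f"after 20 years, recorded output understates actual by {gap:.1%}")
```

On these assumed figures, half a percentage point of annual bias leaves recorded output roughly a tenth below the actual level after two decades.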

Half a century ago, US presidential candidate Robert Kennedy declared that national income measured “everything […], except that which makes life worthwhile.” His was probably an unfair criticism, because for many years GDP has helped us to assess the relative health of economies. So have CPI and other aggregate statistics, but they are long past their expiry date. And so, perhaps, are the theories of economic planning which spawned their creation.

Diego Zuluaga is Head of Financial Services and Tech Policy at the Institute of Economic Affairs.