How Well Do You Measure Risk?

Forecasts based on typical techniques are “indistinguishable from random noise” for shorter time horizons, according to a Dutch central bank researcher.

The most common forms of risk measurement may not be the most accurate, according to research from the Dutch central bank, De Nederlandsche Bank (DNB).

The study—an examination of the accuracy and reliability of risk analysis techniques—focused on the difference between value-at-risk (VaR) and expected shortfall (ES). Of the two methods, VaR produced the more accurate forecasts, according to DNB researcher Chen Zhou and the London School of Economics’ Jon Danielsson.

This counters the common view that VaR is “inherently inferior to ES.”

“Perhaps swayed by the theoretical advantages, ES appears increasingly preferred both by practitioners and regulators,” the study’s authors wrote.

In fact, the Basel Committee, the international standard-setter for bank regulation, suggested replacing 99% VaR with 97.5% ES in its latest Basel III market risk proposal.

“This will lead to less accurate risk forecasts,” Danielsson and Zhou argued. “If the regulators are concerned with precision, VaR is preferred.”

To compare the two techniques, the researchers conducted risk forecasts for sample sizes ranging from one year to 50 years. They found that both methods were “highly sensitive” to the sample size, with uncertainty increasing “rapidly” as the sample size decreased. For samples below a few thousand days, the uncertainty became “considerable.”

“At the smallest samples, often the most commonly used in practice, the uncertainty is so large that the risk forecast is essentially indistinguishable from random noise,” the researchers wrote.
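The sample-size effect the researchers describe can be illustrated with a quick bootstrap experiment. The sketch below is a hypothetical illustration, not code from the paper: the function names (`historical_var`, `historical_es`, `bootstrap_sd`), the simulated Student-t returns, and the sample sizes are all assumptions chosen for demonstration. It estimates historical 99% VaR and 97.5% ES from fat-tailed simulated losses and uses the bootstrap standard deviation as a rough proxy for forecast uncertainty at each sample size.

```python
import math
import random
import statistics

def historical_var(losses, level):
    """Empirical VaR: the loss quantile exceeded with probability 1 - level."""
    s = sorted(losses)
    idx = min(int(level * len(s)), len(s) - 1)
    return s[idx]

def historical_es(losses, level):
    """Empirical ES: the average loss in the tail beyond the VaR quantile."""
    s = sorted(losses)
    idx = min(int(level * len(s)), len(s) - 1)
    tail = s[idx:]
    return sum(tail) / len(tail)

def bootstrap_sd(losses, estimator, level, n_boot=500, seed=0):
    """Bootstrap standard deviation of a risk estimator -- a rough
    proxy for forecast uncertainty at this sample size."""
    rng = random.Random(seed)
    n = len(losses)
    estimates = [estimator([losses[rng.randrange(n)] for _ in range(n)], level)
                 for _ in range(n_boot)]
    return statistics.stdev(estimates)

rng = random.Random(42)

def draw(n, df=3):
    # Simulated heavy-tailed daily losses (hypothetical data):
    # Student-t built as normal / sqrt(chi-square / df)
    return [rng.gauss(0, 1) / math.sqrt(rng.gammavariate(df / 2, 2) / df)
            for _ in range(n)]

for days in (250, 5000):  # roughly 1 year vs. 20 years of daily data
    sample = draw(days)
    print(f"{days:5d} days: "
          f"sd(99% VaR) = {bootstrap_sd(sample, historical_var, 0.99):.3f}, "
          f"sd(97.5% ES) = {bootstrap_sd(sample, historical_es, 0.975):.3f}")
```

Run on the small sample, the bootstrap spread is a large fraction of the estimate itself, consistent with the paper’s point that short-horizon forecasts carry little signal; the spread shrinks markedly at the larger sample size.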

ES forecasts carried more uncertainty than VaR forecasts, both when the two techniques were compared at the same probability level and when using the Basel III combination of 97.5% ES and 99% VaR.

The ES technique was found to have one advantage: It was harder to manipulate than VaR. Since manipulated risk forecasts are also inaccurate ones, this resistance might be a reason to prefer ES, Danielsson and Zhou noted.

But regardless of the method chosen, the researchers said risk forecasts will remain “virtually indistinguishable from random noise” except when derived from large sample sizes.

“Common practices and trends in risk management are misguided,” Danielsson and Zhou wrote. “It is a concern that vast amounts of resources are allocated based on such flimsy evidence.”

Read the full paper, “Why Risk is so Hard to Measure.”

Related: Is Risk Measurement Damaging Long-Term Performance?