Despite being trendy topics, big data and machine learning have been around for a long time. More than 21 years ago, Deep Blue became the first computer to win a game of chess against grandmaster Garry Kasparov. Back then, the computer relied on brute force and a (big data) database of chess moves to calculate the best possible next move. Although Deep Blue’s win was a first, more interesting is that losing made Kasparov aware of the potential for using computing power to improve play and decision-making in chess.
After the defeat, Kasparov started a new league of chess called Advanced Chess, in which human players augment their game with computer input. This cooperation resulted in fewer mistakes from faulty input data, while the human players gained access to an immense database of chess moves to optimize their game. Winning a game of Advanced Chess takes “Two to Tango”: both the human player and the computer must be optimized to work together.
More recently, traders, asset managers, and chief investment officers (CIOs) have begun making extensive use of mathematical optimization and machine learning to identify opportunities for outperformance. Quite apart from the difficulty of explaining these techniques to trustees, using them naively to make unquestioned investment decisions is bound to lead to unintended consequences.
Whereas a game of chess is bound by fixed rules, the rules of investing keep changing. Naive asset allocation optimization is therefore known to lead to error maximization, especially when it relies on a simple model fitted to historic data. This error proneness results mainly from oversimplifying the relationship between historic data and the present state of markets, policy, and supply/demand dynamics. To understand how history rhymes with the present, human interpretation and situational awareness become critical.
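The estimation-error problem can be made concrete with a minimal mean-variance sketch. This is a hypothetical two-asset illustration, not the ABB model; the expected returns and covariances below are invented for the example. With highly correlated assets, a shift in the return estimates far smaller than typical estimation error flips the optimal allocation entirely:

```python
# Illustrative sketch (hypothetical numbers, not ABB's model): naive
# mean-variance optimization amplifies small errors in expected returns.
import numpy as np

def mv_weights(mu, cov):
    """Unconstrained mean-variance optimal weights, normalized to sum to 1."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

# Two highly correlated assets with an assumed covariance matrix.
cov = np.array([[0.040, 0.038],
                [0.038, 0.040]])
mu = np.array([0.060, 0.055])       # estimated expected returns

w_base = mv_weights(mu, cov)        # heavily tilted toward asset 1

# Swap the estimates by just 0.5% -- well within normal estimation error.
mu_alt = np.array([0.055, 0.060])
w_alt = mv_weights(mu_alt, cov)     # allocation flips toward asset 2

print(w_base, w_alt)
```

The instability comes from the near-singular covariance matrix: the optimizer treats the two assets as close substitutes and bets aggressively on whichever has the marginally higher estimated return.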
Within the ABB pensions team, we have been combining human market research with quantitative big data models for the past seven years. Key to improving decision-making was getting the trustees and board members to gain confidence in the extensive use of quantitative models. To do so, it was important to show how the historic rhyming is part of the model output, but also that the output requires a layer of human criticism. Real-life examples, such as increased tail-risk correlations, business-cycle behavior, and non-normal return distributions, provide the decision robustness boards need to feel comfortable with a given change.
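The non-normality point can be illustrated with a small simulation. This is purely illustrative; the distributions and parameters are assumptions, not ABB data or a claim about any particular market:

```python
# Illustrative only: tail behavior of Gaussian vs. fat-tailed (Student-t)
# simulated "returns" -- not actual market data.
import numpy as np

rng = np.random.default_rng(42)
normal_r = rng.normal(0.0, 0.01, size=100_000)     # Gaussian returns
fat_r = rng.standard_t(df=3, size=100_000) * 0.01  # heavy-tailed returns

def tail_frequency(x, k=4.0):
    """Fraction of observations more than k standard deviations from the mean."""
    z = np.abs((x - x.mean()) / x.std())
    return float((z > k).mean())

# Under normality, a 4-sigma move is roughly a 1-in-16,000 event; under
# the fat-tailed distribution it occurs far more often.
print(tail_frequency(normal_r), tail_frequency(fat_r))
```

A risk model calibrated to the Gaussian column would badly understate how often the fat-tailed series produces extreme losses, which is one reason model output needs a layer of human criticism.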
Gaining the confidence of board members made it possible to integrate these quantitative aspects into a more dynamic investment decision process, leading to better results than continuously rebalancing back to the long-term strategic asset allocation. Despite our dynamic investment focus on assets, opportunities and risks continue to be assessed against our balance sheet.
With asset performance of more than 30% since 2013, our pension funds have significantly outperformed the multi-asset and diversified funds of the largest and most respected managers. Initially, the results came most clearly from dynamic asset allocation, which has provided a relatively stable additional performance of 0.2% per annum. Especially when market stress is around the corner, quantitative support proves very beneficial in removing anchoring and availability biases from decision-making. It delivered good results during the oil shock of 2015 and provided early warning signs of the Q4 2018 equity market shocks.
The quantitative approach was also used to find investment managers with complementary downside-protecting strategies and to create an overlay equity portfolio aimed at reducing inherent implementation biases across asset classes. During the first runs of the quantitative optimization, we discovered that the portfolio itself carried a large bias to US duration, which was adjusted manually to prepare for the rising-rate environment. The so-called “completion portfolio” was implemented in Q4 2016 and had a relatively hard time in the upward markets that followed the US presidential election. With markets turning downward in Q4 2018, however, the equity portfolio proved its resilience and made up most of the underperformance incurred during the goldilocks era.
For this particular pension fund, all but one active manager outperformed its benchmark in 2018. Active asset management resulted in an outperformance of 0.8% in 2018, while dynamic asset allocation delivered 0.7%. The outperformance from manager selection highlights the importance of not relying solely on dynamic asset allocation at this stage of the cycle. With valuation dislocations increasing across conventional benchmarks, it has become vital to thoroughly analyze the underlying biases in risk drivers and to ask why those risk drivers would provide value on an economic or technical supply/demand basis.
Too often, we see machine learning strategies driven entirely by (limited) historic data, without taking account of the qualitative aspects of our current day and age. Unlike chess, the investment environment isn’t bound by fixed rules; instead, both fundamental and behavioral factors play a major role. Big data and machine learning-supported techniques have the potential to greatly improve decision-making, but they should always be used in combination with human creativity, situational awareness, and intuition. Nowadays, as in dancing and Advanced Chess, it takes “Two to Tango” in asset management as well.
Rens Goetz is Head of Asset Management at ABB Ltd., a pioneering technology leader in electrification and automation. ABB operates more than 120 DB and DC plans across 100 countries, totaling more than $16 billion in AUM. He will be a technology panelist at the 2019 Chief Investment Officer Summit on May 16-17 at the Harvard Club in New York City.