Unmasking the Chicago School
Yaneer Bar-Yam doesn’t shy away from big topics. “Let me start with statistics,” he says over the phone from his office at the New England Complex Systems Institute (NECSI), surrounded by the Kendall Square campus of the Massachusetts Institute of Technology. “Basically, science as it has existed was based upon the assumption of statistics, and the underlying assumption is that things are fairly independent, which leads to normal distributions.” Think of the classic Bell Curve, a graph of the “normal” distribution of data that often occurs in nature, from neck lengths in a population of giraffes to the height of sunflowers. In those normal distributions, says Bar-Yam, factors and data points “are fairly independent, means [averages] are fairly regular things, and deviations from the mean follow fairly normal patterns.”
By understanding the distribution of data, scientists can understand averages and likelihoods, and thus make predictions. Take the decay of nuclear particles. We have no idea when any given atom of Plutonium-239 is going to decay into Uranium-235, but we do know the odds of an atom of Plutonium-239 decaying over a given period of time. Take a decent-sized chunk of the stuff and there will be so many atoms potentially decaying that we can calculate precisely how much of it will decay in a given period of time and how much radiation it will kick off. “When applied to markets,” Bar-Yam continues, “you end up with the standard asset pricing models,” like the Capital Asset Pricing Model, derivatives pricing models like Black-Scholes, Value-at-Risk models, or the more complicated benchmarking models institutional investors and asset owners use that take portfolio risk into account. These models have been remarkably useful, yielding Nobel prizes for the theoreticians and untold sums of money for the financial practitioners who wield them.
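The arithmetic behind that kind of prediction is simple exponential decay. A minimal sketch, assuming the textbook 24,110-year half-life for Plutonium-239 (the sample span is arbitrary, chosen purely for illustration):

```python
# Fraction of a Pu-239 sample that decays over a given span, derived
# from the half-life alone: remaining fraction = 0.5 ** (t / half_life).
HALF_LIFE_YEARS = 24_110  # textbook half-life of Plutonium-239

def fraction_decayed(years: float) -> float:
    """Share of the original atoms that decay within `years`."""
    return 1.0 - 0.5 ** (years / HALF_LIFE_YEARS)

# No single atom's fate is knowable, but the aggregate is precise:
print(f"{fraction_decayed(1_000):.2%} of the sample decays in 1,000 years")
```

The point is the asymmetry the article describes: the individual event is pure chance, yet the collective behavior is predictable to several decimal places.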
The problem? These models—for years the deity that academics, politicians, and markets worshipped—don’t actually apply to financial markets.
For roughly the last four decades, the dominant conventional wisdom has been that markets are efficient, predictably random, and self-regulating, a set of ideas generally associated with the University of Chicago, home of leading free-market economists like Efficient Market Hypothesis creator Eugene Fama, Rational Expectations theorist Robert Lucas, and the late Monetarist, public intellectual, and regulatory foe Milton Friedman. (To be fair, there’s a definite distinction between the actual studies and journal articles of Chicago economists and some of the more unthinking dogmas espoused by financial industry lobbyists and know-nothing politicians, but it’s still the free marketeers of Chicago whose ideas so often provide the intellectual framework for their political arguments.) Yet post-2008, it’s hard to argue perfect rationality when the market is paying top dollar for repackaged NINJA loans, or make the case for efficiency when it’s lurching around like a herd of heifers in a china shop based on the musings of a credit analyst. Moreover, while the financial crisis has hardly resulted in a new consensus view of markets in Washington, on Wall Street, or much of anywhere else, it has certainly damaged Chicago’s reputation and greatly lessened its influence.
The Chicagoists themselves haven’t exactly waged the most effective public relations campaign. “I don’t know what a credit bubble means,” Fama recently told The New Yorker’s John Cassidy. “I don’t even know what a bubble means….I can tell a story very easily in which the financial markets were a casualty of the recession, not the cause of it.” His son-in-law, Chicago economist John Cochrane, blamed the stock slides of 2008 on President Bush’s September 24 economic address, and has said the recession was a good and necessary thing for the country. “We should have a recession,” said Cochrane. “People who spend their lives pounding nails in Nevada need something else to do.”
American economic history generally can be sliced up into fairly neat periods: the mercantilism of the colonial period; the nascent form of capitalism those colonies overthrew the British government to adopt; the laissez faire, Robber Baron Gilded Age; the increasing government involvement of the Progressive Era that helped give rise to Keynesian economics during the Depression; and, finally, the ascendance of Monetarism and market efficiency. So, what’s next? The answer, while still uncertain, likely lies with men and women such as Bar-Yam.
In the early 1960s, a French mathematician named Benoit Mandelbrot analyzed a century’s worth of cotton price movements and found fluctuations far more dramatic than the normal distribution models deemed possible. Further analysis of other commodities, even the price of stocks and bonds, found that they were similarly erratic. “In fact, the bell curve fits reality very poorly,” Mandelbrot and Richard Hudson wrote in their book The (Mis)Behavior of Markets. That was as of 2003, five years before Wall Street’s mortgage market blowup, the Flash Crash, the European debt crisis, or the debt-ceiling debacle. To explain why the theory and the real world simply don’t line up, Mandelbrot and Hudson offered two possibilities. That this was “a calamitous era that insists on flaunting all predictions.” Or, more likely, that “our assumptions are wrong.” Like any good economic model, normal distribution models are essentially constructed of assumptions on assumptions. In this case, the underlying assumption is that, from day to day, price fluctuations—even of the same stock—have about as much connection to one another as two sunflowers growing at the opposite ends of a field. Which is to say, they have virtually no connection at all. Prices are like the “random walk” of a drunkard heading home through a field, the theory goes: they lurch forward and fall back and dodge and weave all over with no real relationship or predictability.
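The drunkard’s walk the theory assumes is easy to simulate. In a toy sketch (the 1% daily volatility, starting price, and step count are invented for illustration), independent, normally distributed returns behave exactly as the bell curve promises: the sample volatility converges tightly to the assumed figure, and extreme days are vanishingly rare, which is precisely what Mandelbrot’s cotton data refused to do.

```python
import random

random.seed(0)  # reproducible illustration
DAILY_VOL = 0.01  # assumed 1% daily volatility

# A drunkard's-walk price series: each day's return is an independent
# draw from a normal distribution, exactly as the classical models assume.
price = 100.0
returns = []
for _ in range(10_000):
    r = random.gauss(0.0, DAILY_VOL)
    returns.append(r)
    price *= 1.0 + r

# Under independence, the sample standard deviation hugs the assumed
# volatility, and moves beyond four sigma almost never happen.
mean = sum(returns) / len(returns)
std = (sum((r - mean) ** 2 for r in returns) / len(returns)) ** 0.5
crash_days = sum(1 for r in returns if abs(r - mean) > 4 * std)
print(f"sample vol: {std:.4f}, days beyond 4 sigma: {crash_days}")
```

Run against real price histories instead of `random.gauss`, the four-sigma count comes out far higher than the normal distribution allows; that gap is Mandelbrot’s finding in miniature.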
As Mandelbrot showed, the models don’t fit actual fluctuations; what they miss, says Bar-Yam, is connectedness. To return to the nuclear decay analogy, Plutonium-239 always has a half-life of a little more than 24,000 years. “Yet what happens,” asks Bar-Yam, “when the decay of those atoms is no longer independent, when the decay of one atom affects the other atoms around it and how quickly they decay?” The answer, of course, is what happens inside a nuclear bomb or a nuclear reactor. In finance, it’s when markets collapse, or rise explosively, and seemingly uncorrelated financial instruments begin to follow and connect with one another in a self-reinforcing feedback loop. Such an event is often called a “fat-tail event” or a “Black Swan,” after Nassim Nicholas Taleb’s book; it’s also called, appropriately enough, a meltdown. To make the overly erratic data fit into a normal distribution model, says Bar-Yam, economists and finance quants often add complicated parameters and conditions to their models, replete with confusing terminology like heteroscedasticity and kurtosis, but all that does is cover a much simpler flaw. “You can’t patch up statistics when it totally breaks,” Bar-Yam continues, “and the reason it breaks is because of a very fundamental reason that exists everywhere, and that is dependency. Things depend on each other…but all the theory and data analysis [done in finance] relies on those assumptions, that interdependencies are weak enough that they aren’t important.” From the outside, this doesn’t sound like the most revolutionary idea; yet, in the world of financial economics, it borders on heresy.
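The jump from independence to dependence can be sketched as a toy chain reaction (the fanout, trigger probabilities, and cap below are arbitrary, invented purely for illustration): each failure can trigger its neighbors, and a small change in coupling flips the system from dying fizzles to runaway cascades.

```python
import random

random.seed(1)  # reproducible illustration

def cascade_size(trigger_prob: float, fanout: int, cap: int = 100_000) -> int:
    """Total failures in a branching chain reaction, capped for safety.

    Each active failure exposes `fanout` neighbors, and each neighbor
    fails independently with probability `trigger_prob`.
    """
    active, total = 1, 1
    while active and total < cap:
        new = sum(1 for _ in range(active * fanout)
                  if random.random() < trigger_prob)
        total += new
        active = new
    return total

# Mean offspring 0.8 (below 1): every cascade fizzles out quickly.
subcritical = [cascade_size(0.20, 4) for _ in range(200)]
# Mean offspring 1.2 (above 1): some cascades engulf the whole system.
supercritical = [cascade_size(0.30, 4) for _ in range(200)]
print(sum(subcritical) / 200, max(supercritical))
```

Nothing about any individual trigger changed between the two runs; only the strength of the dependency did. That is the difference between a lump of plutonium and a bomb, and, in Bar-Yam’s telling, between a normal trading day and a meltdown.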
The reason for this, like financial markets, is somewhat complex. The early insights of the Chicago School were more than just (more or less) correct: They were useful. During the 1960s and 1970s, the widespread national backlash against onerous government regulations and restrictive income tax levels (70% for top earners through much of the 1960s and 1970s) turned into a referendum on American freedom and liberty. Along came a bunch of eggheads from the dismal science saying that not only was all this taxing and regulating a threat to personal liberty, it was actually bad for the economy—helping establish Chicago as the epicenter of the free-market theories that still dominate today.
It seems strange, perhaps, that a model everyone knows doesn’t quite work should hold so much sway for so long, but it makes a bit more sense if we think of it as a stage in the evolution of economic thinking. In a brilliant essay published in the 1958 Annual Report of the Rockefeller Foundation, Dr. Warren Weaver broke the progression of science down into three basic stages. First came the ability to understand problems of simplicity, with only a few variables, like the behavior of planets caught up in gravitational orbits, or the effect of heat on the pressure of a gas. Once simple relationships were understood, scientists were able to move on to relationships of disorganized complexity, where there are a lot of variables, but which interact in such simple ways that statistics and the law of averages make them predictable—like the decay of nuclear particles or the exponential population growth of, say, rabbits. You may not be able to predict the amorousness or fertility of any given bunny-couple, but get 100 rabbits together in a nice lush thicket without any predators, and you’ll be able to guess how many more there’ll be a year later with surprising accuracy.
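Weaver’s disorganized complexity is easy to demonstrate: individual outcomes stay random while the aggregate becomes predictable. A toy sketch (the pair count and litter-size range are invented for illustration, not drawn from any biology):

```python
import random

random.seed(2)  # reproducible illustration

def offspring_after_a_year(pairs: int = 50) -> int:
    """Each pair produces an unpredictable number of offspring."""
    return sum(random.randint(0, 20) for _ in range(pairs))

# Any single pair is anyone's guess; the warren as a whole is not.
totals = [offspring_after_a_year() for _ in range(500)]
average = sum(totals) / len(totals)
print(f"average offspring across runs: {average:.1f} (expectation: 500)")
```

Each pair’s litter ranges anywhere from 0 to 20, yet the thicket-wide total lands within a few percent of 500 run after run. That law-of-averages reliability is exactly what the statistical asset-pricing models borrow.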
This, it could be argued, is roughly the state of many economic theories. We understand some simple relationships (print lots of money and its value will fall, producing inflation). We have a pretty good grasp of disorganized complexity: Over long enough time frames, markets are awfully rational and efficient, and those normal distributions and assumptions about randomness tend to work out pretty well. Our problem lies in what Dr. Weaver called the third stage of scientific understanding: the ability to understand issues of organized complexity—the kind of problems that life sciences like ecology and biology deal with. What will happen to that rabbit population if you put a single cat into the same thicket? Five cats? Five cats, and a fox that might eat rabbits, but might also eat cats? Suddenly, the relationships become more complicated, and the feedback loops between variables far more important. Start throwing medical biology into the equation (say, a grass fungus that can attack a rabbit’s pulmonary system) and things get really complicated.
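Organized complexity begins the moment those variables feed back on one another. A minimal sketch of the rabbits-and-predators feedback, using the classic Lotka-Volterra predator-prey equations (all coefficients and starting populations below are invented for illustration and are not from the article):

```python
# Discrete-time Euler integration of the Lotka-Volterra equations:
#   d(rabbits)/dt = a*rabbits - b*rabbits*foxes
#   d(foxes)/dt   = -c*foxes  + d*rabbits*foxes
a, b, c, d = 0.10, 0.01, 0.05, 0.001  # illustrative coefficients
dt, steps = 0.1, 2_000

rabbits, foxes = 60.0, 10.0
rabbit_history = []
for _ in range(steps):
    dr = (a * rabbits - b * rabbits * foxes) * dt
    df = (-c * foxes + d * rabbits * foxes) * dt
    rabbits += dr
    foxes += df
    rabbit_history.append(rabbits)

# Neither population settles: the feedback loop produces cycles,
# not a bell curve scattered around a fixed mean.
print(f"rabbits ranged from {min(rabbit_history):.1f}"
      f" to {max(rabbit_history):.1f}")
```

No averaging trick recovers a single “normal” rabbit count here; the system’s behavior lives in the loop between the two populations, which is the kind of structure complex systems research tries to capture.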
These are the kinds of situations the field Bar-Yam is pioneering—complex systems—tries to better understand. “Just because there are complicated dependencies in a system,” he says, “doesn’t mean you can’t understand it. Throughout nature and science, things depend upon each other in similar ways, but you need to understand those patterns of dependency if you want to understand the system.” This isn’t just abstract, ivory-tower theorizing. Conventional economic theory holds that, for the financial industry to be more stable, it should “diversify,” with links to multiple industries. During the 2000s, the financial industry did just that, and NECSI graphed the connections. Contrary to the theories, however, diversifying didn’t strengthen finance: The complicated nature of dependency meant that a tremor in any one segment of the economy could wash over the financial industry like a tsunami. A system designed, according to the old theories, to be particularly robust was in fact hardwired to collapse.
Complex systems analysis is, in some ways, akin to the big-picture techniques of macro investors—like the Tiger Fund’s Julian Robertson or Bridgewater Associates’ Ray Dalio—who look for larger trends in the world economy before predicting how the results of those trends will ripple outward into specific sectors and companies. There are also shades of the interplay between new information, overreaction, and overcompensation that help make up George Soros’ Theory of Reflexivity. However, where those investing ideas are rooted in instinct and anecdote, the goal of complex systems is to formalize the ideas into quantifiable patterns, the kind of things economics departments can teach—and Federal Reserve Governors and SEC regulators can implement.
That’s where the field of behavioral economics, which tries to understand the interplay of psychology with markets, enters. A standard criticism of behavioral economics is that while it’s chock-full of interesting observations and studies, it lacks a unified theory to explain the economy. “They’ve got lots of interesting curiosity items,” Fama has said dismissively of behavioral finance. “I don’t know that it’s progressed much beyond the level of curiosity items.” Nowhere has this lack of coherence been more obvious than in asset pricing. The flawed assumptions and real-world breakdowns of CAPM and Black-Scholes have been known for years, but nothing has risen to replace them. “I don’t know what asset pricing would look like in a world that really took behavioral finance seriously,” Fama says. “If you really think prices are incorrect, what are you going to tell me about the cost of capital?” Yet, that might be changing. A new study by economists Gunduz Caginalp and Mark De Santis, published in the August issue of American Behavioral Scientist, provides a new model for understanding price fluctuations, serving as an alternative to the “random walk” theory and the prevailing asset-pricing models. The specifics are complex but, by isolating the impact of new information and the “noise” of constant trading, Caginalp and De Santis found that, absent other factors, “the best predictor of tomorrow’s price is halfway between today’s price and the price indicated by the trend from yesterday to today”—almost like a body in motion whose velocity steadily decreases as gravity or wind resistance acts upon it. They also found that a larger money supply raised prices, something nearly everyone on Wall Street knows, but which is considered impossible by Chicago professor Robert Lucas’ rational expectations theory.
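That quoted predictor reduces to a one-line formula: extrapolate the yesterday-to-today trend one day forward, then take the midpoint of that extrapolated price and today’s price. A sketch (the function name is mine, not the paper’s):

```python
def caginalp_desantis_predictor(today: float, yesterday: float) -> float:
    """Midpoint of today's price and the trend-extrapolated price.

    The trend from yesterday to today, carried one more day,
    points to 2*today - yesterday; the predictor stops halfway there.
    """
    trend_extrapolation = today + (today - yesterday)
    return (today + trend_extrapolation) / 2

# A stock that moved 100 -> 102 is predicted at 103, not 104:
# momentum carries it forward, but only at half strength.
print(caginalp_desantis_predictor(102.0, 100.0))  # 103.0
```

The half-strength momentum term is what produces the body-in-motion decay the article describes: each day the trend is carried forward, but damped.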
The long and short of their findings is that an asset doesn’t have any single “correct” equilibrium price “determined by the fundamental value as classical finance suggests but rather there exists a spectrum of equilibrium prices” that depend on things like money supply, momentum, and the behavior of the investing public. Fit this into a framework of inquiry like Complex Systems, and Caginalp and De Santis’ work could help knit together a more comprehensive, usable alternative to the Chicago School.
If this more mathematically complex view of economics is a new direction for the field, there is a potential paradox at work. Didn’t the “economics profession [go] astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth,” as economist Paul Krugman put it in The New York Times, with banks relying too heavily on ratings and risk models to make decisions for them? In a sense, yes, but the problem wasn’t the models themselves so much as overconfidence in them, with people treating models that were generally pretty accurate as though they were infallible. In essence, the Chicago School failed when it oversimplified the unsimplifiable. Any school of thought that attempts to replace it will, in all likelihood, try to avoid the same mistake by recognizing its own limitations. Medicine and biology recognized what kind of problems they were facing, addressed them as such, and advanced—not by applying ever more advanced statistics and complicated theories to problems the numbers don’t quite fit, but by carefully observing and classifying conditions, finding correlations, and drawing theories out from reality. Academics—and, by extension, traders, bankers, chief investment officers, and politicians—would be wise to follow suit.
