Earlier this month SEED magazine published this very interesting article by George Sugihara, a theoretical biologist at the Scripps Institution of Oceanography, on how deep mathematical models tie together climate change, epileptic seizures, fishery collapses, and the risk management surrounding the global financial crisis. Excerpts:
[...] Economics is not typically thought of as a global systems problem. Indeed, investment banks are famous for a brand of tunnel vision that focuses risk management at the individual firm level and ignores the difficult and costlier, albeit less frequent, systemic or financial-web problem. Monitoring the ecosystem-like network of firms with interlocking balance sheets is not in the risk manager’s job description.
A parallel situation exists in fisheries, where stocks are traditionally managed one species at a time. Alarm over collapsing fish stocks, however, is helping to create the current push for ecosystem-based ocean management. This is a step in the right direction, but the current ecosystem simulation models remain incapable of reproducing realistic population crashes. And the same is true of most climate simulation models: Though the geological record tells us that global temperatures can change very quickly, the models consistently underestimate that possibility. This is related to the next property, the nonlinear, non-equilibrium nature of systems.
Most engineered devices, consisting of mechanical springs, transistors, and the like, are built to be stable. That is, if stressed from rest, or equilibrium, they spring back. Many simple ecological models, physiological models, and even climate and economic models are built by assuming the same principle: a globally stable equilibrium. A related simplification is to see the world as consisting of separate parts that can be studied in a linear way, one piece at a time. These pieces can then be summed independently to make the whole. Researchers have developed a very large tool kit of analytical methods and statistics based on this linear idea, and it has proven invaluable for studying simple engineered devices. But even though many of the complex systems that interest us are not linear, we persist with these tools and models. It is a case of looking under the lamppost because the light is better, even though we know the lost keys are in the shadows. Linear systems produce nice stationary statistics—constant risk metrics, for example. Because such a process does not vary through time, one can subsample it to get an idea of what the larger universe of possibilities looks like. This characteristic of linear systems appeals to our normal heuristic thinking.
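To make that subsampling point concrete, here is a minimal sketch (my own illustration, not from the article): a stationary linear autoregressive process gives essentially the same risk statistics no matter which stretch of its history you sample.

```python
# Minimal sketch: a stationary linear (AR(1)) process yields nearly identical
# statistics from any subsample, which is what makes linear-model risk metrics
# feel so dependable. Illustrative only; parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)

phi, n = 0.5, 20_000          # AR(1) coefficient |phi| < 1 => stationary
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# A risk metric (standard deviation) estimated from different subsamples agrees closely.
for start in (0, 5_000, 10_000, 15_000):
    window = x[start:start + 5_000]
    print(f"subsample starting at {start:>6}: std = {window.std():.3f}")
```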
Nonlinear systems, however, are not so well behaved. They can appear stationary for a long while, then, without anything changing, exhibit jumps in variability—so-called “heteroscedasticity.” For example, if one looks at the range of economic variables over the past decade (daily market movements, GDP changes, etc.), one might guess that variability and the universe of possibilities are very modest. This was the modus operandi of normal risk management. As a consequence, under those assumptions, some of the large moves we saw in 2008, occurring over so many consecutive days, should have happened less than once in the age of the universe.
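A rough back-of-the-envelope calculation (my numbers, not the article's) shows where the "once in the age of the universe" figure comes from: under a stationary Gaussian model fitted to calm years, the implied waiting time for a single extreme daily move is absurdly long.

```python
# Sketch of the fat-tail problem: assuming daily moves are Gaussian (the
# "normal risk management" view), how long should we wait for one 10-sigma day?
# The 10-sigma figure is an illustrative assumption, not the article's number.
import math

sigma_moves = 10.0
# One-sided Gaussian tail probability: P(X >= k sigma) = erfc(k / sqrt(2)) / 2
p = math.erfc(sigma_moves / math.sqrt(2)) / 2

trading_days_per_year = 252
expected_wait_years = 1 / (p * trading_days_per_year)
age_of_universe_years = 13.8e9

print(f"P(daily move >= {sigma_moves:.0f} sigma) = {p:.2e}")
print(f"expected wait: {expected_wait_years:.2e} years, "
      f"about {expected_wait_years / age_of_universe_years:.1e} times the age of the universe")
```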
Our problem is that the scientific desire to simplify has taken over, something that Einstein warned against when he paraphrased Occam: “Everything should be made as simple as possible, but not simpler.” Thinking of natural and economic systems as essentially stable and decomposable into parts is a good initial hypothesis, but current observations and measurements do not support it—hence our continual surprise. Just as we like the idea of constancy, we are resistant to change. The 19th-century American humorist Josh Billings perhaps put it best: “It ain’t what we don’t know that gives us trouble, it’s what we know that just ain’t so.”
[...] Among these principles is the idea that there might be universal early warning signs for critical transitions, diagnostic signals that appear near unstable tipping points of rapid change. The recent argument for early warning signs is based on the following: 1) both simple and more realistic, complex nonlinear models show these behaviors, and 2) there is a growing weight of empirical evidence for these common precursors in varied systems.
A key phenomenon known for decades is so-called “critical slowing” as a threshold approaches. That is, a system’s dynamic response to external perturbations becomes more sluggish near tipping points. Mathematically, this property gives rise to increased inertia in the ups and downs of things like temperature or population numbers—we call this inertia “autocorrelation”—which in turn can result in larger swings, or more volatility. Another related early signaling behavior is an increase in “spatial resonance”: Pulses occurring in neighboring parts of the web become synchronized. Nearby brain cells fire in unison minutes to hours prior to an epileptic seizure, for example.
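The critical-slowing signal is easy to see in a toy simulation (my own construction, following the generic idea described in the text, not the author's model): as a noisy system's recovery rate weakens toward a tipping point, rolling estimates of lag-1 autocorrelation and variance both climb.

```python
# Toy early-warning sketch: x[t] = lam[t] * x[t-1] + noise, where the recovery
# rate (1 - lam) shrinks toward zero, i.e. the system approaches a tipping point.
# Rising lag-1 autocorrelation and variance are the proposed warning signs.
import numpy as np

rng = np.random.default_rng(1)
n = 4_000
lam = np.linspace(0.2, 0.99, n)   # drifts toward the critical value of 1

x = np.zeros(n)
for t in range(1, n):
    x[t] = lam[t] * x[t - 1] + rng.normal(scale=0.1)

def lag1_autocorr(w):
    return np.corrcoef(w[:-1], w[1:])[0, 1]

# Rolling-window indicators increase as the tipping point is approached.
for start in (0, 1_000, 2_000, 3_000):
    w = x[start:start + 500]
    print(f"window {start:>5}-{start + 500}: "
          f"lag-1 autocorrelation = {lag1_autocorr(w):.2f}, std = {w.std():.3f}")
```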
The global financial meltdown illustrates the phenomena of critical slowing and spatial resonance. Leading up to the crash, there was a marked increase in homogeneity among institutions, both in their revenue-generating strategies and in their risk-management strategies, thus increasing correlation among funds and across countries—an early warning. Indeed, with regard to risk management through diversification, it is ironic that diversification became so extreme that diversity itself was lost: Everyone owning part of everything creates complete homogeneity. Reducing risk by increasing portfolio diversity makes sense for each individual institution, but if everyone does it, it creates huge group or system-wide risk. Mathematically, such homogeneity leads to increased connectivity in the financial system, and the number and strength of these linkages grow as homogeneity increases. Thus, the consequence of increasing connectivity is to destabilize a generic complex system: Each institution becomes more affected by the balance sheets of neighboring institutions than by its own. [...]
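The diversification paradox can also be put in a toy calculation (a made-up example, not from the article): institutions holding disjoint sets of assets barely co-move, while institutions that each own a slice of everything hold identical portfolios whose returns are perfectly correlated.

```python
# Toy sketch of "diversification so extreme that diversity is lost":
# compare average pairwise correlation between bank portfolios when banks
# specialize versus when everyone owns a bit of every asset.
import numpy as np

rng = np.random.default_rng(2)
n_assets, n_banks, n_days = 20, 5, 1_000
asset_returns = rng.normal(size=(n_days, n_assets))    # independent assets

def avg_pairwise_corr(weights):
    """Mean correlation between the banks' portfolio return series."""
    portfolio_returns = asset_returns @ weights.T       # shape: days x banks
    c = np.corrcoef(portfolio_returns, rowvar=False)
    return c[np.triu_indices(n_banks, k=1)].mean()

# Specialized banks: each holds a disjoint block of 4 assets.
specialized = np.zeros((n_banks, n_assets))
for b in range(n_banks):
    specialized[b, 4 * b:4 * (b + 1)] = 0.25

# Homogeneous banks: everyone owns an equal slice of every asset.
homogeneous = np.full((n_banks, n_assets), 1 / n_assets)

print(f"average pairwise correlation, specialized: {avg_pairwise_corr(specialized):.2f}")
print(f"average pairwise correlation, homogeneous: {avg_pairwise_corr(homogeneous):.2f}")
```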
Try here for the full article. The article was originally published on Dec 10, 2010.