
George Sugihara On Early Warning Signs

Earlier this month SEED magazine published this very interesting article by George Sugihara, theoretical biologist at Scripps Institution of Oceanography, on how deep mathematical principles tie together events as diverse as climate change, epileptic seizures, fishery collapses, and the risk management failures surrounding the global financial crisis. Excerpts:
[...] Economics is not typically thought of as a global systems problem. Indeed, investment banks are famous for a brand of tunnel vision that focuses risk management at the individual firm level and ignores the difficult and costlier, albeit less frequent, systemic or financial-web problem. Monitoring the ecosystem-like network of firms with interlocking balance sheets is not in the risk manager’s job description.

A parallel situation exists in fisheries, where stocks are traditionally managed one species at a time. Alarm over collapsing fish stocks, however, is helping to create the current push for ecosystem-based ocean management. This is a step in the right direction, but the current ecosystem simulation models remain incapable of reproducing realistic population crashes. And the same is true of most climate simulation models: Though the geological record tells us that global temperatures can change very quickly, the models consistently underestimate that possibility. This is related to the next property, the nonlinear, non-equilibrium nature of systems.

Most engineered devices, consisting of mechanical springs, transistors, and the like, are built to be stable. That is, if stressed from rest, or equilibrium, they spring back. Many simple ecological models, physiological models, and even climate and economic models are built by assuming the same principle: a globally stable equilibrium. A related simplification is to see the world as consisting of separate parts that can be studied in a linear way, one piece at a time. These pieces can then be summed independently to make the whole. Researchers have developed a very large tool kit of analytical methods and statistics based on this linear idea, and it has proven invaluable for studying simple engineered devices. But even though many of the complex systems that interest us are not linear, we persist with these tools and models. It is a case of looking under the lamppost because the light is better even though we know the lost keys are in the shadows. Linear systems produce nice stationary statistics—constant risk metrics, for example. Because they assume that a process does not vary through time, one can subsample it to get an idea of what the larger universe of possibilities looks like. This characteristic of linear systems appeals to our normal heuristic thinking.

Nonlinear systems, however, are not so well behaved. They can appear stationary for a long while, then without anything changing, they exhibit jumps in variability—so-called “heteroscedasticity.” For example, if one looks at the range of economic variables over the past decade (daily market movements, GDP changes, etc.), one might guess that variability and the universe of possibilities are very modest. This was the modus operandi of normal risk management. As a consequence, the likelihood of some of the large moves we saw in 2008, which happened over so many consecutive days, should have been less than once in the age of the universe.
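To get a feel for the "once in the age of the universe" remark, here is a minimal sketch that prices a large daily move against a stationary Gaussian model of returns. The "10-sigma" move size is a hypothetical stand-in for the kind of swings seen in 2008, not a figure from the article:

```python
import math

def normal_tail(k: float) -> float:
    """P(Z > k) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# Hypothetical 10-sigma daily move, judged against a stationary Gaussian model:
p = normal_tail(10)

# Expected waiting time in years, at roughly 252 trading days per year:
wait_years = 1 / (p * 252)
age_of_universe_years = 13.8e9

print(f"P(move) on any given day:  {p:.2e}")
print(f"Expected wait:             {wait_years:.2e} years")
print(f"Multiples of universe age: {wait_years / age_of_universe_years:.2e}")
```

The point is not the exact numbers but the shape of the failure: if the true process is heteroscedastic, the stationary-Gaussian tail probability is wrong by many orders of magnitude, and clusters of "impossible" moves become unsurprising.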

Our problem is that the scientific desire to simplify has taken over, something that Einstein warned against when he paraphrased Occam: “Everything should be made as simple as possible, but not simpler.” Thinking of natural and economic systems as essentially stable and decomposable into parts is a good initial hypothesis, but current observations and measurements do not support that hypothesis—hence our continual surprise. Just as we like the idea of constancy, we are resistant to change. The 19th-century American humorist Josh Billings, perhaps, put it best: “It ain’t what we don’t know that gives us trouble, it’s what we know that just ain’t so.”

Among these principles is the idea that there might be universal early warning signs for critical transitions, diagnostic signals that appear near unstable tipping points of rapid change. The recent argument for early warning signs is based on the following: 1) that both simple and more realistic, complex nonlinear models show these behaviors, and 2) that there is a growing weight of empirical evidence for these common precursors in varied systems.

A key phenomenon known for decades is so-called “critical slowing” as a threshold approaches. That is, a system’s dynamic response to external perturbations becomes more sluggish near tipping points. Mathematically, this property gives rise to increased inertia in the ups and downs of things like temperature or population numbers—we call this inertia “autocorrelation”—which in turn can result in larger swings, or more volatility. Another related early signaling behavior is an increase in “spatial resonance”: Pulses occurring in neighboring parts of the web become synchronized. Nearby brain cells fire in unison minutes to hours prior to an epileptic seizure, for example.
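The link between critical slowing, autocorrelation, and volatility can be illustrated with a toy AR(1) process, x[t+1] = a·x[t] + noise, where pushing the coefficient a toward 1 mimics a system losing its restoring force near a tipping point. This is a generic sketch of the phenomenon, not the specific models the article refers to:

```python
import random
import statistics

def ar1_series(a: float, n: int = 20000, sigma: float = 1.0, seed: int = 0) -> list:
    """Simulate x[t+1] = a*x[t] + Gaussian noise; a near 1 mimics critical slowing."""
    rng = random.Random(seed)
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + rng.gauss(0.0, sigma)
        out.append(x)
    return out

def lag1_autocorr(xs: list) -> float:
    """Empirical lag-1 autocorrelation: the 'inertia' in the ups and downs."""
    m = statistics.fmean(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((v - m) ** 2 for v in xs)
    return num / den

# As a approaches 1 (the tipping point), both inertia and volatility climb:
for a in (0.2, 0.8, 0.98):
    xs = ar1_series(a)
    print(f"a={a:4}: lag-1 autocorr={lag1_autocorr(xs):.3f}, stdev={statistics.pstdev(xs):.2f}")
```

Both diagnostics rise together: the lag-1 autocorrelation tracks a itself, and the standard deviation grows roughly like 1/sqrt(1 - a²), which is exactly the "increased inertia plus larger swings" signature described above.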

The global financial meltdown illustrates the phenomenon of critical slowing and spatial resonance. Leading up to the crash, there was a marked increase in homogeneity among institutions, both in their revenue-generating strategies as well as in their risk-management strategies, thus increasing correlation among funds and across countries—an early warning. Indeed, with regard to risk management through diversification, it is ironic that diversification became so extreme that diversification was lost: Everyone owning part of everything creates complete homogeneity. Reducing risk by increasing portfolio diversity makes sense for each individual institution, but if everyone does it, it creates huge group or system-wide risk. Mathematically, such homogeneity leads to increased connectivity in the financial system, and the number and strength of these linkages grow as homogeneity increases. Thus, the consequence of increasing connectivity is to destabilize a generic complex system: Each institution becomes more affected by the balance sheets of neighboring institutions than by its own. [...]
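The "everyone owning part of everything" point can be made concrete with a toy comparison of two worlds: banks concentrated in distinct assets versus banks all holding the same equal-weight portfolio. The setup (5 independent assets, simulated Gaussian returns) is an illustrative assumption, not data from the article:

```python
import random
import statistics

def corr(x: list, y: list) -> float:
    """Pearson correlation of two return series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

rng = random.Random(1)
n_assets, periods = 5, 2000
assets = [[rng.gauss(0, 1) for _ in range(n_assets)] for _ in range(periods)]

# Heterogeneous world: each of 5 banks concentrated in its own asset.
hetero = [[row[i] for row in assets] for i in range(n_assets)]

# "Fully diversified" world: every bank holds the same equal-weight portfolio.
homog = [[statistics.fmean(row) for row in assets] for _ in range(n_assets)]

def avg_pairwise_corr(series: list) -> float:
    n = len(series)
    pairs = [corr(series[i], series[j]) for i in range(n) for j in range(i + 1, n)]
    return statistics.fmean(pairs)

print(f"avg pairwise correlation, concentrated banks:   {avg_pairwise_corr(hetero):+.2f}")
print(f"avg pairwise correlation, identical portfolios: {avg_pairwise_corr(homog):+.2f}")
```

Each bank's individual risk is lower in the diversified world, but the banks' returns become perfectly correlated: a shock to any asset now hits every balance sheet at once, which is the system-wide homogeneity the article flags as an early warning.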

Try here for the full article. The article was originally published on Dec 10, 2010.
