Non-equilibrium dynamics

From Sustainability Methods

In short: This entry revolves around (non-)equilibria in our world, and how we can understand them from a statistical point of view.

The limits of equilibrium dynamics

Frequentist statistics largely builds on equilibrium dynamics. Generally, equilibrium dynamics describe systems that are assumed to have reached a steady state, and whose variability is either stable or well understood. This last distinction is of pivotal importance, and it marks a pronounced difference in how models have evolved over the last decades.

Regarding stable dynamics, steady state systems have long been known in physics, and their dynamics have equally long been explored in mathematics. However, especially the developments in physics during the 20th century made it clear that many phenomena in the natural world do not follow steady state dynamics, and this thought slowly inched its way into other fields beyond physics. Computers and the growing availability of measures and numbers raised questions about why certain dynamics remained unexplained. Such dynamics were not necessarily new to researchers, yet they had never before been the focus of research. Weather models were one of the early breakthroughs that allowed for a view into dynamic systems. It became clear that beyond a certain number of days, weather prediction models perform worse than predicting the weather based on long-term average records. Hence we can clearly diagnose that there are long-term dynamics that dominate short-term changes and allow for the calculation of averages. Nevertheless, local dynamics and weather changes lead to deviations from these averages. These dynamics are so difficult to predict over longer time stretches that today meteorologists rely on so-called 'ensemble models'. This approach looks at, say, one million runs of the same mathematical weather model, which also contains a stochastic component, and derives average weather conditions over a certain time period. In fact, the statement that there is a 60% probability of rain on a certain day and time follows from the fact that, in this example, 60% of all model runs resulted in rain at that day and time.
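
A minimal sketch of this ensemble idea in R is shown below; the toy 'weather model' and all of its thresholds and parameters are invented for illustration, with the forecast probability read off as the share of runs that end in rain.

# Minimal ensemble sketch: many runs of the same stochastic toy model,
# with the rain probability read off as the share of runs that rained.
# The 'model' and all its parameters are invented for illustration.
set.seed(42)

toy_weather_run <- function(days = 5) {
  humidity <- 0.5                                        # arbitrary starting condition
  for (d in 1:days) {
    humidity <- humidity + rnorm(1, mean = 0, sd = 0.1)  # the stochastic component
  }
  humidity > 0.6                                         # TRUE = rain on the final day
}

rained <- replicate(10000, toy_weather_run())            # the ensemble
cat("Forecast:", round(100 * mean(rained)), "% probability of rain\n")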

Knowledge about complex or chaotic systems slowly found its way into other branches of science as well, with cotton prices being a prominent example of patterns that could, at least partly, be better explained by chaos theory. There seemed to be unexplained variance in such numbers that could not be tamed by the statistical approaches available before. From our viewpoint today, this is quite understandable. Cotton is an industry widely susceptible to fluctuations in water availability and to pests, which often trigger catastrophic losses in production that were as hard to explain as long-term weather conditions. Cotton pests and weather fluctuations can be said to follow patterns that are at least partly comparable, and this is where chaotic dynamics - also known as 'non-equilibrium dynamics' - become a valuable approach.

The same can be said for human-induced climate change, which is a complex phenomenon. Here, not only the changes in the averages matter, but more importantly the changes in the extremes - in the long run, these may have the most devastating effects on humans and ecosystems. We already see an increase in both flooding and droughts, which often contradict average dynamics, leading to crop failures and other natural disasters. El NiƱo years are among the best-known phenomena showcasing that such dynamics existed before, but they were seemingly amplified by global climate change, triggering cascades of problems across the globe. Non-equilibrium dynamics hence become a growing reality, and many of the past years were among the most extreme on record concerning many climatic measures. While this is a problem in itself, it is also problematic from a frequentist statistics viewpoint. The world is, after all, strongly built around and optimised by statistical predictions. Non-equilibrium dynamics are hence increasingly impacting our lives more directly. Future agriculture, disaster management and climate change adaptation measures will increasingly (have to) build on approaches that are beyond the currently dominating portfolio of statistics.

The chaos increases

This brings us now to the second point: what are current models that try to take non-equilibrium dynamics into account? The simplest models are Correlations and Regressions - with 'time' as the independent variable in the case of regressions, or simple temporal correlations in the case of correlative approaches. Since such models build either on linear dynamics or on blunt rank statistics, it is clear that such approaches must fail when it comes to taking non-linear dynamics into account. Non-equilibrium dynamics are more often than not characterised by non-linear patterns, which require us to step out of the world of linear dynamics.
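
As a minimal sketch of the problem, consider fitting a simple regression with 'time' as the independent variable to invented data that follow a saturating, non-linear dynamic:

# Sketch: a simple regression with 'time' as the independent variable,
# fitted to invented data that follow a non-linear (saturating) dynamic.
set.seed(1)
time <- 1:100
y    <- 50 * (1 - exp(-0.05 * time)) + rnorm(100, sd = 2)   # saturating growth

linear_fit <- lm(y ~ time)        # the classic frequentist workhorse
summary(linear_fit)$r.squared     # a seemingly decent fit, but ...

plot(time, y)
abline(linear_fit, col = "red")   # ... the straight line misses the levelling-off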

Over the last decades, many types of statistical models emerged that are better suited to deal with such non-linear dynamics. One of the most prominent approaches is surely that of Generalized Additive Models (GAMs), which represent a statistical revolution. Much can be said about the benefits of these models, which in a nutshell are able - based on smooth functions - to accommodate predictor variables in a non-linear fashion. Trevor Hastie and Robert Tibshirani (see Key Publications) were responsible for developing these models and matching them with Generalised Linear Models. By building on more computationally intense approaches, such as penalized restricted maximum likelihood estimation, GAMs are able to outperform linear models if predictor relations follow a non-linear fashion, which seems trivial in itself. This comes, however, at a high cost, since the gain in model fit comes - at least partly - with the loss of our ability to infer causality when explaining the patterns that are being modeled. In other words, GAMs are able to increase the model fit or predictive power, but in the worst case, we are throwing our means to understand or explain the existing relations out of the window.
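
A minimal sketch of a GAM fit is shown below, using the mgcv package (a standard GAM implementation in R) on the same kind of invented, saturating toy data as above:

# Sketch: a GAM fitted to invented non-linear toy data. The smooth term s(time)
# lets the model bend, and method = "REML" invokes the penalized restricted
# maximum likelihood mentioned above to choose how wiggly the smooth may be.
library(mgcv)   # a standard GAM implementation in R

set.seed(1)
time <- 1:100
y    <- 50 * (1 - exp(-0.05 * time)) + rnorm(100, sd = 2)

gam_fit <- gam(y ~ s(time), method = "REML")
summary(gam_fit)   # notably higher explained deviance than lm(y ~ time), yet
plot(gam_fit)      # the fitted smooth offers no causal explanation by itself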

Under this spell, over the last decades, parts of statistical modelling wandered ever more strongly into the muddy waters of superior model fit, yet understood less and less about the underlying mechanisms. Modern science has to date not sufficiently engaged with the question of how predictive modelling can lead to optimal results, and which role our lack of explicit understanding plays in terms of worse outcomes. Preventing a pandemic based on a predictive model is surely good, but forcing a Space Shuttle launch despite an incomplete understanding of the new conditions on launch day led to the Challenger disaster. Many disasters of human history were rooted in a failure to understand the unexpected. When our experience was pushed into new realms of the previously unknown, and our expectation of the likelihood of such an extreme event happening was low, the impending surprise often came at a high price. Many developments - global change, increasing system complexity, growing inequalities - may further decrease our ability to anticipate infrequent dynamics. This calls for a shift towards maximizing the resilience of our models under future circumstances. While this has been highlighted e.g. by the IPCC, the Stockholm Resilience Centre and many other institutions, policy makers are hardly prepared enough, which became apparent during the COVID-19 pandemic.


The world can (not) be predicted

From a predictive or explanatory viewpoint, we can break the world of statistics down into three basic types of dynamics: Linear dynamics, periodical dynamics, and non-linear dynamics. Let us differentiate between these three.

Linear dynamics increase or decrease at a rate that does not change over time. Still, all linear relations have an endpoint which these patterns do not surpass. Otherwise there would be plants growing on the top of Mount Everest, diseases would spread indefinitely, and we could travel faster than the speed of light. All this is not the case; hence, based on our current understanding, there are linear patterns which at some point cannot increase any further. Until then, we can observe linear dynamics: we find higher plant diversity and biomass under increasingly favourable growing conditions, illnesses may affect more and more people in a pandemic, and we can travel faster if we - simply put - invest more energy and wit into our travel vehicle. All these are current facts of reality that were broken down by humans into linear patterns and mechanisms.

Periodical dynamics have been recognized by humans since the beginning of time itself. Within seasonal environments, plants grow in spring and summer, diseases wax and wane in recurring up-and-down patterns, and travel vehicles do not maintain their speed because of friction or gravity. Hence we can understand and interpret many of the linear phenomena we observe as cyclic or periodical phenomena as well. Many of the laws and principles at the basis of our understanding are well able to capture such periodic fluctuations, and the same statistics that can catch linearly increasing or decreasing phenomena equally enable us to predict or even understand such periodic phenomena.
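
As a minimal sketch with invented monthly 'temperature' data, the same linear machinery captures a seasonal cycle once sine and cosine terms of the annual period are offered as predictors:

# Sketch: periodic dynamics caught with the same linear machinery, by giving
# lm() sine and cosine terms of the annual cycle as predictors (invented data).
set.seed(7)
month <- 1:120                                    # ten years of monthly data
temp  <- 10 + 8 * sin(2 * pi * month / 12) + rnorm(120, sd = 1.5)

harmonic_fit <- lm(temp ~ sin(2 * pi * month / 12) + cos(2 * pi * month / 12))
summary(harmonic_fit)$r.squared                   # the seasonal cycle is captured well

plot(month, temp, type = "l")
lines(month, fitted(harmonic_fit), col = "red")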

Non-linear dynamics are often events or outlier phenomena that trigger drastic changes in the dynamics of our data. While there is no overarching consensus on how such non-linear dynamics can be coherently defined, from a statistical standpoint we can simply say that they do not follow the linear dynamics frequentist statistics are built upon. Instead, such phenomena violate the order of statistical distributions, and are therefore not only hard to predict, but also difficult to anticipate. Before something happened, how should we know it would happen? Up until today, earthquakes are notoriously hard to predict, at least over a long-term time scale. The same is true for extreme weather events over longer time frames. Another example is the last financial crisis, which was anticipated by some economists, but seemingly came as a surprise to the vast majority of the world. The COVID-19 pandemic stands out as the starkest example that hardly anyone expected before it happened. Yet it was predicted by some experts. Their prediction was based on the knowledge that during the last hundred years no germ had combined the traits that actually lead to a global pandemic threatening the whole world. At some point - the experts knew - the combination of traits of a germ might be unfortunate enough to enable a pandemic. Since the Spanish flu - and, some may also say, HIV - no clear global threat had occurred, yet it had happened before, and it may happen again.

Such rare events are hard to anticipate, and almost impossible to predict. Nevertheless, many of the world's leading experts became alarmed when the rising numbers from Wuhan were coming in, and with the spread starting outside of China, it became clear that we were in a situation that would take a while and huge efforts to tame. The world is facing such wicked problems at an increasing pace, and while each arises under very individual and novel circumstances, we may become increasingly able to anticipate at least the patterns and mechanisms of such non-linear dynamics. Chaos theory emerged over the last decades as an approach to understand the underlying patterns that contribute to phenomena being unpredictable, yet which still follow underlying mechanisms that can be calculated. In other words, chaos theory is not about specific predictions, but more about understanding how complex patterns emerge. Edward Lorenz put it best when he defined chaos: when the present determines the future, but the approximate present does not approximately determine the future. The concrete patterns in an emergent chaotic system cannot be predicted, because a later state of the system will always be difficult to foresee due to minute, unrecognizable differences in the past. In statistical words, we are incapable of statistically predicting chaotic systems because we are incapable of soundly measuring the past conditions that lead to future patterns. Not only may our measures not be sensitive enough to capture past conditions sufficiently, but we are also unable to understand all interactions within the system. In statistical terms, the unexplained variance in our understanding of a past state of the system translates into an inability to predict the future state of the system.
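
Lorenz's definition can be made tangible with the logistic map, a classic textbook example of a chaotic system; in the minimal sketch below, two trajectories whose starting points differ by one part in a billion decouple completely within a few dozen steps:

# Sketch: the logistic map x[t+1] = r * x[t] * (1 - x[t]) in its chaotic
# regime (r = 4). Two 'approximately equal' presents - starting values that
# differ by one part in a billion - soon lead to entirely different futures.
logistic_map <- function(x0, r = 4, steps = 50) {
  x <- numeric(steps)
  x[1] <- x0
  for (i in 2:steps) x[i] <- r * x[i - 1] * (1 - x[i - 1])
  x
}

a <- logistic_map(0.2)
b <- logistic_map(0.2 + 1e-9)   # a minutely different starting condition
plot(abs(a - b), type = "l", log = "y",
     xlab = "time step", ylab = "difference between the two trajectories")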


Approaches to understand the chaos

Let's face it: all statistical models are imperfect. Chaos theory is testimony that this imperfection can add up to us completely losing our grip on, or our understanding of, a system altogether. To any statistical modeler, this should not be surprising. After all, we know that all models are wrong, and some models are useful. It is equally clear that the world changes over time, not only because of Newton's laws of physics, but because humans and life make the world a surely less normally-distributed place. Chaos theory does, however, propose an array of mathematical models that help us understand the repeating patterns and mechanisms behind the astonishing beauty associated with it. Many repeated patterns in nature, such as the geometry of many plants, or the mesmerizing beauty of turbulent flow, can be approximated by mathematical models. While some branches of modelling, such as weather forecasts, are already utilizing chaos theory, such models are far from becoming part of the standard array in predictive modelling. As of today, the difference between inductive and deductive models is often not clear, and there is only a feeble understanding of what causal knowledge is possible and necessary, and which models and patterns are clearly not causal or explainable but instead predictive.

What did, however, improve rapidly over the last decades are Machine Learning approaches that utilize diverse algorithmic approaches to maximize model fit and thus take a clear aim at predictive modelling. This whole sphere of science partly interacts strongly with, and builds on, data originating from the Internet, and many diverse streams of data demand powerful modelling approaches that build on Machine Learning. 'Machine Learning' itself is a composite term that encapsulates a broad and diverse array of approaches, some of which have been established for decades (clustering, ordination, regression analysis), while other methods are at the frontier of current development, such as artificial neural networks or decision trees. Most approaches were already postulated decades ago, but some only gained momentum with the computer revolution. Since many approaches demand high computational capacity, they have clearly risen over the last years along with the rise of larger computing capacities. Bayesian approaches are the best example of calculations that are even today - depending on the data - more often than not rather intense in terms of their demands on the hardware. If you are interested in learning more about this, please refer to the entry on Bayesian Inference.
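
As a minimal sketch of one of the long-established approaches named above, k-means clustering (available in base R as kmeans()) can be run on R's built-in iris data - a purely predictive grouping, with no claim to causal explanation:

# Sketch: k-means clustering, one of the long-established Machine Learning
# approaches, applied to R's built-in iris data. The algorithm finds groups
# that maximize fit to the data without explaining why the groups exist.
data(iris)
set.seed(3)
clusters <- kmeans(iris[, 1:4], centers = 3)   # the four numeric measurements
table(clusters$cluster, iris$Species)          # found groups vs. actual species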

Bayes' Theorem

In contrast to a frequentist approach, the Bayesian approach allows researchers to think about events in experiments as dynamic phenomena whose probabilities can change, and that change can be accounted for with new data that one receives continuously. However, this coin comes with a flip side.

On the one hand, Bayesian Inference can overall be understood as a deeply inductive approach, since any given dataset is only seen as a representation of the data it consists of. This has the clear benefit that a model based on a Bayesian approach is far more adaptable to changes in the dataset, even if it is small. In addition, the model can be subsequently updated if the dataset grows over time. This makes Bayes' theorem a truly superior foundation for modeling under dynamic and emerging conditions. In other words, Bayesian statistics are better able to cope with changing conditions in a continuous stream of data.
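
A minimal sketch of this sequential updating, using the textbook Beta-Binomial model of a possibly biased coin (all counts invented), shows how yesterday's posterior becomes today's prior as data keep arriving:

# Sketch: sequential Bayesian updating with the conjugate Beta-Binomial model.
# The posterior after one batch of data serves as the prior for the next, so
# the model absorbs a growing dataset batch by batch.
update_beta <- function(prior, heads, tails) {
  c(prior[1] + heads, prior[2] + tails)        # new (alpha, beta) parameters
}

prior      <- c(1, 1)                          # flat Beta(1, 1): no idea about the coin
posterior1 <- update_beta(prior, heads = 3, tails = 7)         # after 10 tosses: Beta(4, 8)
posterior2 <- update_beta(posterior1, heads = 32, tails = 68)  # after 100 more: Beta(36, 76)

posterior1[1] / sum(posterior1)                # posterior mean of P(heads), about 0.33
posterior2[1] / sum(posterior2)                # about 0.32, sharpened by more data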

This does, however, also represent the flip side of the Bayesian approach. After all, many datasets follow a specific statistical distribution, and this allows us to derive clear reasoning on why these datasets follow these distributions. Statistical distributions are often a key component of deductive reasoning in the analysis and interpretation of statistical results - something that is theoretically possible under Bayes' assumptions, but the scientific community is certainly not very familiar with this line of thinking. This leads to yet another problem of Bayesian statistics: they have become a growing hype over the last decades, and many people are enthusiastic to use them, but hardly anyone knows exactly why. Our culture is widely geared towards a frequentist line of thinking, which seems to be easier to grasp for many people. In addition, Bayesian approaches are far less widely implemented in software, and also more intense concerning hardware demands.

There is no doubt that Bayesian statistics surpass frequentist statistics in many aspects, yet in the long run, Bayesian statistics may be preferable for some situations and datasets, while frequentist statistics are preferable under other circumstances. Especially for predictive modeling and small-data problems, Bayesian approaches should be preferred, as well as for tough cases that defy the standard array of statistical distributions. Let us hope for a future where we surely know how to toss a coin.

Key Publications

  • Gleick, J. (2011). Chaos: Making a New Science. Open Road Media.
  • Rohde, K. (2006). Nonequilibrium Ecology. Cambridge University Press.
  • Kruschke, J. (2014). Doing Bayesian Data Analysis: A Tutorial with R, JAGS, and Stan. Academic Press.
  • Hastie, T. & Tibshirani, R. (1986). Generalized Additive Models. Statistical Science 1(3). 297-318.

External Links

Articles

One of the classical papers on non-equilibrium ecology: Non-equilibrium theory in ecology
Classical account on equilibrium and non-equilibrium dynamics in ecology
A balanced view on rangelands and non-equilibrium dynamics
Non-equilibrium dynamics in rangelands
A view on complexity
A deeper dive into chaos
A nice take on Chaos
The most definitive guide to chaos. Note the synergies with the emergence of sustainability science
Some intro into Bayesian statistics in R
Stochasticity just for kicks.


Videos

Chaos: The Veritasium explanation
Laminar flow is of course more awesome
Random is not random, as this equation proves
One more take on Bayes



The author of this entry is Henrik von Wehrden.