Exploring the stability of systems of ordinary differential equations – an example using the Lotka-Volterra system of equations

Stability matters when dealing with dynamical systems
because we generally like the systems we base decisions on to be predictable.
We would therefore like to know whether a small change in initial conditions
leads to similar behavior or to something entirely different. Do our solutions
all tend to the same point? Would slightly different initial conditions lead to
the same point or to a completely different one?

This blogpost will consider the stability of dynamical systems of the form:

$$\frac{dx}{dt} = f(x, y)$$

$$\frac{dy}{dt} = g(x, y)$$

with equilibria denoted by x* and y*,
respectively.

I will use the example of the Lotka-Volterra system of
equations, which is probably the most widely known model of the predator-prey/parasite-host interactions encountered in natural systems. The Lotka-Volterra predator-prey equations were derived independently by Alfred Lotka and Vito Volterra in 1925-26. Volterra arrived at these equations while trying to explain why, immediately after WWI, the number of predatory fish was much larger than before the war.

The system is described by the following equations:

$$\frac{dx}{dt} = b x - a x y$$

$$\frac{dy}{dt} = c a x y - d y$$

Where a, b, c, d > 0 are the parameters describing the
growth, death, and predation of the fish.

In the absence of predators, the prey population (x) grows
exponentially with an intrinsic rate of growth b.

Total predation is proportional to the abundance of prey and
the abundance of predators, at a constant predation rate a.

New predator abundance is proportional to the total
predation (axy) at a constant conversion rate c.

In the absence of prey, the predator population decreases at
a mortality rate d.

The system demonstrates an oscillating behavior, as
presented in the following figure for parameters a=1, b=1, c=2, d=1.

[Figure: prey (x) and predator (y) populations oscillating over time for a = 1, b = 1, c = 2, d = 1]
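A minimal Python sketch to reproduce this kind of figure (I'm using scipy's solve_ivp here, but any ODE integrator would do):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# Parameters from the figure above
a, b, c, d = 1.0, 1.0, 2.0, 1.0

def lotka_volterra(t, state):
    x, y = state                      # x: prey, y: predator
    dxdt = b * x - a * x * y
    dydt = c * a * x * y - d * y
    return [dxdt, dydt]

# Integrate from an arbitrary initial condition
sol = solve_ivp(lotka_volterra, (0, 30), [1.0, 0.5], dense_output=True)
t = np.linspace(0, 30, 1000)
x, y = sol.sol(t)

plt.plot(t, x, label="prey (x)")
plt.plot(t, y, label="predator (y)")
plt.xlabel("time")
plt.ylabel("population")
plt.legend()
plt.show()
```

The initial condition (1.0, 0.5) is arbitrary; different starting points give oscillations of different amplitudes around the same equilibrium.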

Volterra’s explanation for the rise in the number of
predatory fish was that fishing reduces the rate of increase of the prey and
increases the rate of decrease of the predator, while leaving the interaction
coefficients unchanged. With fishing, the number of predators therefore decreases
and the number of prey increases as a consequence. Without any fishing activity
(during the war), the number of predators increased, which in turn led to a
decrease in the number of prey fish.

To determine the stability of a system of this form, we
first need to estimate its equilibria, i.e. the values of x and y for which:

$$\frac{dx}{dt} = \frac{dy}{dt} = 0$$

An obvious equilibrium exists at x=0 and y=0, which kinda
means that everything’s dead.

We’ll first look at a system that’s still alive, i.e. x > 0
and y > 0:

$$b x^* - a x^* y^* = 0$$

$$x^* (b - a y^*) = 0$$

$$y^* = \frac{b}{a}$$

And

$$c a x^* y^* - d y^* = 0$$

$$y^* (c a x^* - d) = 0$$

$$x^* = \frac{d}{c a}$$

Looking at these expressions for the equilibria, we can also
see that the zero-growth isoclines of the two species are straight lines in the
(x, y) plane: the horizontal line y = b/a for the prey and the vertical line
x = d/(ca) for the predator.
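For completeness, here is a quick sympy sketch that recovers the same equilibria symbolically:

```python
import sympy as sp

# State variables and (positive) parameters
x, y = sp.symbols("x y", nonnegative=True)
a, b, c, d = sp.symbols("a b c d", positive=True)

prey = b * x - a * x * y           # dx/dt
predator = c * a * x * y - d * y   # dy/dt

print(sp.solve([prey, predator], [x, y], dict=True))
# [{x: 0, y: 0}, {x: d/(a*c), y: b/a}]
```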

In dynamical systems, the behavior of the system near an
equilibrium is related to the eigenvalues of the Jacobian (J) of the system evaluated
at that equilibrium. If the eigenvalues are real and negative, the equilibrium is a
stable node; if they are real and positive, it is an unstable node; if they are real
and of opposite signs, it is a saddle. In the case of complex eigenvalues, the
equilibrium is a focus (spiral) whose stability is determined by the sign of the real
part; if the real part is exactly zero, the linearization gives a center.

I found the following graphic from Scholarpedia to be a
useful illustration of these categorizations.

[Figure: classification of equilibria according to the eigenvalues of the Jacobian (from Scholarpedia)]

So we can now evaluate the stability of our equilibria.
First we calculate the Jacobian of our system and then plug in our estimated
equilibrium.

$$J = \begin{bmatrix} b - a y & -a x \\ c a y & c a x - d \end{bmatrix}
\quad\Rightarrow\quad
J^* = J\!\left(\frac{d}{ca}, \frac{b}{a}\right) = \begin{bmatrix} 0 & -\frac{d}{c} \\ c b & 0 \end{bmatrix}$$

To find the eigenvalues of this matrix we need to find the
values of λ that satisfy det(J − λI) = 0, where I is
the identity matrix and det denotes the determinant.

$$\det(J^* - \lambda I) = \det\begin{bmatrix} -\lambda & -\frac{d}{c} \\ c b & -\lambda \end{bmatrix} = 0$$

$$\lambda^2 + b d = 0$$

$$\lambda_{1,2} = \pm i \sqrt{b d}$$

Our eigenvalues are therefore purely imaginary: complex, with real parts equal to 0. The equilibrium sits right between instability and asymptotic stability, and the linearization gives a center rather than a focus. Points that start out near the equilibrium neither converge towards it nor move away from it. The solutions of this system are periodic, oscillating around the equilibrium point with a period (for small oscillations) of $T = \frac{2\pi}{\sqrt{bd}}$, with no trend either towards the
equilibrium or away from it.

[Figure: solutions oscillating around the equilibrium]
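A quick numerical check of this for the parameter values used above (a = 1, b = 1, c = 2, d = 1):

```python
import numpy as np

a, b, c, d = 1.0, 1.0, 2.0, 1.0

# Interior equilibrium
x_star, y_star = d / (c * a), b / a

# Jacobian of the system evaluated at (x*, y*)
J = np.array([[b - a * y_star, -a * x_star],
              [c * a * y_star, c * a * x_star - d]])

print(np.linalg.eigvals(J))          # [0.+1.j, 0.-1.j]: purely imaginary
print(2 * np.pi / np.sqrt(b * d))    # period of small oscillations, ~6.28
```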

One can arrive at the same conclusion by looking at the
trace (τ) of the Jacobian and its determinant (Δ).

$$\tau = \operatorname{tr}(J^*) = 0$$

$$\Delta = \det(J^*) = b d > 0$$

The trace is exactly zero and the determinant is positive
(both d,b>0) which puts the system right in between stability and
instability.
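A two-line symbolic check, using the Jacobian at the interior equilibrium derived above:

```python
import sympy as sp

b, c, d = sp.symbols("b c d", positive=True)

# Jacobian at the interior equilibrium (d/(c*a), b/a)
J_star = sp.Matrix([[0, -d / c],
                    [c * b, 0]])

print(J_star.trace())   # 0
print(J_star.det())     # b*d, positive since b, d > 0
```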

Now let’s look into the equilibrium where x*=0 and y*=0, aka
the total death.

$$J(0, 0) = \begin{bmatrix} b & 0 \\ 0 & -d \end{bmatrix}$$

$$\det\big(J(0, 0) - \lambda I\big) = (b - \lambda)(-d - \lambda) = 0$$

$$\lambda_1 = b$$

$$\lambda_2 = -d$$

Both b and d are positive real numbers, which means that the
eigenvalues are always real and of opposite signs. This makes (0,0) an unstable
saddle point. This is important because if the equilibrium of total death were a
stable point, initially low population levels would tend to converge towards
extinction. The fact that this equilibrium is unstable means that the dynamics of
the system make it difficult to achieve total death, and that prey and predator
populations could be infinitesimally close to zero and still recover.
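The same quick check at the origin (with b = d = 1 as before):

```python
import numpy as np

b, d = 1.0, 1.0

# Jacobian of the system evaluated at (0, 0)
J0 = np.array([[b, 0.0],
               [0.0, -d]])

print(np.linalg.eigvals(J0))   # [ 1., -1.]: real eigenvalues of opposite signs, a saddle
```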

Now consider a system where we’ve somehow killed all the
predators (y = 0). The prey would then grow exponentially with a growth
rate b. This is generally unrealistic for real-life systems because it assumes
infinite resources for the prey. A more realistic model would have the prey
exhibit logistic growth, with a carrying capacity K. The carrying capacity of a biological species is the maximum population size that can be sustained indefinitely given the available resources.

The model therefore becomes:

$$\frac{dx}{dt} = b x \left(1 - \frac{x}{K}\right) - a x y$$

$$\frac{dy}{dt} = c a x y - d y$$

Where a, b, c, d, K > 0.

To check for this system’s stability we have to go through
the same exercise.

The predator equation has remained the same so:

$$x^* = \frac{d}{c a}$$

For zero prey growth:

$$b x^* \left(1 - \frac{x^*}{K}\right) - a x^* y^* = 0$$

$$x^* \left[ b \left(1 - \frac{x^*}{K}\right) - a y^* \right] = 0$$

$$b \left(1 - \frac{x^*}{K}\right) = a y^*$$

$$y^* = \frac{b}{a}\left(1 - \frac{x^*}{K}\right) = \frac{b}{a}\left(1 - \frac{d}{c a K}\right)$$

Calculating the eigenvalues becomes a tedious exercise at
this point and the time of writing is 07:35 PM on a Friday. I’d rather apply a
small trick instead and use the isoclines to derive the stability of the system. The isocline for zero predator growth has remained the same (x = d/(ca)), the vertical straight line in the (x, y) plane we drew before. The isocline for zero prey growth has changed to:

$$y = \frac{b}{a} - \frac{b}{a K}\, x$$

This is again a straight line, now with a slope of −b/(aK), i.e.
it is decreasing when moving from left to right (as the prey increases). Now, looking at the signs in the Jacobian of the first system:

$$\operatorname{sign}(J^*) = \begin{bmatrix} 0 & - \\ + & 0 \end{bmatrix}$$

We see no self-dependence for either species (the two zeros
on the diagonal); as the predator increases the prey decreases (the minus entry),
and as the prey increases the predator increases too (the plus entry).

For our logistic growth the signs in the Jacobian change to:

$$\operatorname{sign}(J^*) = \begin{bmatrix} - & - \\ + & 0 \end{bmatrix}$$

This is because there is now a negative self-dependence for the prey: as its numbers increase, its rate of growth decreases. This makes the trace (τ) of the Jacobian negative while the determinant remains positive, which implies that the interior equilibrium is now stable. Plotting the exact same dynamical system but now including a carrying capacity, we can see how the two populations converge to specific numbers.

[Figure: prey and predator populations converging to their equilibrium values when prey growth is logistic with carrying capacity K]
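A minimal sketch of this comparison; the carrying capacity K = 10 below is an arbitrary choice for illustration, not a value tied to the figure:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

a, b, c, d = 1.0, 1.0, 2.0, 1.0
K = 10.0   # assumed carrying capacity, for illustration only

def lv_logistic(t, state):
    x, y = state
    dxdt = b * x * (1 - x / K) - a * x * y
    dydt = c * a * x * y - d * y
    return [dxdt, dydt]

# Interior equilibrium and its Jacobian
x_star = d / (c * a)
y_star = (b / a) * (1 - x_star / K)
J = np.array([[-b * x_star / K, -a * x_star],
              [c * a * y_star,  0.0]])
print(np.linalg.eigvals(J))   # complex eigenvalues with negative real parts: stable

# Trajectories now spiral into (x*, y*) instead of orbiting around it
sol = solve_ivp(lv_logistic, (0, 60), [1.0, 0.5], dense_output=True)
t = np.linspace(0, 60, 2000)
x, y = sol.sol(t)
plt.plot(t, x, label="prey (x)")
plt.plot(t, y, label="predator (y)")
plt.xlabel("time")
plt.ylabel("population")
plt.legend()
plt.show()
```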


Glossary of commonly used terms

I have recently started training with the group and, coming from a slightly different research background, I was unfamiliar with a lot of the terminology. I thought it might be useful to put together a glossary of sorts containing the terms that someone new to this field of study might not intuitively understand. The idea is that when someone encounters an unfamiliar term while going through the training or reading through the material, they can come to the blog glossary and quickly Ctrl-F the term (yes, that is a keyboard shortcut used as a verb).

The definitions are not exhaustive, but links/references are provided so the reader can find additional material. The glossary is a work in progress, some definitions are still incomplete, but it will be regularly updated. I’m also probably the least qualified person in the group to provide the definitions, so any corrections/suggestions are more than welcome. If there’s any other term I’ve omitted and you feel should be included, please leave a comment so I can edit the post.

Glossary 

Adaptive metropolis algorithm

It is based on the classical random walk Metropolis algorithm and adapts continuously to the target distribution.1

Akaike’s Information Criterion (AIC)

A measure of the relative quality of statistical models for a given set of data.2
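In its most common form,

$$\mathrm{AIC} = 2k - 2\ln(\hat{L}),$$

where k is the number of estimated parameters and $\hat{L}$ is the maximized likelihood of the model; lower AIC values indicate a better balance of goodness of fit and model complexity.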

AMALGAM

MOEA that applies a multi-algorithm search combining NSGA-II, particle swarm optimization, differential evolution, and the adaptive Metropolis algorithm.3

Approximation set

The set of solutions output by a multi-objective evolutionary algorithm approximating the Pareto front.4

Archive

A secondary population used by many multi-objective evolutionary algorithms to store non-dominated solutions found through the generational process.5

Attainment

Attainment plot

Bayesian Information Criterion

Borg MOEA

Iterative search algorithm for multi-objective optimization. It combines adaptive operator selection with ε-dominance, adaptive population sizing, and time continuation.6

Classification and Regression Trees (CART)

Decision tree algorithms that can be used for classification and regression analysis of predictive modeling problems.7

Closed vs. open loop control

Concavity

See Convexity.

Constraints

Restrictions imposed on the decision space by the particular characteristics of the system. They must be satisfied for a certain solution to be considered acceptable.5

Control map

Controllability

Refers to whether the parameterization choices have significant impacts on the success or failure of an algorithm.

Convergence

Progress towards the reference set.

Convexity

Geometrically, a function f is convex if a line segment drawn between any two points on the graph of f lies on or above the graph. For optimization problems, a problem is convex if the objective function and all constraints are convex functions when minimizing, and concave when maximizing.
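Formally, f is convex if for any two points x, y in its domain and any $\theta \in [0, 1]$:

$$f(\theta x + (1 - \theta) y) \le \theta f(x) + (1 - \theta) f(y)$$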

(The) Cube

The Reed Group’s computing cluster run by the Center for Advanced Computing (CAC) at Cornell. The cluster has 32 nodes, each with Dual 8-core E5-2680 CPUs @ 2.7 GHz and 128 GB of RAM. For more information see: https://www.cac.cornell.edu/wiki/index.php?title=THECUBE_Cluster

Decision space

The set of all decision variables.

Decision variables

The numerical quantities (variables) that are manipulated during the optimization process; they represent how our actions are encoded within the problem.5

Deterioration

When elements of a solution set at a given time are dominated by a solution set the algorithm maintained some time before.5

Differential evolution

Evolutionary algorithm designed to optimize problems over continuous domains.5

Direct policy search

Dominance resistance

The inability of an algorithm to produce offspring that dominates poorly performing, non-dominated members of the population. (See also Pareto dominance).8

DTLZ problems

A suite of representative test problems for MOEAs for which analytical solutions are known. The acronym is a combination of the first letters of the creators’ last names (Deb, Thiele, Laumanns, Zitzler). DTLZ problems have been used for benchmarking and diagnostics when evaluating the performance of MOEAs.

Dynamic memory

Elitism

Refers to the property that, as evolution progresses, non-dominated solutions are not lost in subsequent generations.

Epsilon (ε) dominance

When dominance is determined by use of a user-specified precision to simplify sorting. Pareto epsilon (ε) optimality and Pareto epsilon (ε) front are defined accordingly. (See also Pareto dominance).5

Epsilon (ε) dominance archive

Epsilon (ε)-box dominance archive

ε-MOEA

A steady-state MOEA, the first to incorporate ε-dominance archiving into its search process.

Epsilon (ε) indicator

Additive ε-indicator (ε+-indicator) (performance metric) 

The smallest distance ε that the approximation set must be translated by in order to completely dominate the reference set.4

Epsilon (ε) progress

Equifinality

Evolutionary algorithms

A class of search and optimization algorithms inspired by processes of natural evolution.5

Multi-objective evolutionary algorithms (MOEAs)

Evolutionary algorithms used for solving multi-objective problems.5

Evolutionary operators

They operate on the population of an evolutionary algorithm attempting to generate solutions with higher and higher fitness.5

Mutation evolutionary operators

They perturb the decision variables of a single solution to look for improvement in its vicinity.

Recombination evolutionary operators

They combine decision variables from two or more solutions to create new solutions.

Selection evolutionary operators

They determine which solutions are allowed to proceed to the next cycle.

Executioner

A cross-language automation tool for running models. (See also Project Platypus).

Exploratory Modeling and Analysis (EMA)

A research methodology that uses computational experiments to analyze complex and uncertain systems.9

Data-driven exploratory modeling

Used to reveal implications of a data set by searching through an ensemble of models for instances that are consistent with the data.

Question-driven exploratory modeling

Searches over an ensemble of models believed to be plausible to answer a question of interest or illuminate policy choices.

Model-driven exploratory modeling

Investigates the properties of an ensemble of models without reference to a data set or policy question. It is rather a theoretical investigation into the properties of a class of models.

Feasible region

The set of all decision variables in the decision space that are feasible (i.e. satisfy all constraints).5

GDE3

Generational algorithms

A class of MOEAs that replace the entire population during each full mating, mutation, and selection iteration of the algorithm.5 (See also Steady-state algorithms).

Generational distance (performance metric)

The average distance from every solution in the approximation set to the nearest solution in the reference set.4

Gini index

A generalization of the binomial variance used in Classification and Regression Trees (CART). (See also Classification and Regression Trees (CART)). 

High performance computing

Hypervolume (performance metric)

The volume of the space dominated by the approximation set.4

Inverted generational distance (performance metric)

The average distance from every solution in the reference set to the nearest solution in the approximation set.4

J3

A free desktop application for producing and sharing high-dimensional, interactive scientific visualizations. (See also Project Platypus).

Kernel density estimation

Latin Hypercube Sampling (LHS)

A stratified sampling technique used to generate samples of parameter values.
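For example, with SciPy (version 1.7 or later, which ships the qmc module), drawing a small Latin hypercube sample over two hypothetical parameter ranges could look like this:

```python
from scipy.stats import qmc

# Draw 5 Latin hypercube samples in 2 dimensions (values in [0, 1))
sampler = qmc.LatinHypercube(d=2, seed=42)
unit_samples = sampler.random(n=5)

# Rescale to hypothetical parameter ranges, e.g. p1 in [0, 2], p2 in [0.5, 1.5]
samples = qmc.scale(unit_samples, l_bounds=[0.0, 0.5], u_bounds=[2.0, 1.5])
print(samples)
```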

Markov chain

Method of moments

MOEA Framework

A free and open source Java library for developing and experimenting with multi-objective evolutionary algorithms and other general-purpose optimization algorithms.4

Monte Carlo

Morris method

NSGA-II

The Non-dominated Sorting Genetic Algorithm-II. MOEA featuring elitism, efficient non-domination sorting, and parameter-free diversity maintenance.10

ε-NSGA-II 

A generational algorithm that uses ε-dominance archiving, adaptive population sizing, and time continuation.

Number of function evaluations (NFE)

Objectives

The criteria used to compare solutions in an optimization problem.

OMOPSO

A particle swarm optimization algorithm, the first to include ε-dominance as a means to solve many-objective problems.11

Optimization

The process of identifying the best solution (or a set of best solutions) among a set of alternatives.

Multi-objective optimization

Multi-objective optimization employs two or more criteria to identify the best solution(s) among a set of alternatives.

Intertemporal optimization

Parallel computing

Parametric generator

Pareto optimality

The notion that a solution can only be considered superior to another if it is at least as good in all objectives and strictly better in at least one; a solution is Pareto optimal if no other feasible solution dominates it.

Pareto dominance

A solution dominates another if it is at least as good in all objectives and strictly better in at least one; the second solution is then said to be dominated. Two solutions are mutually non-dominated when each is better in some objectives and worse in others.
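For example, a minimal check of dominance between two solutions, assuming all objectives are minimized:

```python
def dominates(obj_a, obj_b):
    # True if solution a Pareto-dominates solution b (minimization objectives)
    at_least_as_good = all(a <= b for a, b in zip(obj_a, obj_b))
    strictly_better = any(a < b for a, b in zip(obj_a, obj_b))
    return at_least_as_good and strictly_better

# a dominates b; a and c are mutually non-dominated
a, b, c = (1.0, 2.0), (1.5, 2.0), (2.0, 1.0)
print(dominates(a, b), dominates(a, c), dominates(c, a))   # True False False
```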

Pareto front

Contains the objective values of all non-dominated solutions (in the objective function space).

Pareto optimal set

Contains the decision variables of all non-dominated solutions (in the decision variable space).

Particle swarm optimization

Population-based stochastic optimization technique where the potential solutions, called particles, fly through the problem space by following the current optimum particles.

Patient Rule Induction method (PRIM)

A rule induction algorithm.

Performance metrics

Procedures used to compare the performance of approximation sets.

Pointer

Population

The set of encoded solutions that are manipulated and evaluated during the application of an evolutionary algorithm.

Principal Component Analysis (PCA)

Project Platypus

A collection of free and open source Python libraries for multiobjective optimization, data analysis, and decision making. For more information see: https://github.com/Project-Platypus

Radial basis function

Reference set

The set of globally (Pareto) optimal solutions of an optimization problem, used as the benchmark when computing performance metrics.

Rhodium

Python Library for Robust Decision Making and Exploratory Modelling. (See also Project Platypus).

Robust Decision Making (RDM)

An analytic framework that helps identify potential robust strategies for a particular problem, characterize the vulnerabilities of such strategies, and evaluate trade-offs among them.12

Multi-objective robust decision making (MORDM)

An extension of Robust Decision Making (RDM) to explicitly include the use of multi-objective optimization to discover robust strategies and explore the trade-offs among multiple competing performance objectives.13

OpenMORDM

An open source implementation of MORDM with the tools necessary to perform a complete MORDM analysis.14 For more information see: https://github.com/OpenMORDM

Safe operating space

SALib

Seeding

Sobol sampling

Spacing (performance metric)

The uniformity of the spacing between the solutions in an approximation set.

SPEA2

MOEA that assigns a fitness value to each solution based on the number of competing solutions it dominates.

State of the world

A fundamental concept in decision theory which refers to a feature of the world that the agent/decision maker has no control over and is the origin of the agent’s uncertainty about the world.

Steady-state algorithms

A class of MOEAs that only replace one solution in the population during each full mating, mutation, and selection iteration of the algorithm. (See also Generational algorithms).

Time continuation

The injection of new solutions in the population to reinvigorate search.

Tournament

The set of candidate solutions selected randomly from a population.

Trace

Visual analytics

The rapid analysis of large datasets using interactive software that enables multiple connected views of planning problems.

More information on the concepts

  1. Haario, H., Saksman, E. & Tamminen, J. An adaptive Metropolis algorithm. Bernoulli 7, 223–242 (2001).
  2. Akaike, H. Akaike’s information criterion. in International Encyclopedia of Statistical Science 25–25 (Springer, 2011).
  3. Vrugt, J. A. & Robinson, B. A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. 104, 708–711 (2007).
  4. Hadka, D. Beginner’s Guide to the MOEA Framework. (CreateSpace Independent Publishing Platform, 2016).
  5. Coello, C. A. C., Lamont, G. B. & Van Veldhuizen, D. A. Evolutionary algorithms for solving multi-objective problems. 5, (Springer, 2007).
  6. Hadka, D. & Reed, P. Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework. Evol. Comput. 21, 231–259 (2012).
  7. Breiman, L. Classification and Regression Trees. (Wadsworth International Group, 1984).
  8. Reed, P. M., Hadka, D., Herman, J. D., Kasprzyk, J. R. & Kollat, J. B. Evolutionary multiobjective optimization in water resources: The past, present, and future. Adv. Water Resour. 51, 438–456 (2013).
  9. Bankes, S. Exploratory Modeling for Policy Analysis. Oper. Res. 41, 435–449 (1993).
  10. Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. Evol. Comput. IEEE Trans. On 6, 182–197 (2002).
  11. Sierra, M. R. & Coello, C. C. Improving PSO-based multi-objective optimization using crowding, mutation and e-dominance. in Evolutionary multi-criterion optimization 3410, 505–519 (Springer, 2005).
  12. Lempert, R. J., Groves, D. G., Popper, S. W. & Bankes, S. C. A general, analytic method for generating robust strategies and narrative scenarios. Manag. Sci. 52, 514–528 (2006).
  13. Kasprzyk, J. R., Nataraj, S., Reed, P. M. & Lempert, R. J. Many objective robust decision making for complex environmental systems undergoing change. Environ. Model. Softw. (2013). doi:10.1016/j.envsoft.2012.12.007
  14. Hadka, D., Herman, J., Reed, P. & Keller, K. An open source framework for many-objective robust decision making. Environ. Model. Softw. 74, 114–129 (2015).