Nondimensionalization of differential equations – an example using the Lotka-Volterra system of equations

I decided to write about nondimensionalization today since it’s something I only came across recently and found very exciting. It’s apparently a trivial process for a lot of people, but it wasn’t something I was taught how to do during my education, so I thought I’d put a short guide out on the interwebs for other people. Nondimensionalization (also referred to as rescaling) refers to the process of transforming an equation to a dimensionless form by rescaling its variables. There are several benefits to this:

  • All variables and parameters in the new system are unitless, i.e., scales and units are not an issue in the new system, and the dynamics of systems operating on different time and/or spatial scales can be compared;
  • The number of model parameters is reduced to a smaller set of fundamental parameters that govern the dynamics of the system, which also means
  • The new model is simpler and easier to analyze, and
  • The computational time becomes shorter.

I will now present an example using a predator-prey system of equations. In my last blogpost, I used the Lotka-Volterra system of equations for describing predator-prey interactions. Towards the end of that post I talked about the logistic Lotka-Volterra system, which is in the following form:

dx/dt = bx(1 - x/K) - axy

dy/dt = caxy - dy

Where x is prey abundance, y is predator abundance, b is the prey growth rate, d is the predator death rate, c is the rate with which consumed prey is converted to predator abundance, a is the rate with which prey is killed by a predator per unit of time, and K is the carrying capacity of the prey given its environmental conditions.

The first step is to define the original model variables as products of new dimensionless variables (e.g. x*) and scaling parameters (e.g. X), carrying the same units as the original variables:

x = Xx*, y = Yy*, t = Tt*

The rescaled variables are then substituted into the original model:

(X/T) dx*/dt* = bXx*(1 - Xx*/K) - aXYx*y*

(Y/T) dy*/dt* = caXYx*y* - dYy*

Carrying out all cancellations and obvious simplifications:

dx*/dt* = Tbx*(1 - Xx*/K) - TaYx*y*

dy*/dt* = TcaXx*y* - Tdy*

Our task now is to define the rescaling parameters X, Y, and T to simplify our model; remember, they have to have the same units as our original variables.

 

Variable/parameter   Unit
x                    mass [1]
y                    mass [1]
t                    time
b                    1/time
d                    1/time
a                    1/(mass∙time) [2]
c                    mass/mass [3]
K                    mass

There’s no single correct way of going about doing this, but using the units for guidance and trying to be smart about it we can simplify the structure of our model. For example, setting X=K will eliminate K from the prey equation (notice that this way X has the same unit as our original x variable).

dx*/dt* = Tbx*(1 - x*) - TaYx*y*

dy*/dt* = TcaKx*y* - Tdy*

The choice of Y is not very obvious, so let’s look at T first. We could go with either T=1/b or T=1/d. Unit-wise they both work, but one would serve to eliminate a parameter from the first equation and the other from the second. The decision here depends on what dynamics we’re most interested in, so for the purposes of demonstration, let’s go with T=1/b.

dx*/dt* = x*(1 - x*) - (aY/b)x*y*

dy*/dt* = (caK/b)x*y* - (d/b)y*

We’re now left with defining Y, which only appears in the second term of the first equation. Looking at that term, the obvious substitution is Y=b/a, resulting in this set of equations:

dx*/dt* = x*(1 - x*) - x*y*

dy*/dt* = (caK/b)x*y* - (d/b)y*

Our system of equations is still not dimensionless, as we still have the model parameters to worry about. We can now define aggregate parameters using the original parameters in such a way that they will not carry any units and they will further simplify our model.

By setting p1 = caK/b and p2 = d/b we can transform our system to:

dx*/dt* = x*(1 - x*) - x*y*

dy*/dt* = p1x*y* - p2y*

a system of equations with no units and just two parameters.
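If you want to double-check the algebra above, the substitutions can be verified symbolically. Below is a minimal sketch of my own using sympy; the symbol names are just labels I chose, and the printed expressions should be algebraically equal to the reduced forms shown above.

# A quick symbolic check of the rescaling above (a sketch of my own; symbol names are arbitrary).
import sympy as sp

x_star, y_star = sp.symbols('x_star y_star', positive=True)
X, Y, T, a, b, c, d, K = sp.symbols('X Y T a b c d K', positive=True)

# Original (dimensional) right-hand sides, with x = X*x_star and y = Y*y_star substituted in
x, y = X * x_star, Y * y_star
dxdt = b * x * (1 - x / K) - a * x * y
dydt = c * a * x * y - d * y

# Since t = T*t_star, dx*/dt* = (T/X) dx/dt and dy*/dt* = (T/Y) dy/dt
dxs = sp.expand(T / X * dxdt)
dys = sp.expand(T / Y * dydt)

# Plug in the scaling choices X = K, T = 1/b, Y = b/a
scales = {X: K, T: 1 / b, Y: b / a}
print(sp.simplify(dxs.subs(scales)))  # algebraically equal to x*(1 - x*) - x*y*
print(sp.simplify(dys.subs(scales)))  # algebraically equal to (caK/b)x*y* - (d/b)y*, i.e. p1*x*y* - p2*y*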

 

[1] Prey and predator abundance don’t necessarily have to be measured in mass units; they could be measured in volume, density, or something else. The units of parameters a, c, and K would change accordingly, and the rescaling still holds.

[2] This is the prey death rate per encounter with a predator per unit of time.

[3] This is the converted predator (mass) per prey (mass) consumed.

 


An Introduction to Copulas

Modeling multivariate probability distributions can be difficult when the marginal probability density functions of the component random variables are different. Copulas are a useful tool to model dependence between random variables with any marginal distributions. This post will introduce the idea of a copula, run through the basic math that underlies its composition, and discuss some common copulas in use. While researching this post I found several comprehensive coding examples in Matlab, Python and R, so instead of creating my own I’m focusing on a theoretical introduction to copulas in this post and will link to the coding tutorials at the end.

What is a copula?

The word copula is derived from the Latin noun for a link or a tie (as is the English word “couple”) and its purpose is to describe the dependence structure between two variables. Sklar’s theorem states that “Any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the two variables” [1].

To understand how a copula can describe the dependence structure between random variables, it’s helpful to first review some simple statistics.

Let X_1 and X_2 be random variables with CDFs F_1 and F_2, and define U_1 = F_1(X_1) and U_2 = F_2(X_2).

For any u in [0,1]: P(U ≤ u) = P(F(X) ≤ u) = P(X ≤ F^-1(u)) = F(F^-1(u)) = u, so U is uniformly distributed on [0,1].

The above statements can be summarized as saying that the values of the CDF of any marginal distribution are uniformly distributed on the interval [0,1], i.e., if you make a random draw from any distribution, you have the same probability of drawing the largest value (U=1) of that distribution as the smallest possible value (U=0) or the median value (U=0.5).
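You can also convince yourself of this numerically. The sketch below is my own example (the gamma marginal is an arbitrary choice): it pushes random draws through their own CDF and checks that the result looks uniform.

# Sketch: the probability integral transform in action (my own example; the gamma marginal is arbitrary).
import numpy as np
from scipy import stats

x = stats.gamma.rvs(a=2, scale=3, size=100_000, random_state=42)  # draws from any marginal
u = stats.gamma.cdf(x, a=2, scale=3)                              # U = F_X(X)

# If U is Uniform(0,1), its empirical deciles should sit near 0.1, 0.2, ..., 0.9
print(np.round(np.quantile(u, np.arange(0.1, 1.0, 0.1)), 3))
# and a Kolmogorov-Smirnov test against Uniform(0,1) should not reject
print(stats.kstest(u, 'uniform'))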

So what does this have to do with copulas? Great question: a copula is actually a joint distribution of the CDFs of the random variables it is modeling. Put formally:

A k-dimensional copula is a function C: [0,1]^k → [0,1] and is a CDF with uniform marginals [1].

C(u_1, ..., u_k) = P(U_1 ≤ u_1, ..., U_k ≤ u_k), where each U_i is uniformly distributed on [0,1].

So now that we’ve defined what a copula is, lets take a look at the form of some commonly used ones.

The Gaussian Copula

The Gaussian copula takes the form:

C(u_1, ..., u_k) = Φ_R(Φ^-1(u_1), ..., Φ^-1(u_k))

Where:

Φ_R is the joint standard normal CDF with mean vector 0 and covariance given by the correlation matrix R with entries ρ_(i,j).

ρ_(i,j) is the correlation between random variables X_i and X_j.

Φ^-1 is the inverse standard normal CDF.

It’s important to note that the Pearson correlation coefficient is a bad choice for ρ here; a rank-based correlation such as Spearman’s ρ or Kendall’s τ is a better option, since these are scale invariant and do not require linearity.
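To make this concrete, here is a small sketch of my own (using only numpy/scipy; the 0.8 correlation and the gamma/lognormal marginals are placeholders) that samples from a bivariate Gaussian copula and then attaches two different marginal distributions.

# Sketch: sampling two dependent variables through a Gaussian copula (my own example;
# the 0.8 correlation and the gamma/lognormal marginals are placeholders).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])      # correlation matrix of the underlying normals

# 1) draw correlated standard normals, 2) map them to uniforms with the normal CDF
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=R, size=10_000)
u = stats.norm.cdf(z)           # these uniforms carry the dependence structure

# 3) attach whatever marginals you like via their inverse CDFs
x1 = stats.gamma(a=2.0, scale=5.0).ppf(u[:, 0])
x2 = stats.lognorm(s=0.5).ppf(u[:, 1])

print(stats.spearmanr(x1, x2))  # rank correlation survives the marginal transformations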

Issues with Tail Dependency

The Gaussian copula is a helpful tool and relatively easy to fit, even for relatively large numbers of RVs with different marginal distributions. Gaussian copulas, however, do not do a good job capturing tail dependence and can cause one to underestimate the risk of simultaneously being in the tails of each distribution. The failure of the Gaussian copula to capture tail dependence has been blamed for contributing to the 2008 financial crisis after it was widely used by investment firms on Wall Street (this is actually a really interesting story; for more details check out this article from the Financial Times).

Tail dependency can be quantified by the coefficients of upper and lower tail dependence (λ_u and λ_l) defined as:

λ_u = lim(u→1) P(X_2 > F_2^-1(u) | X_1 > F_1^-1(u))

λ_l = lim(u→0) P(X_2 ≤ F_2^-1(u) | X_1 ≤ F_1^-1(u))
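In practice you can get a rough feel for these coefficients from samples by replacing the limits with a threshold close to 1 (or 0). The function below is my own crude illustration, not a formal estimator.

# Sketch: crude finite-threshold estimates of upper and lower tail dependence (my own example).
import numpy as np

def tail_dependence(u1, u2, q=0.99):
    """Approximate lambda_u at threshold q and lambda_l at threshold 1-q,
    given pseudo-observations u1, u2 in [0,1]."""
    lam_u = np.mean((u1 > q) & (u2 > q)) / np.mean(u1 > q)
    lam_l = np.mean((u1 < 1 - q) & (u2 < 1 - q)) / np.mean(u1 < 1 - q)
    return lam_u, lam_l

# With the Gaussian-copula uniforms u from the previous sketch, both estimates shrink
# towards 0 as q approaches 1, illustrating the Gaussian copula's weak tail dependence:
# print(tail_dependence(u[:, 0], u[:, 1], q=0.99))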

The Student t Copula

Like the Student t distribution, the Student t copula has a similar shape to the Gaussian copula but with fatter tails; thus, it can do a slightly better job capturing tail dependence.

C(u_1, ..., u_k) = t_(ν,Σ)(t_ν^-1(u_1), ..., t_ν^-1(u_k))

Where:

t_(ν,Σ) is the joint Student t CDF, Σ is the covariance matrix (again, don’t use the Pearson correlation coefficient), ν is the degrees of freedom, and t_ν^-1 is the inverse Student t CDF.
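Sampling from a Student t copula follows the same recipe as the Gaussian one, just with multivariate t draws and the t CDF. The sketch below is my own (it needs scipy ≥ 1.6 for multivariate_t; ν and the correlation value are placeholders).

# Sketch: sampling uniforms from a bivariate Student t copula (my own example; requires scipy >= 1.6).
import numpy as np
from scipy import stats

nu = 4                                     # degrees of freedom: smaller -> fatter tails
R = np.array([[1.0, 0.8],
              [0.8, 1.0]])

z = stats.multivariate_t(loc=[0.0, 0.0], shape=R, df=nu).rvs(size=10_000, random_state=1)
u = stats.t.cdf(z, df=nu)                  # uniforms carrying t-copula dependence

# Joint extremes are noticeably more common than under a Gaussian copula with the same correlation
print(np.mean((u[:, 0] > 0.99) & (u[:, 1] > 0.99)))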

Archimedean Copulas

Archimedean copulas are a family of copulas with the following form:

C(u_1, ..., u_k | θ) = ψ^-1(ψ(u_1|θ) + ... + ψ(u_k|θ) | θ)

where ψ(u|θ) is called the generator function and θ is the parameter of the copula.

Three common Archimedean copulas are:

  • Gumbel: which is good at modeling upper tail dependence
  • Clayton: which is good at modeling lower tail dependence
  • Frank: has lighter tails and more density in the middle

It’s important to note that these copulas are usually employed for bivariate cases; for more than two variables, the Gaussian or Student t copulas are usually used.
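As an example of working with an Archimedean copula directly, the bivariate Clayton copula can be sampled with a simple conditional-inversion trick. The sketch below is my own and assumes the standard Clayton form C(u,v) = (u^-θ + v^-θ - 1)^(-1/θ) with θ > 0.

# Sketch: sampling a bivariate Clayton copula by conditional inversion (my own example,
# assuming the standard Clayton CDF (u^-theta + v^-theta - 1)^(-1/theta) with theta > 0).
import numpy as np

def clayton_sample(n, theta, seed=None):
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)          # first uniform
    t = rng.uniform(size=n)          # auxiliary uniform for the conditional CDF C(v | u)
    # solve C(v | u) = t for v
    v = ((t ** (-theta / (1.0 + theta)) - 1.0) * u ** (-theta) + 1.0) ** (-1.0 / theta)
    return u, v

u, v = clayton_sample(10_000, theta=2.0, seed=42)
# Clayton is lower-tail dependent: joint small values are far more common than joint large ones
print(np.mean((u < 0.01) & (v < 0.01)), np.mean((u > 0.99) & (v > 0.99)))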

A comparison of the shape of the copulas above can be found in Figure 1.

[Figure 1: comparison of the shapes of the copulas discussed above]

Image source: wikipedia.org

Coding Copulas

There are numerous packages for modeling copulas in Matlab, Python and R.

In Matlab, the Statistics and Machine Learning Toolbox has some helpful functions. You can find some well-narrated examples of copulas here. There’s also the Multivariate Copula Analysis Toolbox from UC Irvine.

In Python, the copulalib package can be used to model the Clayton, Frank and Gumbel copulas. The statsmodels package also has copulas built in. I found this post on the copulalib package; it has an attached Jupyter notebook with nice coding examples and figures. Here’s a post on the statsmodels copula implementation, along with an example Jupyter notebook.

Finally, here’s an example of coding copulas in R using the copulas library.

References and Acknowledgments

  1. Rüschendorf, L. (2013). Mathematical Risk Analysis: Dependence, Risk Bounds, Optimal Allocations and Portfolios. Berlin: Springer.

I’d like to note that the majority of the content in the post came from Scott Steinschneider’s excellent course, BEE 6940: Multivariate Analysis, at Cornell.

Simple tricks to make your C/C++ code run faster

We engineers with no formal computer science training tend, for a myriad of good reasons, to favor friendly programming languages such as Python over evil C/C++. However, when our application is performance sensitive and we cannot sit in our chairs waiting for results for as long as a Python code would require, we are sometimes forced to use C/C++. Why not make the most of it when this is the case?

Here are some simple tips to make your C/C++ code run even faster, and how to get some advice about further performance improvements. The last idea (data locality) is transferable to Python and other languages.

Most important trick

Improve your algorithm. Thinking about whether there is a simpler way of doing what you coded may reduce your algorithm’s complexity (say, from n^3 to n*log(n)), which would:

  • yield great speed-ups when you have lots of data or need to run a computation several times in a row, and
  • make you look smarter.

Compiler flags

First and foremost, let those who know what they are doing (compiler developers) do the work for you by calling the compiler with the appropriate flags. There is an incredible number of flags you can use, but I would say that the ones you should have on whenever possible are -O3 and -march=native.

The optimization flags (-O1 to -O3, the latter more aggressive than the former) will perform a series of modifications on your code behind the scenes to speed it up, sometimes by more than an order of magnitude. The issue is that these modifications may eventually make your code behave differently than you expected, so it’s always good to do a few smaller runs with -O0 and -O3 and compare their results before getting into production mode.

The -march=native flag will make the compiler fine-tune your code to the processor it is being compiled on (similarly, -march=haswell would fine-tune it to Haswell architectures, for example). This is great if you are only going to run your code on your own machine or on another machine known to have a similar architecture, but if you try to run the binary on a machine with a different architecture, especially an older one, you may end up with illegal instruction errors.

Restricted pointer array

When declaring a pointer array which you are sure will not be subject to pointer aliasing (namely, no other pointer will point to the same memory address), you can declare that pointer as a restrict pointer, as below:

  • GCC: double* __restrict__ my_pointer_array
  • Intel compiler: double* restrict my_pointer_array

This lets the compiler know that it can change the order of certain operations involving my_pointer_array to make your code faster, without worrying about a read/write ordering that could change your results. If you want to use the restrict qualifier with the Intel compiler, the flag -restrict must be passed as an argument to the compiler.

Aligned pointer array

By aligning an array, you are making sure the data is going to lie in the ideal location in memory for the processor to fetch it and perform the calculations it needs based on that array. To help your compiler optimize your data alignment, you need to (1) align your array by a specific number of bytes when it is declared and (2) tell the compiler the array is aligned when using it to perform calculations (the compiler has no idea whether or not arrays received as arguments in a function are aligned). Below are examples in C, although the same ideas apply to C++ as well.

GCC

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double *__restrict__ a, double *__restrict__ b, double *__restrict__ c, int n) {
    a = (double*) __builtin_assume_aligned(a, ALIGN);
    b = (double*) __builtin_assume_aligned(b, ALIGN);
    c = (double*) __builtin_assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    double a[SIZE_ARRAY] __attribute__((aligned(ALIGN )));
    double b[SIZE_ARRAY] __attribute__((aligned(ALIGN )));
    double c[SIZE_ARRAY] __attribute__((aligned(ALIGN )));

    for (int i = 0; i < SIZE_ARRAY; ++i) {
        a[i] = 5.;
        b[i] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(a, b, c, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;
    printf("%0.6fs", time);
}

Intel compiler

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double* restrict a, double* restrict b, double* restrict c, int n) {
    __assume_aligned(a, ALIGN);
    __assume_aligned(b, ALIGN);
    __assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    __declspec(align(ALIGN)) double a[SIZE_ARRAY];
    __declspec(align(ALIGN)) double b[SIZE_ARRAY];
    __declspec(align(ALIGN)) double c[SIZE_ARRAY];

    for (int i = 0; i < SIZE_ARRAY; ++i) {
        a[i] = 5.;
        b[i] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(a, b, c, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Edit: In a comment to this post, Krister Walfridsson not only caught an issue with my GCC code, for which I thank him, but also showed the differences in machine code generated with and without alignment.

Data Locality

Computers are physical things, which means that data is physically stored and needs to be physically moved around in memory and between cache and processor in order to be used in calculations. This means that if your data is stored all over the place in memory (e.g., in multiple pointer arrays in different parts of memory), the processor will have to reach out to several parts of memory to fetch all your data before performing any computations. By having the data intelligently laid out in memory you ensure all data required for each computation is stored close together and in cache at the same time, which becomes even more important if your code uses too much data to fit in the cache at once.

In order to make your processor’s life easier, it is a good idea to ensure that all data required for a calculation step is close together. For example, if a given computation requires three arrays of fixed sizes, it is always a good idea to merge them into one long array, as in the example below for the Intel compiler.

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double* restrict a, double* restrict b, double* restrict c, int n) {
    __assume_aligned(a, ALIGN);
    __assume_aligned(b, ALIGN);
    __assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    __declspec(align(ALIGN)) double abc[3 * SIZE_ARRAY];

    /* a occupies abc[0..SIZE_ARRAY) and b occupies abc[SIZE_ARRAY..2*SIZE_ARRAY) */
    for (int i = 0; i < SIZE_ARRAY; ++i) {
        abc[i] = 5.;
        abc[SIZE_ARRAY + i] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(abc, abc + SIZE_ARRAY, abc + 2 * SIZE_ARRAY, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Or, even better, since c[i] depends only on a[i] and b[i], we can interleave the values of a, b and c to ensure that all computations are performed on data that is right next to each other in memory:

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64
#define STRIDE 3

void sum(double* restrict abc, int n, int stride) {
    __assume_aligned(abc, ALIGN);

    for (int i = 0; i < n; i += stride)
        abc[i+2] = abc[i] + abc[i+1];
}

int main(void) {
    __declspec(align(ALIGN)) double abc[3 * SIZE_ARRAY];

    /* interleaved layout: abc[i] holds a, abc[i+1] holds b, abc[i+2] will hold c */
    for (int i = 0; i < 3 * SIZE_ARRAY; i += STRIDE) {
        abc[i] = 5.;
        abc[i+1] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(abc, 3 * SIZE_ARRAY, STRIDE);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Conclusion

Based on a class project in which we had to write C code to perform matrix multiplication, the improvements suggested here may improve the performance of your code by 5 or 10 times. Also, the idea of data locality can be transferred to other languages, such as Python, as the sketch below illustrates.
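Here is a small illustration of my own (not from the original post) of how locality shows up even in numpy: summing the same number of doubles from one contiguous block versus from a strided view that is spread over three times as much memory.

# Sketch: data locality in Python/numpy (my own example, not from the original post).
import numpy as np
import timeit

x = np.random.rand(30_000_000)     # ~240 MB of doubles
contiguous = x[:10_000_000]        # 10 million values in one contiguous block
strided = x[::3]                   # 10 million values spread over the whole 240 MB (stride 3)

t_contig = timeit.timeit(contiguous.sum, number=20)
t_strided = timeit.timeit(strided.sum, number=20)
print(f"contiguous: {t_contig:.3f}s   strided: {t_strided:.3f}s")  # strided is typically noticeably slower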

Exploring the stability of systems of ordinary differential equations – an example using the Lotka-Volterra system of equations

Stability when dealing with dynamical systems is important because we generally like the systems we make decisions on to be predictable. As such, we’d like to know whether a small change in initial conditions could lead to similar behavior. Do our solutions all tend to the same point? Would slightly different initial conditions lead to the same or to a completely different point for our systems?

This blogpost will consider the stability of dynamical systems of the form:

dx/dt = F1(x, y)

dy/dt = F2(x, y)

The equilibria of which are denoted by x* and y*, respectively.

I will use the example of the Lotka-Volterra system of equations, which is the most widely known method of modelling many predator-prey/parasite-host interactions encountered in natural systems. The Lotka-Volterra predator-prey equations were discovered independently by Alfred Lotka and Vito Volterra in 1925-26. Volterra arrived at these equations while trying to explain why, immediately after WWI, the number of predatory fish was much larger than before the war.

The system is described by the following equations:

dx/dt = bx - axy

dy/dt = caxy - dy

Where a, b, c, d > 0 are the parameters describing the growth, death, and predation of the fish.

In the absence of predators, the prey population (x) grows exponentially with an intrinsic rate of growth b.

Total predation is proportional to the abundance of prey and the abundance of predators, at a constant predation rate a.

New predator abundance is proportional to the total predation (axy) at a constant conversion rate c.

In the absence of prey, the predator population decreases at a mortality rate d.

The system demonstrates an oscillating behavior, as presented in the following figure for parameters a=1, b=1, c=2, d=1.

[Figure: prey and predator abundances oscillating over time for a=1, b=1, c=2, d=1]
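If you want to reproduce a figure like this yourself, here is a minimal simulation sketch of my own using scipy; the initial condition of 1 prey unit and 0.5 predator units is an arbitrary choice.

# Sketch: simulating the classic Lotka-Volterra system for a=1, b=1, c=2, d=1 (my own code;
# the initial condition [1.0, 0.5] is arbitrary).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, d = 1.0, 1.0, 2.0, 1.0

def lotka_volterra(t, z):
    x, y = z                       # x: prey, y: predator
    return [b * x - a * x * y,
            c * a * x * y - d * y]

sol = solve_ivp(lotka_volterra, (0, 30), [1.0, 0.5], dense_output=True, rtol=1e-8)
t = np.linspace(0, 30, 1000)
x, y = sol.sol(t)

plt.plot(t, x, label='prey')
plt.plot(t, y, label='predator')
plt.xlabel('time'); plt.ylabel('abundance'); plt.legend()
plt.show()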

Volterra’s explanation for the rise in the numbers of predatory fish was that fishing reduces the rate of increase of the prey numbers and thus increases the rate of decrease of the predator. Fishing does not change the interaction coefficients. So, the number of predators is decreased by fishing and the number of prey increases as a consequence. Without any fishing activity (during the war), the number of predators increased, which also led to a decrease in the number of prey fish.

To determine the stability of a system of this form, we first need to estimate its equilibria, i.e. the values of x and y for which:

dx/dt = 0 and dy/dt = 0

An obvious equilibrium exists at x=0 and y=0, which kinda means that everything’s dead.

We’ll first look at a system that’s still alive, i.e. x>0 and y>0:

bx - axy = 0

x(b - ay) = 0

y* = b/a

And

caxy - dy = 0

y(cax - d) = 0

x* = d/ca

Looking at these expressions for the equilibria we can also see that the isoclines for zero growth for each of the species are straight lines given by b/a for the prey and d/ca for the predator, one horizontal and one vertical in the (x,y) plane.

In dynamical systems, the behavior of the system near an equilibrium relates to the eigenvalues of the Jacobian (J) of F(x,y) at the equilibrium. If the eigenvalues all have real parts that are negative, then the equilibrium is considered to be a stable node; if the eigenvalues all have real parts that are positive, then the equilibrium is considered to be an unstable node. In the case of complex eigenvalues, the equilibrium is considered a focus point and its stability is determined by the sign of the real part of the eigenvalue.

I found the following graphic from scholarpedia to be a useful illustration of these categorizations.

[Figure: classification of equilibria according to the eigenvalues of the Jacobian, from Scholarpedia]

So we can now evaluate the stability of our equilibria. First we calculate the Jacobian of our system and then plug in our estimated equilibrium.

J = [ b - ay      -ax
      cay         cax - d ]

which, evaluated at the equilibrium (x*, y*) = (d/ca, b/a), becomes

J* = [ 0      -d/c
       cb     0    ]

To find the eigenvalues of this matrix we need to find the values of λ that satisfy det(J - λI) = 0, where I is the identity matrix and det denotes the determinant.

det(J* - λI) = λ^2 + bd = 0

λ^2 = -bd

λ = ±i√(bd)

Our eigenvalues are therefore purely imaginary, i.e. complex with their real parts equal to 0. The equilibrium is a center (the borderline case of a focus), right between instability and asymptotic stability. What this means for the points that start out near the equilibrium is that they neither converge towards it nor diverge away from it. The solutions of this system are therefore periodic, oscillating around the equilibrium point with a period of 2π/√(bd), with no trend either towards the equilibrium or away from it.

[Figure: trajectories orbiting the equilibrium point periodically]

One can arrive at the same conclusion by looking at the trace (τ) of the Jacobian and its determinant (Δ).

τ = 0

Δ = bd

The trace is exactly zero and the determinant is positive (both d, b > 0), which puts the system right in between stability and instability.
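A quick numerical check of these results (a sketch of my own; the parameter values a=1, b=1, c=2, d=1 are the same illustrative ones used in the figure above):

# Sketch: numerically checking the Jacobian at the coexistence equilibrium (my own example;
# parameter values a=1, b=1, c=2, d=1 are just for illustration).
import numpy as np

a, b, c, d = 1.0, 1.0, 2.0, 1.0
x_eq, y_eq = d / (c * a), b / a             # equilibrium of the classic system

# Jacobian of [b*x - a*x*y, c*a*x*y - d*y] evaluated at (x_eq, y_eq)
J = np.array([[b - a * y_eq,     -a * x_eq],
              [c * a * y_eq, c * a * x_eq - d]])

print(np.linalg.eigvals(J))                 # purely imaginary: +/- i*sqrt(b*d)
print(np.trace(J), np.linalg.det(J))        # trace 0, determinant b*d > 0
print(2 * np.pi / np.sqrt(b * d))           # period of the oscillations around the equilibrium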

Now let’s look into the equilibrium where x*=0 and y*=0, aka the total death.

J(0,0) = [ b     0
           0     -d ]

det(J(0,0) - λI) = (b - λ)(-d - λ) = 0

λ1 = b

λ2 = -d

Both b and d are positive real numbers, which means that the eigenvalues will always be real and of different signs. This makes (0,0) an unstable saddle point. This is important because if the equilibrium of total death were a stable point, initially low population levels would tend to converge towards extinction. The fact that this equilibrium is unstable means that the dynamics of the system make it difficult to achieve total death and that prey and predator populations could be infinitesimally close to zero and still recover.

Now consider a system where we’ve somehow killed all the predators (y=0). The prey would continue to grow exponentially with a growth rate b. This is generally unrealistic for real-life systems because it assumes infinite resources for the prey. A more realistic model would consider the prey to exhibit logistic growth, with a carrying capacity K. The carrying capacity of a biological species is the maximum population size of the species that can be sustained indefinitely given the necessary resources.

The model therefore becomes:

dx/dt = bx(1 - x/K) - axy

dy/dt = caxy - dy

Where a, b, c, d, K > 0.

To check for this system’s stability we have to go through the same exercise.

The predator equation has remained the same so:

x* = d/ca

For zero prey growth:

bx(1 - x/K) - axy = 0

b(1 - x/K) - ay = 0

y = (b/a)(1 - x/K)

y* = (b/a)(1 - d/caK)

Calculating the eigenvalues becomes a tedious exercise at this point and the time of writing is 07:35PM on a Friday. I’d rather apply a small trick instead and use the isoclines to derive the stability of the system. The isocline for the predator zero-growth has remained the same (d/ca), which is a straight line (vertical on the (x,y) vector plane we drew before). The isocline for the prey’s zero-growth has changed to:

y = (b/a)(1 - x/K)

Which is again a straight line, now with a slope of -b/(aK), i.e., it’s decreasing when moving from left to right (as the prey increases). Now looking at the signs in the Jacobian of the first system:

[ 0    -
  +    0 ]

We see no self-dependence for either of the two species (the two 0s), we see that as the predator increases the prey decreases (-), and that as the prey increases the predator increases too (+).

For our logistic growth the signs in the Jacobian change to:

[ -    -
  +    0 ]

Because now there’s a negative self-dependence for the prey: as its numbers increase, its rate of growth decreases. This makes the trace (τ) of the Jacobian negative and the determinant positive, which implies that our system is now stable. Plotting the exact same dynamic system but now including a carrying capacity, we can see how the two populations converge to specific numbers.

[Figure: prey and predator abundances converging to the stable equilibrium when a carrying capacity is included]
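Here is a sketch of my own that produces this kind of convergent behavior; the carrying capacity K=10 and the initial condition are arbitrary illustrative choices.

# Sketch: the logistic Lotka-Volterra system converging to its stable equilibrium (my own code;
# K=10 and the initial condition [1.0, 0.5] are arbitrary illustrative choices).
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

a, b, c, d, K = 1.0, 1.0, 2.0, 1.0, 10.0

def logistic_lv(t, z):
    x, y = z
    return [b * x * (1 - x / K) - a * x * y,
            c * a * x * y - d * y]

sol = solve_ivp(logistic_lv, (0, 50), [1.0, 0.5], dense_output=True, rtol=1e-8)
t = np.linspace(0, 50, 2000)
x, y = sol.sol(t)

plt.plot(t, x, label='prey')
plt.plot(t, y, label='predator')
plt.xlabel('time'); plt.ylabel('abundance'); plt.legend()
plt.show()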

Let your Makefile make your life easier

This semester I’m taking my first official CS class here at Cornell, CS 5220 Applications of Parallel Computers taught by Dave Bindel (for those of you in the Reed group or at Cornell, I would definitely recommend taking this class, which is offered every other year, once you have some experience coding in C or C++). In addition to the core material we’ve been learning in the class, I’ve been learning a lot by examining the structure and syntax of the code snippets written by the instructor and TA that are offered as starting points and examples for our class assignments. One element in particular  that stood out to me from our first assignment was the many function calls made through the makefile. This post will first take a closer look into the typical composition of a makefile and then examine how we can harness the structure of a makefile to help improve workflow on complicated projects.

Dissecting a typical makefile

On the most basic level, a makefile simply consists of a series of rules that each have an associated set of actions. Makefiles are how you use the “make” utility, a software package installed on all Linux systems. Make has its own syntax similar to bash but with some distinct idiosyncrasies. For example, make allows you to store a snippet of code in what’s called a “macro” (these are pretty much analogous to variables in most other languages). A macro to store the flags you would like to run with your compiler could be defined like this:

CFLAGS  = -g -Wall

To reference the CFLAGS macro, use a dollar sign and parentheses, like this:

 $(CFLAGS)

There is a series of “special” predefined macros that can be used in any makefile and are fairly common; you can find them here.

Now that we’ve discussed makefile syntax, let’s take a look at how rules are structured within a makefile. A rule specified by a makefile has the following shape:

target: prerequisites
    recipe
    ...
    ...

The target is usually the name of the file that is generated by a program, for example an executable or object file. A prerequisite is the specified input used to create the target (which can often depend on several files). The recipe is the action that make carries out for the intended target (note: every line of the recipe must be indented with a tab).

For example, a rule to build an executable called myProg from a c file called myProg.c using the gcc compiler with flags defined in CFLAGS might look like this:

myProg: myProg.c
    gcc $(CFLAGS) -o $@ $<

Make the makefile do the work

The most common rules within makefiles call the compiler to build code (hence the name “makefile”) and many basic makefiles are used for this sole purpose. However, a rule simply sends a series of commands specified by its recipe to the command line, and a rule can actually specify any action or series of actions that you want. A ubiquitous example of a rule that specifies an action is “clean”, which may be implemented like this:

clean:
    rm -rf *.o $(PROGRAM)

Where PROGRAM is a macro containing the name of the executable compiled by the makefile.

In this example, the rule’s target is an action rather than an output file. You can call “clean” by simply typing “make clean” into the command line and you will remove all .o files and the executable PROGRAM from your working directory.

Utilizing this application of rules, we can now have our makefile do a lot of our command line work for us. For example, we could create a rule called “run” which submits a series of PBS jobs to a cluster.

run:
    qsub job-$*.pbs

We can then enter “make run” into the command line to execute this rule, which will submit the .pbs jobs for us (note that this will not perform any of the other rules defined in the makefile). Using this rule may be handy if we have a large number of jobs to submit.

Alternatively, we could make a rule called “plot” which calls a plotting function from Python:

plot: 
    python plotter.py $(PLOTFILES)

Where PLOTFILES is a macro containing the names of the files to be plotted and plotter.py is a Python script that takes the file names as input.

Those are just two examples (loosely based on a makefile given in CS 5220) of how you can use a makefile to do your command line work for you, but the possibilities are endless!! Ok, maybe that’s getting a bit carried away, but I do find this functionality to be a simple and elegant way to improve the efficiency of your workflow on complex projects.

For some background on makefiles, I’d recommend taking a look at Billy’s post from last year. I also found the GNU make user manual helpful, as well as this tutorial from Swarthmore that has some nice example makefiles.

Open Source Streamflow Generator Part II: Validation

This is the second part of a two-part blog post on an open source synthetic streamflow generator written by Matteo Giuliani, Jon Herman and me, which combines the methods of Kirsch et al. (2013) and Nowak et al. (2010) to generate correlated synthetic hydrologic variables at multiple sites. Part I showed how to use the MATLAB code in the subdirectory /stationary_generator to generate synthetic hydrology, while this post shows how to use the Python code in the subdirectory /validation to statistically validate the synthetic data.

The goal of any synthetic streamflow generator is to produce a time series of synthetic hydrologic variables that expand upon those in the historical record while reproducing their statistics. The /validation subdirectory of our repository provides Python plotting functions to visually and statistically compare the historical and synthetic hydrologic data. The function plotFDCrange.py plots the range of the flow duration (probability of exceedance) curves for each hydrologic variable across all historical and synthetic years. Lines 96-100 should be modified for your specific application. You may also have to modify line 60 to change the dimensions of the subplots in your figure. It’s currently set to plot a 2 x 2 figure for the four LSRB hydrologic variables.

plotFDCrange.py provides a visual, not statistical, analysis of the generator’s performance. An example plot from this function for the synthetic data generated for the Lower Susquehanna River Basin (LSRB) as described in Part I is shown below. These probability of exceedance curves were generated from 1000 years of synthetic hydrologic variables. Figure 1 indicates that the synthetic time series are successfully expanding upon the historical observations, as the synthetic hydrologic variables include more extreme high and low values. The synthetic hydrologic variables also appear unbiased, as this expansion is relatively equal in both directions. Finally, the synthetic probability of exceedance curves also follow the same shape as the historical, indicating that they reproduce the within-year distribution of daily values.

Figure 1

To more formally confirm that the synthetic hydrologic variables are unbiased and follow the same distribution as the historical, we can test whether or not the synthetic median and variance of real-space monthly values are statistically different from the historical using the function monthly-moments.py. This function is currently set up to perform this analysis for the flows at Marietta, but the site being plotted can be changed on line 76. The results of these tests for Marietta are shown in Figure 2. This figure was generated from a 100-member ensemble of synthetic series of length 100 years, and a bootstrapped ensemble of historical years of the same size and length. Panel a shows boxplots of the real-space historical and synthetic monthly flows, while panels b and c show boxplots of their means and standard deviations, respectively. Because the real-space flows are not normally distributed, the non-parametric Wilcoxon rank-sum test and Levene’s test were used to test whether or not the synthetic monthly medians and variances were statistically different from the historical. The p-values associated with these tests are shown in Figures 2d and 2e, respectively. None of the synthetic medians or variances are statistically different from the historical at a significance level of 0.05.
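For reference, the scipy calls underlying these two tests look something like the sketch below. This is my own illustration with placeholder data, not the repository’s monthly-moments.py script.

# Sketch of the underlying tests with placeholder data (my own illustration,
# not the repository's monthly-moments.py).
import numpy as np
from scipy import stats

# hist_jan and syn_jan stand in for one month of historical and synthetic flows
rng = np.random.default_rng(0)
hist_jan = rng.lognormal(mean=8.0, sigma=0.5, size=100)
syn_jan = rng.lognormal(mean=8.0, sigma=0.5, size=100)

w_stat, w_p = stats.ranksums(hist_jan, syn_jan)         # Wilcoxon rank-sum: difference in medians
l_stat, l_p = stats.levene(hist_jan, syn_jan)           # Levene's test: difference in variances
print(f"rank-sum p = {w_p:.3f}, Levene p = {l_p:.3f}")  # p > 0.05 -> not statistically different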

Figure 2

In addition to verifying that the synthetic generator reproduces the first two moments of the historical monthly hydrologic variables, we can also verify that it reproduces both the historical autocorrelation and cross-site correlation at monthly and daily time steps using the functions autocorr.py and spatial-corr.py, respectively. The autocorrelation function is again set to perform the analysis on Marietta flows, but the site can be changed on line 39. The spatial correlation function performs the analysis for all site pairs, with site names listed on line 75.

The results of these analyses are shown in Figures 3 and 4, respectively. Figures 3a and 3b show the autocorrelation function of historical and synthetic real-space flows at Marietta for up to 12 lags of monthly flows (panel a) and 30 lags of daily flows (panel b). Also shown are 95% confidence intervals on the historical autocorrelations at each lag. The range of autocorrelations generated by the synthetic series expands upon that observed in the historical while remaining within the 95% confidence intervals for all months, suggesting that the historical monthly autocorrelation is well-preserved. On a daily time step, most simulated autocorrelations fall within the 95% confidence intervals for lags up to 10 days, and those falling outside do not represent significant biases.
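The core of the autocorrelation comparison can be sketched in a few lines with statsmodels (again, my own illustration with placeholder data, not the repository’s autocorr.py):

# Sketch of the autocorrelation comparison with placeholder data (my own illustration,
# not the repository's autocorr.py).
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(1)
q_hist = rng.lognormal(mean=8.0, sigma=0.5, size=600)          # historical monthly flows (placeholder)
q_syn = rng.lognormal(mean=8.0, sigma=0.5, size=(100, 600))    # synthetic ensemble (placeholder)

hist_acf = acf(q_hist, nlags=12)                               # historical autocorrelation, lags 0-12
syn_acf = np.array([acf(row, nlags=12) for row in q_syn])

# compare the historical curve against the range spanned by the synthetic ensemble
print(hist_acf.round(2))
print(syn_acf.min(axis=0).round(2))
print(syn_acf.max(axis=0).round(2))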

Figure 3

Figures 4a and 4b show boxplots of the cross-site correlation in monthly (panel a) and daily (panel b) real-space hydrologic variables for all pairwise combinations of sites. The synthetic generator greatly expands upon the range of cross-site correlations observed in the historical record, both above and below. Table 1 lists which sites are included in each numbered pair of Figure 4. Wilcoxon rank sum tests (panels c and d) for differences in median monthly and daily correlations indicate that pairwise correlations are statistically different (α=0.05) between the synthetic and historical series at a monthly time step for site pairs 1, 2, 5 and 6, and at a daily time step for site pairs 1 and 2. However, biases for these site pairs appear small in panels a and b. In summary, Figures 1-4 indicate that the streamflow generator is reasonably reproducing historical statistics, while also expanding on the observed record.

Figure 4

Table 1

Pair Number Sites
1 Marietta and Muddy Run
2 Marietta and Lateral Inflows
3 Marietta and Evaporation
4 Muddy Run and Lateral Inflows
5 Muddy Run and Evaporation
6 Lateral Inflows and Evaporation

 

 

Water Programming Blog Guide (3)

This is the final post on the Water Programming Blog Guide.  I encourage you to also look at the Water Programming Blog Guide (Part I) and Water Programming Blog Guide (Part 2) for a different set of topics.  In this post, the following topics will be covered:

  1. Visualization and figure editing
  2. Tutorials
  3. Training videos
  4. LaTeX
  5. Reference management
  6. NetCDF
  7. Miscellaneous topics: Optimization and Statistics, Hydrologic Modeling, Global change assessment model

1.  Visualization and figure editing

Visualization is very important in our field; it’s a good way to analyze tradeoffs and even interact with the system at hand. We want to represent multi-dimensional data sets in an informative and appealing way, and the methods for doing so are well documented in our blog. We even have a brief historical overview of visualizing multi-dimensional data, along with useful resources to create either quick and interactive plots or publication-ready ones:

Parallel coordinate plots

Creating parallel axes plots

New web-based multi-objective data interactive visualization tools

Saving d3.parcoords to SVG

The Linked Parallel Coordinate Plot with Linked Heatmap

Alluvial Plots

Multidimensional data and color palettes

Visualization strategies for multidimensional data

Colorbrewer: Color palettes for your figures

Aerovis

AeroVis Documentation

AeroVis: Reproducing the DTLZ1 Animation

AeroVis: Turning a Glyph Plot Into a Figure

Code Sample: Modify AeroVis files for Matlab

Figure editing

Scientific figures in Illustrator

9 basic skills for editing and creating vector graphics in Illustrator

Other Resources

Data Visualization Resources

Plotting Code added to Lake Problem Diagnostics Repository

2. Tutorials

In this section you can find a series of training tutorials. They consist of short test studies that emulate some of the larger studies that our group members have published, and they will give you a good idea of how to use some of the tools so you can implement them on your own. I also recommend looking at the Glossary of commonly used terms to get familiar with the terminology of multi-objective optimization, decision analytics, systems analysis, and water resources, as well as some other lingo that we use in the research group that may not be evident to new members.

MOEA runtime analysis

Runtime dynamics with serial Borg

Random Seed Analysis for the Borg MOEA using DTLZ2, 3 objective instance

MOEA diagnostics

MOEA diagnostics for a simple test case (Part 1/3) ,  (Part 2/3) , (Part 3/3)

Algorithm Diagnostics Walkthrough using the Lake Problem as an example (Part 1 of 3: Generate Pareto approximate fronts) ,  (Part 2 of 3: Calculate metrics for Analysis) , (Part 3 of 3: Metrics-based analysis of algorithm performance)

Determining Whether Differences in Diagnostic Results are Statistically Significant (Part 1) , (Part 2)

3. Training videos

The following videos were developed mainly for MOEAFramework training. They are not necessarily in sequence, but they cover key topics that will enable you to use the MOEAFramework capabilities within your own problems.

Training Video: External problems in MOEAFramework

Training video: MOEAFramework GUI

Training video: Java file for external problems

Training video: Elements of Problem Formulation

Training video: MOEAframework Sobol Sensitivity Analysis

Training video: MOEAFramework, submitting multiple random seeds

Training video: Cluster job submission basics

4.  LaTeX

LaTeX improves the quality and aesthetics of documents; it also gives you, as a researcher, the opportunity to focus on the parts of your work that really matter, since it saves you the headache of formatting. In our blog we have a variety of posts that will help you set up LaTeX, keep fonts consistent across figures and text within your document, use online platforms to work on LaTeX documents collaboratively, and easily track changes from one document to another; this last feature is extremely handy for revisions.

Beginner’s LaTeX Guide

Setup for LaTeX and Sublime

Embedding figure text into a Latex document

Overleaf: LaTeX Collaboration and Publication

latexdiff: “track changes” for LaTeX

5. Reference management

Reference management can be a graduate student’s best friend. It plays an important role in increasing your productivity by maintaining a good record of the articles that you commonly use and by keeping track of the literature you encounter. In our blog you can find useful resources and tips for taking advantage of reference managers and for populating your references efficiently.

PDFExtract: Get a list of BibTeX references from a scholarly PDF

RSS Feeds for Water Resources Journals

Writing a Paper in Markdown Using Pandoc

Zotero introduction (video)

Web-based Free Options for Bibliography Management and LaTeX Editing

6. NetCDF

The NetCDF website describes this data form better than I can:

NetCDF (network Common Data Form) is a set of interfaces for array-oriented data access and a freely distributed collection of data access libraries for C, Fortran, C++, Java, and other languages. The netCDF libraries support a machine-independent format for representing scientific data. Together, the interfaces, libraries, and format support the creation, access, and sharing of scientific data.

Source: NetCDF

We also have plenty of documentation on handling NetCDF from previous group members:

Using HDF5/zlib Compression in NetCDF4

Use python cf and esgf-python-client package to interact with ESGF data portal

Basic NetCDF processing in Matlab

Reducing size of NetCDF spatial data with list representation

Plotting a map of NetCDF data with Matplotlib/Basemap

7.  Miscellaneous topics

Finally, this is a compilation of interesting topics that contributors to the blog have come across and have used in their own research and/or classes.

Fitting Multivariate Normal Distributions

Solving non-linear problems using linear programming

Hydrologic modeling

Updated rainfall-runoff models

Diffusion in a 2D Box

Global Change Assessment Model

Porting GCAM onto a UNIX cluster