Welcome to our blog!

Welcome to Water Programming! This blog is by Pat Reed’s group at Cornell, who use computer programs to solve problems — Multiobjective Evolutionary Algorithms (MOEAs), simulation models, visualization, and other techniques. Use the search feature and categories on the right panel to find topics of interest. Feel free to comment, and contact us if you want to contribute posts.

To find software: please consult the Pat Reed group website, MOEAFramework.org, and BorgMOEA.org.

The MOEAFramework Setup Guide: A detailed guide is now available. The focus of the document is connecting an optimization problem written in C/C++ to MOEAFramework, which is written in Java.

The Borg MOEA Guide: We are currently writing a tutorial on how to use the C version of the Borg MOEA, which is being released to researchers here.

Call for contributors: We want this to be a community resource to share tips and tricks. Are you interested in contributing? Please email dfg42 “at” cornell.edu. You’ll need a WordPress.com account.

ggplot (Part 2)

This is the second part of the ggplot introduction. In this post, I will go over how to make a decent density plot in ggplot. Density plots are smoothed versions of histograms: they show the distribution of your data as a probability density, estimated with a kernel density estimation procedure. For example, when we have a regional data set, it is important to look at the distribution of the data across the region instead of just the regional average.

In our example (download the data set from here), we visualize the regional distribution of simulated average winter wheat yield over the 30 years from 1981 to 2010. The “ID” column in the data set identifies a grid cell in the region; there are 1,812 grid cells in total. For each grid cell, the data set gives the 30-year average historical yield and the standard deviation of yield over those years.

First, we load the library and the data; then, following the general code structure “ggplot(dataframe, aes(x, y, fill))”, we map the x-axis to yield. The y-axis (the density) is computed and added through “geom_density()”. We can then add a color, title, and labels and customize the background.

example1 <- read.csv("(your directory)/example_1.csv")

ggplot(example1, aes(x=period_ave_Y)) +
  geom_density(fill="blue", alpha=0.4) +
  theme(panel.background = element_rect(fill='white'),
        axis.line = element_line(size=0.5, linetype="solid", colour="black")) +
  labs(title = "Density Plot of Regional Average Historical Yield (30 years)",
       x = "Winter Wheat Yield (tonnes/ha)", y = "Density")
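As an aside for readers working in Python, the kernel density estimate that geom_density() draws can be sketched with scipy; the data below are a synthetic stand-in for the yield column, not the blog's CSV:

```python
import numpy as np
from scipy.stats import gaussian_kde

# Synthetic stand-in for the regional yield column (tonnes/ha)
rng = np.random.default_rng(42)
yields = rng.normal(loc=5.5, scale=0.7, size=1812)

# Fit a Gaussian KDE and evaluate it on a grid -- conceptually what
# geom_density() does before drawing the curve
kde = gaussian_kde(yields)
grid = np.linspace(yields.min(), yields.max(), 200)
density = kde(grid)

# The curve is a proper density: non-negative and integrating to roughly 1
area = float(np.sum(density) * (grid[1] - grid[0]))
print(round(area, 2))
```

The smoothed curve is what distinguishes a density plot from a raw histogram of the same data.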

Now, we want to see how the standard deviation of yield over the 30 years, for all the grid cells in the region, can be mapped onto this density plot.

We can add another column (name it “SD_class”) to the data set and classify the standard deviations. The maximum and minimum standard deviations among all the grid cells are the following.

max(example1[,2])   # maximum standard deviation (second column)
# [1] 3.605131
min(example1[,2])
# [1] 0.8645882

For example, I want to see this plot categorized by standard deviations from 0.8 to 1.5, from 1.5 to 2.5, and from 2.5 up to the maximum value. Here, I write a simple loop that goes over each row, checks the standard deviation value for that row (corresponding to one grid cell in the region), and fills the newly added column (“SD_class”) with the class specified in the corresponding if statement.

example1$SD_class <- NA
for (i in 1:nrow(example1)){
  if(example1[i,2] > 0.8 && example1[i,2] <= 1.5) {example1[i,4] <- "0.8-1.5"}
  if(example1[i,2] > 1.5 && example1[i,2] <= 2.5) {example1[i,4] <- "1.5-2.5"}
  if(example1[i,2] > 2.5) {example1[i,4] <- "2.5-3.6"}
}
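As an aside, the same classification can be done without an explicit loop; for instance, in Python the equivalent binning is a one-liner with pandas (illustrative values, not the blog's CSV):

```python
import pandas as pd

# Illustrative standard deviations for a few grid cells
sd = pd.Series([0.9, 1.2, 1.7, 2.6, 3.1])

# Bin edges mirror the if-statements above: (0.8, 1.5], (1.5, 2.5], (2.5, 3.6]
sd_class = pd.cut(sd, bins=[0.8, 1.5, 2.5, 3.6],
                  labels=["0.8-1.5", "1.5-2.5", "2.5-3.6"])
print(list(sd_class))
```

pd.cut uses right-closed intervals by default, matching the `>` and `<=` comparisons in the R loop.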

Now, we just need to add “fill” to the aesthetics section of the code, specify the column with the classifications, and add “alpha” to make the color transparent in order to see the shapes of the graphs and whether they have overlaps.

ggplot(example1, aes(x=period_ave_Y, fill=SD_class)) +
  geom_density(alpha=0.4) +
  theme(panel.background = element_rect(fill='white'),
        axis.line = element_line(size=0.5, linetype="solid", colour="black"),
        axis.text = element_text(size=16),
        axis.title = element_text(size=16, face="bold"),
        plot.title = element_text(size=20, face="bold")) +
  labs(title = "Density Plot of Regional Average Historical Yield (30 years)",
       x = "Winter Wheat Yield (tonnes/ha)", y = "Density")

We can also use the “facet_grid()” option, like the plot in Part (1), and specify the column with classification to show each of these classes in a separate panel.

ggplot(example1, aes(x=period_ave_Y, fill=SD_class)) +
  geom_density(alpha=0.4) +
  facet_grid(SD_class ~ .) +
  theme(panel.background = element_rect(fill='white'),
        axis.line = element_line(size=0.5, linetype="solid", colour="black"),
        axis.text = element_text(size=16),
        axis.title = element_text(size=16, face="bold"),
        plot.title = element_text(size=20, face="bold")) +
  labs(title = "Density Plot of Regional Average Historical Yield (30 years)",
       x = "Winter Wheat Yield (tonnes/ha)", y = "Density")

We can also explore where different percentiles of our data fall on the density plot. For this, we need to obtain the density values (the y-axis of the plot) at the percentiles we are interested in—for example, the 10th, 25th, 50th, 75th, and 90th. We also need the actual yield value corresponding to each percentile:

quantiles_yield <- quantile(example1$period_ave_Y, prob=c(0.1, 0.25, 0.5, 0.75, 0.9))
#     10%      25%      50%      75%      90% 
#  4.229513 5.055070 5.582192 5.939071 6.186014

Now, we are going to estimate the density value for each of the yields at the 10th, 25th, 50th, 75th, and 90th percentiles.

df <- approxfun(density(example1$period_ave_Y))

The above function gives us the approximate density value at any yield of interest—in our case, the yields at the percentiles above:

df(quantiles_yield)
#[1] 0.1176976 0.3267841 0.6129621 0.6615790 0.4345247
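For comparison, the same quantile-then-density lookup can be sketched in Python (synthetic data standing in for period_ave_Y, not the blog's values):

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
yields = rng.normal(5.5, 0.7, size=1812)  # stand-in for period_ave_Y

# Yield values at the percentiles of interest (R's quantile())
quantiles_yield = np.quantile(yields, [0.1, 0.25, 0.5, 0.75, 0.9])

# Density height at each of those yields (R's approxfun(density(...)))
kde = gaussian_kde(yields)
density_at_q = kde(quantiles_yield)
print(density_at_q.round(3))
```

As expected, the density is highest near the median and lower in the tails.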

Now, we can add several vertical segments to the density plot that show where each percentile is located on this graph. The limits of these segments on the y-axis are based on the density values for each percentile that we got above. Also, note that I used those values to adjust the positions of the labels for the segments.

ggplot(example1) +
  geom_density(aes(x=period_ave_Y), fill="blue", alpha=0.4) +
  geom_segment(aes(x=quantiles_yield, y=0, xend=quantiles_yield,
                   yend=df(quantiles_yield)), size=1,
               colour=c("red","green","blue","purple","orange"), linetype='dashed') +
  theme(panel.background = element_rect(fill='white'),
        axis.line = element_line(size=0.5, linetype="solid", colour="black"),
        axis.text = element_text(size=16),
        axis.title = element_text(size=16, face="bold"),
        plot.title = element_text(size=20, face="bold")) +
  labs(title = "Density Plot of Regional Average Historical Yield (30 years) and Percentiles",
       x = "Winter Wheat Yield (tonnes/ha)", y = "Density") +
  annotate("text", x=4.229513, y=0.15, label="10%", size=5) +
  annotate("text", x=5.055070, y=0.36, label="25%", size=5) +
  annotate("text", x=5.582192, y=0.65, label="50%", size=5) +
  annotate("text", x=5.939071, y=0.7, label="75%", size=5) +
  annotate("text", x=6.186014, y=0.47, label="90%", size=5)

Hydro Packages in R: HydroGOF

In this blog post, I will go over a very helpful hydrologic package in R that can make your hydro-life much easier. The package is called HydroGOF, and it can be used to make different types of plots, including mean monthly, annual, and seasonal plots for streamflow, rainfall, temperature, and other environmental variables. You can also use HydroGOF to compare your simulated flow to observed flow and calculate various performance metrics such as Nash-Sutcliffe efficiency. Indeed, the GOF part of HydroGOF stands for “goodness of fit.” More information about HydroGOF and its applications for hydrologists can be found here. Also, you can find a more comprehensive list of hydrologic R packages from this water programming blog post.

1- Library and Data Preparation

HydroGOF accepts R data frames and R zoo objects. If you are not familiar with R’s zoo objects, you can find more information here. In this tutorial, I use HydroGOF’s test-case streamflow data, which come in zoo format. Here is how you can install and load zoo and HydroGOF.

install.packages("zoo")
install.packages("hydroGOF")
library(zoo)
library(hydroGOF)

After you load the package, you need to activate your streamflow data. This is how you do so.

# Activate HydroGOF's test-case streamflow data
data(EgaEnEstellaQts)
obs <- EgaEnEstellaQts

Now, let’s take a look at the first few lines of our streamflow data.

head(obs)
Note that the first column is date and that the second column is streamflow data; the unit is m3/sec. Also, keep in mind that you can use zoo to manipulate the temporal regime of your data. For example, you can convert your daily data to monthly or annual.
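For readers outside R, the daily-to-monthly aggregation mentioned above looks like this in Python with pandas (zoo's aggregation plays the same role in R; the flows below are synthetic):

```python
import numpy as np
import pandas as pd

# Two months of synthetic daily flow (m3/s)
idx = pd.date_range("2000-01-01", "2000-02-29", freq="D")
flow = pd.Series(np.linspace(1.0, 6.0, len(idx)), index=idx)

# Collapse daily values to monthly means
monthly = flow.resample("MS").mean()
print(monthly.round(2))
```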

2- Streamflow Plots

Now, let’s use HydroGOF to visualize our observed streamflow data. You can use the following commands to generate some nice figures that will help you explore the overall regime of the streamflow data.


hydroplot(x = obs,var.type = "FLOW", var.unit = "m3/s", ptype = "ts+boxplot", FUN=mean)
# Note that "hydroplot" command is very flexible and there are many
# options that users can add or remove

3- Generate Simulated Dataset

For this tutorial, I have written the following function, which uses the observed streamflow to generate a very simple estimate of daily streamflow. Basically, the function takes the observed data and calculates the average flow for each day of the year. Then the function repeats that one-year series as many times as needed, which, in our case, is ten times to match the length of the observed flow.

simple_predictor <- function(obs){
  # This function generates a very simple prediction of streamflow
  # based on observed streamflow inputs
  DOY <- data.frame(matrix(ncol = 1, nrow = length(obs)))
  for (i_day in 1:length(obs)){
    DOY[i_day, ] <- as.numeric(strftime(index(obs[i_day]), format = "%j"))
  }
  # Create a 365-day time series of average daily streamflow
  m_inflow_obs <- aggregate(as.numeric(obs), by = list(DOY[, 1]), FUN = mean)[, 2]
  simplest_predictor <- data.frame(matrix(ncol = 3, nrow = length(obs)))
  names(simplest_predictor) <- c("Date", "Observed", "Predicted")
  for (i_d in 1:length(obs)){
    # Repeat the average daily flow across the entire simulation period
    simplest_predictor[i_d, "Date"] <- as.character(index(obs)[i_d])
    simplest_predictor[i_d, "Observed"] <- as.numeric(obs[i_d])
    simplest_predictor[i_d, "Predicted"] <- m_inflow_obs[DOY[i_d, 1]]
  }
  # Convert to zoo format
  dt_z <- read.zoo(simplest_predictor, format = "%Y-%m-%d")
  return(dt_z)
}

After loading the function, you can use the following to create your combined observed and simulated data frame.

# Here we just call the function
obs_sim <- simple_predictor(obs)
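The same day-of-year climatology idea can be sketched in Python with pandas (my own sketch on synthetic data, not the tutorial's R code):

```python
import numpy as np
import pandas as pd

# Three years of synthetic daily flow with an annual cycle
idx = pd.date_range("2000-01-01", periods=3 * 365, freq="D")
obs = pd.Series(np.sin(np.arange(len(idx)) * 2 * np.pi / 365) + 2.0, index=idx)

# Average flow for each day of the year...
doy = idx.dayofyear
climatology = obs.groupby(doy).mean()

# ...then repeat that one-year pattern across the whole record
predicted = pd.Series(climatology.loc[doy].values, index=idx)

obs_sim = pd.DataFrame({"Observed": obs, "Predicted": predicted})
print(obs_sim.head())
```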

4- Hydrologic Error Metrics Using HydroGOF

There are twenty error metrics in HydroGOF—for example, mean error (ME), mean absolute error (MAE), root mean square error (RMSE), normalized root mean square error (NRMSE), percent bias (PBIAS), ratio of standard deviations (Rsd), and Nash-Sutcliffe efficiency (NSE). You can find more information about them here. You can use the following commands to calculate specific error metrics.

# Nash-Sutcliffe Efficiency
NSE(sim=obs_sim$Predicted, obs=obs_sim$Observed) 
# Root Mean Squared Error
rmse(sim=obs_sim$Predicted, obs=obs_sim$Observed) 
# Mean Error
me(sim=obs_sim$Predicted, obs=obs_sim$Observed) 
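For readers who want to see exactly what these metrics compute, here is a plain-numpy sketch of the standard formulas (my own implementation, not hydroGOF's code):

```python
import numpy as np

def me(sim, obs):
    """Mean error: average of sim - obs."""
    return np.mean(sim - obs)

def rmse(sim, obs):
    """Root mean square error."""
    return np.sqrt(np.mean((sim - obs) ** 2))

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

obs = np.array([1.0, 2.0, 3.0, 4.0])
sim = np.array([1.1, 1.9, 3.2, 3.8])
print(me(sim, obs), rmse(sim, obs), nse(sim, obs))
```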

You can also use this cool command to see all of the available error metrics in HydroGOF.

gof(sim=obs_sim$Predicted, obs=obs_sim$Observed)

5- Visualizing Observed and Simulated Streamflow

Here is the most interesting part: you can plot observed and simulated on the same graph and add all error metrics to the plot.

ggof(sim=obs_sim$Predicted, obs=obs_sim$Observed, ftype="dm", gofs = c("NSE", "rNSE", "ME", "MSE",  "d", "RMSE", "PBIAS"), FUN=mean)
# You should explore different options that you can add to this figure. 
# For example you can choose which error metrics you want to display, etc

Beginner’s Guide to TensorFlow and Keras

This post provides a very introductory overview of TensorFlow and Keras and concludes with an example of how to use these libraries to implement a very basic neural network.


At its core, TensorFlow is a free, open-source symbolic math library that expresses calculations as dataflow graphs composed of nodes and tensors. TensorFlow is particularly adept at the operations required to train neural networks and has therefore become a popular choice for machine learning applications [1]. The nodes, tensors, and API are Python-based, but the actual mathematical operations are carried out by efficient, high-performance C++ binaries [2]. TensorFlow was originally developed by the Google Brain Team for internal use but has since been released for public use. The library has a reputation for being on the more technical side and less intuitive to new users, so, based on feedback on early versions, the TensorFlow team added Keras to the core library in 2017.


Keras is an open-source neural network Python library that has become a popular API for running on top of TensorFlow and other deep learning frameworks. Keras is especially useful for beginners in machine learning because it offers a user-friendly, intuitive, and quick way to build models; in particular, users tend to be most interested in quickly implementing neural networks. In Keras, neural network models are composed of standalone modules of neural network layers, cost functions, optimizers, activation functions, and regularizers that can be built separately and combined [3].

Example: Predicting the age of abalone

In the example below, I will implement a very basic neural network to illustrate the main components of Keras. The problem we are interested in solving is predicting the age of abalone (sea snails) from a variety of physical characteristics. Traditionally, the age of an abalone is determined by cutting the shell, staining it, and counting the number of rings, so the goal is to see whether less time-consuming and less intrusive measurements, such as the diameter, length, and gender, can be used to predict the age of the snail. This is one of the most popular regression datasets from the UC Irvine Machine Learning Repository. The full set of attributes and the dataset can be found here.

Step 1: Assessing and Processing Data

The first step in approaching this (and any) machine learning problem is to assess the quality and quantity of information provided in the dataset. We first import the relevant basic libraries. More specific libraries will be imported above their respective functions for illustrative purposes. Then we import the dataset, which has 4,177 observations of 9 features and no null values.

#Import relevant libraries
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

#Import dataset
dataset = pd.read_csv('Abalone_Categorical.csv')
#Print number of observations and features
print('This dataset has {} observations with {} features.'.format(dataset.shape[0], dataset.shape[1]))
#Check for null values
print(dataset.isnull().values.any())

Note that the dataset is composed of numerical attributes except for the “Gender” column, which is categorical. Neural networks cannot work with categorical data directly, so the gender must be converted to a numerical value. Categorical variables can be assigned numbers such as Male=1, Female=2, and Infant=3, but this is a fairly naïve approach that imposes an ordinal scale: it implies to the neural network that a male is more similar to a female than to an infant, which may not be the case. Instead, we use one-hot encoding to represent the variables as binary vectors.

#One hot encoding for categorical gender variable

Gender = dataset.pop('Gender')

dataset['M'] = (Gender == 'M')*1.0
dataset['F'] = (Gender == 'F')*1.0
dataset['I'] = (Gender == 'I')*1.0
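The same encoding can be produced in one call with pandas' get_dummies (an equivalent alternative to the three lines above, not the post's original code):

```python
import pandas as pd

# One-hot encode the categorical gender column in a single call
gender = pd.Series(['M', 'F', 'I', 'M'], name='Gender')
onehot = pd.get_dummies(gender).astype(float)
print(onehot)
```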

Note that we now have 10 features because Male, Female, and Infant are represented as the following:

Gender   M   F   I
M        1   0   0
F        0   1   0
I        0   0   1
It would be beneficial at this point to look at the overall statistics and distributions of each feature and to check for multicollinearity, but further investigation into these topics will not be covered in this post. Also note the difference in the ranges of each feature. It is generally good practice to normalize features that have different units, which will likely correspond to different scales. Very large input values can lead to large weights which, in turn, can make the network unstable. It is up to the user to decide on a normalization technique appropriate for their dataset, with the condition that the output value that is returned also falls in the range of the chosen activation function. Here, we separate the dataset into a “data” portion and a “label” portion and use the MinMaxScaler from Scikit-learn, which will, by default, transform the data into the range (0, 1).
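Under the hood, min-max scaling is just (x − min)/(max − min) applied column-wise; a small numpy sketch with illustrative values:

```python
import numpy as np

# Two features on very different scales
X = np.array([[0.35, 10.0],
              [0.50, 30.0],
              [0.65, 50.0]])

# Column-wise min-max scaling into [0, 1] -- what MinMaxScaler does by default
X_scaled = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
print(X_scaled)
```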

#Reorder Columns

dataset = dataset[['Length', 'Diameter ', 'Height', 'Whole Weight', 'Schucked Weight','Viscera Weight ','Shell Weight ','M','F','I','Rings']]

#Separate input data and labels

X = dataset.drop('Rings', axis=1).values
y = dataset['Rings'].values

#Normalize the data using the min-max scaler

from sklearn.preprocessing import MinMaxScaler
scalar = MinMaxScaler()
X = scalar.fit_transform(X)
# Scale the label with its own scaler so predictions can be inverted later
scalar_y = MinMaxScaler()
y = scalar_y.fit_transform(y.reshape(-1,1))

We then split the data into a training set that is composed of 80% of the dataset. The remaining 20% is the testing set.

#Split data into training and testing 

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

Step 2: Building and Training the Model

Then we build the Keras structure. The core of this structure is the model, which is of the “Sequential” form. This is the simplest style of model and is composed of a linear stack of layers.

#Build Keras Model

import keras
from keras import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=10, input_dim=10, activation='relu'))
model.add(Dense(units=1))  # final output layer with a linear activation
model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae','mse'])

early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=10)

history = model.fit(X_train, y_train, batch_size=5, validation_split=0.2, callbacks=[early_stop], epochs=100)

# Model summary for the number of parameters used in the algorithm
model.summary()

Stacking layers is executed with model.add(). We stack a dense layer of 10 nodes with 10 inputs feeding into it. The activation function chosen for this layer is the ReLU (Rectified Linear Unit), a popular choice due to better gradient propagation and sparser activation than, for example, a sigmoidal function. A final output layer with a linear activation function is stacked on top to simply return the model output. This network architecture was picked rather arbitrarily but can be tuned to achieve better performance.

The model is compiled using model.compile(). Here we specify the optimizer, in this case the Adam optimizer, which has become a popular alternative to the more traditional stochastic gradient descent. A mean-squared-error loss function is specified, and the metrics reported will be mean squared error and mean absolute error.

Then we call model.fit() to train the model in batches. The batch size is the number of training instances processed before the model is updated, and the number of epochs is the number of complete passes through the dataset. The more times the model sees the dataset, the more chances it has to learn the patterns, but too many epochs can also lead to overfitting. Since the appropriate number of epochs is unknown here, I carve out a validation set (20% of the training set), set a large value of 100 epochs, and add an early stopping callback that halts training when the validation score stops improving, which helps prevent overfitting.

Training and validation error can be plotted as a function of epochs [4].

def plot_history(history):
  hist = pd.DataFrame(history.history)
  hist['epoch'] = history.epoch

  plt.figure()
  plt.xlabel('Epoch')
  plt.ylabel('Mean Square Error [$Rings^2$]')
  plt.plot(hist['epoch'], hist['mean_squared_error'],
           label='Train Error')
  plt.plot(hist['epoch'], hist['val_mean_squared_error'],
           label='Val Error')
  plt.legend()
  plt.show()

plot_history(history)


This results in the following figure:


Error as function of epochs

Step 3: Prediction and Assessment of Model

Once our model is trained, we can use it to predict the ages of the abalone in the test set. Once the values are predicted, they must be re-scaled back, which is performed using the inverse_transform function from Scikit-learn.

#Predict testing labels

y_pred= model.predict(X_test)

#undo normalization
# (assumes the labels were scaled with their own MinMaxScaler, scalar_y)

y_pred_transformed = scalar_y.inverse_transform(y_pred)
y_test_transformed = scalar_y.inverse_transform(y_test)
Then the predicted and actual ages are plotted along with a 1-to-1 line to visualize the performance of the model. The R-squared value and RMSE are calculated as 0.554 and 2.20 rings, respectively.

#visualize performance
fig, ax = plt.subplots()
ax.scatter(y_test_transformed, y_pred_transformed)
ax.plot([y_test_transformed.min(), y_test_transformed.max()], [y_test_transformed.min(), y_test_transformed.max()], 'k--', lw=4)
ax.set_xlabel('Measured (Rings)')
ax.set_ylabel('Predicted (Rings)')

#Calculate RMSE and R^2
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(y_test_transformed, y_pred_transformed))

from sklearn.metrics import r2_score
r2 = r2_score(y_test_transformed, y_pred_transformed)

Predicted vs. Actual (Measured) Ages

The performance is not ideal, but the results are appropriate given that this dataset is notoriously hard to use for prediction without other relevant information such as weather or location. Ultimately, it is up to the user to decide if these are significant/acceptable values, otherwise the neural network hyperparameters can be further fine-tuned or more input data and more features can be added to the dataset to try to improve performance.


[1] Dean, Jeff; Monga, Rajat; et al. (November 9, 2015). “TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems” (PDF). TensorFlow.org, Google Research.

[2] Yegulalp, Serdar. “What Is TensorFlow? The Machine Learning Library Explained.” InfoWorld, 18 June 2019, http://www.infoworld.com/article/3278008/what-is-tensorflow-the-machine-learning-library-explained.html.

[3] “Keras: The Python Deep Learning Library.” Keras Documentation, keras.io.

[4] “Basic Regression: Predict Fuel Efficiency.” TensorFlow Core, http://www.tensorflow.org/tutorials/keras/regression.

Dynamic Emulation in Water Resources Applications

The purpose of this blog post is to introduce dynamic emulation in the context of applications to hydrology. Hydrologic modelling involves implementing mathematical equations to represent physical processes such as precipitation, runoff, and evapotranspiration and to characterize energy and water flux within the hydrologic system (Abbott et al., 1986). Users of a model might be interested in using it to approach a variety of problems related to, for instance, modeling the rainfall-runoff process in a river basin. The user might try to validate the model, perform sensitivity or uncertainty analysis, determine optimal planning and management of the basin, or test how the hydrology of the basin is affected by different climate scenarios. Each of these problems requires a numerically intensive, Monte Carlo-style approach for which a physically based model is ill-suited. A solution to this problem is to create an emulator for the physical model. An emulator is a computationally efficient model whose response characteristics mimic those of a complex model as closely as possible. This model can then replace the more complicated model in computationally intensive exercises (Tych & Young, 2012).

There are many different approaches to creating an emulator; one particularly unified approach is Dynamic Emulation Modelling (DEMo) (Castelletti et al., 2012). DEMo seeks to accomplish three goals when creating an emulator:

  1. The emulator is less computationally intensive than the physical model.
  2. The emulator’s input-output behavior approximates as well as possible the behavior of the physical model.
  3. The emulator is credible to users.

DEMo has five main steps:

    1. Design of Computer Experiments: Determine a set of input data for building the emulator that covers the range of responses of the physical model.
    2. Variable Aggregation: Reduce the dimensionality of the input data.
    3. Variable Selection: Select the components of the reduced inputs that are most relevant to explaining the output data.
    4. Structure Identification and Parameter Estimation: In the case of a rainfall-runoff model, choose a set of appropriate black-box models that can capture the complex, non-linear process, and fit the parameters of these models accordingly.
    5. Evaluation and Physical Interpretation: Evaluate the model on a validation set and determine how well the model’s behavior and structure can be explained or attributed to physical processes.

The next section outlines two data-driven style models that can be used for hydrologic emulation.

Artificial Neural Networks (ANNs)

Rainfall-runoff modelling is one of the most complex and non-linear hydrologic phenomena to comprehend and model, due to the tremendous spatial and temporal variability of watershed characteristics. Because ANNs can mimic high-dimensional non-linear systems, they are a popular choice for rainfall-runoff modeling (Maier et al., 2010). Depending on the time step of interest and the complexity of the basin's hydrology, a simple feedforward network may be sufficient for accurate predictions. However, the model may benefit from the ability to incorporate the memory inherent in the physical processes that dictate the rainfall-runoff relationship. Unlike feedforward networks, recurrent neural networks (RNNs) are designed to capture temporal dynamics by processing inputs in sequential order (Rumelhart et al., 1986) and by storing information obtained from previous outputs. However, RNNs have trouble learning long-term dependencies spanning more than about 10 time steps (Bengio et al., 1994). The current state of the art is the Long Short-Term Memory (LSTM) model. These RNN-style networks contain a cell state that can learn long-term dependencies, as well as gates that regulate the flow of information into and out of the cell, as shown in Figure 1.


Figure 1: Long Short-Term Memory Module Architecture (1)

LSTMs are most commonly used in speech and handwriting recognition but have recently begun to be implemented in hydrology applications with success, especially for modelling rainfall-runoff in snow-influenced catchments. Kratzert et al. (2018) show that an LSTM is able to outperform the Sacramento Soil Moisture Accounting Model (SAC-SMA), coupled with the Snow-17 snow routine, for computing runoff in 241 catchments.

Support Vector Regression (SVR)

Support vector machines are commonly used for classification but have also been successfully implemented for predicting continuous values in a regression setting. Support vector regression relies on finding a function within a user-specified level of precision, ε, from the true value of every data point, as shown in Figure 2.


Figure 2: Support Vector Regression Model (2)

It is possible that such a function may not exist, so slack variables, ξ, are introduced that allow errors up to ξ to still exist. Errors that lie within the epsilon bounds are treated as 0, while points that lie outside the bounds incur a loss equal to the distance between the point and the ε bound. Training an SVR model requires solving the following optimization problem:

\min_{w,\,b,\,\xi,\,\xi^*} \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{n}(\xi_i + \xi_i^*) \quad \text{subject to} \quad y_i - \langle w, x_i\rangle - b \le \varepsilon + \xi_i, \quad \langle w, x_i\rangle + b - y_i \le \varepsilon + \xi_i^*, \quad \xi_i, \xi_i^* \ge 0
where w is the learned weight vector and x_i and y_i are the training points. C is a penalty parameter imposed on observations that lie outside the epsilon margin, and it also serves as a form of regularization. When linear regression is not sufficient for the problem, the inner products in the dual form of the problem above can be substituted with a kernel function that maps x to a higher-dimensional space, which allows estimation of non-linear functions. Granata et al. (2016) compare an SVR approach with EPA’s Storm Water Management Model (SWMM) and find comparable results in terms of RMSE and R² value.
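The ε-insensitive loss described above (zero inside the ε-tube, linear beyond it) is easy to write out directly; a numpy sketch of my own, not from the cited papers:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps=0.5):
    """Zero for errors within eps; |error| - eps beyond the tube."""
    return np.maximum(np.abs(y_true - y_pred) - eps, 0.0)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.2, 2.9, 1.0])  # errors of 0.2, 0.9, and 2.0

loss = eps_insensitive_loss(y_true, y_pred)
print(loss)
```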


Abbott, M. B., et al. “An Introduction to the European Hydrological System — Systeme Hydrologique Europeen, ‘SHE’, 1: History and Philosophy of a Physically-Based, Distributed Modelling System.” Journal of Hydrology, vol. 87, no. 1-2, 1986, pp. 45–59., doi:10.1016/0022-1694(86)90114-9.

Bengio, Y., et al. “Learning Long-Term Dependencies with Gradient Descent Is Difficult.” IEEE Transactions on Neural Networks, vol. 5, no. 2, 1994, pp. 157–166., doi:10.1109/72.279181.

Castelletti, A., et al. “A General Framework for Dynamic Emulation Modelling in Environmental Problems.” Environmental Modelling & Software, vol. 34, 2012, pp. 5–18., doi:10.1016/j.envsoft.2012.01.002.

Granata, Francesco, et al. “Support Vector Regression for Rainfall-Runoff Modeling in Urban Drainage: A Comparison with the EPA’s Storm Water Management Model.” Water, vol. 8, no. 3, 2016, p. 69., doi:10.3390/w8030069.

Kratzert, Frederik, et al. “Rainfall–Runoff Modelling Using Long Short-Term Memory (LSTM) Networks.” Hydrology and Earth System Sciences, vol. 22, no. 11, 2018, pp. 6005–6022., doi:10.5194/hess-22-6005-2018.

Maier, Holger R., et al. “Methods Used for the Development of Neural Networks for the Prediction of Water Resource Variables in River Systems: Current Status and Future Directions.” Environmental Modelling & Software, vol. 25, no. 8, 2010, pp. 891–909., doi:10.1016/j.envsoft.2010.02.003.

Rumelhart, David E., et al. “Learning Representations by Back-Propagating Errors.” Nature, vol. 323, no. 6088, 1986, pp. 533–536., doi:10.1038/323533a0.

Tych, W., and P. C. Young. “A Matlab Software Framework for Dynamic Model Emulation.” Environmental Modelling & Software, vol. 34, 2012, pp. 19–29., doi:10.1016/j.envsoft.2011.08.008.


(1) https://colah.github.io/posts/2015-08-Understanding-LSTMs/

(2) Kleynhans, Tania, et al. “Predicting Top-of-Atmosphere Thermal Radiance Using MERRA-2 Atmospheric Data with Deep Learning.” Remote Sensing, vol. 9, no. 11, 2017, p. 1133., doi:10.3390/rs9111133.




MORDM using Rhodium – the lake problem workflow

Rhodium is a Python library for Many-Objective Robust Decision Making (MORDM), part of Project Platypus. Multiple past posts have addressed the use of Rhodium in various applications. This post mirrors an older post by Julie, which demonstrated the MORDM functionality of Rhodium using a different model; I’ll be doing the same using the lake problem, a case study we often use for training purposes. In this post I will use Rhodium to replicate the analysis in the paper and discuss the various steps in the workflow: why they are performed and what they mean for MORDM. As this post will be quite long, I will emphasize discussion of results and significance, reusing previously written code found in the Rhodium repository (slightly adapted) and in Julie’s lake problem repository. In general, this post will not re-invent the wheel in terms of scripts, but will explain why each step matters. For a more detailed description of the model, the reader is directed to the paper linked, Dave’s recent post, and the accompanying PowerPoint presentation for this training.

Problem Formulation

The system presents a hypothetical town that seeks to balance the economic benefits of activities that result in phosphorus (P) emissions with the ecological benefits of a healthy lake. The lake naturally receives inflows of phosphorus from the surrounding area and recycles phosphorus within its system. The concentration of P in the lake at every timestep is described by the following equation:

X_{t+1} = X_{t}+a_t+Y_t+\frac{X_t^q}{1+X^q_t}-bX_t

where a_t represents the anthropogenic phosphorus inputs from the town, Y_t ~ LN(mu, sigma) are the natural P inputs to the lake, q is a parameter controlling the recycling rate of P from the sediment, b is a parameter controlling the rate at which P is removed from the system, and t is the time index in years. Looking at the intersections of the temporal fluxes of P, one can see that there are three equilibria for this system: two that are stable and attractive, and one that is unstable, which acts as a tipping point for this system.

Figure by Julie Quinn
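To make the tipping-point behavior concrete, here is a minimal pure-Python sketch of the recurrence above. The parameter values (b = 0.42, q = 2) and the two starting concentrations are illustrative assumptions, and the natural and anthropogenic inputs are set to zero so that only the lake's internal dynamics act:

```python
def lake_step(X, a, Y, b=0.42, q=2.0):
    """One year of lake P dynamics: emissions + natural inputs + recycling - removal."""
    return X + a + Y + X**q / (1 + X**q) - b * X

# With no inputs (a = Y = 0), the trajectory depends on which side of the
# tipping point (about 0.54 for these illustrative parameters) the lake starts on.
low, high = 0.1, 1.0
for _ in range(1000):
    low = lake_step(low, a=0.0, Y=0.0)
    high = lake_step(high, a=0.0, Y=0.0)

# low decays toward the clean equilibrium near 0; high is drawn to the
# eutrophic equilibrium near 1.84
print(round(low, 3), round(high, 3))
```

Starting just below the unstable equilibrium, the lake recovers on its own; starting above it, the lake is pulled irreversibly to the eutrophic state, which is exactly why the tipping point matters for policy design.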

Why this matters:
We're trying to decide on values of a_t that maximize our economic benefits, but these emissions also increase P in the lake, and if we increase P too much, we cross a tipping point of no return. Even though this is vital to the design of the best possible management policy, the value of P at the tipping point is hard to know exactly because:

  • It depends on the stochastic natural inputs Y_t
  • The rate of P recycling is non-linear
  • The system parameters determining this point might be (deeply) uncertain

The management of this hypothetical system is concerned with four objectives:

  • Maximize Expected Economic Benefits
  • Minimize Maximum P Concentration
  • Maximize Inertia (Measure of Stability)
  • Maximize Reliability (Fraction of Years Below Threshold)
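As a rough sketch of how these four objectives can be scored from one simulated trajectory (the values of alpha, delta, the critical threshold X_crit, and the inertia cutoff below are illustrative assumptions; see the paper for the exact formulations):

```python
import numpy as np

def objectives(X, a, alpha=0.4, delta=0.98, X_crit=0.5):
    """Score a trajectory of lake P concentrations X and emission decisions a."""
    benefit = np.sum(alpha * a * delta ** np.arange(len(a)))  # discounted economic benefit
    max_P = np.max(X)                                         # worst-case P concentration
    inertia = np.mean(np.diff(a) > -0.02)                     # fraction of non-drastic policy reductions
    reliability = np.mean(X < X_crit)                         # fraction of years below the threshold
    return benefit, max_P, inertia, reliability

# Toy 4-year trajectory, purely for illustration
benefit, max_P, inertia, reliability = objectives(
    X=np.array([0.1, 0.2, 0.6, 0.3]), a=np.array([0.02, 0.02, 0.01, 0.02]))
```

In the actual problem these quantities are, of course, averaged over many stochastic realizations of the natural inputs Y_t rather than computed from a single trajectory.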

To search for and identify solutions, we will be using two strategies:

Open loop control (Intertemporal):

Each candidate solution is a set of T decision variables a_t, representing the anthropogenic discharges at each timestep t\in(0, 1, \dots, T-1). With T = 100 years, this strategy necessitates 100 decision variables.

Closed loop control (DPS):

Each candidate solution is a control rule mapping the current P concentration, X_t, to a decision a_t. This strategy necessitates 6 decision variables.
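A common way to parameterize such a control rule, and the one used in the paper as far as I recall (the exact formulation should be checked there), is a sum of cubic radial basis functions (RBFs) of the current concentration, clipped to an allowable emission range. The six decision variables are then the centers, radii, and weights of two RBFs. The emission bounds and numbers below are illustrative:

```python
def dps_policy(X_t, centers, radii, weights, a_min=0.01, a_max=0.1):
    """Map the current lake P concentration X_t to an emission decision a_t.

    Sum of cubic RBFs; radii are assumed nonzero, and a_min/a_max are
    illustrative emission bounds.
    """
    a = sum(w * abs((X_t - c) / r) ** 3 for c, r, w in zip(centers, radii, weights))
    return min(max(a, a_min), a_max)  # clip to the allowable emission range

# One candidate DPS solution is just 6 numbers: 2 centers, 2 radii, 2 weights
a_t = dps_policy(0.3, centers=[0.2, 0.7], radii=[0.5, 0.5], weights=[0.4, 0.6])
```

Because the rule reacts to the observed state at every timestep, six numbers can encode an adaptive strategy that the 100-variable open-loop schedule cannot.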

Please refer to the paper for details on the formulations of the objectives and strategies.

Problem Formulation in Rhodium

In general, to set up any Rhodium model, you need to follow the same template: define a Model object wrapping your function, then declare its parameters, responses, levers, and uncertainties.

The scripts for the two formulations can be found here and here. In general, Rhodium is written in a declarative manner, so after you define what everything is (model, parameter, lever, etc.), you only need to describe the operation to be performed (e.g., "optimize"), without needing to specify all the details of how that should be done.

Why this matters:
This library is set up in a way that allows rapid and intuitive application of the MORDM framework. This is especially beneficial in situations where alternative problem formulations are tested (as is the case here with the two search strategies).

Exploring alternatives using J3

To explore the two sets of candidate strategies using the tools provided by Rhodium, we need to merge them into a single dataset. We can do so by adding a key to each solution identifying the strategy that produced it:

for i in range(len(output)):
    output[i]["strategy"] = "Intertemporal"
for i in range(len(dps_output)):
    dps_output[i]["strategy"] = "DPS"

merged = DataSet(output + dps_output)

We can visualize the two sets using J3 (see my past post on it here):

from j3 import J3
J3(merged.as_dataframe(list(model.responses.keys())))  # pass the objectives as a dataframe

Why this matters:
The DPS strategy was able to identify more non-dominated solutions than the intertemporal strategy, and many of the DPS solutions outperform the intertemporal solutions in every objective. This is important because we're trying to identify the best possible management policies for this system: had we only used the intertemporal strategy, we could have missed multiple viable solutions that perform better.

Policy diagnostics

Now that we have a set of candidate management policies, we would like to see what makes their performance different and how they affect the system when applied. Using J3, I can interactively explore the sets using the visualizations provided, as well as the tables reporting the performance of each solution. In the example figure below, I am highlighting the DPS solution with the highest reliability.

Using the script found here, I can draw two DPS policies: the one producing the highest reliability and the one producing the highest profits. What I am trying to do here is investigate more closely how the two policies prescribe action on the system and how that results in the performance they achieve.

Both solutions map low P concentrations, X_t, to higher P releases, a_t, and prescribe decreased releases as the concentration increases. The most reliable policy is more conservative, starting to decrease its releases earlier. Both policies start increasing discharges again beyond some lake P concentration, with the benefits-maximizing solution doing so at lower concentrations. This appears to happen as the levels approach the tipping point, beyond which recovery is no longer possible, so it is best to simply maximize economic benefits.

Why this matters:
Understanding how the candidate policies act on a system to achieve their performance gives us further insight into how the system behaves and reacts to our actions. The candidate policies are not aware of where the critical P threshold is, but they appear to "discover" it; once it is crossed, they prescribe increased emissions to maximize profits.

Robustness analysis and scenario discovery

Finally, we'd like to investigate how the candidate solutions perform under uncertainty. To do so in Rhodium, we need to define which parameters are exogenous uncertainties (including their distributions and ranges), sample them, and re-evaluate either specific solutions or all of them in these sampled worlds. The script is simple and intuitive:

model.uncertainties = [UniformUncertainty("b", 0.1, 0.45),
                       UniformUncertainty("q", 2.0, 4.5),
                       UniformUncertainty("mean", 0.01, 0.05),
                       UniformUncertainty("stdev", 0.001, 0.005),
                       UniformUncertainty("delta", 0.93, 0.99)]
SOWs = sample_lhs(model, 1000)
policy = output.find_max("reliability")
scenario_discovery = evaluate(model, update(SOWs, policy))

To perform scenario discovery, Rhodium supports Prim and Cart. We first need to classify the output into “success” and “failure”:

classification = scenario_discovery.apply("'Reliable' if reliability >= 0.95 and utility >=0.2 else 'Unreliable'")

p = Prim(scenario_discovery, classification, include=model.uncertainties.keys(), coi="Reliable")
box = p.find_box()
fig = box.show_tradeoff()

c = Cart(scenario_discovery, classification, include=model.uncertainties.keys(), min_samples_leaf=50)

Why this matters:
Scenario discovery allows us to identify regions in the parametric space that cause our policies to fail. In this particular case, it appears that a nonlinear combination of b and q values leads to failure in meeting the criteria. Both parameters shift the input and output fluxes of P, moving the tipping point to lower concentrations.
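To see this interplay, note that (with inputs ignored) the tipping point is the unstable equilibrium where sediment recycling equals removal, i.e., where X^q/(1+X^q) = bX. A small sketch locating it by bisection, with illustrative parameter values drawn from the uncertainty ranges above:

```python
def tipping_point(b, q, lo=0.01, hi=1.0, iters=60):
    """Bisect for the root of X^q/(1+X^q) - b*X on (lo, hi): the unstable equilibrium."""
    f = lambda X: X ** q / (1 + X ** q) - b * X
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid  # sign change is in the lower half
        else:
            lo = mid
    return (lo + hi) / 2

base = tipping_point(0.42, 2.0)     # roughly 0.545
lower_b = tipping_point(0.30, 2.0)  # less flushing: drops to about 0.333
```

Consistent with the scenario discovery results, a smaller b (slower removal of P) moves the critical concentration down, making the reliability criterion easier to violate.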

ggplot (Part 1)

In this blog post, I am going to introduce a powerful plotting package in R called ggplot2. This library allows us to quickly create different plots (scatter plots, boxplots, histograms, density plots, time series plots: you name it!) while also customizing them to produce elegant graphics beyond regular line or bar charts. First, we need to install the package and then load it:

install.packages("ggplot2")
library(ggplot2)
I am going to outline how to build two different types of map: (1) a calendar heat map and (2) an alluvial map. The first is used to present the variations of a variable or an activity over a long period of time through color coding, so that we can easily recognize the trend, the seasonality, or any patterns or anomalies. The alluvial map provides better visualization for categorical and multidimensional data: it shows the changes in magnitude of some variables across different situations, which can be any discrete indexes. To create this type of map, we also need the "ggalluvial" library, which has to be loaded along with ggplot2.

The general code structure for plotting calendar heat map is the following:

ggplot ( dataframe , aes ( x , y , fill )) + geom_tile ( ) + facet_grid ( )

With "aes," which stands for aesthetics, we describe how variables in the data frame are mapped to visual properties, so we specify the x- and y-axes. We use the fill aesthetic to specify the fill color for different variables.

"Geom" specifies the geometric objects that define the graph type (point, line, etc.) and shows the value we assigned in "aes(fill)". "geom_tile" tiles the plane with rectangles, using the center of each tile and its size; a rectangle can be divided into smaller rectangles or squares.

The "facet" command creates a trellis graph, laying out panels by one or two variables in addition to those specified in "aes."

We can show daily, weekly, or monthly values in a calendar heat map. As an example, I am using weather data for the Yakima River Basin in central Washington State; these data were originally downloaded from the Northwest Knowledge Network. The data set includes three downscaled global climate models (GCMs) from 2019 to 2061 at a resolution of 1/24th degree; the data are also aggregated over the basin and to a monthly time step. You can download the data here. Now, let's read the data.

gcm1 <- read.csv("(your directory)/CanESM2_rcp85.csv")
gcm2 <- read.csv("(your directory)/inmcm4_rcp85.csv")
gcm3 <- read.csv("(your directory)/GFDL-ESM2G_rcp85.csv")

By running head(gcm1) or colnames(gcm1), you can see the different columns in each data set, including "Year," "Month," the name of the "GCM," and the weather variables tasmin, tasmax, pr, and PotEvap, corresponding to minimum and maximum temperature, precipitation, and potential evapotranspiration. The goal is to visualize the differences between these realizations of monthly precipitation for 21 future years from 2020 to 2040. To lay out panels in facet_grid, I use years and months. Within each month, I show the precipitation value for each GCM.

gcms<- rbind(gcm1,gcm2,gcm3)  # Join three data frames into one

# Add a new column to the data frame and call it “Month_name”; then, fill it with the name of the months for each row
tst<- c("Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec")
gcms$Month_name <- tst[gcms$Month]  # map month numbers to month names
gcms$nmonth<- as.factor(1)  # add a new column to a data frame and fill it with value of 1, as a factor
gcm_2040<- subset(gcms, gcms$Year<2041) # select just the years before 2041
prec_fut<- ggplot(gcm_2040, aes(x=GCM, y=nmonth, fill = pr)) +
  geom_tile() +
  facet_grid(Year~Month_name) +
  theme(strip.text.x = element_text(colour = "black",size =9 ,margin=margin(0.2,0,0.2,0,"cm")),
        strip.text.y = element_text(colour = "black",size = 9,angle =360),
        strip.background = element_rect(colour="black", size=1, linetype="solid"),
        axis.text.x=element_text(size=9,angle = 90),
        panel.background = element_rect(fill = "white")) +
  scale_fill_gradient(low="green",high="red",limits=c(0,230),breaks=c(0,25,50,75,100,125,150,175,200,230),labels=c(0,25,50,75,100,125,150,175,200,230)) +
  labs(title = "Time-Series Calendar Heatmap",
       subtitle="Projected Monthly Precipitation-Yakima River Basin",
       fill="Precipitation (mm)")
ggsave("(your directory)/name.png", prec_fut)

With theme(), you can modify the non-data parts of your plot. For example, "strip.text.x," "strip.text.y," and "strip.background" adjust the facet labels of each panel along the horizontal and vertical directions and the background format.

The “axis.text” and “axis.ticks” commands adjust the tick labels and marks along axes, and by using element_blank(), you can remove specific labels. With “panel.background,” the underlying background of the plot can be customized.

With the argument "limits" in "scale_fill_gradient," you can extend the color bar to the minimum and maximum values you need. In our example, these limits were chosen based on the range of the precipitation data (e.g., obtained with range(gcm_2040$pr)).

With the labs() command, you can change the axis labels and the plot and legend titles. Finally, ggsave() is used to save a plot; the scale, size, and resolution of the saved plot can also be specified. ggsave() supports various formats, including "eps," "ps," "tex" (pictex), "pdf," "jpeg," "tiff," "png," "bmp," "svg," and "wmf" (Windows only).

What can we learn from this graph? Well, we can see the interannual variability of precipitation based on three GCMs and how monthly precipitation varies in different GCMs. Compared to precipitation values for wet months, when variations are generally higher, precipitation values for dry months are more consistent among GCMs.

Now, let's create an alluvial diagram, for which we need to prepare the essential components. The data should be categorical, and each row should contain a frequency for the category we are interested in presenting. In this example, I am going to show simulated (by a crop model) winter wheat yield changes in the dryland low-rainfall zone of the Pacific Northwest during the future period (2055–2085) compared to the historical period (1980–2010). The low zone in this region includes 1,384 grid cells with dimensions of 4 by 4 km. Different GCMs project different weather scenarios, as we showed in the calendar heat map plot. As you can imagine, if you force your crop model with different GCMs, you will get different projections for the crop yield. The example data set can be read in R as follows:

L_zones<- read.csv("(your directory)/yield_gcms.csv",head=T)

This data set includes a few columns: “fre_yield,” “GCM,” “Zone,” “RCP,” and “Ratio.”

For each row, there is a GCM name that corresponds to an RCP and a Zone. The Ratio column shows the category of the yield ratio: the average winter wheat yield during the future period divided by the average yield during the historical period. For each of the categories under the Ratio column, the number of grid cells was counted and is reported under the "fre_yield" column. For example, Row 2 (L_zones[2,]) shows that 976 of the 1,384 grid cells in the low-rainfall zone are projected to have a yield ratio between 1.2 and 1.5 during 2055–2085 compared to 1980–2010, under the CanESM2 GCM and RCP 4.5 weather scenario.

ggplot2 and ggalluvial provide an easy way to illustrate this type of data set. At each category on the x-axis, we can have multiple groups, which are called "strata." Alluvial diagrams have horizontal splines that span the categories on the x-axis, which are called "alluvia." A fixed value is assigned to an alluvium at each category on the x-axis, which can be represented by a fill color.


yield_ratio<- ggplot(data = L_zones, aes(axis=GCM, axis2=RCP, y = fre_yield)) + 
 # We can add more axes (axis3, axis4, …) to have more groups on the x-axis
  scale_x_discrete(limits = c("GCM","RCP"), expand = c(.1, .1)) +
  geom_alluvium(aes(fill = Ratio)) +
  geom_stratum(width = 1/12, fill = "lightgrey", color = "black") + 
  geom_label(stat = "stratum", label.strata = TRUE) +
  scale_fill_brewer(type = "qual", palette = "Set1") +
  labs(y="Number of Grid cells in the Zone",
       title = "Average Change in Winter Wheat Yield During 2055-2085 Compared to 1980-2010",
       subtitle="Low Rainfall Zone in Pacific Northwest")
ggsave("(your directory)/Future_WW_Yield.jpeg",yield_ratio, width=10, height=10)

The y in aes() controls the heights of the alluvia and is aggregated across equivalent observations.

"Scale_x_discrete" allows you to place labels between discrete position scales. You can use "limits" to define the values of the scale and their order, and "expand" adds some space around each value of the scale.

“Geom_alluvium” receives the x and y from the data set from ggplot and plots the flows between categories.

“Geom_stratum” plots the rectangles for the categories, and we can adjust their appearance.

Labels can be assigned to strata by adding stat = "stratum" and label.strata = TRUE to geom_label. The unique values within each stratum are then shown on the map.

“Scale_fill_brewer” is useful for displaying discrete values on a map. The type can be seq (sequential), div (diverging), or qual (qualitative). The “palette” can be a string of the named palette or a number, which will index into the list of palettes of appropriate type.

Now, we can easily see in this graph that the three GCMs used in the crop model produced different results. The changes in winter wheat yield during the future period compared to the historical period are not projected with the same magnitude under different future weather scenarios, and these differences are more pronounced under RCP 8.5 than under RCP 4.5.

A Python Implementation of grouped Radial Convergence Plots to visualize Sobol Sensitivity Analysis results

TL;DR: A Python implementation of grouped radial convergence plots based on code from the Rhodium library. This script will be added to Antonia's repository for radial convergence plots.

Radial convergence plots are a useful tool for visualizing the results of Sobol sensitivity analyses. These plots array the model parameters in a circle and plot the first-order, total-order, and second-order Sobol sensitivity indices for each parameter. The first-order sensitivity is shown as the size of a filled circle, the total-order as the size of a larger open circle, and the second-order as the thickness of a line connecting two parameters.

In May, Antonia created a new Python library to generate radial convergence plots in Python; her post can be found here and the Github repository here. I've been working with the Rhodium library a lot recently and found that it contains a radial convergence plotting function with the ability to plot grouped output, a functionality that is not present in Antonia's repository. This function produces the same plots as Calvin's R package. The grouping functionality allows the user to color code the visualization to improve the interpretability of the results. In the code below, I've adapted the Rhodium function into a standalone Python script that can create visualizations from the raw output of the SALib library. When used on a policy for the lake problem, the code generates the plot shown in Figure 1.

Figure 1: Example radial convergence plot for the lake problem reliability objective. Each of the points on the plot represents a sampled uncertain parameter in the model. The size of the filled circle represents the first-order Sobol sensitivity index, the size of the open circle represents the total-order Sobol sensitivity index, and the thickness of the lines between points represents the second-order Sobol sensitivity indices.

import numpy as np
import itertools
import matplotlib.pyplot as plt
import seaborn as sns
import math
sns.set_style('whitegrid', {'axes.linewidth': 0, 'axes.edgecolor': 'white'})

def is_significant(value, confidence_interval, threshold="conf"):
    if threshold == "conf":
        # significant if the index exceeds its own confidence interval
        return value - abs(confidence_interval) > 0
    else:
        # otherwise interpret the threshold as a fixed numeric cutoff
        return value - abs(float(threshold)) > 0

def grouped_radial(SAresults, parameters, radSc=2.0, scaling=1, widthSc=0.5, STthick=1, varNameMult=1.3, colors=None, groups=None, gpNameMult=1.5, threshold="conf"):
    # Derived from https://github.com/calvinwhealton/SensitivityAnalysisPlots
    fig, ax = plt.subplots(1, 1)
    color_map = {}
    # initialize parameters and colors
    if groups is None:
        if colors is None:
            colors = ["k"]
        for i, parameter in enumerate(parameters):
            color_map[parameter] = colors[i % len(colors)]
    else:
        if colors is None:
            colors = sns.color_palette("deep", max(3, len(groups)))
        for i, key in enumerate(groups.keys()):
            for parameter in groups[key]:
                color_map[parameter] = colors[i % len(colors)]
    n = len(parameters)
    angles = radSc*math.pi*np.arange(0, n)/n
    x = radSc*np.cos(angles)
    y = radSc*np.sin(angles)
    # plot second-order indices
    for i, j in itertools.combinations(range(n), 2):
        #key1 = parameters[i]
        #key2 = parameters[j]
        if is_significant(SAresults["S2"][i][j], SAresults["S2_conf"][i][j], threshold):
            angle = math.atan((y[j]-y[i])/(x[j]-x[i]))
            if y[j]-y[i] < 0:
                angle += math.pi
            line_hw = scaling*(max(0, SAresults["S2"][i][j])**widthSc)/2
            coords = np.empty((4, 2))
            coords[0, 0] = x[i] - line_hw*math.sin(angle)
            coords[1, 0] = x[i] + line_hw*math.sin(angle)
            coords[2, 0] = x[j] + line_hw*math.sin(angle)
            coords[3, 0] = x[j] - line_hw*math.sin(angle)
            coords[0, 1] = y[i] + line_hw*math.cos(angle)
            coords[1, 1] = y[i] - line_hw*math.cos(angle)
            coords[2, 1] = y[j] - line_hw*math.cos(angle)
            coords[3, 1] = y[j] + line_hw*math.cos(angle)

            ax.add_artist(plt.Polygon(coords, color="0.75"))
    # plot total order indices
    for i, key in enumerate(parameters):
        if is_significant(SAresults["ST"][i], SAresults["ST_conf"][i], threshold):
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["ST"][i]**widthSc)/2, color='w'))
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["ST"][i]**widthSc)/2, lw=STthick, color='0.4', fill=False))
    # plot first-order indices
    for i, key in enumerate(parameters):
        if is_significant(SAresults["S1"][i], SAresults["S1_conf"][i], threshold):
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["S1"][i]**widthSc)/2, color='0.4'))
    # add labels
    for i, key in enumerate(parameters):
        ax.text(varNameMult*x[i], varNameMult*y[i], key, ha='center', va='center',
                rotation=angles[i]*360/(2*math.pi) - 90,
                color=color_map[key])
    if groups is not None:
        for i, group in enumerate(groups.keys()):
            group_angle = np.mean([angles[j] for j in range(n) if parameters[j] in groups[group]])
            ax.text(gpNameMult*radSc*math.cos(group_angle), gpNameMult*radSc*math.sin(group_angle), group, ha='center', va='center',
                rotation=group_angle*360/(2*math.pi) - 90,
                color=colors[i % len(colors)])
    plt.axis([-2*radSc, 2*radSc, -2*radSc, 2*radSc])

    return fig

The code below implements this function, using SALib to conduct a Sobol sensitivity analysis on the lake problem and produce Figure 1.

import numpy as np
import itertools
import matplotlib.pyplot as plt
import math
from SALib.sample import saltelli
from SALib.analyze import sobol
from lake_problem import lake_problem
from grouped_radial import grouped_radial

# Define the problem for SALib
problem = {
    'num_vars': 5,
    'names': ['b', 'q', 'mean', 'stdev', 'delta'],
    'bounds': [[0.1, 0.45],
               [2.0, 4.5],
               [0.01, 0.05],
               [0.001, 0.005],
               [0.93, 0.99]]
}

# generate Sobol samples
param_samples = saltelli.sample(problem, 1000)

# extract each parameter for input into the lake problem
b_samples = param_samples[:,0]
q_samples = param_samples[:,1]
mean_samples = param_samples[:,2]
stdev_samples = param_samples[:,3]
delta_samples = param_samples[:,4]

# run samples through the lake problem using a constant policy of .02 emissions
pollution_limit = np.ones(100)*0.02

# initialize arrays to store responses
max_P = np.zeros(len(param_samples))
utility = np.zeros(len(param_samples))
inertia = np.zeros(len(param_samples))
reliability = np.zeros(len(param_samples))

# run model across Sobol samples
for i in range(0, len(param_samples)):
    print("Running sample " + str(i) + ' of ' + str(len(param_samples)))
    max_P[i], utility[i], inertia[i], reliability[i] = lake_problem(pollution_limit,
                                                                    b=b_samples[i],
                                                                    q=q_samples[i],
                                                                    mean=mean_samples[i],
                                                                    stdev=stdev_samples[i],
                                                                    delta=delta_samples[i])

# Get Sobol indices for each response
SA_max_P = sobol.analyze(problem, max_P, print_to_console=False)
SA_reliability = sobol.analyze(problem, reliability, print_to_console=True)
SA_inertia = sobol.analyze(problem, inertia, print_to_console=False)
SA_utility = sobol.analyze(problem, utility, print_to_console=False)

# define groups for parameter uncertainties
groups={"Lake Parameters" : ["b", "q"],
        "Natural Pollution" : ["mean", "stdev"],
        "Discounting" : ["delta"]}

fig = grouped_radial(SA_reliability, ['b', 'q', 'mean', 'stdev', 'delta'], groups=groups, threshold=0.025)