Welcome to our blog!

Welcome to Water Programming! This blog is by Pat Reed’s group at Cornell, who use computer programs to solve problems — Multiobjective Evolutionary Algorithms (MOEAs), simulation models, visualization, and other techniques. Use the search feature and categories on the right panel to find topics of interest. Feel free to comment, and contact us if you want to contribute posts.

To find software:  Please consult the Pat Reed group website, MOEAFramework.org, and BorgMOEA.org.

The MOEAFramework Setup Guide: A detailed guide is now available. The focus of the document is connecting an optimization problem written in C/C++ to MOEAFramework, which is written in Java.

The Borg MOEA Guide: We are currently writing a tutorial on how to use the C version of the Borg MOEA, which is being released to researchers here.

Call for contributors: We want this to be a community resource to share tips and tricks. Are you interested in contributing? Please email dfg42 “at” cornell.edu. You’ll need a WordPress.com account.

Scaling experiments: how to measure the performance of parallel code on HPC systems

Parallel computing allows us to speed up code by performing multiple tasks simultaneously across a distributed set of processors. On high performance computing (HPC) systems, an efficient parallel code can accomplish in minutes what might take days or even years to perform on a single processor. Not all code will scale well on HPC systems however. Most code has inherently serial components that cannot be divided among processors. If the serial component is a large segment of a code, the speedup gained from parallelization will greatly diminish. Memory and communication bottlenecks also present challenges to parallelization, and their impact on performance may not be readily apparent.

To measure the parallel performance of a code, we perform scaling experiments. Scaling experiments are useful as 1) a diagnostic tool to analyze the performance of your code and 2) evidence of code performance that can be used when requesting allocations on HPC systems (for example, NSF's XSEDE program requires scaling information with allocation requests). In this post I'll outline the basics of performing a scaling analysis of your code and discuss how these results are often presented in allocation applications.

Amdahl’s law and strong scaling

One way to measure the performance of a parallel code is through what is known as "speedup", which measures the ratio of computational time in serial to the time in parallel:

speedup = \frac{t_s}{t_p}

Where t_s is the serial time and t_p is the parallel time.

The maximum speedup of any code is limited by the portion of the code that is inherently serial. In the 1960s, Gene Amdahl formalized this limitation by presenting what is now known as Amdahl's law:

speedup = \frac{t_s}{t_p} = \frac{1}{s+(1-s)/p} < \frac{1}{s}

Where p is the number of processors, and s is the fraction of work that is serial.

On its face, Amdahl's law seems like a severe limitation on parallel performance. If just 10% of your code is inherently serial, then the maximum speedup you can achieve is a factor of 10 (s = 0.10, 1/0.1 = 10). This means that even if you run your code on 1,000 processors, it will run at most 10 times faster, so adding processors beyond a modest count yields rapidly diminishing returns. Luckily, in water resources applications the inherently serial fraction of many codes is very small (think ensemble model runs or MOEA function evaluations).
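
To make those numbers concrete, here is a minimal sketch that evaluates Amdahl's law for a 10% serial fraction across several processor counts (the values are just the formula above, not measurements from any real code):

# Minimal sketch: speedup predicted by Amdahl's law for a 10% serial fraction
def amdahl_speedup(s, p):
    # speedup = 1 / (s + (1 - s) / p) for serial fraction s and p processors
    return 1.0 / (s + (1.0 - s) / p)

s = 0.10
for p in (1, 10, 100, 1000):
    print(p, round(amdahl_speedup(s, p), 2))   # 1.0, 5.26, 9.17, 9.91: approaches 1/s = 10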

Experiments that measure the speedup of parallel code are known as "strong scaling" experiments. To perform a strong scaling experiment, you fix the amount of work for the code to do (i.e., run 10,000 MOEA function evaluations) and examine how long it takes to finish with varying processor counts. Ideally, your speedup will increase linearly with the number of processors. Agencies that grant HPC allocations (such as NSF XSEDE) want to see the results of strong scaling experiments presented visually. Below, I've adapted a figure from an XSEDE training on how to assess performance and scaling:

Plots like this are easy for funding agencies to assess. Good scaling performance can be observed in the lower left corner of the plot, where the speedup increases linearly with the number of processors. When the speedup starts to decrease, but has not leveled off, the scaling is likely acceptable. The peak of the curve represents poor scaling. Note that this will actually be the fastest runtime, but does not represent an efficient use of the parallel system.

Gustafson’s law and weak scaling

Many codes will not show acceptable scaling performance when analyzed with strong scaling due to inherently serial sections of code. While this is obviously not a desirable attribute, it does not necessarily mean that parallelization is useless. An alternative measure of parallel performance is to measure the amount of additional work that can be completed when you increase the number of processors. For example, if you have a model that needs to read a large amount of input data, the code may perform poorly if you only run it for a short simulation, but much better if you run a long simulation.

In the 1980s, John Gustafson proposed a relationship that relates parallel performance to the amount of work a code can accomplish. This relationship has since been termed Gustafson's law:

speedup = s + p \times N

Where s and p are once again the portions of the code that are serial and parallel respectively, and N is the number of cores.

Gustafson's law removes the inherent limits imposed by serial sections of code and allows for another type of scaling analysis, termed "weak scaling". Weak scaling is often measured by "efficiency" rather than speedup. Efficiency is calculated by proportionally scaling the amount of work with the number of processors and measuring the ratio of completion times:

efficiency = \frac{t_1}{t_N}

Ideally, efficiency will be close to one (the time it takes one processor to do one unit of work is the same time it takes N processors to do N units of work). For resource allocations, it is again advisable to visualize the results of weak scaling experiments by creating plots like the one shown below (again adapted from the XSEDE training).
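
Before plotting, the efficiency values themselves are straightforward to tabulate from measured runtimes; here is a minimal sketch (the runtimes below are made-up placeholders, not real measurements):

import numpy as np

procs = np.array([1, 2, 4, 8, 16])
runtime = np.array([100.0, 102.0, 105.0, 112.0, 125.0])  # seconds to do N units of work on N processors (placeholders)
efficiency = runtime[0] / runtime                         # t_1 / t_N; ideally stays close to 1
for n, e in zip(procs, efficiency):
    print(n, round(e, 2))   # 1.0, 0.98, 0.95, 0.89, 0.8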

Final thoughts

Scaling experiments will help you understand how your code scales and give you a realistic idea of the computational requirements of large experiments. Unfortunately, however, they will not diagnose the source of poor scaling. To improve scaling performance, it often helps to improve the serial version of your code as much as possible. A helpful first step is to profile your code. Other useful tips are to reduce the frequency of data input/output and (if using compiled code) to check the flags on your compiler (see some other tips here).

MORDM Basics IV: Visualizing ROF-Storage Dynamics (finally)

The previous post described a simple, two-objective test case in which the city of Cary employed risk-of-failure (ROF) triggers as levers to adjust for its preferred tradeoff level between its objectives. The example given showed how ROF triggers allowed Cary to account for future uncertainty in its system inputs, thus enabling it to visualize how its risk appetite would affect its desired outcomes.

In meeting these objectives, different risk thresholds would have affected Cary’s response to extreme events such as floods and droughts, and its ability to fulfill demand. Simply analyzing the tradeoffs between objectives that result from a range of ROF trigger values only presents one side of the story. It is vital to visualize how these performance objectives and tradeoffs manifest in the system’s capacity (pun intended) to store enough water in times of scarcity, and by extension, its ability to fulfill its customers’ demand for water.

Using ROFs allows us to more concretely measure how the dynamics of both storage and demand fulfillment evolve and change over time for a given risk tolerance. In the long term, these dynamics will influence when and where new water infrastructure is built to cope with storage requirements and demand growth, but this is a topic for a future blog post. This week, we will focus on unpacking the dynamic evolution of storage and demand in response to different ROF trigger values.

As a quick refresher, our system is a water supply utility located in Cary, a city within the Research Triangle region of North Carolina (Trindade et al., 2017). Cary imposes water-use restrictions, during which only 50% of demand is met, whenever the weekly ROF exceeds the threshold of risk that Cary is willing to tolerate (α). More frequent water use restrictions help to maintain reservoir levels and ensure reliability, which was defined in the previous blog post. However, the decision to implement restrictions (or not) will impact the storage levels of the system. With this in mind, we will first examine how storage responds to the triggering of a water use restriction. For context, we consider a scenario in which Cary's inflow timeseries is only 20% of the original levels. Figure 1 below shows the inflow, demand and storage timeseries for this scenario.

Figure 1: The hydrologic timeseries for Cary given that no water restrictions are implemented in a scenario where inflows are 20% of the original levels.

Cary's challenge becomes apparent in Figure 1. While inflow decreases over time (fewer peaks), demand grows steadily and has effectively tripled by the end of the period. This results in periods during which storage levels drop to zero, which occurs once the simulation passes 2040. Also note that the frequency of low-storage periods increases in the second half of the period. The following questions can thus be posed:

  1. How does the system’s ROF change with increasing demand and decreasing supply?
  2. How does risk tolerance affect the implementation of water-use restrictions during drought?
  3. How will the system reservoir levels respond to different levels of risk tolerance?
Figure 2: The length of the pink bars denotes the nth week during which the first water use restriction was implemented for a given α-value. This is an indicator of the responsiveness of the system to a drought, or decrease in storage levels. The blue line indicates the percent of storage filled with water.

To answer the first question, it is useful to identify how different values of α affect the first instance of a water-use restriction. Figure 2, generated using 'rof_dynamics.py', demonstrates that lower risk tolerances result in earlier implementation of restrictions. This is reasonable, as an actor who is more risk-averse will quickly implement water-use restrictions to maintain reliable levels of storage during a drought. However, an actor who is more willing to tolerate the chance of low reservoir levels will delay implementing water use restrictions. The blue line juxtaposed on top of the bars indicates the inflows to the reservoir. After the first period of low flows between weeks 30 and 40, the plot shows that the inflows do not recover and are likely insufficient to refill the reservoir to its initial levels. With a lower α, an actor is more likely to implement restrictions almost immediately after observing merely a few weeks of low inflows. In contrast, an actor who opts for a higher α will only resort to restrictions after seeing an extended period of low flows, during which they can be more certain that restrictions are absolutely necessary.

Answering the second and third questions first requires that periods of drought be more definitively quantified. To do this, the standardized streamflow indicator (SSI6) was used. The SSI6 is a method that identifies time periods during which the standardized inflow is less than the 6-month rolling mean (Herman et al., 2016). It detects a drought period when the value of the SSI6 < 0 for three consecutive months and SSI6 < -1 at least once during that three-month period. Juxtaposing the storage-restriction dynamics and the periods of drought will allow us to see where restrictions were imposed and their impacts on reservoir levels for a given demand timeseries.
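
As a rough illustration of that drought-detection rule, here is a minimal sketch in Python. Note that the stand-in index below is just a standardized 6-month rolling-mean inflow, not the exact SSI6 computation of Herman et al. (2016), and the inflow series is synthetic:

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
inflow = pd.Series(rng.gamma(2.0, 10.0, size=540))   # hypothetical 45 years of monthly inflows

# Stand-in index: standardized 6-month rolling-mean inflow (illustrative only, not the exact SSI6)
roll = inflow.rolling(6).mean()
ssi6 = (roll - roll.mean()) / roll.std()

# Drought rule from the text: SSI6 < 0 for three consecutive months,
# with SSI6 < -1 at least once in that three-month window
in_drought = np.zeros(len(ssi6), dtype=bool)
for t in range(2, len(ssi6)):
    window = ssi6.iloc[t - 2:t + 1]
    if (window < 0).all() and (window < -1).any():
        in_drought[t - 2:t + 1] = True

print(in_drought.sum(), "months flagged as drought")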

Figures 3 and 4 visualize how the system's storage levels respond to drought (the red bars in the lower subplot) by implementing water-use restrictions (the dark red lines in the upper subplot) given α = 1% and α = 15%, respectively. Predictably, restrictions coincide with periods of drought as defined by the SSI6. However, with a lower risk tolerance, periods of restrictions are longer and more frequent. As Figure 3 shows, an actor with a lower risk tolerance may implement restrictions where only a slight risk of failure exists.

Figure 3: Storage dynamics given α=1%. (Upper subplot) The blue lines indicate the reservoir storage levels in billion gallons per week. The yellow lines are the weekly ROF values, or the likelihood that the amount of water stored will drop below 20% of full reservoir capacity. The grey lines indicate where water use restrictions are implemented, and the red dashed line denotes α=1%. (Lower subplot) The zones where droughts were detected using the SSI6 method (Herman et al., 2016) are highlighted in red.

Compared to α = 1%, an actor who is willing to tolerate higher ROF values (Figure 4, for example) will implement restrictions less frequently and for shorter periods of time. Although this means that demand is less likely to be disrupted, it also puts water supplies at a higher risk of dropping to critical levels (<20%), as restrictions may not be implemented even during times of drought.

Figure 4: Storage dynamics given α=15%. (Upper subplot) The blue lines indicate the reservoir storage levels in billion gallons per week. The yellow lines are the weekly ROF values, or the likelihood that the amount of water stored will drop below 20% of full reservoir capacity. The grey lines indicate where water use restrictions are implemented, and the red dashed line denotes α=15%. (Lower subplot) The zones where droughts were detected using the SSI6 method (Herman et al., 2016) are highlighted in red.

There is one important thing to note when comparing Figures 3 and 4. When the periods of water use restrictions coincide for both α-values (between 2040 and 2050), the actor with the lower tolerance implements water use restrictions at the beginning of both drought periods. This decision makes the biggest difference in terms of reservoir storage levels. By implementing water use restrictions early and for a longer period of time, Cary's reservoir levels are consistently kept above 50% of full capacity (given a full capacity of 7.45 BG). The actor with the higher risk tolerance instead sees water levels drop below 30% of full capacity during periods of drought.

Although this seems undesirable, recall that the system is only said to have failed if storage drops below 20% of full capacity. Herein lies the power of using an ROF metric: questions 2 and 3 can be answered by generating storage-restriction response figures like those shown above, which allow an actor to examine the consequences of varying levels of risk tolerance on their ability to fulfill demand while maintaining sufficient water levels. This ability can improve judgement on how much risk a utility can actually tolerate without adversely impacting the socioeconomic aspects of the systems that depend on the water supply utility. In addition, using ROFs enables a utility to better estimate when new infrastructure really needs to be built, instead of making premature investments as a result of unwarranted risk aversion.

To briefly summarize this blog post, we have shown how different risk tolerance levels affect the decisions made by an actor, and how these decisions in turn impact the system. Not shown here is the ability of an ROF to evolve over time given climate change and the building of new water supply infrastructure. In the next blog post, we will briefly discuss the role of ROFs in mapping out adaptation pathways for a utility, how ROFs form the basis of a dynamic and adaptive pathway and their associated operation policies, and connect this to the concept of the soft path (Gleick, 2002) in water supply management.

References

Gleick, P., 2002. Water management: Soft water paths. Nature, 418(6896), pp.373-373.

Herman, J., Zeff, H., Lamontagne, J., Reed, P. and Characklis, G., 2016. Synthetic Drought Scenario Generation to Support Bottom-Up Water Supply Vulnerability Assessments. Journal of Water Resources Planning and Management, 142(11), p.04016050.

Trindade, B., Reed, P., Herman, J., Zeff, H. and Characklis, G., 2017. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty. Advances in Water Resources, 104, pp.195-209.

NetCDF Operators

This post is an introduction to the Linux-based Climate Data Operators (CDO) and NetCDF Operators (NCO), which allow you to perform various operations on NetCDF files through the command line. I found these commands to be really nifty when I was working with pre-industrial control runs from a GCM. The output was being written at a daily timestep, across 1200 years, and for the whole world, so it was absolutely essential that I cut the size of the files down as much as I could before transferring them to my own computer.

The official documentation and installation instructions for NCO can be found here and for CDO here, but if you're working on a supercomputer, the libraries will likely already be installed. I will outline how I used some of these functions for my pre-industrial runs.

Concatenation

Some of the NCO commands have size limits of 40 GB, so it's important to use the right order of operations when processing your files, which will differ depending on your ultimate goal. My goal was ultimately to get the 500-hPa geopotential height anomalies across the whole 1200-year period for the Western US specifically. Assuming you have a directory with all the NetCDF files, the first goal is to concatenate the data, since my run was broken into many smaller files. The easiest way to do this is with the following command, which will take all the NetCDF files in the directory (using the *) and merge them into a file called merged_file.nc:

cdo mergetime *.nc merged_file.nc

Return Individual Monthly Files

When calculating anomalies, you will need to determine a mean geopotential height value for each of the 12 months, and then calculate daily deviations with respect to these monthly means. You can split the merged file into months with the following command:

cdo splitmon merged_file.nc zg500.mon

This command will return 12 files of the form zg500.mon$i.nc.

Return Monthly Means and Daily Anomalies

The next step is to calculate a monthly mean for each of these files. For example, for January use:

cdo timmean zg500.mon1.nc zg500.mean.mon1.nc

Return Anomalies

Now we subtract the monthly means from each monthly file to return the daily anomalies for each month, which will be of the form zg500.mean.mon${i}.anom.nc. If you want to combine the last two steps into one loop, you can use the code below:

for i in $(seq 1 12)
do
  cdo timmean zg500.mon${i}.nc zg500.mean.mon${i}.nc
  cdo sub zg500.mon${i}.nc zg500.mean.mon${i}.nc zg500.mean.mon${i}.anom.nc
done 

Cut Down to Geographical Area of Interest

Finally, we need to cut the data down to just the Western US. We use ncks (NetCDF Kitchen Sink) from NCO, which is probably the most versatile of all the NCO functions (hence the name). This command is one that has the 40 GB limit, which is why I had to wait until I had monthly files before I could cut them down geographically. We must first specify the variable of interest using the -v flag. In my case, I only had one variable to extract, but you can also extract multiple variables in one command. Then denote the range of latitude and longitude using the -d flags. It is very important to include the periods at the end of each lat/lon bound (even if your bounds are integers); otherwise the command does not work.

for i in $(seq 1 12)
do
  ncks -v zg500 -d lon,180.,260. -d lat,30.,60. zg500.mean.mon${i}.anom.nc -o zg500.mean.mon${i}.cut.anom.region.nc
done 

Ultimately, you will get 12 files of the form zg500.mean.mon${i}.cut.anom.region.nc. And that's it! You can concatenate the monthly files back together and re-sort the data into the correct sequence according to time. I was unsuccessful at finding a quick way to do this, but it is possible. I found it much easier to work on this within R: I imported each of the 12 anomaly files, assigned a time vector, concatenated each monthly anomaly matrix into a larger matrix, and then sorted according to date. If your files are small enough by the end of the process, this is likely the easiest way to take care of the re-sorting. Enjoy!
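
As a quick postscript, here is a minimal sketch of that last re-sorting step in Python using xarray. This is my own suggestion rather than part of the original workflow, and it assumes the 12 regional anomaly files produced above sit in the working directory:

import xarray as xr

# The 12 regional monthly-anomaly files produced by the loop above
files = [f"zg500.mean.mon{i}.cut.anom.region.nc" for i in range(1, 13)]
ds = xr.concat([xr.open_dataset(f) for f in files], dim="time")  # stack the monthly files
ds = ds.sortby("time")                                           # restore chronological order
ds.to_netcdf("zg500.anom.region.sorted.nc")                      # single, time-sorted output file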

Visualizing large directed networks with ggraph in R

How you choose to visualize complex multidimensional data can significantly shape the insights your audience derives from the plots. My colleague Antonia has written a couple of excellent blog posts on analyzing geospatial networks in Python using the NetworkX library which can be found here and here. I generally lean towards Python for coding but have recently come around on R, mostly because of how easy it is in R to make pretty network visualizations. I will go over some basic network visualizations in R using the igraph and ggraph libraries in this blog post. All the code and data I am using can be found here.

The data I will be using in this post is a processed and cleaned CSV file of Upper Colorado River Basin (UCRB) user interactions obtained from CDSS. The Colorado River is one of the most important river systems in North America, supplying water to millions of Americans. The river has been facing a record 20-year drought, a situation that is further complicated by the prior appropriation doctrine, under which senior users can put junior ones out of priority until their own needs are met.

Shown below is a snippet of the dataset. Column A shows the user whose water right was put out of priority in 2002 by a water right in column C. Column E shows the total number of days in 2002 that a water right in column A was put out of priority by one in column C. The rest of the columns contain user attributes.

Now on to turning this lifeless spreadsheet into some pretty pictures. We begin by importing all the necessary libraries.

library(tidyverse)
library(igraph)
library(ggraph)
library(dplyr)
library(ggplot2)

Next we will import the csv-file shown above, and create a list of nodes and edges. This information can be used by the igraph library to create a directed network object. In this directed network, the source node of each edge is priorityWdid (column C) while the destination node is analysisWdid (column A), since the former is putting a call on the latter to divert flow away.

# read network .csv file
data <- read.csv('network_csv_files/priorityWdid/v2_network_2002.csv')

# create nodes
from <- unique(data[c('priorityWdid','priorityStructure', 'priorityNetAbs', 
                      'priorityStreamMile')]) %>% rename(wdid = priorityWdid) %>%
  rename(structure = priorityStructure) %>% rename(netAbs = priorityNetAbs) %>%
  rename(streamMile = priorityStreamMile)
to <- unique(data[c('analysisWdid','analysisStructure', 'analysisNetAbs',
                    'analysisStreamMile')]) %>% rename(wdid = analysisWdid) %>% 
  rename(structure = analysisStructure) %>% rename(netAbs = analysisNetAbs) %>%
  rename(streamMile = analysisStreamMile)
nodes <- unique(rbind(from, to))

# create edges
edges <- data[c('priorityWdid', 'analysisWdid', 'sumWtdCount')] %>% rename(from = priorityWdid) %>% rename(to = analysisWdid)

# create network using igraph package
network <- graph_from_data_frame(d = edges, vertices = nodes, directed = TRUE)

This network has over 1400 nodes! That would be an unreadable mess of a visualization, so let us filter down this network to only include interactions between the 200 senior-most water rights in the UCRB.

# only include top 200 users by seniority
users <- read.csv('../data/CDSS_WaterRights.csv')
users <- head(users[order(users$Priority.Admin.No),], 200)

top_users <- users$WDID
for (i in 1:length(top_users)){
  top_users[i] <- toString(top_users[i])
}

# create subnetwork with only top 200 users by seniority
sub_nodes <- intersect(nodes$wdid, top_users)
subnet <- induced.subgraph(network, sub_nodes)

Excellent! Only 37 of the top 200 users in the UCRB interacted with each other in 2002. This is a much more manageable number for plotting. Let us start with the most basic visualization. In keeping with ggplot syntax, we can conveniently just keep adding plot specifications with a “+”.

# basic network graph
ggraph(subnet, layout = 'stress') +
  ggtitle('2002') + 
  geom_edge_link() +
  geom_node_point() +
  theme_graph()

Alright this is nice, but not particularly insightful. We have no idea which user each node corresponds to, and this figure would have us believe that all nodes and edges were created equal. When we constructed the network we actually added in a bunch of node and edge attributes, which we can now use to make our visuals more informative.

Shown below is a more helpful visualization. I went ahead and added some attributes to the network and labelled all the nodes with their structure names. The nodes are sized by their out-degree and colored by their stream mile. The edge widths are determined by the total number of days the source node put the destination out of priority. In order to do this, I leveraged an amazing feature of ggplot2 and ggraph called aesthetic mapping, which is a quick way to map variables to visual cues on a plot. It automatically scales the data and creates a legend, which we can then further customize.

# plot graph in circular layout
ggraph(subnet, layout = "circle") +
  ggtitle('2002: "circle" Layout') +
  geom_edge_link(aes(width = sumWtdCount), alpha = 0.8, color = 'skyblue', 
                 arrow = arrow(length = unit(2, 'mm')), end_cap = circle(2, 'mm')) +
  labs(edge_width = "Number of days") + 
  geom_node_point(aes(size = deg, colour=streamMile)) +
  labs(colour = "Stream mile") +
  labs(size = "Out-degree") +
  scale_color_gradient(low = "skyblue", high = "darkblue") +
  scale_edge_width(range = c(0.2, 2)) +
  geom_node_text(aes(label = structure), repel = TRUE, size=2) +
  theme_graph()

The above network has a circle layout because it’s the easiest to read and is replicable. But there are actually a number of layouts available to choose from. Here is another one of my favorites, graphopt. While this layout is harder to read, it does a better job of revealing clusters in the network. The only change I had to make to the code above was swap out the word ‘circle’ for ‘graphopt’!

# plot graph in graphopt layout
set.seed(1998)
ggraph(subnet, layout = "graphopt") +
  ggtitle('2002: "graphopt" Layout') +
  geom_edge_link(aes(width = sumWtdCount), alpha = 0.8, color = 'skyblue', 
                 arrow = arrow(length = unit(2, 'mm')), end_cap = circle(2, 'mm')) +
  labs(edge_width = "Number of days") + 
  geom_node_point(aes(size = deg, colour=streamMile)) +
  labs(colour = "Stream mile") +
  labs(size = "Out-degree") +
  scale_color_gradient(low = "skyblue", high = "darkblue") +
  scale_edge_width(range = c(0.2, 2)) +
  geom_node_text(aes(label = structure), repel = TRUE, size=2) +
  theme_graph()

The above graph would be a lot easier to read if it weren't for the long labels cluttering everything up. One way to deal with this is to adjust the opacity (alpha) of the text by the degree of the node. This way only the important and central nodes will have prominent labels. Again, all I have to do is add two extra arguments in line 12 of the code block above. Notice that I set show.legend to FALSE because I don't want a legend entry for text opacity in my plot.

ggraph(subnet, layout = "graphopt") +
  ggtitle('2002: "graphopt" Layout') +
  geom_edge_link(aes(width = sumWtdCount), alpha = 0.8, color = 'skyblue', 
                 arrow = arrow(length = unit(2, 'mm')), end_cap = circle(2, 'mm')) +
  labs(edge_width = "Number of days") + 
  geom_node_point(aes(size = deg, colour=streamMile)) +
  labs(colour = "Stream mile") +
  labs(size = "Out-degree") +
  scale_color_gradient(low = "skyblue", high = "darkblue") +
  scale_edge_width(range = c(0.2, 2)) +
  geom_node_text(aes(label = structure, alpha = deg), repel = TRUE, size=2, show.legend = F) +
  theme_graph()

This is just a small sampling of the possibilities for network visualization in R. I have only just begun exploring the igraph and ggraph libraries, but the syntax is fairly intuitive, and the resultant plots are highly customizable. The data-to-viz blog is a pretty incredible resource to look at other network visualizations in R, if you are interested.

MORDM Basics III: ROF Triggers and Performance Objective Tradeoffs

We recently covered an introduction to the concept of risk of failure (ROF), ROF triggers and ROF table generation. To provide a brief recap, an ROF is the probability that the system will fail to meet its performance objectives, whereas an ROF trigger is a measure of the amount of risk that a stakeholder is willing to take before initiating mitigating or preventive action. We also discussed the computational drawbacks of iteratively evaluating the ROF for each hydrologic scenario, and generated ROF tables as a way to circumvent those drawbacks.

In this post, we will explore the use of ROF metrics and triggers as levers to adjust for preferred levels of tradeoff between two performance objectives. Once again, we will revisit Cary, a city located in the Research Triangle region of North Carolina whose stakeholders would like to develop a robust water management policy.

To clarify, we will be generating ROF metrics while evaluating the performance objectives and will not be using the ROF tables generated in the previous blog post. Hence, as stated in Bernardo's blog post, we will begin generating ROF metrics using data from the weeks immediately prior to the current week. This is because performance metrics reflect current (instead of historical) hydrologic dynamics. To use ROF tables for performance metrics, a table update must be performed. This is a step that will possibly be explored in a future methodological blog post.

The test case

The city of Cary (shown in the image below) is supplied by Jordan Lake. It has 50 years of historical streamflow and evaporation rate data, which can be found in the first 2600 columns of the data files in the GitHub repository. In addition, these files contain 45 years of synthetically generated projected streamflow and evaporation data obtained from Cary's stakeholders. The stakeholders also have 45 years of projected demand, and would like to use a combination of historical and synthetic streamflow and evaporation to explore how their risk tolerance will affect their water utility's performance over 45 years.

Cary is located in the red box shown in the figure above (source: Trindade et al., 2019).

Performance objectives

Two performance objectives have been identified as measures of Cary’s water utility’s performance:

Maximize reliability: Cary's stakeholders would like to maximize the reliability of the city's water supply. They have defined failure as at least one event in a year in which the Jordan Lake reservoir levels drop below 20% of full capacity. Reliability is calculated as follows:

Reliability = 1 – (Total number of failures over all realizations/Total number of realizations)

Minimize water use restrictions: Water use restrictions are triggered every time the ROF for the current week exceeds the ROF trigger (or threshold) that has been set by Cary's stakeholders. Since water use restrictions have negative political and social implications, the average frequency of water use restrictions per realization should be minimized and is calculated as follows:

Average water use restriction frequency = Total number of restrictions per realization per year / Total number of realizations
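
To make these two definitions concrete, here is a minimal sketch of how the metrics might be computed from simulation output. The arrays below are random placeholders rather than output from the actual model, and the restriction-frequency line reflects one reading of the formula above (total restrictions divided by the number of realizations):

import numpy as np

# Placeholder results for 1,000 realizations over a 45-year planning horizon (not actual model output)
rng = np.random.default_rng(0)
n_realizations, n_years = 1000, 45
failed = rng.random(n_realizations) < 0.02                        # True if a realization had at least one failure
restrictions = rng.poisson(1.5, size=(n_realizations, n_years))   # restrictions triggered per realization and year

reliability = 1.0 - failed.sum() / n_realizations
avg_restriction_freq = restrictions.sum() / n_realizations        # average number of restrictions per realization
print(round(reliability, 3), round(avg_restriction_freq, 1))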

Visualizing tradeoffs

Here, we will begin with a moderate scenario in which the Jordan Lake reservoir is 40% full. We will examine the response of average reliability and restriction frequency over 1000 realizations for varying values of the ROF trigger.

Since the risk tolerance of a stakeholder will affect how often they choose to implement water use restrictions, this will, by extension, affect the volume of storage in the reservoir. Intuitively, a more risk-averse stakeholder would choose to prioritize supply reliability (i.e., consistent reservoir storage levels), which requires more frequent water use restrictions. The converse is also true.

The code to generate this tradeoff plot can be found under tradeoff.py in the GitHub folder. This Python script contains the following helper functions:

  1. check_failure: Checks if current storage levels are below 20% of full reservoir capacity (see the sketch after this list).
  2. rof_evaluation: Evaluates the weekly ROF metrics for current demands, streamflows, and evaporation rates.
  3. restriction_check: Checks if the current weekly ROF exceeds the threshold set by the stakeholder (also sketched below).
  4. storage_r: Calculates the storage based on the ROF metrics. If a restriction is triggered, only 90% of total weekly demand is met for the smaller of either the next 4 weeks (one month of water use restrictions) or the remainder of the realization.
  5. reliability_rf_check: Checks the reliability and the restriction frequency over all realizations for a given ROF trigger.
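
As a reference for the first and third items, here is a rough sketch of what that logic might look like. The function names come from the list above, but the bodies are my own assumptions and not the actual code in tradeoff.py:

def check_failure(storage, capacity):
    # Failure: storage drops below 20% of full reservoir capacity (threshold stated in the text)
    return storage < 0.2 * capacity

def restriction_check(weekly_rof, alpha):
    # Trigger a water use restriction if the current weekly ROF exceeds the stakeholder's trigger alpha
    return weekly_rof > alpha

print(check_failure(storage=2.0, capacity=7.45))       # False: 2.0 BG is above 20% of 7.45 BG
print(restriction_check(weekly_rof=0.03, alpha=0.01))  # True: a 3% ROF exceeds a 1% trigger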

Send help – what is going on here?

Picture yourself as Cary. Knowing that you cannot take certain actions without adversely affecting the performance of your other system objectives, you would like an intuitive, straightforward way to ‘feel around’ for your risk tolerance. More traditionally, this would be done by exploring different combinations of your system’s decision variables (DVs) – desired reservoir storage levels, water use restriction frequency, etc – to search for a policy that is both optimal and robust. However, this requires multiple iterations of setting and tuning these DVs.

In contrast, the use of ROF metrics is more akin to a ‘set and forget’ method, in which your risk appetite is directly reflected in the dynamic between your performance objectives. Instead of searching for specific (ranges of) DV values that map to a desired policy, ROF metrics allow you to explore the objective tradeoffs by setting a threshold of acceptable risk. There are a couple of conveniences that this affords you.

Firstly, the number of DVs can be reduced. The examples of DVs given previously simply become system inputs, and ROF trigger values instead become your DVs, with each ROF trigger a reflection of the risk threshold that an objective should be able to tolerate. Consequently, this allows a closed-loop system to be approximated. Once an ROF trigger is activated, a particular action is taken, which affects the performance of the system in future timesteps. These 'affected' system states then become the input to the next timestep, which will be used to evaluate the system performance and determine whether an ROF trigger should be activated.

An example to clear the air

The closed-loop approximation of Cary’s water supply system.

In the Python script described above, there is only one DV: the ROF trigger for water use restrictions. If the ROF for the current week exceeds this threshold, Cary implements water use restrictions for the next 30 days. This in turn will impact the reservoir storage levels, maintaining a higher volume of water in Jordan Lake and improving future water supply reliability. More frequent water restrictions imply a higher reliability, and vice versa. Changing the ROF trigger value can be thought of as turning a dial that changes the degree of tradeoff between your performance objectives (Gold et al., 2019). The figure on the right summarizes this process:

This process also allows ROF triggers to account for future uncertainty in the system inputs (storage levels, streamflow, demand growth rates and evaporation rates) using present and historical observations of the data. This is particularly useful when making decisions under deep uncertainty (Marchau et al., 2019), where the uncertainty in the system inputs and internal variability can be difficult to characterize. Weekly ROFs dynamically change to reflect a posteriori system variations, enabling adaptivity and preventing the decision lock-in (Haasnoot et al., 2013) characteristic of more a priori methods of decision-making.

Summary

Here we have shown how setting different ROF triggers can affect a system’s performance objective tradeoffs. In simpler terms, a stakeholder with a certain policy preference can set an ROF trigger value that results in their desired outcomes. Using ROF triggers also allows stakeholders the ease and flexibility to explore a range of risk tolerance levels through simulations, and discover vulnerabilities (and even opportunities) that they may have previously not been privy to.

Coming up next, we will cover how ROF triggers can be used to approximate a closed-loop system by examining the changing storage dynamics under a range of ROF trigger values. To do this, we will generate inflow and storage time series, and examine where water use restrictions were activated under different ROF trigger values. These figures will also be used to indicate the effect of ROF triggers on a utility’s drought response.

References

Gold, D. F., Reed, P. M., Trindade, B. C., & Characklis, G. W. (2019). Identifying actionable compromises: Navigating multi‐city robustness conflicts to discover cooperative safe operating spaces for regional water supply portfolios. Water Resources Research, 55(11), 9024-9050. doi:10.1029/2019wr025462

Haasnoot, M., Kwakkel, J. H., Walker, W. E., & Ter Maat, J. (2013). Dynamic adaptive policy pathways: A method for crafting robust decisions for a deeply uncertain world. Global Environmental Change, 23(2), 485-498. doi:10.1016/j.gloenvcha.2012.12.006

Marchau, V., Walker, W. E., M., B. P., & Popper, S. W. (2019). Decision making under deep uncertainty: From theory to practice. Cham, Switzerland: Springer.

Trindade, B., Reed, P., & Characklis, G. (2019). Deeply uncertain pathways: Integrated multi-city regional water supply infrastructure investment and portfolio management. Advances in Water Resources, 134, 103442. doi:10.1016/j.advwatres.2019.103442

Do The (Schaake) Shuffle

This post is an introduction to the Schaake Shuffle, a method that can be used to reconstruct space-time variability in forecasted and synthetic variables. The Schaake Shuffle was originally introduced in a synthetic weather generation post by Julie Quinn almost 5 years ago. Lately, the importance (and difficulty) of being able to reproduce spatial and temporal variability in forecasts and synthetically generated variables across multiple correlated sites has been a prominent topic in our group. The goal of this post is just to "bump" this topic back into discussion and to make readers aware of its existence as a nifty post-generation way to build spatial and temporal variability back into synthetically generated data or forecasts. In the fundamental paper that establishes the method, Clark et al. (2004), the authors apply the method to forecasts of precipitation and temperature. In the case of weather variables such as temperature and precipitation, it is common to create forecasts for individual stations from a Numerical Weather Prediction (NWP) model. These variables serve as predictor variables in regression models that can be used to generate forecasts. The problem with these styles of approaches is that neither the spatial correlation between stations nor temporal persistence is preserved, both of which are very important for hydrologic applications with memory.

The Schaake Shuffle is a method that reorders ensemble forecasts of precipitation and temperature to better reconstruct the space-time variability using a rank-ordering approach constructed from a historical record. The basic steps are as follows:

  1. Gather appropriate data: The NWP model outputs forecasts of accumulated precipitation, air temperature, relative humidity at 700 hPa, wind speed, total column precipitable water, and mean sea level pressure, which are used as predictors in the forecast equations. Further, the authors acquire historical precipitation and temperature data for stations within four basins across the United States.
  2. Create Forecasts: The next step involves creating the precipitation and temperature ensemble forecasts. A multiple linear regression is used to develop model output statistics (MOS) equations. The forecasted variables that are taken from the NWP model are ultimately filtered down to keep only the variables that explain the highest variance in the response variable (in this example, the response variables are precipitation, minimum temperature, and maximum temperature). A separate regression equation is fit for each variable, station, and forecast lead time. The residuals of the regression equation are modeled stochastically to generate an ensemble of forecasts. Alternatively, one can apply the Schaake Shuffle to synthetically generated ensembles (not limited to forecasts).
  3. Reorder Forecasts: The reordering method can best be described by an example. For a given time, assume you have an ensemble of possible forecasts that you align in a 3D matrix X_{i,j,k}, where i = ensemble member, j = station, and k = variable of interest (precipitation or temperature). From the historical record, you must construct an equally sized matrix Y_{i,j,k} which contains historical station observations for the same date. For this matrix, i = an index of dates in the historical time period, j = station, and k = variable of interest (precipitation or temperature).

Using this same notation, the authors create a toy example to demonstrate the process. For some time t, imagine we have a 10-member ensemble of maximum temperature forecasts for a given date and station.

Let X be a 10 member ensemble in consideration.

X=[15.3,11.2,8.8,11.9,7.5,9.7,8.3,12.5,10.3,10.1]

We can sort the vector X to create χ

χ=[7.5,8.3,8.8,9.7,10.1,10.3,11.2,11.9,12.5,15.3].

Then we go to the historical record and choose 10 dates that reside in a 7 day window around the date that is being forecasted. This is our Y vector.

Y=[10.7,9.3,6.8,11.3,12.2,13.6,8.9,9.9,11.8,12.9]

We can sort this vector to create γ.

γ=[6.8,8.9,9.3,9.9,10.7,11.3,11.8,12.2,12.9,13.6]

We also create a vector B, which records the position of each element of the sorted historical vector γ in the unsorted vector Y.

B=[3,7,2,8,1,4,9,5,10,6]

The key now is to reorder the ensemble forecast according to the B vector. The value 1 appears in position 5 of the B vector, so the first entry of the shuffled ensemble is the 5th value of χ (10.1). The value 2 appears in position 3 of B, so the second entry is the 3rd value of χ (8.8). We continue doing this until we have

Xss=[10.1, 8.8, 7.5, 10.3, 11.9, 15.3, 8.3, 9.7, 11.2, 12.5]
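
The whole single-station reordering can be written compactly. Below is a minimal NumPy sketch that reproduces the toy example above; this is my own implementation, not code from Clark et al. (2004):

import numpy as np

X = np.array([15.3, 11.2, 8.8, 11.9, 7.5, 9.7, 8.3, 12.5, 10.3, 10.1])  # forecast ensemble for one station
Y = np.array([10.7, 9.3, 6.8, 11.3, 12.2, 13.6, 8.9, 9.9, 11.8, 12.9])  # historical values for the same date window

chi = np.sort(X)                 # sorted forecasts
ranks = Y.argsort().argsort()    # 0-based rank of each historical value within Y
X_ss = chi[ranks]                # shuffled ensemble inherits the historical rank structure
print(X_ss)                      # [10.1  8.8  7.5 10.3 11.9 15.3  8.3  9.7 11.2 12.5]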

These are the basic fundamentals of the reordering algorithm, and it can be extended to forecasting at multiple stations, as demonstrated in the figure below. Table A shows a 10-member ensemble forecast for January 14th, 2004, ranked from lowest to highest value for three stations. Table B shows the historical record, and the black and light gray ellipses represent the 1st and 2nd ensemble members, respectively. Table C shows the sorted historical record and where the selected historical observations lie in the sorted list. Finally, Table A can be reordered accordingly to form Table D.

Schaake Shuffle for a 10-member ensemble and 3 stations (Figure 2 from Clark et al., 2004)

It's important to remember that the Schaake Shuffle is only meant to capture the Spearman rank correlation structure of the observations, not to reconstruct the actual correlation values. The results from the paper, however, are quite remarkable and show how well the method captures spatial and temporal properties. The figure below shows an example of how the method preserves spatial correlation between two selected stations. The top set of figures shows raw ensemble output while the bottom figures show results after the ensemble is reordered. The black lines denote the target observed correlation. Clearly, the reordered output approximates the observed correlation across lead times better than the raw ensemble output.

Preservation of spatial correlation for raw (top) and reordered (bottom) forecast ensembles (Figure 6 from Clark et al., 2004)

One basic limitation of this approach is the assumption of stationarity, i.e., that the structure in the historical record will be applicable to the forecasted data. While other methods exist which can potentially preserve space-time variability well, the advantage of the Schaake Shuffle is the ability to reconstruct these patterns after the fact, as a post-processing step. If readers are interested in implementing the Schaake Shuffle, basic pseudocode is included at the end of the paper, but there are also R packages that can automate the reordering process here. The steps to download the package and run the algorithm are denoted here. Note that this code only works for the single-station case. Here each column in X will be an ensemble member and the rows correspond to the number of days in the forecast. Implementing the example in Figure 2 for one station will require X and Y to each be a single row vector. Of course, one can manually extend this process to multiple stations.

install.packages("devtools")

devtools::install_github("katerobsau/depPPR")

library(depPPR)

forecast_example=as.matrix(read.delim("C:/Users/Rohini/Documents/synthetic.txt",header=FALSE))
climate_example=as.matrix(read.delim("C:/Users/Rohini/Documents/historical.txt",header=FALSE))

schaake_shuffle(X = forecast_example, Y = climate_example)

References:

All material is derived from the fundamental paper that introduces the Schaake Shuffle:

Clark, M., Gangopadhyay, S., Hay, L., Rajagopalan, B., & Wilby, R. (2004). The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields. Journal of Hydrometeorology, 5(1), 243-262.

Potential Biases and Pitfalls of Machine Learning

Artificial intelligence has already changed the world in many ways. It is very difficult to find a manmade tool that does not use AI in one way or another. It can be argued that many of these changes have been positive; however, it can also easily be shown that AI has not benefited everybody evenly.

In terms of the application of AI in science, the number of publications on this topic has significantly increased during the last few years. I searched for the words water, machine, and learning on Google Scholar and limited the search to only count the studies with all of these words in their titles. I created the following figure from the results of that simple search. As you can see, the number of publications has significantly increased in the last few years. This is obviously not a thorough study, but it shows how important artificial intelligence has become in academic research.

However, there are many pitfalls that can degrade the usefulness of AI in academic and non-academic applications. Some people even argue that these pitfalls could potentially lead us to a new AI winter that nobody will enjoy. In this blog post, I decided to go over some of these issues and difficulties. Some of the problems have emerged recently, but others have been known by statisticians for decades. For many reasons, they continue to exist and negatively affect artificial intelligence projects.

Any machine learning project usually includes the following components: 1) sample collection and preparation, 2) ML model generation, and 3) result interpretation and extraction of actionable insights. Here, I will talk about some of the mistakes and biases that can happen in each of these steps.

1- Sample Collection and Preparation

The input dataset is a crucial source of bias in any ML project. The problem basically occurs when the dataset is not a good representation of the real world. Here are a few variants of this issue:

Exclusion Bias

This happens when specific groups that exist in the real world are systematically absent in the dataset. For example, a specific part of the population can be absent in the dataset, or the dataset can only contain information about particular countries, climates, etc. Having a diverse group that represents various segments of society and different worldviews can reduce the consequences of this type of bias.

Measurement Bias

There are different reasons that observations can be faulty and unreliable. For example, observations can be sensed and collected through malfunctioning measurement devices that can bias the entire dataset. Also, human reading errors and mistakes can cause measurement errors. Additionally, there are many things that can go wrong during the post-processing of raw measurements and can lead to measurement bias.

Other Sampling Biases

There are also other types of data collection biases that have been discussed in statistical literature for years. One example of these is self-selection bias. Let’s imagine that you are running an online poll and want to analyze the results, but your participants are not fully random. People who choose to participate in your poll might have specific personalities or worldviews and not represent the population at large. Other biases that have been widely discussed in statistical literature include survivor bias, prejudice bias, observer bias, and recall bias. You can find more information about these biases here, here, and here. If possible, these biases should be avoided. Otherwise, their effects should be estimated and removed from the analysis. Also, acquiring more comprehensive datasets can reduce these types of biases.

Imbalanced Datasets

If the data collection process is fair and reliable, an imbalanced dataset is not really a bias: it's more of a challenge that is inherent to specific problems. Some events just don't happen frequently. The problems and research questions that focus on rare events need their own appropriate treatments. In a previous blog post, I explained this issue and the ways that we can deal with this problem. You can also refer to this article and this article for information about imbalanced datasets.

Data Leakage

Data leakage is a situation in which we mistakenly share a portion (or the entirety) of the data between the training and validation datasets. Data leakage can happen due to different types of mistakes during data preparation. Timeseries can also implicitly include data leakage. For example, if training and testing datasets are selected in a way that allows the training dataset to gain information about the held-out period, we have implicitly leaked data. If we extract the testing dataset from the middle of the original timeseries, we give the model insight about what happens before and after the testing period, which is effectively cheating and inflates the model's apparent accuracy. One way to avoid this is to make sure that our training data period precedes the validation period.
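
As a minimal sketch of that last point, the split below keeps everything the model trains on strictly before the validation period; the column names, dates, and data are hypothetical:

import numpy as np
import pandas as pd

# Hypothetical weekly timeseries with one feature x and one target y
rng = np.random.default_rng(0)
dates = pd.date_range("1980-01-01", periods=2000, freq="W")
df = pd.DataFrame({"x": rng.normal(size=2000)}, index=dates)
df["y"] = df["x"].rolling(4, min_periods=1).mean() + rng.normal(scale=0.1, size=2000)

# Chronological split: everything used for training precedes the validation period,
# so no information from the future can leak into the fitted model
split_date = "2010-01-01"
train, test = df.loc[:split_date], df.loc[split_date:]
print(train.index.max(), test.index.min())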

Another type of leakage is called feature leakage, and it happens when our feature list includes a variable that is either closely correlated with the data labels or is a duplicate of the data label. Obviously, we won't have that feature available at prediction time. Therefore, our trained model will be highly biased.

More information about data leakage can be found here and here.

2- ML and Statistical Models

Lack of Physical Understanding of the System

In machine learning, we use statistical relationships to explore how the system would behave under a specific condition. However, ML models do not gain any explicit knowledge about the actual physical laws that govern the system. For example, in hydrology, it is very possible to violate water and energy conservation laws while training and using machine learning-driven models. However, there are studies (here and here) that have been trying to incorporate these insights into their ML model. This issue is likely to attract more attention in the future.

Reproducibility

There is a general consensus among ML researchers that many ML-based studies are not reproducible, meaning that an independent researcher cannot replicate the ML process and arrive at the same results. Many recent studies have shown that ML scientists are often not able to reproduce the results of other studies. While some people refer to this as the reproducibility crisis in ML studies, it is a more general academic research issue. Based on a 2016 survey published by Nature, 70% of scientists across various disciplines were not able to reproduce the results of other studies, and more than 50% even failed to reproduce their own previously developed results. This is obviously a big challenge and needs to be taken into consideration. More information about the reproducibility challenge can be found in blog posts here and here.

Instability of the Model

Model stability is also an important challenge. As discussed earlier, models can be very efficient and accurate under specific random conditions (e.g., random seed or data sample) and will fail under different conditions. This problem is also closely related to the reproducibility challenge. Using different random seeds and developing model ensembles can provide a solution to this problem. The solution also depends on the inherent nature of the ML algorithm. For example, this blog post nicely explains the issue of stability and its solution in K-Means clustering.

Simpson’s Paradox

Simpson's Paradox describes a situation in which our analysis reaches a different conclusion when we investigate each individual group separately than when these groups are all combined. The following figure shows an example of this situation under a simple linear regression. The implication of this phenomenon should be taken into account when we make decisions about labels and when we interpret our results.
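
For readers who want to see the paradox numerically, here is a small synthetic example (toy data of my own construction, not from any real study) in which each group shows a negative slope while the pooled data show a positive one:

import numpy as np

rng = np.random.default_rng(42)
slopes, x_all, y_all = [], [], []
for offset in (0, 2, 4):                                            # three groups with shifted baselines
    x = rng.uniform(0, 3, 50) + offset
    y = -1.0 * (x - offset) + 3.0 * offset + rng.normal(0, 0.3, 50)  # negative slope within each group
    slopes.append(np.polyfit(x, y, 1)[0])
    x_all.append(x)
    y_all.append(y)

pooled_slope = np.polyfit(np.concatenate(x_all), np.concatenate(y_all), 1)[0]
print([round(s, 2) for s in slopes])   # within-group slopes are all close to -1
print(round(pooled_slope, 2))          # the pooled slope is strongly positive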

Correlation Bias

It is important to make sure that there is no significant correlation between different features, which can lead to the creation of unstable and unreliable ML models. This is the same issue as the widely discussed multicollinearity problem in regression. However, removing feature columns needs to be done carefully in order to avoid significant loss of information.
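
A minimal sketch of a simple screening step is shown below; the feature names are hypothetical and the 0.8 threshold is an arbitrary choice, not a universal rule:

import numpy as np
import pandas as pd

# Hypothetical feature table; "runoff" is nearly a linear copy of "precip"
rng = np.random.default_rng(1)
X = pd.DataFrame({"precip": rng.normal(size=200)})
X["runoff"] = 0.9 * X["precip"] + rng.normal(scale=0.1, size=200)
X["temp"] = rng.normal(size=200)

corr = X.corr().abs()
flagged = [(a, b, round(corr.loc[a, b], 2))
           for a in corr.columns for b in corr.columns
           if a < b and corr.loc[a, b] > 0.8]
print(flagged)   # flags the (precip, runoff) pair for closer inspection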

Other Statistical and ML Biases

There are various ways that machine learning and statistical analysis studies can be biased and misleading, and P-hacking is one of the widely discussed pathways of that in hypothesis testing. P-hacking basically means that the researcher changes the definition of the problem or the significance threshold until a positive relationship can be shown, and when reporting the results of the analysis, only the successful experiment gets reported to the scientific community.

Another issue occurs when a researcher works with a big dataset with many features. The researcher keeps exploring different parts of the dataset to find a statistically significant positive relationship and fails to do so; after enough digging, though, a positive relationship can be found. However, when reporting the data through scientific publications, only the significant positive results get reported and not the number of times that the researcher attempted to find them. This issue is often referred to as the multiple comparisons problem or the look-elsewhere effect.

There are many studies that show that p-hacking is a common problem and has significantly biased many scientific findings, but other studies exist that argue that, while p-hacking is common in scientific studies, it does not significantly change the big picture of what we eventually learn from scientific literature.

We should also always keep in mind that ML and, more generally, data mining analyses are inherently exploratory, not confirmatory. This means that ML studies can only generate a hypothesis, while separate statistical testing is necessary to confirm the theory (more on this can be found here and here). This can also be seen as another aspect of the reproducibility problem in ML. Machine learning methods do not provide insights into causal relationships. Therefore, ML-driven methods can give us high prediction accuracy under a specific condition, but their accuracy might significantly drop under a different condition, and we need to pay attention to this issue.

Performance Metrics

It is crucial to account for different performance aspects of an ML model, and to do that we often need to consider multiple metrics (refer to this blog post for more information). As I discussed in this previous blog post, it is also important to think carefully about which metrics are really desirable for the problem at hand. For example, the choice between emphasizing specificity or sensitivity should be driven by the nature of the problem. The value of multiple metrics has been widely discussed in the hydrology literature, yet it is still ignored in many water-related studies.
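
As a sketch of what this looks like in practice, the example below (synthetic, imbalanced data and a placeholder logistic regression) reports several complementary metrics side by side instead of accuracy alone.

# Minimal sketch: report several complementary metrics for one classifier.
# The dataset and model are placeholders for whatever you are actually using.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = model.predict(X_te)
prob = model.predict_proba(X_te)[:, 1]

print(f"accuracy : {accuracy_score(y_te, pred):.2f}")   # can be misleading on imbalanced data
print(f"precision: {precision_score(y_te, pred):.2f}")
print(f"recall   : {recall_score(y_te, pred):.2f}")     # i.e., sensitivity
print(f"f1       : {f1_score(y_te, pred):.2f}")
print(f"ROC AUC  : {roc_auc_score(y_te, prob):.2f}")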

3- Results Interpretation and Extraction of Actionable Insights

Overinterpretation and Generalization Bias

ML models are trained and tested on a specific dataset, so when they are used for prediction in real-world applications they may not be as reliable. We should therefore be careful about how far we extrapolate the fidelity of our model to new datasets.

Confirmation Bias

This bias takes various forms. For example, we are subconsciously biased toward the result we already believe to be true and expect to show up in our final analysis. We can also unknowingly lean toward results that are shinier and more groundbreaking, or shape our results to be more appealing to our stakeholders, clients, and the organizations we work for. These are biases that we need to avoid in any type of research, and certainly in ML projects.

Interpretability

ML models can benefit from powerful and sophisticated algorithms built on comprehensive lists of features. However, that does not guarantee a decent level of interpretability; in fact, it can make the model and its results harder to understand. I won’t get into the details here, but many approaches have been developed to control model complexity and improve interpretability (here and here).

Publication Bias

Scientific journals and media outlets are less interested in publishing results that show non-significant relationships; in other words, research efforts that find negative results are less likely to be published. This affects academic culture by creating pressure to find and submit positive results, and it also leads to the publication of positive results that were found by chance. Publication bias has been recognized and discussed for a long time; the issue was first described by Theodore Sterling (1959), but it continues to harm the products of academia today.

Another type of bias frequently appears in publications: there is always a tendency to show that the model developed in a study outperforms popular state-of-the-art algorithms. The issue is that the authors’ models often benefit from careful fine-tuning of hyperparameters, while a generic, untuned version of the popular model is used in the comparison, which makes the comparison unfair. Refer to this blog post and this publication for more on this type of bias.

Also, ML models can yield very good results under one condition (e.g., a particular random seed) and fall apart under a different one (e.g., a different random seed). This problem frequently arises in unsupervised clustering, for example. Running the model with multiple random seeds and starting points is one way to avoid it.

How Can We Address These Limitations?

As discussed in this blog post, ML studies can be negatively affected by various dimensions of these problems, but there are ways to respond to them. For example, biases in data collection can be reduced by promoting diversity in research teams (more on this here and here) or by paying more attention to the design of the experiment. Careful definition of the problem and the scope of the study is also critical. It may be idealistic, but a cultural shift in academia toward giving more credit to studies that report negative results would alleviate some of these problems. Finally, making effective use of multiple performance metrics, synthetic data, and visual analytics can also benefit ML models.

MORDM Basics II: Risk of Failure Triggers and Table Generation

Previously, we demonstrated the key concepts, application, and validation of synthetic streamflow generation. A historical inflow timeseries from the Research Triangle region was obtained, and multiple synthetic streamflow scenarios were generated and validated using the Kirsch Method (Kirsch et al., 2013). But why did we generate these hundreds of timeseries? What is their value within the MORDM approach, and how do we use them?

These questions will be addressed in this blog post. Here, we will cover how risk of failure (ROF) triggers use these synthetic streamflow timeseries to dynamically assess a utility’s ability to meet its performance objectives on a weekly basis. Once more, we will be revisiting the Research Triangle test case.

Some clarification

Before proceeding, there are some terms we will be using frequently that require definition:

  1. Timeseries – Observations of a quantity (e.g.: precipitation, inflow) recorded within a pre-specified time span.
  2. Simulation – A set of timeseries (synthetic/historical) that describes the state of the world. In this test case, one simulation consists of a set of three timeseries: historical inflow and evaporation timeseries, and one stationary synthetic demand timeseries.
  3. State of the world (SOW) – The “smallest particle” to be observed, or one fully realized world consisting of the hydrologic simulations, the set of deeply uncertain (DU) variables, and the system behavior under different combinations of simulations and DU variables.
  4. Evaluation – A complete sampling of the SOW realizations. One evaluation can sample all SOWs, or a subset of SOWs.

About the ROF trigger

In the simplest possible description, risk of failure (ROF) is the probability that a system will fail to meet its performance objective(s). The ROF trigger is a measure of a stakeholder’s risk tolerance and their propensity for taking the action necessary to mitigate failure. The higher the magnitude of the trigger, the more risk the stakeholder must be willing to accept, and the less frequently action is taken.

The ROF trigger feedback loop.

More formally, the concept of Risk-of-Failure (ROF) was introduced as an alternative decision rule to the more traditional Days-of-Supply-Remaining (DSR) and Take-or-Pay (TOP) approaches in Palmer and Characklis’ 2009 paper. In contrast to the static nature of DSR and TOP, the ROF method emphasizes flexibility by using rule-based logic that draws on near-term information to trigger actions or decisions about infrastructure planning and policy implementation (Palmer and Characklis, 2009).

Adapted from the economics concept of risk options analysis (Palmer and Characklis, 2009), its flexible, state-aware rules are time-specific instances, thus overcoming the curse of dimensionality. This flexibility also presents the possibility of using ROF triggers to account for decisions made by more than one stakeholder, such as regional systems like the Research Triangle.

Overall, the ROF trigger is a state-aware, system-dependent probabilistic decision rule capable of reflecting the time dynamics and uncertainties inherent in human-natural systems. This ability is what allows ROF triggers to help identify how short-term decisions affect long-term planning and vice versa. In doing so, they approximate a closed-loop feedback system in which decisions inform actions and the outcomes of those actions inform decisions (shown below). As a result, ROF triggers can provide system-specific alternatives by building rules from historical data to find triggers that are robust to future conditions.

ROF triggers for water portfolio planning

As explained above, ROF triggers are uniquely suited to reflect a water utility’s cyclical storage-to-demand dynamics. Due to their flexible and dynamic nature, these triggers can enable a time-continuous assessment (Trindade et al., 2019) of:

  1. When the risks need to be addressed
  2. How to address the risk

This provides both operational simplicity (as stakeholders only need to determine their threshold of risk tolerance) and system planning adaptability across different timescales (Trindade et al., 2019).

Calculating the ROF trigger value, α

Cary is located in the red box shown in the figure above
(source: Trindade et al., 2019).

Let’s revisit the Research Triangle test case. Here, we will be looking at data from the town of Cary, which receives its water supply from Jordan Lake. The files needed to describe the hydrology of Cary can be found in ‘water_balance_files’ in the GitHub repository. It helps to set up a hypothetical scenario: the town of Cary would like to assess how its risk tolerance affects the frequency at which it needs to trigger water use restrictions. The higher its risk tolerance, the fewer restrictions it will need to implement. Fewer restrictions are favored because deliberately limiting supply has both social and political implications.

We are tasked with determining how different risk tolerance levels, reflected by the ROF trigger value α, will affect the frequency of the utility triggering water use restrictions. Thus, we will need to take the following steps:

  1. The utility determines a suitable ROF trigger value, α.
  2. Evaluate the risk of failure for the current week m based on that week’s storage levels. The storage levels are a function of the historical inflow and evaporation rates, as well as projected demands.
  3. If the risk of failure during week m is at least α, water use restrictions are triggered. Otherwise, nothing is done and the storage levels at week m+1 are evaluated.

Now that we have a basic idea of how the ROF triggers are evaluated, let’s dive in a little deeper into the iterative process.

Evaluating weekly risk of failure

Here, we will use a simple analogy to illustrate how weekly ROF values are calculated. Bernardo’s post here provides a more thorough, mathematically sound explanation on this method.

For now, let’s clarify a couple of things: first, we have two synthetically generated datasets for inflow and evaporation rates that are conditioned on historical weekly observations (columns) and SOWs (rows). We also have one synthetically generated demand timeseries conditioned on projected demand growth rates (and yes, this is where we will be using the Kirsch Method explained previously). We will use these three timeseries to calculate the storage levels at each week of the year.

The weekly ROFs are calculated as follows:

We begin on a path 52 steps from its start, where each step represents a weekly demand, dj, for week j ∈ [1, 52].

We also have – bear with me, now – a crystal ball that lets us gaze into n different versions of past inflows and evaporation rates.

At step mj:

  1. Using the crystal ball, we look back into n-versions of year-long ‘pasts’ where each alternative past is characterized by:
    • One randomly-chosen annual historical inflow timeseries, IH beginning 52 steps prior to week mj
    • One randomly-chosen annual historical evaporation timeseries, EH beginning 52 steps prior to week mj
    • The chosen demand timeseries, DF beginning 52 steps prior to week mj
    • An arbitrary starting storage level 52 weeks prior to mj, S0
  2. Out of all the n year-long pasts that we have gazed into, count the number of pasts, f, in which the storage level dropped below 20% of maximum at least once.
  3. Obtain the probability that you might fail in the future (or ROF), pf = ROF =  f/n
  4. Determine if ROF > α; if it is, water use restrictions are triggered for that week.
  5. Take your next step (move on to week mj+1) and repeat.

This process is repeated for all the k-different hydrologic simulations.

Here, the “path” represents the projected demand timeseries, the steps are the individual weekly projected demands, and the “versions of the past” are the n randomly selected hydrologic simulations we have chosen to look into. It is important that n ≥ 50 for the ROF calculation to have at least 2% precision (Trindade et al., 2019).

An example

For example, say you (conveniently) have 50 years of historical inflow and evaporation data, so you choose n=50. You begin your ROF calculation at Week 52. For the first realization (n=1), you:

  1. Select the demands from Week 0-51.
  2. Get the historical inflow and evaporation rates for Historical Year 1.
  3. Calculate the storage for each week, monitoring for failure.
  4. If failure is detected, increment the number of failures and move on to n=2. Else, complete the storage calculations for the demand timeseries.

This is repeated for all n=50 realizations, and then pf is calculated for Week 52.

You then move on to Week 53 and repeat the previous steps using demands from Weeks 1-52. The whole process is complete once the ROFs for all weeks in the projected demand timeseries have been evaluated.
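
To tie the analogy and the example together, here is a minimal Python sketch of the weekly ROF calculation. This is not the actual test-case code: the water balance is deliberately simplified, and the array layout and names are assumptions made for illustration; only the 20%-of-capacity failure condition and the f/n ratio follow the description above.

# Hypothetical sketch of a weekly ROF evaluation (not the Research Triangle code).
# inflow and evap are (n_years x 52) arrays of annual historical records,
# demand is the projected weekly demand timeseries, and s_max is reservoir capacity.
import numpy as np

def weekly_rof(week, demand, inflow, evap, s0, s_max, alpha, n=50):
    """Return (rof, triggered) for the given week using n alternative 'pasts'."""
    n_years = inflow.shape[0]
    failures = 0
    for _ in range(n):
        r = np.random.randint(n_years)        # randomly chosen historical year
        storage = s0                          # starting storage 52 weeks prior to `week`
        failed = False
        for step, j in enumerate(range(week - 52, week)):
            # simplified weekly mass balance (real models track releases, spills, etc.)
            storage = min(s_max, storage + inflow[r, step] - evap[r, step] - demand[j])
            if storage < 0.2 * s_max:         # failure: storage below 20% of capacity
                failed = True
                break
        failures += failed
    rof = failures / n
    return rof, rof > alpha                   # trigger restrictions if ROF exceeds alpha

Repeating this function for every week of the projected demand timeseries, and for each of the k hydrologic simulations, is what produces the computational burden discussed next.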

Potential caveats

However, this process raises two issues:

  1. The number of combinations of simulations and DU variables makes this computationally expensive
    • For every weekly demand dj in DF, n realizations of inflows and evaporation rates must be run k times, where k is the total number of hydrologic simulations
    • This results in (n × k) computations
    • Next, this process has to be repeated for every SOW considered in the DU reevaluation, resulting in (n × k × number of DU variables) computations
  2. The storage values are dynamic and change as a function of DF, IH and EH

These problems motivate the following question: can we approximate the weekly ROF values given a storage level?

ROF Tables

To address the issues stated above, we generate ROF tables in which approximate ROF values are pre-computed and stored for a given week and a given storage level. To achieve this approximation, we first define storage tiers (storage levels as a percentage of maximum capacity). These tiers are substituted for S0 during each simulation.

Thus, for each hydrologic simulation, the steps are:

  1. For each storage tier, calculate the ROF for each week in the timeseries.
  2. Store the ROF for a given week and given storage level in an ROF table unique to each of the k-simulations
  3. This associates one ROF value with each (dj, S0) pair

The stored values are then used during the DU reevaluation: the storage level for a given week is approximated by its closest storage tier value, Sapprox, in the ROF table, eliminating the need to repeatedly compute weekly ROF values.
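
Below is a hedged sketch of how such a table might be built and queried, reusing the hypothetical weekly_rof() function from the earlier sketch; the tier spacing and variable names are illustrative rather than taken from the actual implementation.

# Hypothetical sketch: build an ROF table for one hydrologic simulation, then
# approximate a week's ROF by looking up the nearest storage tier (S_approx).
import numpy as np

def build_rof_table(demand, inflow, evap, s_max, weeks, n=50):
    tiers = np.linspace(0.2, 1.0, 17)                 # storage tiers as fractions of capacity
    table = np.zeros((len(tiers), len(weeks)))
    for i, tier in enumerate(tiers):
        for w, week in enumerate(weeks):
            # substitute the tier for the starting storage S0 (alpha is irrelevant here)
            table[i, w], _ = weekly_rof(week, demand, inflow, evap,
                                        s0=tier * s_max, s_max=s_max, alpha=1.0, n=n)
    return tiers, table

def lookup_rof(tiers, table, week_index, storage, s_max):
    i = np.argmin(np.abs(tiers - storage / s_max))    # closest storage tier, S_approx
    return table[i, week_index]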

The process of generating ROF tables can be found under rof_table_generator.py in the GitHub repository, the entirety of which can be found here.

Conclusion

Previously, we generated synthetic timeseries, which were applied here to evaluate weekly ROFs. We also explored the origins of the ROF trigger concept and explained how ROF triggers encapsulate the dynamic, ever-changing risks faced by water utilities, providing a way to detect those risks and take adaptive, mitigating action.

In the next blog post, we will explore how these ROF tables can be used in tandem with ROF triggers to assess if Cary’s water utility will need to trigger water use restrictions. We will also dabble in varying the value of ROF triggers to assess how different risk tolerance levels, action implementation frequency, and individual values can affect a utility’s reliability by running a simple single-actor, two-objective test.

References

Kirsch, B. R., Characklis, G. W., & Zeff, H. B. (2013). Evaluating the impact of alternative hydro-climate scenarios on transfer agreements: Practical improvement for generating synthetic streamflows. Journal of Water Resources Planning and Management, 139(4), 396-406. doi:10.1061/(asce)wr.1943-5452.0000287

Palmer, R. N., & Characklis, G. W. (2009). Reducing the costs of meeting regional water demand through risk-based transfer agreements. Journal of Environmental Management, 90(5), 1703-1714. doi:10.1016/j.jenvman.2008.11.003

Trindade, B., Reed, P., & Characklis, G. (2019). Deeply uncertain pathways: Integrated multi-city regional water supply infrastructure investment and portfolio management. Advances in Water Resources, 134, 103442. doi:10.1016/j.advwatres.2019.103442

Automate remote tasks with Paramiko

This is a short blogpost to demonstrate the Paramiko Python package. Paramiko allows you to establish SSH, SCP, or SFTP connections within Python scripts, which is handy when you’d like to automate repetitive tasks on a remote server or cluster from your local machine or from another cluster you’re running on.

It is often used for server management tasks, but for research applications consider a situation where a large dataset is stored at a remote location and a script you are running needs to transfer some of that data depending on results or new information. Instead of manually establishing SSH or SFTP connections, those processes can be wrapped and automated within your existing Python script.

To begin a connection, all you need is a couple lines:

import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname='remotehost', username='yourusername', password='yourpassword')

The first line creates a paramiko SSH client object. The second line tells paramiko what to do if the host is not a known host (i.e., whether this host should be trusted or not)—think of when you’re setting up an SSH connection for the first time and get the message:

The authenticity of host ‘name’ can’t be established. RSA key fingerprint is ‘gibberish’. Are you sure you want to continue connecting (yes/no)?

The third line is what actually makes the connection; the hostname, username, and password are usually the only things you need to define.

Once a connection is established, commands can be executed with exec_command(), which returns three objects:

stdin, stdout, stderr = ssh_client.exec_command("ls")

stdin is a write-only file object that can be used for commands requiring input, stdout contains the output of the command, and stderr contains any errors produced by the command (if there are no errors, it will be empty).

To print out what’s returned by the command, you can use stdout.readlines(). To pass input to stdin, you can use the write() function:

stdin, stdout, stderr = ssh_client.exec_command("sudo ls")
stdin.write('password\n')

Importantly: don’t forget to close your connection, especially if this is an automated script that opens many of them: ssh_client.close().

To transfer files, you need to establish an SFTP or an SCP connection, which works in much the same way:

ftp_client=ssh_client.open_sftp()
ftp_client.get('/remote/path/to/file/filename','/local/path/to/file/filename')
ftp_client.close()

get() transfers a file from the remote machine to a local directory; put(), used in the same way, transfers a file to a remote directory.
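
Putting the pieces together, here is a hedged sketch of a full round trip: connect, run a (hypothetical) remote script, move files in both directions with put() and get(), and close everything. The host, credentials, script name, and paths are all placeholders.

# Sketch of an automated remote workflow with Paramiko; names are placeholders.
import paramiko

ssh_client = paramiko.SSHClient()
ssh_client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
ssh_client.connect(hostname='remotehost', username='yourusername', password='yourpassword')

try:
    # run a (hypothetical) postprocessing script on the remote machine
    stdin, stdout, stderr = ssh_client.exec_command('python postprocess.py')
    print(stdout.read().decode())

    sftp = ssh_client.open_sftp()
    try:
        sftp.put('local_inputs.txt', 'remote_inputs.txt')      # local -> remote
        sftp.get('remote_results.csv', 'local_results.csv')    # remote -> local
    finally:
        sftp.close()
finally:
    ssh_client.close()   # always close connections, especially in automated scripts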

Introduction to Convergent Cross Mapping

This post is a follow-on to my previous post on Granger causality. Granger causality has well-known limitations. As previously discussed, the test can only find “predictive causality” and not true causality. Further, a key requirement of Granger causality is separability, meaning the causal variable is independent of the variable that it influences. This separability tends to be characteristic of stochastic and linear systems. For systems in which separability is not satisfied, or in which there are shared driving variables, Granger causality is not applicable. A new approach has been suggested to try to understand causality in dynamical systems where the effects of causal variables cannot be separated or uncoupled from the variables that they influence.

This approach, called Convergent Cross Mapping (CCM), was first tested in 1991 but was later developed by Sugihara et al., 2012 for identifying causation in ecological time series. CCM relies on the fundamental assumption that dynamics in the real world are not purely stochastic and that, in some applications, there are governing dynamics that can be represented by an underlying manifold, M, shown in Figure 1. If two variables X and Y are causally linked, they should share an underlying M as a common attractor manifold, which means the state of each variable can be used to inform the other. Furthermore, if X is an environmental driver of a population variable Y, information about the states of X can be recovered from Y, but not vice versa. The CCM approach tests how reliably the states of X can be estimated from Y [1].

Figure 1: Figure 2 from Sugihara et al., 2012 demonstrating CCM on shadow manifolds Mx and My

In essence, the approach creates “shadow” manifolds of M, termed Mx and My, which are constructed using lagged information from each time series. At some time t, the nearest neighbors on each manifold are determined, weights are assigned to each nearest neighbor, and Y is estimated using the weighted nearest neighbors from X. Then, the correlation between Y and Y|Mx is computed. The length of the time series is important: a longer time series results in closer nearest neighbors and better predictability [2].

Sugihara et al., 2012 demonstrates situations in which CCM is applicable:

  1. Unidirectional Causality: Species X influences Y, but the reverse is not true.
  2. Transitivity/External Forcing: Two species, X and Y, do not interact, but are driven by a common external variable Z. This type of interaction has been studied extensively in the famous anchovy and sardine example, where the two species tend to have linked dynamics (one peaks when the other is in a trough). Some theories suggest that the species are directly related and influence each other’s dynamics, but research has shown that neither species interacts with the other and that the population dynamics are instead shaped by an external driver (temperature). However, using correlation or applying Granger causality tends to imply a causal link. Using CCM, Sugihara et al., 2012 confirm that there is no cross-map signal between the anchovy and sardine time series, but there is a clear mapping between sea surface temperature and anchovies and between sea surface temperature and sardines.

An R package called multispatialCCM is available to implement CCM. Its author, Adam Clark, adapts the method to ecological time series that are short (fewer than the ~30 sequential observations usually required for the traditional CCM method) and potentially disjoint [3]. The package documentation includes a full example applying the method to two synthetic time series, A and B, constructed so that A is causally forced by B but not the other way around. In that spirit, I tried out the package with a very simple example using the same dataset from the last post. Here I use precipitation and outflow, where precipitation should causally force outflow, but the opposite should not be true. The following code demonstrates this:

library(multispatialCCM)

#Import data
setwd("D:/Documents/")
data=read.delim("Minimal_Train.txt")
Accm=as.vector(data$observed_cms)[1:200]
Bccm=as.vector(data$Precip)[1:200]

#Simulate data to use for multispatial CCM test
#See function for details - A is causally forced by B,
#but the reverse is not true.
#Calculate optimal E
maxE<-5 #Maximum E to test
#Matrix for storing output
Emat<-matrix(nrow=maxE-1, ncol=2); colnames(Emat)<-c("A", "B")

#Loop over potential E values and calculate predictive ability
#of each process for its own dynamics
for(E in 2:maxE) {
  #Uses defaults of looking forward one prediction step (predstep)
  #And using time lag intervals of one time step (tau)
  Emat[E-1,"A"]<-SSR_pred_boot(A=Accm, E=E, predstep=1, tau=1)$rho
  Emat[E-1,"B"]<-SSR_pred_boot(A=Bccm, E=E, predstep=1, tau=1)$rho
}

#Look at plots to find E for each process at which
#predictive ability rho is maximized
matplot(2:maxE, Emat, type="l", col=1:2, lty=1:2,
        xlab="E", ylab="rho", lwd=2)
legend("bottomleft", c("A", "B"), lty=1:2, col=1:2, lwd=2, bty="n")
E_A<-2
E_B<-3
#Check data for nonlinear signal that is not dominated by noise
#Checks whether predictive ability of processes declines with
#increasing time distance
#See manuscript and R code for details
signal_A_out<-SSR_check_signal(A=Accm, E=E_A, tau=1,
                               predsteplist=1:10)
signal_B_out<-SSR_check_signal(A=Bccm, E=E_B, tau=1,
                               predsteplist=1:10)
#Run the CCM test
#E_A and E_B are the embedding dimensions for A and B.
#tau is the length of time steps used (default is 1)
#iterations is the number of bootstrap iterations (default 100)
# Does A "cause" B?
#Note - use at least 100 iterations for consistent results
CCM_boot_A<-CCM_boot(Accm, Bccm, E_A, tau=1, iterations=100)
# Does B "cause" A?
CCM_boot_B<-CCM_boot(Bccm, Accm, E_B, tau=1, iterations=100)
#Test for significant causal signal
#See R function for details
(CCM_significance_test<-ccmtest(CCM_boot_A,
                                CCM_boot_B))
#Plot results
plotxlimits<-range(c(CCM_boot_A$Lobs, CCM_boot_B$Lobs))
#Plot "A causes B"
plot(CCM_boot_A$Lobs, CCM_boot_A$rho, type="l", col=1, lwd=2,
     xlim=c(plotxlimits[1], plotxlimits[2]), ylim=c(0,1),
     xlab="L", ylab="rho")
#Add +/- 1 standard error
matlines(CCM_boot_A$Lobs,
         cbind(CCM_boot_A$rho-CCM_boot_A$sdevrho,
               CCM_boot_A$rho+CCM_boot_A$sdevrho),
         lty=3, col=1)
#Plot "B causes A"
lines(CCM_boot_B$Lobs, CCM_boot_B$rho, type="l", col=2, lty=2, lwd=2)
#Add +/- 1 standard error
matlines(CCM_boot_B$Lobs,
         cbind(CCM_boot_B$rho-CCM_boot_B$sdevrho,
               CCM_boot_B$rho+CCM_boot_B$sdevrho),
         lty=3, col=2)
legend("topleft",
       c("A causes B", "B causes A"),
       lty=c(1,2), col=c(1,2), lwd=2, bty="n")


First, we read in the appropriate columns. I also decided to use just the first 200 observations from my five-year dataset. While longer time series are supposed to lead to better convergence, I found that for my time series (characterized by a lot of natural variability and noise) it was best to limit the sequence to a cleaner signal.

The next steps involve determining an appropriate embedding dimension, E, for A and B (the matrix Emat stores the predictive skill, rho, for each candidate E). The embedding dimension is similar to the number of lagged time steps that a single process needs to predict its own dynamics.

Figure 2: Predictability (rho) as a function of E

When looking at Figure 2, you can identify the value of E at which rho (predictability) is maximized for each process; these values are assigned to E_A and E_B in the code above. The CCM_boot function runs the actual Convergent Cross Mapping and implements a bootstrapping method in which subsets of the time series are used to determine E and the predictability, both to make sure that the ordering of the samples does not drive the outcome and to address uncertainty in the prediction. Finally, the results are plotted in Figure 3. There are two main takeaways from this figure: the predictability of A causing B (outflow causing precipitation) is a flat line around 0, while the predictability of B causing A (precipitation causing outflow) is much higher. Furthermore, note how the predictability increases with increasing time series length, which allows for better convergence on the system dynamics. Clark et al., 2015 contains many other well-defined ecological examples that demonstrate the package.

Figure 3: Predictability as a function of L for A causes B and B causes A

References:

[1] Sugihara, G., May, R., Ye, H., Hsieh, C. H., Deyle, E., Fogarty, M., & Munch, S. (2012). Detecting causality in complex ecosystems. Science, 338(6106), 496-500.

[2] McCracken, J. M., & Weigel, R. S. (2014). Convergent cross-mapping and pairwise asymmetric inference. Physical Review E, 90(6), 062903.

[3] Clark, A. T., Ye, H., Isbell, F., Deyle, E. R., Cowles, J., Tilman, G. D., & Sugihara, G. (2015). Spatial convergent cross mapping to detect causal relationships from short time series. Ecology, 96(5), 1174-1181.

Special thanks to @ecoquant, who suggested looking into this topic!