MORDM Basics IV: Visualizing ROF-Storage Dynamics (finally)

The previous post described a simple, two-objective test case in which the city of Cary employed risk-of-failure (ROF) triggers as levers to adjust its preferred tradeoff between objectives. The example showed how ROF triggers allowed Cary to account for future uncertainty in its system inputs, enabling the city to visualize how its risk appetite would affect its desired outcomes.

In meeting these objectives, different risk thresholds would have affected Cary’s response to extreme events such as floods and droughts, and its ability to fulfill demand. Simply analyzing the tradeoffs between objectives that result from a range of ROF trigger values only presents one side of the story. It is vital to visualize how these performance objectives and tradeoffs manifest in the system’s capacity (pun intended) to store enough water in times of scarcity, and by extension, its ability to fulfill its customers’ demand for water.

Using ROFs allows us to measure more concretely how the dynamics of both storage and demand fulfillment evolve over time for a given risk tolerance. In the long term, these dynamics will influence when and where new water infrastructure is built to cope with storage requirements and demand growth, but this is a topic for a future blog post. This week, we will focus on unpacking the dynamic evolution of storage and demand in response to different ROF trigger values.

As a quick refresher, our system is a water supply utility located in Cary, a city within the Research Triangle region of North Carolina (Trindade et al., 2017). Cary implements water-use restrictions, during which only 50% of demand is met, when the weekly ROF exceeds the threshold of risk that the city is willing to tolerate (α). More frequent water-use restrictions help to maintain reservoir levels and ensure reliability, as defined in the previous blog post. However, the decision to implement restrictions (or not) will impact the storage levels of the system. With this in mind, we will first examine how storage responds to the triggering of a water-use restriction. For context, we consider a scenario in which Cary's inflow timeseries is only 20% of its original levels. Figure 1 below shows the inflow, demand, and storage timeseries for this scenario.

Figure 1: The hydrologic timeseries for Cary given that no water restrictions are implemented in a scenario where inflows are 20% of the original levels.

Cary's challenge becomes apparent in Figure 1. While inflow decreases over time (fewer peaks), demand grows steadily and has effectively tripled by the end of the period. This results in periods during which storage levels drop to zero, which begins to occur after 2040. Also note that low-storage periods become more frequent in the second half of the period. The following questions can thus be posed:

  1. How does the system’s ROF change with increasing demand and decreasing supply?
  2. How does risk tolerance affect the implementation of water-use restrictions during drought?
  3. How will the system reservoir levels respond to different levels of risk tolerance?
Figure 2: The length of each pink bar denotes the week during which the first water-use restriction was implemented for a given α-value. This is an indicator of the responsiveness of the system to a drought, or a decrease in storage levels. The blue line indicates the percent of storage filled with water.

To answer the first question, it is useful to identify how different values of α affect the first instance of a water-use restriction. Figure 2, generated using `rof_dynamics.py`, demonstrates that lower risk tolerances result in earlier implementation of restrictions. This is reasonable, as an actor who is more risk-averse will quickly implement water-use restrictions to maintain reliable storage levels during a drought, whereas an actor who is more willing to tolerate the chance of low reservoir levels will delay implementing them. The blue line juxtaposed on top of the bars indicates the inflows to the reservoir. After the first period of low flows between weeks 30 and 40, the plot shows that inflows do not recover and are likely insufficient to refill the reservoir to its initial levels. With a lower α, an actor is more likely to implement restrictions almost immediately after observing only a few weeks of low inflows. In contrast, an actor who opts for a higher α will resort to restrictions only after an extended period of low flows, during which they can be more certain that restrictions are absolutely necessary.

Answering the second and third questions first requires that periods of drought be quantified more definitively. To do this, the standardized streamflow indicator (SSI6) was used. The SSI6 identifies time periods during which the standardized inflow is less than the 6-month rolling mean (Herman et al., 2016). It detects a drought period when the value of the SSI6 is below 0 for three consecutive months and falls below -1 at least once during that three-month period. Juxtaposing the storage-restriction dynamics with the periods of drought will allow us to see where restrictions were imposed and their impact on reservoir levels for a given demand timeseries.
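To make this detection rule concrete, below is a minimal sketch (in Python, with illustrative names only) of how drought periods could be flagged from a pre-computed monthly SSI6 series using the rule above; it is not the implementation used to generate the figures in this post.

```python
import numpy as np

def detect_droughts(ssi6):
    """Flag drought months from a pre-computed monthly SSI6 series using the
    rule described above: SSI6 < 0 for three consecutive months, with
    SSI6 < -1 at least once within that three-month window."""
    ssi6 = np.asarray(ssi6)
    drought = np.zeros(len(ssi6), dtype=bool)
    for t in range(len(ssi6) - 2):
        window = ssi6[t:t + 3]
        if np.all(window < 0) and np.any(window < -1):
            drought[t:t + 3] = True  # mark the whole three-month window as drought
    return drought
```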

Figures 3 and 4 visualize how the system's storage levels respond to drought (the red bars in the lower subplot) through the implementation of water-use restrictions (the dark red lines in the upper subplot) given α = 1% and α = 15%, respectively. Predictably, restrictions coincide with periods of drought as defined by the SSI6. However, with a lower risk tolerance, periods of restriction are longer and more frequent. As Figure 3 shows, an actor with a lower risk tolerance may implement restrictions when only a slight risk of failure exists.

Figure 3: Storage dynamics given α=1%. (Upper subplot) The blue lines indicate the reservoir storage levels in billion gallons per week. The yellow lines are the weekly ROF values, or the likelihood that the volume of water stored will drop below 20% of capacity. The grey lines indicate where water-use restrictions are implemented, and the red dashed line denotes α=1%. (Lower subplot) The zones where droughts were detected using the SSI6 method (Herman et al., 2016) are highlighted in red.

Compared to α = 1%, an actor who is willing to tolerate higher ROF values (Figure 4, for example) will implement restrictions less frequently and for shorter periods of time. Although this means that demand is less likely to be disrupted, it also puts water supplies at higher risk of dropping to critical levels (below 20% of capacity), as restrictions may not be implemented even during times of drought.

Figure 4: Storage dynamics given α=15%. (Upper subplot) The blue lines indicate the reservoir storage levels in billion gallons per week. The yellow lines are the weekly ROF values, or the likelihood that the volume of water stored will drop below 20% of capacity. The grey lines indicate where water-use restrictions are implemented, and the red dashed line denotes α=15%. (Lower subplot) The zones where droughts were detected using the SSI6 method (Herman et al., 2016) are highlighted in red.

There is one important thing to note when comparing Figures 3 and 4. When the periods of water-use restrictions coincide for both α-values (between 2040 and 2050), the actor with the lower risk tolerance implements restrictions at the beginning of both drought periods. This decision makes the biggest difference in the reservoir storage levels. By implementing water-use restrictions early and for a longer period of time, Cary's reservoir levels are consistently kept above 50% of full capacity (given a full capacity of 7.45 BG). The actor with the higher risk tolerance, in contrast, sees water levels drop below 30% of full capacity during periods of drought.

Although this seems undesirable, recall that the system is said to have failed only if storage drops below 20% of full capacity. Herein lies the power of the ROF metric: questions 2 and 3 can be answered by generating storage-restriction response figures such as those shown above, which allow an actor to examine the consequences of varying levels of risk tolerance on their ability to fulfill demand while maintaining sufficient water levels. This can improve judgement on how much risk a utility can actually tolerate without adversely impacting the socioeconomic systems that depend on the water supply utility. In addition, using ROFs enables a utility to better estimate when new infrastructure really needs to be built, instead of making premature investments as a result of unwarranted risk aversion.

To briefly summarize this blog post: we have shown how different risk tolerance levels affect the decisions made by an actor, and how these decisions in turn impact the system. Not shown here is the ability of an ROF to evolve over time given climate change and the construction of new water supply infrastructure. In the next blog post, we will briefly discuss the role of ROFs in mapping out adaptation pathways for a utility, how ROFs form the basis of dynamic and adaptive pathways and their associated operating policies, and connect this to the concept of the soft path (Gleick, 2002) in water supply management.

References

Gleick, P., 2002. Water management: Soft water paths. Nature, 418(6896), p.373.

Herman, J., Zeff, H., Lamontagne, J., Reed, P. and Characklis, G., 2016. Synthetic Drought Scenario Generation to Support Bottom-Up Water Supply Vulnerability Assessments. Journal of Water Resources Planning and Management, 142(11), p.04016050.

Trindade, B., Reed, P., Herman, J., Zeff, H. and Characklis, G., 2017. Reducing regional drought vulnerabilities and multi-city robustness conflicts using many-objective optimization under deep uncertainty. Advances in Water Resources, 104, pp.195-209.

MORDM Basics II: Risk of Failure Triggers and Table Generation

Previously, we demonstrated the key concepts, application, and validation of synthetic streamflow generation. A historical inflow timeseries from the Research Triangle region was obtained, and multiple synthetic streamflow scenarios were generated and validated using the Kirsch Method (Kirsch et al., 2013). But why did we generate these hundreds of timeseries? What is their value within the MORDM approach, and how do we use them?

These questions will be addressed in this blog post. Here, we will cover how risk of failure (ROF) triggers use these synthetic streamflow timeseries to dynamically assess a utility’s ability to meet its performance objectives on a weekly basis. Once more, we will be revisiting the Research Triangle test case.

Some clarification

Before proceeding, there are some terms we will be using frequently that require definition:

  1. Timeseries – Observations of a quantity (e.g., precipitation or inflow) recorded over a pre-specified time span.
  2. Simulation – A set of timeseries (synthetic or historical) that describes the state of the world. In this test case, one simulation consists of a set of three timeseries: historical inflow and evaporation timeseries, and one stationary synthetic demand timeseries.
  3. State of the world (SOW) – The “smallest particle” to be observed, or one fully realized world, consisting of the hydrologic simulations, the set of deeply-uncertain (DU) variables, and the system behavior under different combinations of simulations and DU variables.
  4. Evaluation – A complete sampling of the SOW realizations. One evaluation can sample all SOWs, or a subset of SOWs.

About the ROF trigger

In the simplest possible terms, the risk of failure (ROF) is the probability that a system will fail to meet its performance objective(s). The ROF trigger is a measure of a stakeholder's risk tolerance and propensity for taking action to mitigate failure. The higher the magnitude of the trigger, the more risk the stakeholder is willing to accept, and the less frequently action is taken.

The ROF trigger feedback loop.

More formally, the concept of Risk-of-Failure (ROF) was introduced as an alternative decision rule to the more traditional Days-of-Supply-Remaining (DSR) and Take-or-Pay (TOP) approaches in Palmer and Characklis' 2009 paper. As opposed to the static nature of DSR and TOP, the ROF method emphasizes flexibility by using rule-based logic and near-term information to trigger actions or decisions about infrastructure planning and policy implementation (Palmer and Characklis, 2009).

Adapted from the economics concept of risk options analysis (Palmer and Characklis, 2009), its flexible, state-aware rules are evaluated as time-specific instances, helping to overcome the curse of dimensionality. This flexibility also opens the possibility of using ROF triggers to account for decisions made by more than one stakeholder, as in regional systems like the Research Triangle.

Overall, the ROF trigger is a state-aware, system-dependent, probabilistic decision rule capable of reflecting the time dynamics and uncertainties inherent in human-natural systems. This ability is what allows ROF triggers to help identify how short-term decisions affect long-term planning, and vice versa. In doing so, they approximate a closed-loop feedback system in which decisions inform actions and the outcomes of those actions inform subsequent decisions (shown below). As a result, ROF triggers can provide system-specific alternatives by building rules from historical data to find triggers that are robust to future conditions.

ROF triggers for water portfolio planning

As explained above, ROF triggers are uniquely suited to reflect a water utility's cyclical storage-to-demand dynamics. Due to their flexible and dynamic nature, these triggers can enable a time-continuous assessment (Trindade et al., 2019) of:

  1. When the risks need to be addressed
  2. How to address the risk

This provides both operational simplicity (as stakeholders only need to determine their threshold of risk tolerance) and system planning adaptability across different timescales (Trindade et al., 2019).

Calculating the ROF trigger value, α

Cary is located in the red box shown in the figure above (source: Trindade et al., 2019).

Let's revisit the Research Triangle test case. Here, we will be looking at data from the town of Cary, which receives its water supply from Jordan Lake. The files needed to describe Cary's hydrology can be found in `water_balance_files` in the GitHub repository. It helps to set up a hypothetical scenario: the town of Cary would like to assess how its risk tolerance affects the frequency at which it needs to trigger water-use restrictions. The higher its risk tolerance, the fewer restrictions it will need to implement. Fewer restrictions are favored, as deliberately limiting supply has both social and political implications.

We are tasked with determining how different risk tolerance levels, reflected by the ROF trigger value α, will affect the frequency with which the utility triggers water-use restrictions. Thus, we will need to take the following steps (a minimal sketch of this decision loop follows the list):

  1. The utility determines a suitable ROF trigger value, α.
  2. The risk of failure for the current week m is evaluated based on that week's storage levels. The storage levels are a function of the historical inflow and evaporation rates, as well as projected demands.
  3. If the risk of failure during week m is at least α, water-use restrictions are triggered. Otherwise, nothing is done and the storage levels at week m+1 are evaluated.
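The sketch below illustrates this decision loop. It assumes a hypothetical helper weekly_rof() (one possible form is sketched later in this post) and a caller-supplied restrict() action; none of these names come from the repository, and this is not the repository's implementation.

```python
def run_restriction_policy(alpha, demand, hist_inflow, hist_evap, s0, capacity,
                           weekly_rof, restrict):
    """Sketch of the ROF trigger rule in steps 1-3 above (illustrative only).
    alpha:      the utility's chosen ROF trigger value, e.g. 0.02 for 2%
    weekly_rof: callable returning the ROF for a given week (step 2)
    restrict:   callable implementing a water-use restriction (step 3)
    """
    for week in range(52, len(demand)):  # a full year of 'past' must exist
        rof = weekly_rof(week, demand, hist_inflow, hist_evap, s0, capacity)
        if rof >= alpha:
            restrict(week)  # e.g. only 50% of that week's demand is met
        # otherwise do nothing; week m+1 is evaluated on the next pass
```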

Now that we have a basic idea of how the ROF triggers are evaluated, let’s dive in a little deeper into the iterative process.

Evaluating weekly risk of failure

Here, we will use a simple analogy to illustrate how weekly ROF values are calculated. Bernardo’s post here provides a more thorough, mathematically sound explanation on this method.

For now, we clarify a couple of things. First, we have two synthetically-generated datasets for inflow and evaporation rates that are conditioned on historical weekly observations (columns) and SOWs (rows). We also have one synthetically-generated demand timeseries conditioned on projected demand growth rates (and yes, this is where we will be using the Kirsch Method explained previously). We will be using these three timeseries to calculate the storage levels at each week in a year.

The weekly ROFs are calculated as follows:

We begin on a path, 52 steps from its start, where each step represents a weekly demand d_j, with week j ∈ [1, 52].

We also have – bear with me, now – a crystal ball that lets us gaze into n different versions of past inflows and evaporation rates.

At step m_j:

  1. Using the crystal ball, we look back into n versions of year-long ‘pasts’, where each alternative past is characterized by:
    • One randomly-chosen annual historical inflow timeseries, I_H, beginning 52 steps prior to week m_j
    • One randomly-chosen annual historical evaporation timeseries, E_H, beginning 52 steps prior to week m_j
    • The chosen demand timeseries, D_F, beginning 52 steps prior to week m_j
    • An arbitrary starting storage level 52 weeks prior to m_j, S_0
  2. Out of all the n year-long pasts we have gazed into, count the total number of pasts, f, in which the storage level dropped below 20% of maximum capacity at least once.
  3. Obtain the probability that you might fail in the future (or ROF): p_f = ROF = f/n
  4. Determine whether ROF > α.
  5. Take your next step: if ROF > α, water-use restrictions are triggered for week m_j; otherwise, nothing is done and you move on to the next week and repeat.
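As a quick numerical illustration (with made-up numbers), if the storage level dropped below 20% of maximum capacity in f = 3 of n = 50 alternative pasts, then p_f = ROF = 3/50 = 6%; whether that triggers a restriction depends on whether 6% exceeds the chosen α.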

This process is repeated for all the k-different hydrologic simulations.

Here, the “path” represents the projected demand timeseries, the steps are the individual weekly projected demands, and the “versions of the past” are the n randomly-selected hydrologic simulations we have chosen to look into. It is important that n ≥ 50 for the ROF calculation to have at least 2% precision (Trindade et al., 2019).

An example

For example, say you (conveniently) have 50 years of historical inflow and evaporation data so you choose n=50. You begin your ROF calculation in Week 52. For n=1, you:

  1. Select the demands from Week 0-51.
  2. Get the historical inflow and evaporation rates for Historical Year 1.
  3. Calculate the storage for each week, monitoring for failure.
  4. If failure is detected, increment the number of failures and move on to n=2. Else, complete the storage calculations for the demand timeseries.

This is repeated n=50 times, and then p_f is calculated for Week 52.

You then move on to Week 53 and repeat the previous steps using demands from Weeks 1-52. The whole process is complete once the ROFs for all weeks in the projected demand timeseries have been evaluated.
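The example above maps fairly directly onto code. The sketch below is a hedged illustration of the weekly ROF estimate, using the same signature assumed in the earlier restriction-policy sketch; the variable names and the simplified weekly water balance are assumptions for illustration, not the repository's implementation.

```python
def weekly_rof(week, demand, hist_inflow, hist_evap, s0, capacity):
    """Estimate the ROF for `week` by simulating n alternative year-long 'pasts'.
    demand:                 1D array of projected weekly demands (the 'path')
    hist_inflow, hist_evap: arrays of shape (n_years, 52), one row per
                            historical year (the 'versions of the past')
    s0:                     assumed starting storage 52 weeks before `week`
    """
    n_years = hist_inflow.shape[0]        # should be >= 50 for ~2% precision
    past_demand = demand[week - 52:week]  # e.g. Weeks 0-51 when week = 52
    failures = 0
    for year in range(n_years):
        storage, failed = s0, False
        for j in range(52):               # simplified weekly water balance
            storage += hist_inflow[year, j] - hist_evap[year, j] - past_demand[j]
            storage = min(max(storage, 0.0), capacity)
            if storage < 0.2 * capacity:  # failure: below 20% of capacity
                failed = True
                break                     # one failure per 'past' is enough
        failures += failed
    return failures / n_years             # p_f = ROF = f / n
```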

Potential caveats

However, this process raises two issues:

  1. The number of combinations of simulations and DU variables is computationally expensive
    • For every demand d_j in D_F, n simulations of inflows and evaporation rates must be run k times, where k is the total number of hydrologic simulations
    • This results in (n × k) computations
    • Next, this process has to be repeated for as many SOWs as exist (DU reevaluation), resulting in (n × k × number of DU samples) computations; a rough sense of this scale is given after this list
  2. The storage values are dynamic, changing as a function of D_F, I_H, and E_H
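To get a feel for that scale (with purely hypothetical numbers): with n = 50 hydrologic realizations per ROF estimate, k = 1,000 simulations, and 2,000 DU samples to reevaluate, a single week already requires 50 × 1,000 × 2,000 = 100 million storage simulations, before accounting for every week in the planning horizon.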

These problems motivate the following question: can we approximate the weekly ROF values given a storage level?

ROF Tables

To address the issues stated above, we generate ROF tables that store approximate ROF values for a given week and a given storage level. To achieve this approximation, we first define storage tiers (storage levels as a percentage of maximum capacity). These tiers are substituted for S_0 during each simulation.

Thus, for each hydrologic simulation, the steps are:

  1. For each storage tier, calculate the ROF for each week in the timeseries.
  2. Store the ROF for a given week and storage level in an ROF table unique to each of the k simulations.
  3. This associates one ROF value with each (d_j, S_0) pair.

The stored values are then used during the DU reevaluation, where the storage level for a given week is approximated by its closest storage tier value, S_approx, in the ROF table, negating the need for repeated computations of the weekly ROF value.
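As a rough illustration of the idea (not the repository's code), the sketch below builds one such table for a single hydrologic simulation and then looks up an approximate ROF by snapping a storage level to its nearest tier; the helper rof_fn is a hypothetical stand-in for the weekly ROF calculation described earlier.

```python
import numpy as np

def build_rof_table(storage_tiers, n_weeks, rof_fn):
    """Build an ROF table for one hydrologic simulation (illustrative sketch).
    storage_tiers: starting storage levels as fractions of maximum capacity
    rof_fn:        callable (week, s0_fraction) -> weekly ROF for that tier
    """
    table = np.empty((len(storage_tiers), n_weeks))
    for i, tier in enumerate(storage_tiers):
        for week in range(n_weeks):
            table[i, week] = rof_fn(week, tier)  # one ROF per (week, S_0) pair
    return table

def lookup_rof(table, storage_tiers, week, storage_fraction):
    """Approximate the weekly ROF by snapping the current storage level to its
    nearest tier, S_approx, instead of recomputing it."""
    i = int(np.argmin(np.abs(np.asarray(storage_tiers) - storage_fraction)))
    return table[i, week]
```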

The process of generating ROF tables can be found under rof_table_generator.py in the GitHub repository, the entirety of which can be found here.

Conclusion

Previously, we generated synthetic timeseries, which were then applied here to evaluate weekly ROFs. We also explored the origins of the concept of ROF triggers and explained how they encapsulate the dynamic, ever-changing risks faced by water utilities, thus providing a way to detect these risks and take adaptive, mitigating action.

In the next blog post, we will explore how these ROF tables can be used in tandem with ROF triggers to assess whether Cary's water utility will need to trigger water-use restrictions. We will also experiment with varying the ROF trigger value to assess how different risk tolerance levels, action implementation frequencies, and individual values can affect a utility's reliability by running a simple single-actor, two-objective test.

References

Kirsch, B. R., Characklis, G. W., & Zeff, H. B. (2013). Evaluating the impact of alternative hydro-climate scenarios on transfer agreements: Practical improvement for generating synthetic streamflows. Journal of Water Resources Planning and Management, 139(4), 396-406. doi:10.1061/(asce)wr.1943-5452.0000287

Palmer, R. N., & Characklis, G. W. (2009). Reducing the costs of meeting regional water demand through risk-based transfer agreements. Journal of Environmental Management, 90(5), 1703-1714. doi:10.1016/j.jenvman.2008.11.003

Trindade, B., Reed, P., & Characklis, G. (2019). Deeply uncertain pathways: Integrated multi-city regional water supply infrastructure investment and portfolio management. Advances in Water Resources, 134, 103442. doi:10.1016/j.advwatres.2019.103442