Over the last seven years, I have been working on the development of an open source toolkit for supporting decision-making under deep uncertainty. This toolkit is known as the exploratory modeling workbench. The motivation for this name is that in my opinion all model-based deep uncertainty approaches are forms of exploratory modeling, as first introduced by Bankes (1993). The design of the workbench has undergone various changes over time, but it started to stabilize in the fall of 2016. This summer, I published a paper detailing the workbench (Kwakkel, 2017). There is an in-depth example in the paper, but in a series of blog posts I want to showcase the functionality in some more detail.

The workbench is readily available through pip, but it requires ipyparallel and mpld3 (both available through conda), SALib (available via pip), and optionally platypus (installed via pip directly from the GitHub repo).

## Adapting the DPS example from Rhodium

As a starting point, I will use the Direct Policy Search example that is available for Rhodium (Quinn et al 2017). I will adapt this code to work with the workbench. In this way, I can explain the workbench, as well as highlight some of the main differences between the workbench and Rhodium.

```python
import math

import numpy as np
from scipy.optimize import brentq

# A function for evaluating our cubic DPS. This is based on equation (12)
# from [1].
def evaluateCubicDPS(policy, current_value):
    value = 0

    for i in range(policy["length"]):
        rbf = policy["rbfs"][i]
        value += rbf["weight"] * abs((current_value - rbf["center"]) /
                                     rbf["radius"])**3
    value = min(max(value, 0.01), 0.1)
    return value

# Construct the lake problem
def lake_problem(policy,        # the DPS policy
                 b=0.42,        # decay rate for P in lake (0.42 = irreversible)
                 q=2.0,         # recycling exponent
                 mean=0.02,     # mean of natural inflows
                 stdev=0.001,   # standard deviation of natural inflows
                 alpha=0.4,     # utility from pollution
                 delta=0.98,    # future utility discount rate
                 nsamples=100,  # monte carlo sampling of natural inflows
                 steps=100):    # the number of time steps (e.g., days)
    Pcrit = brentq(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))
    average_daily_P = np.zeros((steps,))
    reliability = 0.0
    utility = 0.0
    inertia = 0.0

    for _ in range(nsamples):
        X[0] = 0.0
        natural_inflows = np.random.lognormal(
                math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
                math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
                size=steps)

        for t in range(1, steps):
            decisions[t-1] = evaluateCubicDPS(policy, X[t-1])
            X[t] = (1-b)*X[t-1] + X[t-1]**q/(1+X[t-1]**q) + \
                   decisions[t-1] + natural_inflows[t-1]
            average_daily_P[t] += X[t]/float(nsamples)

        reliability += np.sum(X < Pcrit)/float(steps)
        utility += np.sum(alpha*decisions*np.power(delta, np.arange(steps)))
        inertia += np.sum(np.diff(decisions) > -0.01)/float(steps-1)

    max_P = np.max(average_daily_P)
    reliability /= float(nsamples)
    utility /= float(nsamples)
    inertia /= float(nsamples)

    return (max_P, utility, inertia, reliability)
```

The formulation of the decision rule assumes that `policy` is a dict, composed of a set of variables generated either through sampling or through optimization. This is relatively straightforward to do in Rhodium, but not so easy to do in the workbench. In the workbench, a policy is a composition of policy levers, where each policy lever is either a range of real values, a range of integers, or an unordered set of categories. To adapt the DPS version of the lake problem to work with the workbench, we first have to replace the policy dict with the different underlying variables explicitly.

```python
def get_antropogenic_release(xt, c1, c2, r1, r2, w1):
    '''

    Parameters
    ----------
    xt : float
         pollution in lake at time t
    c1 : float
         center rbf 1
    c2 : float
         center rbf 2
    r1 : float
         radius rbf 1
    r2 : float
         radius rbf 2
    w1 : float
         weight of rbf 1

    note:: w2 = 1 - w1

    '''
    rule = w1*(abs((xt-c1)/r1))**3 + (1-w1)*(abs((xt-c2)/r2))**3
    at = min(max(rule, 0.01), 0.1)
    return at
```
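As a quick sanity check, the decision rule can be exercised with some illustrative lever values (the function is repeated here, using the `abs((x - c)/r)` form from `evaluateCubicDPS`, so the snippet runs standalone); note how the release is always clamped to the interval [0.01, 0.1]:

```python
def get_antropogenic_release(xt, c1, c2, r1, r2, w1):
    # cubic RBF decision rule; release is clamped to [0.01, 0.1]
    rule = w1*(abs((xt - c1)/r1))**3 + (1 - w1)*(abs((xt - c2)/r2))**3
    at = min(max(rule, 0.01), 0.1)
    return at

# illustrative lever values: both RBFs centered at 0.25 with radius 0.5
for xt in (0.25, 0.4, 1.0):
    print(xt, get_antropogenic_release(xt, 0.25, 0.25, 0.5, 0.5, 0.5))
# at xt=0.25 the rule evaluates to 0 and is clamped up to 0.01;
# at xt=1.0 it exceeds 0.1 and is clamped down to 0.1
```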

Next, we need to adapt the lake_problem function itself to use this adapted version of the decision rule. This requires two changes: replace `policy` in the function signature of the `lake_model` function with the actual underlying parameters `c1`, `c2`, `r1`, `r2`, and `w1`, and use these when calculating the anthropogenic pollution rate.

```python
def lake_model(b=0.42, q=2.0, mean=0.02, stdev=0.001, alpha=0.4,
               delta=0.98, c1=0.25, c2=0.25, r1=0.5, r2=0.5, w1=0.5,
               nsamples=100, steps=100):
    Pcrit = brentq(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))
    average_daily_P = np.zeros((steps,))
    reliability = 0.0
    utility = 0.0
    inertia = 0.0

    for _ in range(nsamples):
        X[0] = 0.0
        natural_inflows = np.random.lognormal(
                math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
                math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
                size=steps)

        for t in range(1, steps):
            decisions[t-1] = get_antropogenic_release(X[t-1], c1, c2,
                                                      r1, r2, w1)
            X[t] = (1-b)*X[t-1] + X[t-1]**q/(1+X[t-1]**q) + \
                   decisions[t-1] + natural_inflows[t-1]
            average_daily_P[t] += X[t]/float(nsamples)

        reliability += np.sum(X < Pcrit)/float(steps)
        utility += np.sum(alpha*decisions*np.power(delta, np.arange(steps)))
        inertia += np.sum(np.diff(decisions) > -0.01)/float(steps-1)

    max_P = np.max(average_daily_P)
    reliability /= float(nsamples)
    utility /= float(nsamples)
    inertia /= float(nsamples)

    return (max_P, utility, inertia, reliability)
```

This version of the code can already be combined with the workbench. However, we can clean it up a bit more if we want to. Note how there are two for loops in the lake model. The outer loop generates stochastic realizations of the natural inflow, while the inner loop calculates the dynamics of the system given a stochastic realization. The workbench can be made responsible for this outer loop.

A quick note on terminology is in order here. I have a background in transport modeling. Here we often use discrete event simulation models. These are intrinsically stochastic models. It is standard practice to run these models several times and take descriptive statistics over the set of runs. In discrete event simulation, and also in the context of agent based modeling, this is known as running replications. The workbench adopts this terminology and draws a sharp distinction between designing experiments over a set of deeply uncertain factors, and performing replications of each experiment to cope with stochastic uncertainty.
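In plain Python, this separation between the two loops can be sketched as follows (a toy stochastic model as a stand-in for the lake dynamics; the function names are illustrative, not workbench API):

```python
import numpy as np

def single_run(seed=None):
    """One stochastic realisation: the 'inner loop', here a toy
    random walk standing in for the lake dynamics."""
    rng = np.random.RandomState(seed)
    shocks = rng.normal(0, 1, size=100)
    return np.abs(np.cumsum(shocks)).max()

def replicated_run(nreplications=100):
    """The 'outer loop' over replications, which the workbench's
    ReplicatorModel takes over: run the model repeatedly and
    aggregate with a descriptive statistic."""
    outcomes = [single_run(seed=i) for i in range(nreplications)]
    return np.mean(outcomes)

print(replicated_run(10))
```

Once the outer loop is lifted out, only the deterministic single-realisation function needs to be passed to the workbench.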

Some other notes on the code:

* To aid in debugging functions, it is good practice to make a function deterministic. In this case we can quite easily achieve this by including an optional argument for setting the seed of the random number generation.

* I have slightly changed the formulation of inertia so that it is closer to the mathematical formulation used in the various papers.

* I have changed the for loop over `t` to get rid of virtually all the `t-1` indexing.

```python
from __future__ import division  # python2

import math

import numpy as np
from scipy.optimize import brentq


def lake_model(b=0.42, q=2.0, mean=0.02, stdev=0.001, alpha=0.4,
               delta=0.98, c1=0.25, c2=0.25, r1=0.5, r2=0.5, w1=0.5,
               nsamples=100, steps=100, seed=None):
    '''runs the lake model for 1 stochastic realisation using the
    specified random seed.

    Parameters
    ----------
    b : float
        decay rate for P in lake (0.42 = irreversible)
    q : float
        recycling exponent
    mean : float
        mean of natural inflows
    stdev : float
        standard deviation of natural inflows
    alpha : float
        utility from pollution
    delta : float
        future utility discount rate
    c1 : float
    c2 : float
    r1 : float
    r2 : float
    w1 : float
    steps : int
        the number of time steps (e.g., days)
    seed : int, optional
        seed for the random number generator

    '''
    np.random.seed(seed)

    Pcrit = brentq(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))
    X[0] = 0.0

    natural_inflows = np.random.lognormal(
            math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
            math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
            size=steps)

    for t in range(steps-1):
        decisions[t] = get_antropogenic_release(X[t], c1, c2, r1, r2, w1)
        X[t+1] = (1-b)*X[t] + X[t]**q/(1+X[t]**q) + decisions[t] + \
                 natural_inflows[t]

    reliability = np.sum(X < Pcrit)/steps
    utility = np.sum(alpha*decisions*np.power(delta, np.arange(steps)))

    # note that I have slightly changed this formulation to retain
    # consistency with the equations in the papers
    inertia = np.sum(np.abs(np.diff(decisions)) < 0.01)/(steps-1)

    return X, utility, inertia, reliability
```

Now we are ready to connect this model to the workbench. This is fairly similar to how you would do it with Rhodium. We have to specify the uncertainties, the outcomes, and the policy levers. For the uncertainties and the levers, we can use real-valued parameters, integer-valued parameters, and categorical parameters. For outcomes, we can use either scalar single-valued outcomes or time series outcomes. For convenience, we can also explicitly control constants in case we want to set them to a value different from their default.

In this particular case, we are running the replications with the workbench. We still have to specify the descriptive statistics we would like to gather over the set of replications. For this, we can pass a function to an outcome. This function will be called with the results over the set of replications.
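Concretely, each replication of the cleaned-up `lake_model` returns the full pollution time series `X`, so the outcome function for `max_P` receives an array-like of shape (replications, steps). A minimal sketch of that statistic, which averages over replications first and then takes the maximum over time:

```python
import numpy as np

def process_p(values):
    values = np.asarray(values)       # shape: (replications, steps)
    values = np.mean(values, axis=0)  # average P trajectory over replications
    return np.max(values)             # peak of the averaged trajectory

# three fake replications of a five-step pollution time series
fake_replications = [[0.0, 0.1, 0.2, 0.3, 0.2],
                     [0.0, 0.2, 0.3, 0.4, 0.3],
                     [0.0, 0.0, 0.1, 0.2, 0.1]]
print(process_p(fake_replications))  # peak of the mean trajectory, ~0.3
```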

```python
import numpy as np

from ema_workbench import (RealParameter, ScalarOutcome, Constant,
                           ReplicatorModel)

model = ReplicatorModel('lakeproblem', function=lake_model)
model.replications = 150

# specify uncertainties
model.uncertainties = [RealParameter('b', 0.1, 0.45),
                       RealParameter('q', 2.0, 4.5),
                       RealParameter('mean', 0.01, 0.05),
                       RealParameter('stdev', 0.001, 0.005),
                       RealParameter('delta', 0.93, 0.99)]

# set levers
model.levers = [RealParameter("c1", -2, 2),
                RealParameter("c2", -2, 2),
                RealParameter("r1", 0, 2),
                RealParameter("r2", 0, 2),
                RealParameter("w1", 0, 1)]


def process_p(values):
    values = np.asarray(values)
    values = np.mean(values, axis=0)
    return np.max(values)


# specify outcomes
model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE,
                                function=process_p),
                  ScalarOutcome('utility', kind=ScalarOutcome.MAXIMIZE,
                                function=np.mean),
                  ScalarOutcome('inertia', kind=ScalarOutcome.MINIMIZE,
                                function=np.mean),
                  ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE,
                                function=np.mean)]

# override some of the defaults of the model
model.constants = [Constant('alpha', 0.41),
                   Constant('steps', 100)]
```

## Open exploration

Now that we have specified the model with the workbench, we are ready to perform experiments on it. We can use evaluators to distribute these experiments either over multiple cores on a single machine, or over a cluster using `ipyparallel`. Using any form of parallelization is an advanced topic, in particular if you are on a Windows machine. The code as presented here will run fine in parallel on a Mac or Linux machine. If you try to run this in parallel using multiprocessing on a Windows machine from within a Jupyter notebook, it won't work. The solution is to move `lake_model` and `get_antropogenic_release` to a separate Python module and import the lake model function into the notebook.

Another common practice when working with the exploratory modeling workbench is to turn on the logging functionality that it provides. This will report on the progress of the experiments, as well as provide more insight into what is happening in particular in case of errors.

If we want to perform experiments on the model we have just defined, we can use the `perform_experiments` method on the evaluator, or the standalone `perform_experiments` function. We can perform experiments over the uncertainties and/or over the levers. Each policy is evaluated for each of the scenarios. So if we use 100 scenarios and 10 policies, we will end up performing 100 * 10 = 1000 experiments. By default, the workbench uses Latin hypercube sampling both for sampling over levers and for sampling over uncertainties. However, the workbench also offers support for full factorial, partial factorial, and Monte Carlo sampling, as well as wrappers for the various sampling schemes provided by SALib.
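The idea behind Latin hypercube sampling is easy to sketch in pure numpy (an illustration of the concept only, not the workbench's actual sampler): stratify each dimension into equally probable intervals, draw one value per interval, and shuffle the strata independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, seed=None):
    """Draw an n_samples x n_dims Latin hypercube sample on [0, 1)."""
    rng = np.random.RandomState(seed)
    # one uniform draw inside each of n_samples equal-width strata
    samples = (np.arange(n_samples)[:, None] +
               rng.uniform(size=(n_samples, n_dims))) / n_samples
    # shuffle the strata independently for each dimension
    for d in range(n_dims):
        rng.shuffle(samples[:, d])
    return samples

samples = latin_hypercube(10, 3, seed=42)
# each dimension contains exactly one sample per decile
print(np.sort(np.floor(samples * 10), axis=0))
```

This stratification is why a modest number of scenarios already covers each uncertainty range evenly.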

```python
from ema_workbench import (MultiprocessingEvaluator, ema_logging,
                           perform_experiments)

ema_logging.log_to_stderr(ema_logging.INFO)

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.perform_experiments(scenarios=10, policies=10)
```

## Directed Search

Similarly, we can easily use the workbench to search for a good candidate strategy. This requires that platypus is installed. If it is, we can simply use the `optimize` method. By default, the workbench will use $\epsilon$-NSGAII. The workbench can be used to search over the levers in order to find a good candidate strategy, as is common in Many-Objective Robust Decision Making. The workbench can also be used to search over the uncertainties in order to find, for example, the worst possible outcomes and the conditions under which they appear. This is a form of worst case discovery. The `optimize` method takes an optional reference argument. This can be used to set the scenario for which you want to find good policies, or to set the policy for which you want to find the worst possible outcomes. This makes implementing the approach suggested by Watson & Kasprzyk (2017) very easy.
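The epsilons passed to the optimization control the resolution of the resulting Pareto approximation: objective space is divided into boxes of size epsilon, and at most one solution per box is retained. A rough sketch of the underlying box-index computation for a minimization problem (illustration only; the actual bookkeeping lives inside platypus):

```python
import numpy as np

def epsilon_box(objectives, epsilons):
    """Map an objective vector to its epsilon-box index; solutions in
    the same box are considered equivalent at this resolution."""
    objectives = np.asarray(objectives)
    epsilons = np.asarray(epsilons)
    return tuple(np.floor(objectives / epsilons).astype(int))

a = epsilon_box([0.12, 0.34], [0.1, 0.1])
b = epsilon_box([0.18, 0.39], [0.1, 0.1])
c = epsilon_box([0.12, 0.47], [0.1, 0.1])
print(a, b, c)  # a and b share box (1, 3); c is in (1, 4)
```

Larger epsilons therefore mean a coarser, faster-converging approximation of the Pareto front.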

```python
with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=1000, searchover='levers',
                                 epsilons=[0.1,]*len(model.outcomes))
```

## Robust optimization

A third possibility is to perform robust optimization. In this case, the search takes place over the levers, but a given policy is then evaluated for a set of scenarios, and its performance is defined over this set. To do this, we need to explicitly define robustness. For this, we can use the outcome object we have used before. In the example below, we define robustness as the worst 10th percentile over the set of scenarios. We need to pass a `variable_name` argument to explicitly link outcomes of the model to the robustness metrics.

```python
import functools

percentile10 = functools.partial(np.percentile, q=10)
percentile90 = functools.partial(np.percentile, q=90)

MAXIMIZE = ScalarOutcome.MAXIMIZE
MINIMIZE = ScalarOutcome.MINIMIZE

robustnes_functions = [ScalarOutcome('90th percentile max_p', kind=MINIMIZE,
                                     variable_name='max_P',
                                     function=percentile90),
                       ScalarOutcome('10th percentile reliability',
                                     kind=MAXIMIZE,
                                     variable_name='reliability',
                                     function=percentile10),
                       ScalarOutcome('10th percentile inertia',
                                     kind=MAXIMIZE,
                                     variable_name='inertia',
                                     function=percentile10),
                       ScalarOutcome('10th percentile utility',
                                     kind=MAXIMIZE,
                                     variable_name='utility',
                                     function=percentile10)]
```

Given the specification of the robustness function, the remainder is straightforward and analogous to normal optimization.

```python
from ema_workbench.em_framework import sample_uncertainties

n_scenarios = 200
scenarios = sample_uncertainties(model, n_scenarios)
nfe = 100000

with MultiprocessingEvaluator(model) as evaluator:
    robust_results = evaluator.robust_optimize(
            robustnes_functions, scenarios, nfe=nfe,
            epsilons=[0.05,]*len(robustnes_functions))
```

This blog post has introduced the exploratory modeling workbench and has shown its basic functionality for sampling or searching over uncertainties and levers. In subsequent blog posts, I will take a more in-depth look at this functionality, as well as demonstrate how the workbench facilitates the entire Many-Objective Robust Decision Making process.