# Glossary of commonly used terms

I have recently started training with the group and, coming from a slightly different research background, I was unfamiliar with a lot of the terminology. I thought it might be useful to put together a glossary of sorts containing the terms that someone new to this field of study might not intuitively understand. The idea is that when someone encounters an unfamiliar term while going through the training or reading through the material, they can come to the blog glossary and quickly Ctrl-F the term (yes, that is a keyboard shortcut used as a verb).

The definitions are not exhaustive, but links/references are provided so the reader can find additional material. The glossary is a work in progress; some definitions are still incomplete, but it will be updated regularly. I’m also probably the least qualified person in the group to provide the definitions, so any corrections/suggestions are more than welcome. If there’s any other term I’ve omitted that you feel should be included, please leave a comment so I can edit the post.

### Glossary

Adaptive Metropolis (AM)

A Markov chain Monte Carlo sampling algorithm based on the classical random walk Metropolis algorithm that adapts its proposal distribution continuously to the target distribution.1

Akaike’s Information Criterion (AIC)

A measure of the relative quality of statistical models for a given set of data.2
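For a fitted model with k estimated parameters and maximized log-likelihood ln(L̂), AIC = 2k − 2 ln(L̂); a minimal sketch (the function name is just illustrative):

```python
def aic(k, max_log_likelihood):
    """Akaike's Information Criterion: AIC = 2k - 2*ln(L_hat), where k is
    the number of estimated parameters and L_hat the maximized likelihood.
    Lower AIC indicates a better fit/complexity trade-off."""
    return 2 * k - 2 * max_log_likelihood
```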

AMALGAM

A MOEA that applies a multi-algorithm search combining NSGA-II, particle swarm optimization, differential evolution, and adaptive Metropolis.3

Approximation set

The set of solutions output by a multi-objective evolutionary algorithm approximating the Pareto front.4

Archive

A secondary population used by many multi-objective evolutionary algorithms to store non-dominated solutions found through the generational process.5

Attainment

Attainment plot

Bayesian Information Criterion

Borg MOEA

Iterative search algorithm for multi-objective optimization. It combines adaptive operator selection with ε-dominance, adaptive population sizing and time continuation. 6

Classification and Regression Trees (CART)

Decision tree algorithms that can be used for classification and regression analysis in predictive modeling problems.7

Closed vs. open loop control

Concavity

See Convexity.

Constraints

Restrictions imposed on the decision space by the particular characteristics of the system. They must be satisfied for a certain solution to be considered acceptable.5

Control map

Controllability

Refers to whether the parameterization choices have significant impacts on the success or failure of an algorithm.

Convergence

Progress towards the reference set.

Convexity

Geometrically, a function f is convex if a line segment drawn from any point on f to any other point on f lies on or above the graph of f. An optimization problem is convex if the objective function and all constraints are convex functions when minimizing, or concave when maximizing.

(The) Cube

The Reed Group’s computing cluster run by the Center for Advanced Computing (CAC) at Cornell. The cluster has 32 nodes, each with Dual 8-core E5-2680 CPUs @ 2.7 GHz and 128 GB of RAM. For more information see: https://www.cac.cornell.edu/wiki/index.php?title=THECUBE_Cluster

Decision space

The space of all possible combinations of decision variable values.

Decision variables

The numerical quantities (variables) that are manipulated during the optimization process; they represent how our actions are encoded within the problem.5

Deterioration

When elements of a solution set at a given time are dominated by a solution set the algorithm maintained some time before.5

Differential evolution

Evolutionary algorithm designed to optimize problems over continuous domains.5

Direct policy search

Dominance resistance

The inability of an algorithm to produce offspring that dominate poorly performing, non-dominated members of the population. (See also Pareto dominance).8

DTLZ problems

A suite of representative test problems for MOEAs for which the analytical solutions have been found. The acronym combines the first letters of the creators’ last names (Deb, Thiele, Laumanns, Zitzler). DTLZ problems have been used for benchmarking and diagnostics when evaluating the performance of MOEAs.

Dynamic memory

Elitism

Refers to the property that, as evolution progresses, non-dominated solutions are not lost in subsequent generations.

Epsilon (ε) dominance

When dominance is determined by use of a user-specified precision to simplify sorting. Pareto epsilon (ε) optimality and Pareto epsilon (ε) front are defined accordingly. (See also Pareto dominance).5
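A minimal sketch of one common ε-box formulation (assuming minimization; the function names are illustrative): each objective vector is mapped to the index of the ε-sized box containing it, and dominance is decided between boxes rather than raw values.

```python
def eps_box(objectives, eps):
    """Index of the epsilon-box containing an objective vector (minimization)."""
    return tuple(int(f // e) for f, e in zip(objectives, eps))

def eps_dominates(a, b, eps):
    """True if a epsilon-dominates b: a's box is no worse in every
    objective and the two boxes differ."""
    box_a, box_b = eps_box(a, eps), eps_box(b, eps)
    return all(x <= y for x, y in zip(box_a, box_b)) and box_a != box_b
```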

Epsilon (ε) dominance archive

Epsilon (ε)-box dominance archive

ε-MOEA

A steady-state MOEA, the first to incorporate ε-dominance archiving into its search process.

Epsilon (ε) indicator

Additive ε-indicator (ε+-indicator) (performance metric)

The smallest distance ε that the approximation set must be translated by in order to completely dominate the reference set.4
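A small numpy sketch of this metric under a minimization convention (names are illustrative): for each reference point, find the approximation point needing the smallest uniform translation to weakly dominate it, then take the worst case over reference points.

```python
import numpy as np

def additive_epsilon(approx, reference):
    """Additive epsilon-indicator for minimization: the smallest eps such
    that every reference point is weakly dominated by some approximation
    point after translating the approximation set by eps in each objective."""
    approx = np.asarray(approx, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # diffs[i, j, k] = approx[i, k] - reference[j, k]
    diffs = approx[:, None, :] - reference[None, :, :]
    # Worst objective-wise gap per pair; best approx point per reference
    # point; worst case over reference points.
    return float(diffs.max(axis=2).min(axis=0).max())
```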

Epsilon (ε) progress

Equifinality

Evolutionary algorithms

A class of search and optimization algorithms inspired by processes of natural evolution.5

Multi-objective evolutionary algorithms (MOEAs)

Evolutionary algorithms used for solving multi-objective problems.5

Evolutionary operators

They operate on the population of an evolutionary algorithm attempting to generate solutions with higher and higher fitness.5

Mutation evolutionary operators

They perturb the decision variables of a single solution to look for improvement in its vicinity.

Recombination evolutionary operators

They combine decision variables from two or more solutions to create new solutions.

Selection evolutionary operators

They determine which solutions are allowed to proceed to the next cycle.

Executioner

A cross-language automation tool for running models. (See also Project Platypus).

Exploratory Modeling and Analysis (EMA)

A research methodology that uses computational experiments to analyze complex and uncertain systems.9

Data-driven exploratory modeling

Used to reveal implications of a data set by searching through an ensemble of models for instances that are consistent with the data.

Question-driven exploratory modeling

Searches over an ensemble of models believed to be plausible to answer a question of interest or illuminate policy choices.

Model-driven exploratory modeling

Investigates the properties of an ensemble of models without reference to a data set or policy question. It is rather a theoretical investigation into the properties of a class of models.

Feasible region

The set of all decision variables in the decision space that are feasible (i.e. satisfy all constraints).5

GDE3

Generational algorithms

A class of MOEAs that replace the entire population during each full mating, mutation, and selection iteration of the algorithm.5 (See also Steady-state algorithms).

Generational distance (performance metric)

The average distance from every solution in the approximation set to the nearest solution in the reference set.4

Gini index

A generalization of the binomial variance used in Classification and Regression Trees (CART). (See also Classification and Regression Trees (CART)).
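As an illustrative sketch, the Gini impurity of a set of class labels is 1 − Σᵢ pᵢ², where pᵢ is the proportion of class i; CART-style splits favor partitions that lower it:

```python
def gini_impurity(labels):
    """Gini impurity of a set of class labels: 1 - sum_i p_i^2,
    the generalization of binomial variance used by CART splits."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())
```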

High performance computing

Hypervolume (performance metric)

The volume of the space dominated by the approximation set.4

Inverted generational distance (performance metric)

The average distance from every solution in the reference set to the nearest solution in the approximation set.4
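Both generational distance and its inverted counterpart can be sketched with numpy (objective vectors as rows; names are illustrative):

```python
import numpy as np

def generational_distance(approx, reference):
    """Average Euclidean distance from each point in the approximation set
    to its nearest neighbor in the reference set."""
    approx = np.asarray(approx, dtype=float)
    reference = np.asarray(reference, dtype=float)
    dists = np.linalg.norm(approx[:, None, :] - reference[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

def inverted_generational_distance(approx, reference):
    """Average distance from each reference-set point to its nearest
    approximation-set point: generational distance with roles swapped."""
    return generational_distance(reference, approx)
```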

J3

A free desktop application for producing and sharing high-dimensional, interactive scientific visualizations. (See also Project Platypus).

Kernel density estimation

Latin Hypercube Sampling (LHS)

A stratified sampling technique used to generate samples of parameter values.
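A minimal sketch of the basic idea on the unit hypercube (illustrative, not a reference implementation): each dimension is split into equal strata, one point is drawn per stratum, and the strata are shuffled independently per dimension.

```python
import numpy as np

def latin_hypercube(n_samples, n_dims, rng=None):
    """Latin hypercube sample on [0, 1): each dimension is divided into
    n_samples equal strata, one point is drawn per stratum, and the strata
    are independently shuffled in each dimension."""
    rng = np.random.default_rng(rng)
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        samples[:, d] = (rng.permutation(n_samples) + rng.random(n_samples)) / n_samples
    return samples
```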

Markov chain

Method of moments

MOEA Framework

A free and open source Java library for developing and experimenting with multi-objective evolutionary algorithms and other general-purpose optimization algorithms.4

Monte Carlo

Morris method

NSGA-II

The Non-dominated Sorting Genetic Algorithm-II. A MOEA featuring elitism, efficient non-domination sorting, and parameter-free diversity maintenance.10

ε-NSGA-II

A generational algorithm that uses ε-dominance archiving, adaptive population sizing, and time continuation.

Number of function evaluations (NFE)

Objectives

The criteria used to compare solutions in an optimization problem.

OMOPSO

A particle swarm optimization algorithm, the first to include ε-dominance as a means to solve many-objective problems.11

Optimization

The process of identifying the best solution (or a set of best solutions) among a set of alternatives.

Multi-objective optimization

Multi-objective optimization employs two or more criteria to identify the best solution(s) among a set of alternatives.

Intertemporal optimization

Parallel computing

Parametric generator

Pareto optimality

The notion that a solution is superior or inferior to another solution only when it is superior in all objectives or inferior in all objectives respectively.

Pareto dominance

A dominating solution is no worse than another in all objectives and strictly better in at least one; conversely, a dominated solution is no better in any objective and strictly worse in at least one. Two solutions are mutually non-dominated when each is superior in some objective but inferior in another.
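Under the usual convention (minimization, with "no worse in every objective and strictly better in at least one"), a dominance check can be sketched as:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```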

Pareto front

Contains the objective values of all non-dominated solutions (in the objective function space).

Pareto optimal set

Contains the decision variables of all non-dominated solutions (in the decision variable space).

Particle swarm optimization

Population-based stochastic optimization technique where the potential solutions, called particles, fly through the problem space by following the current optimum particles.

Patient Rule Induction method (PRIM)

A rule induction algorithm that identifies “boxes” (subregions) of the input space in which the output of interest is unusually high or low; commonly used for scenario discovery.

Performance metrics

Procedures used to compare the performance of approximation sets.

Pointer

Population

The set of encoded solutions that are manipulated and evaluated during the application of an evolutionary algorithm.

Principal Component Analysis (PCA)

Project Platypus

A collection of free and open source Python libraries for multiobjective optimization, data analysis, and decision making. For more information see: https://github.com/Project-Platypus

Reference set

The set of globally optimal solutions to an optimization problem (or the best-known approximation of it), used as a benchmark when computing performance metrics.

Rhodium

Python Library for Robust Decision Making and Exploratory Modelling. (See also Project Platypus).

Robust Decision Making (RDM)

An analytic framework that helps identify potential robust strategies for a particular problem, characterize the vulnerabilities of such strategies, and evaluate trade-offs among them.12

Multi-objective robust decision making (MORDM)

An extension of Robust Decision Making (RDM) to explicitly include the use of multi-objective optimization to discover robust strategies and explore the trade-offs among multiple competing performance objectives.13

OpenMORDM

An open source implementation of MORDM with the tools necessary to perform a complete MORDM analysis.14 For more information see: https://github.com/OpenMORDM

Safe operating space

SALib

Seeding

Sobol sampling

Spacing (performance metric)

The uniformity of the spacing between the solutions in an approximation set.

SPEA2

MOEA that assigns a fitness value to each solution based on the number of competing solutions it dominates.

State of the world

A fundamental concept in decision theory which refers to a feature of the world that the agent/decision maker has no control over and is the origin of the agent’s uncertainty about the world.

Steady-state algorithms

A class of MOEAs that replace only one solution in the population during each full mating, mutation, and selection iteration of the algorithm. (See also Generational algorithms).

Time continuation

The injection of new solutions in the population to reinvigorate search.

Tournament

The set of candidate solutions selected randomly from a population.
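A sketch of how a tournament is used for selection (binary tournament when k=2; the names are illustrative and assume lower fitness is better):

```python
import random

def tournament_select(population, fitness, k=2):
    """Draw k candidates at random (the tournament) and return the one
    with the best (here: lowest) fitness."""
    contestants = random.sample(range(len(population)), k)
    winner = min(contestants, key=lambda i: fitness[i])
    return population[winner]
```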

Trace

Visual analytics

The rapid analysis of large datasets using interactive software that enables multiple connected views of planning problems.

1. Haario, H., Saksman, E. & Tamminen, J. An adaptive Metropolis algorithm. Bernoulli 7, 223–242 (2001).
2. Akaike, H. Akaike’s information criterion. in International Encyclopedia of Statistical Science 25–25 (Springer, 2011).
3. Vrugt, J. A. & Robinson, B. A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. 104, 708–711 (2007).
4. Hadka, D. Beginner’s Guide to the MOEA Framework. (CreateSpace Independent Publishing Platform, 2016).
5. Coello, C. A. C., Lamont, G. B. & Van Veldhuizen, D. A. Evolutionary algorithms for solving multi-objective problems. 5, (Springer, 2007).
6. Hadka, D. & Reed, P. Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework. Evol. Comput. 21, 231–259 (2012).
7. Breiman, L. Classification and Regression Trees. (Wadsworth International Group, 1984).
8. Reed, P. M., Hadka, D., Herman, J. D., Kasprzyk, J. R. & Kollat, J. B. Evolutionary multiobjective optimization in water resources: The past, present, and future. Adv. Water Resour. 51, 438–456 (2013).
9. Bankes, S. Exploratory Modeling for Policy Analysis. Oper. Res. 41, 435–449 (1993).
10. Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. Evol. Comput. IEEE Trans. On 6, 182–197 (2002).
11. Sierra, M. R. & Coello, C. C. Improving PSO-based multi-objective optimization using crowding, mutation and e-dominance. in Evolutionary multi-criterion optimization 3410, 505–519 (Springer, 2005).
12. Lempert, R. J., Groves, D. G., Popper, S. W. & Bankes, S. C. A general, analytic method for generating robust strategies and narrative scenarios. Manag. Sci. 52, 514–528 (2006).
13. Kasprzyk, J. R., Nataraj, S., Reed, P. M. & Lempert, R. J. Many objective robust decision making for complex environmental systems undergoing change. Environ. Model. Softw. (2013). doi:10.1016/j.envsoft.2012.12.007
14. Hadka, D., Herman, J., Reed, P. & Keller, K. An open source framework for many-objective robust decision making. Environ. Model. Softw. 74, 114–129 (2015).

# Water Programming Blog Guide (Part 2)

Water Programming Blog Guide (Part 1)

This second part of the blog guide will cover the following topics:

1. Version control using git
2. Generating maps and working with spatial data in python
3. Reviews on synthetic streamflow and synthetic weather generation
4. Conceptual posts

## 1. Version Control using git

If you are developing code, it’s worth the time to gain familiarity with git to maintain reliable and stable development. Git allows a group of people to work together on large projects, minimizing the chaos when multiple people edit the same files. It is also valuable for individual projects, as it lets you keep multiple versions of a project, show the changes you have made over time, and undo those changes if necessary. For a quick introduction to git terminology and functionality, check out the Intro to git Part 1: Local version control and Intro to git Part 2: Remote Repositories posts, which will guide you through your first git project (local or remote) while providing a set of useful commands. Other specialized tips can be found in Git branch in bash prompt and GitHub Pages. And if you are wondering how to use git with PyCharm, you’ll find this couple of posts useful: A Guide to Using Git in PyCharm – Part 1 and A Guide to Using Git in PyCharm – Part 2.

## 2. Generating maps and working with spatial data in python

To learn more about python’s capabilities on this subject, this lecture summarizes key python libraries relevant for spatial analysis. Also, Julie and the Jons have documented their efforts when working with spatial data and with python’s basemap, leaving us with some valuable examples:

### Working with raster data

Python – Extract raster data value at a point

Python – Clip raster data with a shapefile

### Generating maps

Making Watershed Maps in Python

Plotting geographic data from geojson files using Python

### Generating map animations

Python makes the world go ’round

Making Movies of Time-Evolving Global Maps with Python

## 3. Reviews on synthetic streamflow and weather generation

We are lucky to have thorough reviews on synthetic weather and synthetic streamflow generation written by our experts Julie and Jon L. The series on synthetic weather generation consists of five parts. Part I and Part II cover parametric and non-parametric methods, respectively. Part III covers multi-site generation. Part IV discusses how to modify both parametric and non-parametric methods to simulate weather with climate change projections, and finally Part V covers how to simulate weather with seasonal climate forecasts:

The synthetic streamflow review provides a historical perspective while answering key questions such as “Why do we care about synthetic streamflow generation?”, “Why do we use it in water resources planning and management?”, and “What are the different methods available?”

Synthetic streamflow generation

## 4.  Conceptual posts

### Multi-objective evolutionary algorithms (MOEAs)

We frequently use multi-objective evolutionary algorithms due to their power and flexibility to solve multi-objective problems in water resources applications, so you’ll find sufficient documentation in the blog on basic concepts, applications and performance metrics:

MOEAs: Basic Concepts and Reading

You have a problem integrated into your MOEA, now what?

On constraints within MOEAs

MOEA Performance Metrics

### Many Objective Robust Decision Making (MORDM) and Problem framing

The next post discusses the MORDM framework, which combines many objective evolutionary optimization, robust decision making, and interactive visual analytics to frame and solve many objective problems under uncertainty. This is valuable reading, along with the references within. The second post listed provides a systematic way of thinking about problem formulation and defines the key components of a many-objective problem:

Many Objective Robust Decision Making (MORDM): Concepts and Methods

“The Problem” is the Problem Formulation! Definitions and Getting Started

### Econometric analysis and handling multi-variate data

To close this second part of the blog guide, I leave you with a couple of selected topics from the Econometrics and Multivariate statistics courses at Cornell, documented by Dave Gold:

A visual introduction to data compression through Principle Component Analysis

Dealing With Multicollinearity: A Brief Overview and Introduction to Tolerant Methods

# Using Borg in Parallel and Serial with a Python Wrapper – Part 2

This blog post is Part 2 of a two-part series that will demonstrate how I have coupled a pure Python simulation model with the Borg multi-objective evolutionary algorithm (MOEA). I recommend reading Part 1 of this series before you read Part 2. In Part 1, I explain how to get Borg and provide sample code showing how you can access Borg’s serial and/or parallelized (master-slave) implementations through a Python wrapper (borg.py). In Part 2, I provide details for more advanced simulation-optimization setups that require you to pass additional information from the borg wrapper into the simulation model (the “function evaluation”) beyond just the decision variable values.

In Part 1, the example simulation model I use (PySedSim) is called through a function handle “Simulation_Caller” in the example_sim_opt.py file. Borg needs only this function handle to properly call the simulation model in each “function evaluation”. Borg’s only interaction with the simulation model is to pass the simulation model’s function handle (e.g., “Simulation_Caller”) the decision variables, and nothing else. In many circumstances, this is all you need.

However, as your simulation-optimization setup becomes more complex, in order for your simulation model (i.e., the function evaluation) to execute properly, you may need to pass additional arguments to the simulation model from Borg other than just the decision variables. For example, in my own use of Borg in a simulation-optimization setting, in order to do optimization I first import a variety of assumptions and preferences to set up a Borg-PySedSim run. Some of those assumptions and preferences are helpful to the simulation model (PySedSim) in determining how to make use of the decision variable values Borg feeds it. So, I would like to pass those relevant assumptions and preferences directly into the Borg wrapper (borg.py), so the wrapper can in turn pass them directly into the simulation model along with the decision variable values.

Before I show how to do this, let me provide a more concrete example of how/why I am doing this in my own research. In my current work, decision variable values represent parameters for a reservoir operating policy that is being optimized. The simulation model needs to know how to take the decision variable values and turn them into a useful operating policy that can be simulated. Some of this information gets imported in order to run Borg, so I might as well pass that information directly into the simulation model while I have it on hand, rather than importing it once again in the simulation model.

To do what I describe above, we just need to modify the two functions in the example_sim_opt.py module so that a new argument, “additional_inputs”, is passed from borg to the simulation handle. Using my python code from blog post 1, I provide modified code below: in the Simulation_Caller() function, the changes are the function signature, the unpacking of the additional inputs, and the simulation call; in the Optimization() function, they are the two new dictionaries and the Borg instantiation. After that code, I then indicate how I modify the borg.py wrapper so it can accept this information.

import numpy as np
import pysedsim  # This is your simulation model
import platform  # helps identify directory locations on different types of OS

def Simulation_Caller(vars, additional_inputs):
    '''
    Purpose: Borg calls this function to run the simulation model and return multi-objective performance.

    Note: You could also just put your simulation/function evaluation code here.

    Args:
        vars: A list of decision variable values from Borg
        additional_inputs: A list of python data structures you want to pass from Borg into the simulation model.

    Returns:
        performance: policy's simulated objective values. A list of objective values, one for each objective.
    '''

    borg_vars = vars  # Decision variable values from Borg

    # Unpack the list of additional inputs from Borg (example assumes additional_inputs is a python list with two items)
    borg_dict_1 = additional_inputs[0]
    borg_dict_2 = additional_inputs[1]

    # Reformat decision variable values as necessary (e.g., cast borg output parameters as array for use in simulation)
    op_policy_params = np.asarray(borg_vars)
    # Call/run simulation model with decision vars and additional relevant inputs, return multi-objective performance:
    performance = pysedsim.PySedSim(decision_vars=op_policy_params, sim_addl_input_1=borg_dict_1, sim_addl_input_2=borg_dict_2)
    return performance

def Optimization():
    '''
    Purpose: Call this method from command line to initiate simulation-optimization experiment

    Returns:
        --pareto approximate set file (.set) for each random seed
        --Borg runtime file (.runtime) for each random seed
    '''

    import borg as bg  # Import borg wrapper

    parallel = 1  # 1 = master-slave (parallel), 0 = serial

    # The following are just examples of relevant MOEA specifications. Select your own values.
    nSeeds = 25  # Number of random seeds (Borg MOEA)
    num_dec_vars = 10  # Number of decision variables
    n_objs = 6  # Number of objectives
    n_constrs = 0  # Number of constraints
    num_func_evals = 30000  # Number of total simulations to run per random seed. Each simulation may be a monte carlo.
    runtime_freq = 1000  # Interval at which to print runtime details for each random seed
    decision_var_range = [[0, 1], [4, 6], [-1, 4], [1, 2], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1], [0, 1]]
    epsilon_list = [50000, 1000, 0.025, 10, 13, 4]  # Borg epsilon values for each objective
    borg_dict_1 = {'simulation_preferences_1': [1, 2]}  # reflects data you want Borg to pass to simulation model
    borg_dict_2 = {'simulation_preferences_2': [3, 4]}  # reflects data you want Borg to pass to simulation model

    # Where to save seed and runtime files
    main_output_file_dir = 'E:\\output_directory'  # Specify location of output files for different seeds
    os_fold = Op_Sys_Folder_Operator()  # Folder operator for operating system
    output_location = main_output_file_dir + os_fold + 'sets'

    # If using master-slave, start MPI. Only do once.
    if parallel == 1:
        bg.Configuration.startMPI()  # start parallelization with MPI

    # Loop through seeds, calling borg.solve (serial) or borg.solveMPI (parallel) each time
    for j in range(nSeeds):
        # Instantiate borg class, then set bounds, epsilon values, and file output locations
        borg = bg.Borg(num_dec_vars, n_objs, n_constrs, Simulation_Caller, add_sim_inputs=[borg_dict_1, borg_dict_2])
        borg.setBounds(*decision_var_range)  # Set decision variable bounds
        borg.setEpsilons(*epsilon_list)  # Set epsilon values
        # Runtime file path for each seed:
        runtime_filename = main_output_file_dir + os_fold + 'runtime_file_seed_' + str(j + 1) + '.runtime'
        if parallel == 1:
            # Run parallel Borg
            result = borg.solveMPI(maxEvaluations=num_func_evals, runtime=runtime_filename, frequency=runtime_freq)

        if parallel == 0:
            # Run serial Borg
            result = borg.solve({"maxEvaluations": num_func_evals, "runtimeformat": 'borg', "frequency": runtime_freq,
                                 "runtimefile": runtime_filename})

        if result:
            # This particular seed is now finished being run in parallel. The result will only be returned from
            # one node in case running Master-Slave Borg.
            result.display()

            # Create/write objective values and decision variable values to files in folder "sets", 1 file per seed.
            f = open(output_location + os_fold + 'Borg_DPS_PySedSim' + str(j + 1) + '.set', 'w')
            f.write('#Borg Optimization Results\n')
            f.write('#First ' + str(num_dec_vars) + ' are the decision variables, ' + 'last ' + str(n_objs) +
                    ' are the ' + 'objective values\n')
            for solution in result:
                line = ''
                for i in range(len(solution.getVariables())):
                    line = line + (str(solution.getVariables()[i])) + ' '

                for i in range(len(solution.getObjectives())):
                    line = line + (str(solution.getObjectives()[i])) + ' '

                f.write(line[0:-1] + '\n')
            f.write("#")
            f.close()

            # Create/write only objective values to files in folder "sets", 1 file per seed. Purpose is so that
            # the file can be processed in MOEAFramework, where performance metrics may be evaluated across seeds.
            f2 = open(output_location + os_fold + 'Borg_DPS_PySedSim_no_vars' + str(j + 1) + '.set', 'w')
            for solution in result:
                line = ''
                for i in range(len(solution.getObjectives())):
                    line = line + (str(solution.getObjectives()[i])) + ' '

                f2.write(line[0:-1] + '\n')
            f2.write("#")
            f2.close()

            print("Seed %s complete" % j)

    if parallel == 1:
        bg.Configuration.stopMPI()  # stop parallel function evaluation process

def Op_Sys_Folder_Operator():
    '''
    Function to determine whether operating system is (1) Windows, or (2) Linux

    Returns folder operator for use in specifying directories (file locations) for reading/writing data pre- and
    post-simulation.
    '''

    if platform.system() == 'Windows':
        os_fold_op = '\\'
    elif platform.system() == 'Linux':
        os_fold_op = '/'
    else:
        os_fold_op = '/'  # Assume unix OS if it can't be identified

    return os_fold_op
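As an aside, Python's standard library already exposes the platform's path separator, so a sketch of an equivalent (and arguably more portable) approach is:

```python
import os

def Op_Sys_Folder_Operator():
    # os.sep is '\\' on Windows and '/' on Linux/macOS, so this matches
    # the platform-based version above.
    return os.sep

# Paths can also be assembled without manual separator handling, e.g.:
# output_location = os.path.join(main_output_file_dir, 'sets')
```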


Next, you will need to acquire the Borg wrapper using the instructions I specified in my previous blog post. You will need to make only two modifications: (1) modify the Borg class in borg.py so it accepts the inputs you want to pass to the simulation; and (2) some additional internal accounting in borg.py to ensure those inputs are passed to the borg.py methods that deal with your function handle. I will address these two in order.

First, modify the Borg class in borg.py so it now accepts an additional input (I only show some of the borg.py code here, just to indicate where changes are being made):


class Borg:
    def __init__(self, numberOfVariables, numberOfObjectives, numberOfConstraints, function, epsilons=None, bounds=None, directions=None, add_sim_inputs=None):

        # add_sim_inputs is the new input you will pass to borg



Then, modify the portion of the borg.py wrapper where self.function is called, so it can accommodate any simulation inputs you have specified.


if add_sim_inputs is None:
    self.function = _functionWrapper(function, numberOfVariables, numberOfObjectives, numberOfConstraints, directions)
else:
    # More simulation inputs are specified and can be passed to the simulation handle
    self.function = _functionWrapper(function, numberOfVariables, numberOfObjectives, numberOfConstraints, directions, addl_inputs=add_sim_inputs)



After the above, the last step is to modify the _functionWrapper method in borg.py:


def _functionWrapper(function, numberOfVariables, numberOfObjectives, numberOfConstraints, directions=None, addl_inputs=None):
    # addl_inputs will be passed into the simulation model
    def innerFunction(v, o, c):
        global terminate
        try:
            if addl_inputs is None:
                result = function(*[v[i] for i in range(numberOfVariables)])
            else:
                result = function([v[i] for i in range(numberOfVariables)], addl_inputs)



# Using Borg in Parallel and Serial with a Python Wrapper – Part 1

Simulation and optimization are frequently used to solve complex water resources and environmental systems problems. By itself, a simulation model begs the question “what to simulate?” Similarly, by itself, an optimization model begs the question “is the solution really best?” For this reason, simulation and optimization models are frequently coupled.

This blog post is part 1 of a multi-part series that will demonstrate how I have coupled a pure Python simulation model with the multi-objective evolutionary optimization algorithm Borg. In this post, I will show how you can access Borg’s serial and/or parallelized (master-slave) implementations through a Python wrapper (borg.py).

Please see this previous blog post for some background about Borg, and how to obtain it. My instructions below assume you have access to the Borg files.

In the setup I will describe below, Borg parameterizes and iteratively refines solutions (e.g., reservoir operating policies) to a problem, optimizing them in response to their simulated performance with respect to multiple objectives.

You will need the following Borg files (see link above for how to download these):

• Serial (i.e., borg.c, libborg.so, etc.) and/or master-slave (i.e., borgms.c, libborgms.so, etc.) implementations of Borg, depending upon your ability to parallelize.
• Python wrapper for Borg (borg.py), which will allow you to access Borg easily in Python.

You will need to create the following files yourself (I provide sample code below for these files):

• example_sim_opt.py—A python module that should contain two main functions:
1. A simulation caller, which takes decision variables and returns multi-objective performance. This function is called “Simulation_Caller” in the example_sim_opt.py code below.
2. An optimization function, which calls the Borg MOEA through its python wrapper borg.py. This borg.py wrapper provides extensive docstring documentation regarding required arguments, returned values, etc., so I do suggest reading through the wrapper if you have questions (e.g., about the python data types of arguments and returns).

Note that the file and function names above are just example names. You can name the above files whatever you want. Just be sure to modify the code I provide below to reflect the new names.

A sample of code for example_sim_opt.py is as follows:

import numpy as np
import pysedsim # This is your simulation model
import platform  # helps identify directory locations on different types of OS

def Simulation_Caller(vars):
    '''
    Purpose: Borg calls this function to run the simulation model and return multi-objective performance.

    Note: You could also just put your simulation/function evaluation code here.

    Args:
        vars: A list of decision variable values from Borg

    Returns:
        performance: policy's simulated objective values. A list of objective values, one for each objective.
    '''

    borg_vars = vars  # Decision variable values from Borg

    # Reformat decision variable values as necessary (e.g., cast borg output parameters as array for use in simulation)
    op_policy_params = np.asarray(borg_vars)
    # Call/run simulation model, return multi-objective performance:
    performance = pysedsim.PySedSim(decision_vars=op_policy_params)
    return performance

def Optimization():

'''

Purpose: Call this method from command line to initiate simulation-optimization experiment

Returns:
--pareto approximate set file (.set) for each random seed
--Borg runtime file (.runtime) for each random seed

'''

import borg as bg  # Import borg wrapper

parallel = 1  # 1= master-slave (parallel), 0=serial

# The following are just examples of relevant MOEA specifications. Select your own values.
nSeeds = 25  # Number of random seeds (Borg MOEA)
num_dec_vars = 10  # Number of decision variables
n_objs = 6  # Number of objectives
n_constrs = 0  # Number of constraints
num_func_evals = 30000  # Number of total simulations to run per random seed. Each simulation may be a monte carlo.
runtime_freq = 1000  # Interval at which to print runtime details for each random seed
decision_var_range = [[0, 1], [4, 6], [-1,4], [1,2], [0,1], [0,1], [0,1], [0,1], [0,1], [0,1]]
epsilon_list = [50000, 1000, 0.025, 10, 13, 4]  # Borg epsilon values for each objective

# Where to save seed and runtime files
main_output_file_dir = 'E:\output_directory'  # Specify location of output files for different seeds
os_fold = Op_Sys_Folder_Operator()  # Folder operator for operating system
output_location = main_output_file_dir + os_fold + 'sets'

# If using master-slave, start MPI. Only do once.
if parallel == 1:
bg.Configuration.startMPI()  # start parallelization with MPI

# Loop through seeds, calling borg.solve (serial) or borg.solveMPI (parallel) each time
for j in range(nSeeds):
# Instantiate borg class, then set bounds, epsilon values, and file output locations
borg = bg.Borg(num_dec_vars, n_objs, n_constrs, Simulation_Caller)
borg.setBounds(*decision_var_range)  # Set decision variable bounds
borg.setEpsilons(*epsilon_list)  # Set epsilon values
# Runtime file path for each seed:
runtime_filename = main_output_file_dir + os_fold + 'runtime_file_seed_' + str(j+1) + '.runtime'
if parallel == 1:
# Run parallel Borg
result = borg.solveMPI(maxEvaluations='num_func_evals', runtime=runtime_filename, frequency=runtime_freq)

if parallel == 0:
# Run serial Borg
result = borg.solve({"maxEvaluations": num_func_evals, "runtimeformat": 'borg', "frequency": runtime_freq,
"runtimefile": runtime_filename})

if result:
# This particular seed is now finished being run in parallel. The result will only be returned from
# one node in case running Master-Slave Borg.
result.display()

# Create/write objective values and decision variable values to files in folder "sets", 1 file per seed.
f = open(output_location + os_fold + 'Borg_DPS_PySedSim' + str(j+1) + '.set', 'w')
f.write('#Borg Optimization Results\n')
f.write('#First ' + str(num_dec_vars) + ' are the decision variables, ' + 'last ' + str(n_objs) +
' are the ' + 'objective values\n')
for solution in result:
line = ''
for i in range(len(solution.getVariables())):
line = line + (str(solution.getVariables()[i])) + ' '

for i in range(len(solution.getObjectives())):
line = line + (str(solution.getObjectives()[i])) + ' '

f.write(line[0:-1]+'\n')
f.write("#")
f.close()

# Create/write only objective values to files in folder "sets", 1 file per seed. Purpose is so that
# the file can be processed in MOEAFramework, where performance metrics may be evaluated across seeds.
f2 = open(output_location + os_fold + 'Borg_DPS_PySedSim_no_vars' + str(j+1) + '.set', 'w')
for solution in result:
line = ''
for i in range(len(solution.getObjectives())):
line = line + (str(solution.getObjectives()[i])) + ' '

f2.write(line[0:-1]+'\n')
f2.write("#")
f2.close()

print("Seed %s complete") %j

if parallel == 1:
bg.Configuration.stopMPI()  # stop parallel function evaluation process

def Op_Sys_Folder_Operator():
'''
Function to determine whether operating system is (1) Windows, or (2) Linux

Returns folder operator for use in specifying directories (file locations) for reading/writing data pre- and
post-simulation.
'''

if platform.system() == 'Windows':
os_fold_op = '\\'
elif platform.system() == 'Linux':
os_fold_op = '/'
else:
os_fold_op = '/'  # Assume unix OS if it can't be identified

return os_fold_op
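A side note on the path handling above: Python’s standard library can pick the folder separator for you, which avoids writing a helper like Op_Sys_Folder_Operator at all. A minimal sketch (the directory name here is hypothetical):

```python
import os

# os.path.join inserts the correct separator ('/' or '\\') for the
# current operating system, replacing manual separator handling:
main_output_file_dir = 'output_directory'  # hypothetical location
output_location = os.path.join(main_output_file_dir, 'sets')
runtime_filename = os.path.join(main_output_file_dir, 'runtime_file_seed_1.runtime')

print(output_location)
```

Either approach works; os.path.join simply centralizes the Windows-versus-Linux logic in the standard library.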


The following is an example of how you would submit a batch script on a Linux cluster to run a parallelized simulation-optimization experiment using the example_sim_opt.py and borg.py files. Note that in the parallelized version, simulations (i.e., “function evaluations”) are being run in parallel by separate processors.

You would need the following two files:

1. example_sim_opt_caller.py. This is a Python file that is used to call example_sim_opt.py.
2. example_sim_opt_batch_script.pbs. This is a batch script that runs example_sim_opt_caller.py in parallel on a cluster using Open MPI.

Example code for example_sim_opt_caller.py:


```python
'''
Purpose: To initiate the optimization process, which will iteratively call the simulation model.
'''

import example_sim_opt  # Import main optimization module that uses the Borg Python wrapper

# Call the optimization function within the module:
example_sim_opt.Optimization()
```



Example code for example_sim_opt_batch_script.pbs:


```shell
#PBS -l nodes=8:ppn=16
#PBS -l walltime=24:00:00
#PBS -j oe
#PBS -o pysedsim_output.out

cd $PBS_O_WORKDIR
source /etc/profile.d/modules.sh
module load openmpi-1.6.5-intel-x86_64/gnu
module load python-2.7.5
mpirun python example_sim_opt_caller.py
```

You could then run the above batch script with a command such as:

```shell
qsub example_sim_opt_batch_script.pbs
```

# Visualization strategies for multidimensional data

This is the first part of a series of blog posts on multidimensional data visualization strategies. The main objectives of this first part are:

1. Show you how to expand plotting capabilities by modifying matplotlib source code.
2. Generate a tailored 6-D Pareto front plot with completely customized legends.
3. Provide a glimpse of a recently developed Pareto front video repository in R.

## 1. Expanding matplotlib capabilities

Keeping in mind that matplotlib is an open-source project developed in the contributors’ free time, there is no guarantee that features contributors create will be added straightaway. In my case, I needed marker rotation capabilities in a 3-D scatter plot. Luckily, someone had already figured out how to do so and started a pull request in the matplotlib GitHub repository, but this change has not yet been implemented. Since I couldn’t wait for the changes to happen, here’s the straightforward solution that I found.

Here’s the link to the pull request that I am referring to.

First, I located where matplotlib lives on my computer; the path in my case is:

C:/Python27/matplotlib

Then, I located the files that the contributor changed. The files’ paths are circled in red in the following snippets of the pull request:

I located those files in my local matplotlib folder, which in my case are:

C:/Python27/matplotlib/axes/_axes.py
C:/Python27/matplotlib/collections.py

In the previous snippets, the lines of code that were added to the original script are highlighted in green and those that were removed are highlighted in red.
Hence, to access the clean version I clicked on the view button, selected the entire script, and copied and pasted it into my local matplotlib code. For this exercise I ended up changing only a couple of scripts: axes.py and collections.py.

NOTE: If you ever need to undertake this type of solution, make sure you paste the lines of code in the right places, and do this part carefully. Also, it’s always a good idea to make backups of the original files in case something goes irreversibly wrong. Or you can always uninstall and reinstall, no big deal.

## 2. Generate a tailored 6-D Pareto front plot with customized legends

Matplotlib allows visualization of 5 objectives quite easily, but scaling to 6 or more objectives can be a bit tricky. So, let’s walk through our 6-D plots in matplotlib. We will learn how to do one of the following plots:

### Pie Day Plot:

### St. Patrick’s Day Plot:

### 2.1. Required libraries:

The following are the only libraries that you’ll need. I import seaborn sometimes because it looks fancy, but it’s totally unnecessary in this case, which is why it is commented out.

```python
import matplotlib as mpl
from mpl_toolkits.mplot3d import Axes3D
import numpy as np
import matplotlib.pyplot as plt
#import seaborn
```

### 2.2. Importing data:

The data file that I used consists of 6 space-separated columns. If your data has another delimiter you can just add it like so: data = np.loadtxt('sample_data.txt', delimiter=','). I am also multiplying the first five columns by -1 because I want to remove the negatives; this is specific to my data, you may not need to do so.

```python
data = np.loadtxt('sample_data.txt')

# Organizing the data by objectives
obj1 = data[:,0]*-1
obj2 = data[:,1]*-1
obj3 = data[:,2]*-1
obj4 = data[:,3]*-1
obj5 = data[:,4]*-1
obj6 = data[:,5]
```

### 2.3. Object-based plotting:

To allow more customization, we need to move to a more object-based way to make the plots. That is, storing elements of the plots in variables.
```python
fig = plt.figure()  # create a figure object
ax = fig.add_subplot(111, projection='3d')  # create an axes object in the figure
```

### 2.4. Setting marker options:

Any mathtext symbol can be used as a marker. In order to use rotation to represent an additional objective, it’s preferable if the marker has a single axis of symmetry so that the rotation is distinguishable. Here are some marker options:

```python
pie = r'$\pi$'  # pie themed option
arrow = u'$\u2193$'  # Arrows
clover = r'$\clubsuit$'  # Saint Patrick's theme
heart = r'$\heartsuit$'  # Valentine's theme
marker = pie  # this is where you provide the marker option
```

More marker options can be found in: http://matplotlib.org/users/mathtext.html

### 2.5. Scatter 6-D plot:

The first three objectives are plotted in a 3-D scatter plot, on the x, y, and z axes respectively. The fourth objective is represented by color, the fifth by size and the sixth by rotation. Note that the rotation is scaled in degrees. This is the step where I had to modify matplotlib source code to enable the ‘angles’ option shown below. Also, it may be necessary to scale the size objective to get the desired marker size in your plot. You can also plot the ideal point by adding a second scatter plot specifying the ideal values for each objective. Finally, we assign the size objective to “objs” and the rotation objective to “objr”; this will be useful later on when setting up the legends for these two objectives.

```python
rot_angle = 180  # rotation angle multiplier
scale = 2000  # size objective multiplier

# Plotting 6 objectives:
im = ax.scatter(obj1, obj2, obj3, c=obj4, s=obj5*scale, marker=marker,
                angles=obj6*rot_angle, alpha=1, cmap=plt.cm.summer_r)
ax.scatter(1, 1, 0, marker=pie, c='seagreen', s=scale, alpha=1)
objs = obj5  # size objective
objr = obj6  # rotation objective
```

### 2.6. Main axis labels and limits:

This is extremely straightforward; you can set the x, y, and z labels and specify their limits as follows:

```python
# Main axis labels:
ax.set_xlabel('Objective 1')
ax.set_ylabel('Objective 2')
ax.set_zlabel('Objective 3')

# Axis limits:
plt.xlim([0,1])
plt.ylim([0,1])
ax.set_zlim3d(0, 1)
```

### 2.7. Color bar options:

The colorbar limits and labels can also be specified, as shown in the code below. There are many colormap options in matplotlib; some of the most popular ones are jet, hsv and spectral. As an example, if you want to change the colormap in the code shown in part 2.5, do cmap=plt.cm.hsv. To reverse the colormap attach an ‘_r’, like so: cmap=plt.cm.hsv_r. There is also a color brewer package for the more artistic plotter.

```python
# Set the color limits.. not necessary here, but good to know how.
im.set_clim(0.0, 1.0)

# Colorbar label:
cbar = plt.colorbar(im)
cbar.ax.set_ylabel('Objective 4')
```

### 2.8. Size and rotation legends:

This is where it gets interesting. The first couple of lines get the labels for the legend and choose which ones to display. This allows for much flexibility when creating the legends. As you can see in the code below, you can show markers that correspond to the maximum and the minimum objective values to orient the reader. You can assign the spacing between lines in the legend, the title, whether you want to frame your legend or not, the location in the figure, etc. The code below also shows how to add more than one legend. There are many options for an entirely customized legend in the legend documentation, which you can explore for more options.
```python
handles, labels = ax.get_legend_handles_labels()
display = (0, 1, 2)

# Code for size and rotation legends begins here, for Objectives 5 and 6:
min_size = np.amin(objs)
max_size = np.amax(objs)

# Custom size legend:
size_max = plt.Line2D((0,1),(0,0), color='k', marker=marker, markersize=max_size, linestyle='')
size_min = plt.Line2D((0,1),(0,0), color='k', marker=marker, markersize=min_size, linestyle='')
legend1 = ax.legend([handle for i,handle in enumerate(handles) if i in display]+[size_max, size_min],
                    [label for i,label in enumerate(labels) if i in display]+["%.2f"%(np.amax(objs)), "%.2f"%(np.amin(objs))],
                    labelspacing=1.5, title='Objective 5', loc=1, frameon=True, numpoints=1, markerscale=1)

# Custom rotation legend:
rotation_max = plt.Line2D((0,1),(0,0), color='k', marker=r'$\Uparrow$', markersize=15, linestyle='')
rotation_min = plt.Line2D((0,1),(0,0), color='k', marker=r'$\Downarrow$', markersize=15, linestyle='')
ax.legend([handle for i,handle in enumerate(handles) if i in display]+[rotation_max, rotation_min],
          [label for i,label in enumerate(labels) if i in display]+["%.2f"%(np.amax(objr)), "%.2f"%(np.amin(objr))],
          labelspacing=1.5, title='Objective 6', loc=2, frameon=True, numpoints=1, markerscale=1)
ax.add_artist(legend1)  # re-add the first legend so both are drawn

plt.show()
```

You can find the full code for the previous example in the following github repository:

https://github.com/JazminZatarain/Visualization-of-multidimensional-data/blob/master/paretoplot6d.py

## 3. Generate 6D Pareto front and runtime videos in R.

And last but not least, let me direct everyone to Calvin’s repository: https://github.com/calvinwhealton/ParetoFrontMovie, where you can find the paretoMovieFront6D.R script, which enables exploration of the evolution of a 6-D Pareto front. It is an extremely flexible tool with around 50 customization options to adapt your video or plot to your visual needs; all you need is your runtime output, so check it out. I made the tiniest contribution to this repository, so I feel totally entitled to talk about it. Here is a snippet of the video:

# On constraints within MOEAs

When thinking about setting up a problem formulation for an MOEA like Borg, it’s helpful to spend a minute (or 15!) talking about our friend the constraint.  I’ve written about problem formulations in the past, and we even have a video about problem formulations, but there’s a subtlety that I thought it would be helpful to discuss again here!

## A comment about constraints in classical mathematical programming formulations

Classical mathematical programming formulations are chock-full of constraints.  Hundreds of constraints, even!  This is due to the fact that when you are setting up a linear program (LP) or something similar, the set of constraints allows the program to have some amount of realism.  The classical linear programming water allocation problem (for example, something you would find in the classic Revelle et al textbook)  is something like:

Decisions: water allocations to different actors in different periods of time

Objective: minimize cost of the allocation plan, or maximize the net benefits of the plan

Constraints:

1. Ensure that demands are met

2. Ensure that, at each supply node, inflow equals outflow

3. Ensure that the capacity of the conveyance system is not exceeded

4. Ensure that all decision variables are greater than or equal to zero

5. Keep the water quality values above or below some threshold

We have proposed 5 categories of constraints, and there may be even more.  But mathematically, this simple formulation above could have hundreds of constraints, if you have to solve each equation for a particular point in time, or at a particular spatial location.

This is perfectly fine!  If you are solving a LP, I encourage you to put more constraints in there because constraints are really the only way that you can make your solutions look more physically accurate.  LP theory will tell you that, if there is one or more optimal solutions, at least one of them will be at the intersection of two or more of the constraints.  So, the constraints are important (more on this later).

## How the simulation-optimization of MOEAs is different

The MOEA approach we talk about on this blog is a simulation-optimization approach.  What that means, is that instead of having a set of a few hundred constraints to model your problem, you are actually using a simulation model of the system to model it.  This is going to fundamentally change the way that your decisions and constraints are defined, and it may even change your objectives.  Let’s talk about each in turn, using our simple water allocation example as, well, as an example.

Decisions: It would probably make more sense to optimize rules about how the water is allocated, instead of actually treating each decision variable as a separate volume of water over time.  There are a number of reasons why this is.  In the formulation above, if each decision variable is an allocation of water in a particular period of time, once you start to plan over long planning horizons, the problem will get unwieldy, with thousands of variables.  Plus, if you say that decision variable x_83 is the allocation of water to user 5, at t = 3, then how are you to plan what happens if you don’t have accurate or deterministic data for what happens at t = 3?  But that’s really the purpose of another post.  We’re here to talk about:

Constraints: So, in the above formulation, constraints were designed to create physical reality to your system.  For example, if the decision variables are volumes of water, and you have a basic estimation of the concentration of some pollutant, you can create a constraint that says the number of kilograms of a pollutant should be less than a particular value.  In that fashion, you can think of hundreds of constraints, and we just talked about this, right?

In the simulation-optimization approach, though, the simulation model provides the physical reality of the system, not the constraint set.  So it is better to treat constraints as limits on acceptable performance.  For example, if you are calculating reliability, you can say that all solutions should have at least some lower threshold of reliability (say, 98%, for example).  Or, don’t have constraints at all and instead just handle the value as an objective!  The point of doing tradeoff analysis with a MOEA is to find diversity of solutions, after all.  If you find that the performance of the values is too poor, or that you have too many solutions, you can impose constraints next time.  Or, you can also change the epsilon resolution within Borg in order to change the number of points in the tradeoff analysis; see this paper by Kollat and Reed where they showed the same tradeoff set with different resolutions, for example.

Constraints can also help with the representation of decisions within your formulation.  After all, the most basic constraint in any LP is that xi >= 0, right?  But think about creative ways to change the representation of decision variables such that all of the algorithm’s solutions are feasible.  This will greatly help the search process!

For example, consider a problem where you have to create drought restriction plans at different levels.  In other words, imagine that the value of restriction at level 2 needed to be greater than the value at level 1.  Mathematically, you could set this up as a constraint (x2 >= x1), but it will be very difficult to meet this, especially with an initially random population within a MOEA.  Especially when you have, say, 5 or 10 or 100 of these levels that each need to be greater as you go up. So, instead, you can do something like this.  Let xi’ be the decision variable in the algorithm, and xi (without a prime) be the actual decision variable used in your model.  Then, let:

x1 = x1′

x2 = x1 + x2′

The algorithm actually searches on x1′ and x2′.  Now, if you set your bounds up correctly, the algorithm can never generate an infeasible solution, and you are spending all your computational resources within search actually finding good, feasible solutions instead of just trying to find a single set of feasible solutions with which to work.  Something similar to this was used in Zeff et al. (2014).
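A minimal sketch of this reparameterization, assuming the searched variables x1′, x2′, … are bounded to be nonnegative (all names here are illustrative, not from any particular model):

```python
def to_model_vars(primed):
    """Map searched variables x1', x2', ... (each >= 0) to model
    variables x1 <= x2 <= ... via a cumulative sum, so every solution
    the algorithm generates is feasible by construction."""
    actual = []
    total = 0.0
    for xp in primed:
        total += xp
        actual.append(total)
    return actual

# Searched values are increments; model values are monotone restriction levels,
# e.g. [0.2, 0.1, 0.4] maps to levels near [0.2, 0.3, 0.7]:
print(to_model_vars([0.2, 0.1, 0.4]))
```

With this transform there is no need for an x2 >= x1 constraint at all; the search space itself contains only feasible representations.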

## Some comments on how constraint handling works within Borg

The concepts of epsilon non-domination, and non-domination, are used to determine whether solutions survive within the search process.  The general concept of non-domination is, informally, that a solution’s objective function values are better than another solution’s in at least one objective, and at least as good in all other objectives.  But that’s not what we’re here to talk about!  We are here to talk about constraints, remember? 🙂

Well, remember that the definition of a feasible and infeasible solution is the same in LP-land as it is for us.  Feasible solutions meet all constraints; infeasible solutions violate one or more constraints.  An infeasible solution is defined to be unacceptable to the decision maker.  This is an extremely important point to remember, so it is worth repeating!  An infeasible solution is unacceptable.  Because of this, it’s up to you to set up the constraints to keep acceptable solutions, and throw away unacceptable (infeasible) solutions.

However, there is an important distinction between, say, an LP and what we do.  Remember the simplex algorithm for solving LPs?  What it does is intelligently finds a corner of the feasible space (defined by your constraint set), and then it basically walks through each of the corners, trying each one of them to determine if the corner is optimal.  Notice what’s going on here, though.  Most of the time, at least for a simple problem, it is easy to find a feasible solution in the simplex method, and then you are off to the races.  If you can’t find a feasible solution, you’re in a lot of trouble!

Similarly, you also need to find one feasible solution to get started with MOEA search…preferably you’ll have many many feasible solutions!  But what if it is difficult to find a feasible solution? This is going to impact your ability to find solutions, and if your function evaluation count, or search duration, is not long enough you can have a failed search.

So let’s get to the point of this section, which is explaining how constraint handling works within Borg.  As explained in Reed et al., most of the algorithms we’ve typically tested within this blogosphere use restricted tournament selection.  There are three conditions under which a solution a can dominate a solution b:

1. Both solutions are feasible, and, based on objective function performance, a dominates b.

2. Solution a is feasible, and b is infeasible.

3. Both a and b are infeasible, and the sum of a’s constraint violations is less than the sum of b’s constraint violations.

So as you can see, item 3 is very interesting!  What item 3 means, is that Borg or these other algorithms will actually keep some infeasible solutions in the population or archive while all solutions are infeasible.  This is important because it helps move the population toward eventually finding a feasible region of the decision space!  But, according to item 2, any solution that is feasible will beat an infeasible solution, which is consistent with the fact that infeasible solutions are unacceptable, and oh gosh I already said that a few times didn’t I!
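The three conditions can be sketched as a comparison function (an illustrative reconstruction of the feasibility rules, not Borg’s actual source; it assumes minimization and that constraint violations are reported as nonnegative magnitudes):

```python
def dominates(a_objs, b_objs):
    """Standard Pareto dominance for minimization."""
    at_least_as_good = all(x <= y for x, y in zip(a_objs, b_objs))
    strictly_better = any(x < y for x, y in zip(a_objs, b_objs))
    return at_least_as_good and strictly_better

def constrained_dominates(a_objs, a_viol, b_objs, b_viol):
    """True if solution a beats solution b under the three rules.

    a_viol/b_viol: lists of nonnegative constraint-violation magnitudes
    (a total of zero means the solution is feasible)."""
    a_feasible = sum(a_viol) == 0
    b_feasible = sum(b_viol) == 0
    if a_feasible and b_feasible:
        return dominates(a_objs, b_objs)   # Rule 1: compare objectives
    if a_feasible and not b_feasible:
        return True                        # Rule 2: feasible beats infeasible
    if not a_feasible and not b_feasible:
        return sum(a_viol) < sum(b_viol)   # Rule 3: smaller total violation wins
    return False
```

Note that under Rule 3 the objective values of infeasible solutions are never consulted, which is what makes recommendation 3 below possible.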

By the way, you can also use penalty functions to handle constraints, instead of using this built in restricted tournament selection feature.  An example of that is in the Kollat and Reed paper that I already linked to.  Plus there are other ways to do this too!  By no means is this an exhaustive discussion, but I hope what we’ve talked about is helpful.  Anyway:

## Some informal recommendations on how to use constraints within Borg if you choose to use them

1. If you don’t have to use constraints, don’t use them!  Optimize without them and see how you do (see above).

2. Make the constraint violations that you’re reporting to Borg meaningful.  Notice that item 3 in restricted tournament selection actually uses the values that you report from constraint handling.  I’ve seen some folks just say: if infeasible, c = 100000000.  But that won’t actually help the restricted tournament mechanism move the set toward meaningful, feasible solutions.  For example, let’s say that r gives you a value of reliability for a system, and reliability must be above capital R.  You could set your constraint violation to c = R – r, which will give you the difference between your performance and what you are required to have.  If all of your solutions are infeasible initially, the algorithm will favor a solution that has, say, 1% reliability lower than R over a solution that has 20% reliability lower than R.

3. If you have an expensive simulation, skip the objective function calculations if the solution is infeasible.  The objective function values for an infeasible solution are not really used for anything, in restricted tournament selection.  So you can report the constraint violation, but don’t spend time calculating objective functions that will never be used.

Two quick (important) caveats here:  One, remember that infeasible solutions are unacceptable.  And the reason why I keep bringing this up is the fact that you can technically shoot yourself in the foot here, and impose constraints that are too strict, where a real decision maker may actually want solutions that violate your imposed constraint.  So be careful when you define them!  Two, if you do skip the objective function calculation be sure to still report a value of your objectives to Borg, even if that value is just a placeholder.  This is important, to make sure the Borg procedure keeps running.  If you have questions about these caveats please leave them in the comment box below.
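A sketch combining recommendations 2 and 3 with the second caveat; every name here, including the reliability calculation, is a hypothetical stand-in rather than part of any real wrapper:

```python
R = 0.98  # required reliability threshold (assumed value)
N_OBJS = 2
PLACEHOLDER = 1e9  # arbitrary placeholder objective value (assumed minimization)

def estimate_reliability(decision_vars):
    """Cheap screening estimate of reliability (illustrative stand-in)."""
    return min(1.0, sum(decision_vars))

def expensive_simulation(decision_vars):
    """Stand-in for a slow simulation model returning objective values."""
    return [sum(decision_vars), max(decision_vars)]

def evaluate(decision_vars):
    r = estimate_reliability(decision_vars)
    violation = max(0.0, R - r)  # recommendation 2: report the shortfall R - r
    if violation > 0:
        # Recommendation 3: skip the expensive simulation, but (second caveat)
        # still report placeholder objectives so the procedure keeps running.
        return [PLACEHOLDER] * N_OBJS, [violation]
    return expensive_simulation(decision_vars), [0.0]
```

The key property is that infeasible solutions still receive a graded violation magnitude, while only feasible ones pay for the full simulation.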

## To conclude

This has been quite a long post so thanks for sticking with me til the end.  Hopefully you’ve learned something, I know I did!  We can continue the discussion on this blog (for the contributors) and on the comment boxes below (for everyone)!  Have a great day.

# Thoughts on using models with a long evaluation time within a Parallel MOEA framework

Here is the result of some discussions I’ve had lately on a new project of mine. I tried to make the insights as generic as possible, so you can apply this to your own work!

## Why will you need to parallelize your search?

Consider an engineering simulation that takes about 2.5 minutes on a brand new personal computer.  You’ve done some timing on your code, and you realize about  80% of the time is in the model’s actual calculation.  Sure, there may be some added benefit that you could gain by making the input and output to the model more efficient, etc.  But you’re not going to get the time down that much (you could argue the lowest amount of time the simulation would take is 2 minutes, removing all i/o overhead).  Let’s assume a  2.5 minute evaluation time and run some numbers:

In 60 minutes, we can do 24 evaluations.  Therefore, in 24 hours we can do 576 evaluations.

In some prior work, we saw that you could get a very good answer to the problem in about 80,000 function evaluations.  This new problem may actually be harder than the old one!  So I am guessing we would like to do at least 80,000 function evaluations per trial here.  You can see that it would take much longer than a day to do a search run (in fact, it would take about 139 days, which is longer than we have to run a single job on the computer).

Let’s say we were able to cut down the computational time by 50% (which would be a huge undertaking); then we would only gain a 2x speedup, and it would still take about 69 days.

Not to mention the fact that really long jobs, with durations on the order of days, take a long time to queue on the system.

As a final thought about this, we could conceivably cut down the number of function evaluations.  But my guess is we would have to (at the very least) run about 10,000 function evaluations, which would take about 17 days.  And it would probably require a lot more justification than we’re prepared to give.  Right now, our argument is that we look at the Pareto approximate set and cut the search off when it stops improving.  If we cut down to 10,000 function evaluations, that argument would no longer be valid.

## Running the numbers for parallelized search

Borg has been shown to have scalable performance on tens of thousands of cores, in a forthcoming paper by Dave Hadka.

Scalable means two things.  First, just thinking about scalability in parallel computing, perfect scalability means that with N processors, you can do the work N times faster.  What takes 10 hours on a single pc, would take 1 hour on 10 nodes.  Computer scientists rarely get perfect scalability, mainly because of the overhead of passing messages back and forth or coordinating processes (more on that later).  In parallel MOEA work, scalability must also include the quality of the solutions you are getting out of the search.  It is very possible MOEA search will fail, so if the search does poorly, having the former definition of scalability doesn’t make much sense.  So Dave et al. have developed ways to measure scalability that also take into account solution quality.

But, to make a long story short, let’s assume perfect scalability and run the numbers again.

We can do 24 evaluations in one hour.  But now, scaling to 512 processors, we can do 24*512 = 12,288 evaluations in one hour, and the calculations of run time become similar to our last project.  We will get a halfway decent answer in 1 hour, and we can run the same amount of computation in 8 hours or so.  In fact, using the max queue time of 24 h, we could get 294,912 evaluations done in one day.
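The back-of-the-envelope numbers above can be reproduced in a few lines (assuming perfect scalability):

```python
eval_minutes = 2.5
evals_per_hour = 60 / eval_minutes            # 24 evaluations/hour on one processor
evals_per_day_serial = evals_per_hour * 24    # 576 evaluations/day

# Serial: days needed for an 80,000-evaluation search
days_serial = 80000 / evals_per_day_serial    # roughly 139 days -- far too long

# Parallel: 512 processors with perfect scaling
evals_per_hour_parallel = evals_per_hour * 512     # 12,288 evaluations/hour
evals_per_day_parallel = evals_per_hour_parallel * 24  # 294,912 in a 24 h queue window
print(days_serial, evals_per_day_parallel)
```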

## Parallel Computing 101

Shared Memory: You are likely reading this on a shared memory machine as we speak.  In shared memory, there are N processors, and each of them shares access to the same memory.  There are some programming paradigms such as a system called ‘pthreads’ which are used for these systems.  The challenge of shared memory programming is preventing so-called “race” conditions.  That is, Processor 1 tries to edit a variable, and Processor 2 is working on the same variable.

Although the nodes of these systems contain multi-core chips, they are not used as shared memory here, and we are dealing with…

Distributed Memory: Each of the N processors in a distributed memory system has its own memory.  Processor 1 never “sees” what Processor 2 is doing unless Processor 2 shares what she is doing with Processor 1.  The programming paradigm here is called the Message Passing Interface (MPI).  A processor only receives information from another processor if a ‘message’ is received from somewhere.  The challenge is that coordinating the messages can be tricky, and all processors work at different rates, so you could have a situation where Processor 1 is waiting for slowpoke Processor 8 to do his job, and he never sends the message, and the program locks.  This can happen even when all processors are homogeneous, and you can imagine it’s worse still when you have heterogeneous processors (i.e., one is faster than another).

An additional challenge of MPI programming is that all processors see the same filesystem.  Two processors can’t edit the same file at once without massive, horrible errors.  Keep this in mind in the next section.

## Some pseudocode of the algorithm process

Parallel MOEAs can work in a number of different configurations, but the important one for us is called Master-Worker.  Basically, there is one and only one Master (or Coordinator), and then a whole bunch of Workers.  The Master’s job is to do the search calculations.  The Workers’ job is to evaluate the simulation model — take decision variable values and calculate objectives.  The Workers have no other job, and they cannot “see” what the other Workers are doing.
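As a toy sketch of that layout, here is the Master-Worker pattern with a Python thread pool standing in for the MPI workers.  The two-variable test problem and the function names are made up for illustration; a real MOEA would generate the population itself rather than take a fixed list:

```python
from concurrent.futures import ThreadPoolExecutor

def evaluate(decision_vars):
    """Worker's job: run the 'simulation' for one candidate solution
    and return its objectives.  Trivial stand-in: sum and product."""
    x, y = decision_vars
    return (x + y, x * y)

def master(population, n_workers=3):
    """Master's job: hand candidate solutions out to the workers and
    collect the objectives (the search itself is omitted here)."""
    with ThreadPoolExecutor(max_workers=n_workers) as workers:
        return list(workers.map(evaluate, population))

print(master([(1.0, 2.0), (3.0, 4.0), (5.0, 6.0)]))
# [(3.0, 2.0), (7.0, 12.0), (11.0, 30.0)]
```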

Code to implement a Parallel MOEA is described in the following sections.

main()

The main program calls several functions.  First, you need code that initializes your own problem.  For example, read in the input data that doesn’t change over the run (a function called something like myProblemInit).  Then the algorithm is set up: the random seed is set, and the algorithm parameters are defined.

Finally, the algorithm is run, with a pointer to a function that contains your problem (see myProblem below).
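In sketch form, main() might look like the following.  Every name here (myProblemInit, run_algorithm, the made-up constants) is a hypothetical placeholder rather than a real library API; in Python a function reference plays the role of the function pointer:

```python
import random

def myProblemInit():
    """Read in input data that doesn't change over the run."""
    return {"constants": [9.81, 1.0e-3]}   # made-up example data

def myProblem(vars_, constants):
    """User-supplied problem function (trivial stand-in objectives)."""
    return (sum(vars_), max(vars_))

def run_algorithm(problem_function, constants, n_evaluations):
    """Stand-in for the MOEA driver: it repeatedly calls the user's
    problem function with candidate decision variables."""
    results = []
    for _ in range(n_evaluations):
        vars_ = [random.random(), random.random()]
        results.append(problem_function(vars_, constants))
    return results

def main():
    random.seed(42)                  # set the random seed
    data = myProblemInit()           # initialize your own problem
    n_evaluations = 10               # define the algorithm parameters
    # run the algorithm, passing a reference to your problem function
    return run_algorithm(myProblem, data["constants"], n_evaluations)

print(len(main()))  # 10
```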

myProblem(obj, vars, consts)

Ok, so now you need to write your own version of the “problem” function.  There may be only one task to do, or multiple.  For example, if you need to perform some kind of transformation on the decision variables, you can do that here.  But otherwise, all you’re doing is actually calling the real simulation model.  Perhaps this simulation code comes from somewhere else… it’s a version of the model that has never been attached to an optimization framework before.  It looks like:

callSim(obj, vars, consts)

Here, you may very well have an executable call to a completely external program that runs your simulation.  Maybe that external program even has a separate wrapper attached to it, in Python or MATLAB.  Steps in such a function will probably look like this:

1. Note that parameters from initialization are already in memory.  A lot of parameters don’t change from evaluation to evaluation.
2. Write input files, such as a file called inputData.dat
3. Call an external program that does the calculations
4. Interpret the output file, which is called something like outputData.
5. Assign the objective values to the variable obj, which is passed by pointer into this function.
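The five steps above can be sketched as follows.  The file names, the call_sim name, and the fake “simulation” (summing the inputs) are illustrative stand-ins; a real external program would be launched with something like subprocess.run, shown in the comment:

```python
import tempfile
from pathlib import Path

def call_sim(vars_, constants):
    """Sketch of the five callSim steps with a faked simulation."""
    workdir = Path(tempfile.mkdtemp())
    # Step 1: parameters from initialization (constants) are already
    # in memory; they don't change from evaluation to evaluation.
    # Step 2: write the input file the simulation expects.
    (workdir / "inputData.dat").write_text(
        "\n".join(str(v) for v in vars_))
    # Step 3: call the external program, e.g.:
    #   subprocess.run(["./mySimulation", "inputData.dat"], check=True)
    # Faked here so the sketch runs on its own: "simulate" by summing.
    values = [float(line) for line in
              (workdir / "inputData.dat").read_text().splitlines()]
    (workdir / "outputData").write_text(str(sum(values)))
    # Step 4: interpret the output file.
    result = float((workdir / "outputData").read_text())
    # Step 5: assign the objective values (returned directly here,
    # rather than through a pointer as in C).
    return (result,)

print(call_sim([1.0, 2.0, 3.0], None))  # (6.0,)
```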

## Hey, wait, there’s a problem with steps 2 and 4 above!

Ahh, good.  You realized it.  If we recall from our discussion of MPI on clusters… all processors share the same filesystem!  So each worker is going to be creating a file called basicReaction.well, and all of them will be editing the same file at the same time, which cannot work.

The solution is to create a separate input and output file for each processor, which ensures this filesystem problem doesn’t happen to us.  MPI makes this possible through a built-in function called MPI_Comm_rank, which tells each processor who he is in the world (existential, isn’t it?).

You can use the processor’s rank directly in the myProblem function above; that way each processor will always use his own unique file and not confuse data from the other processors.
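The pattern is just a rank-stamped file name.  In this sketch the rank is passed in as a plain integer so the code runs without an MPI installation; in a real program it would come from MPI_Comm_rank (or comm.Get_rank() in the mpi4py bindings):

```python
def rank_filenames(rank):
    """Build per-processor file names so workers never collide on
    the shared filesystem.  The rank is this processor's unique ID
    within the MPI 'world'; here it is simply an argument."""
    return (f"inputData_{rank}.dat", f"outputData_{rank}")

print(rank_filenames(0))   # ('inputData_0.dat', 'outputData_0')
print(rank_filenames(7))   # ('inputData_7.dat', 'outputData_7')
```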

## Conclusion

I hope this has been helpful!  Increased computing power from technologies like clusters and cloud computing is opening up more and more types of problems for many-objective analysis.  With a little bit of skill in writing system calls and wrappers, you can use tools like Borg to do all sorts of fun things.  Just be careful to understand the unique properties of your parallel system when creating the code for your project.  Feel free to leave comments and questions below!