I have recently started training with the group and, coming from a slightly different research background, I was unfamiliar with a lot of the terminology. I thought it might be useful to put together a glossary of sorts containing the terms that someone new to this field of study might not intuitively understand. The idea is that when someone encounters an unfamiliar term while going through the training or reading through the material, they can come to the blog glossary and quickly Ctrl-F the term (yes, that is a keyboard shortcut used as a verb).
The definitions are not exhaustive, but links/references are provided so the reader can find additional material. The glossary is a work in progress; some definitions are still incomplete, but it will be regularly updated. I’m also probably the least qualified person in the group to provide the definitions, so any corrections/suggestions are more than welcome. If there’s any other term I’ve omitted that you feel should be included, please leave a comment so I can edit the post.
Adaptive metropolis algorithm
It is based on the classical random walk Metropolis algorithm and adapts continuously to the target distribution.1
Akaike’s Information Criterion (AIC)
A measure of the relative quality of statistical models for a given set of data.2
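For a model with k fitted parameters and maximized log-likelihood ln L, AIC = 2k - 2 ln L, and the model with the lower AIC is preferred. A minimal sketch (the function name and the example likelihood values are purely illustrative):

```python
def aic(log_likelihood, n_params):
    """Akaike's Information Criterion: AIC = 2k - 2*ln(L).

    Lower values indicate a better trade-off between goodness of fit
    and model complexity."""
    return 2 * n_params - 2 * log_likelihood

# Compare two hypothetical fitted models: the simpler one wins here.
aic_simple = aic(log_likelihood=-120.5, n_params=2)   # 245.0
aic_complex = aic(log_likelihood=-118.0, n_params=6)  # 248.0
```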
AMALGAM
MOEA that applies a multi-algorithm search combining NSGA-II, particle swarm optimization, differential evolution, and the adaptive Metropolis algorithm.3
Approximation set
The set of solutions output by a multi-objective evolutionary algorithm, approximating the Pareto front.4
Archive
A secondary population used by many multi-objective evolutionary algorithms to store non-dominated solutions found through the generational process.5
Bayesian Information Criterion
Borg MOEA
Iterative search algorithm for multi-objective optimization. It combines adaptive operator selection with ε-dominance, adaptive population sizing, and time continuation.6
Classification and Regression Trees (CART)
Decision tree algorithms that can be used for classification and regression analysis in predictive modeling problems.7
Closed vs. open loop control
Constraints
Restrictions imposed on the decision space by the particular characteristics of the system. They must be satisfied for a solution to be considered acceptable.5
Controllability
Refers to whether an algorithm’s parameterization choices have significant impacts on its success or failure.
Convergence
Progress towards the reference set.
Convexity
Geometrically, a function f is convex if the line segment drawn from any point on the graph of f to any other point on the graph lies on or above the graph of f. An optimization problem is convex if the objective function and all constraints are convex functions when minimizing, or concave when maximizing.
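The segment-above-graph condition is equivalent to f(tx + (1-t)y) ≤ t f(x) + (1-t) f(y) for all t in [0, 1], which can be probed numerically. A heuristic sketch (function names are illustrative; random sampling can only refute convexity, never prove it):

```python
import random

def appears_convex(f, lo, hi, trials=1000, tol=1e-9):
    """Check f(t*x + (1-t)*y) <= t*f(x) + (1-t)*f(y) on random point
    pairs in [lo, hi]. A heuristic test, not a proof of convexity."""
    for _ in range(trials):
        x, y = random.uniform(lo, hi), random.uniform(lo, hi)
        t = random.random()
        # A single violation of the chord inequality refutes convexity.
        if f(t * x + (1 - t) * y) > t * f(x) + (1 - t) * f(y) + tol:
            return False
    return True

is_convex = appears_convex(lambda x: x * x, -10, 10)       # x^2: expect True
not_convex = appears_convex(lambda x: x ** 3, -10, 10)     # x^3: expect False
```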
The Cube
The Reed Group’s computing cluster run by the Center for Advanced Computing (CAC) at Cornell. The cluster has 32 nodes, each with Dual 8-core E5-2680 CPUs @ 2.7 GHz and 128 GB of RAM. For more information see: https://www.cac.cornell.edu/wiki/index.php?title=THECUBE_Cluster
Decision space
The set of all decision variables.
Decision variables
The numerical quantities (variables) that are manipulated during the optimization process; they represent how our actions are encoded within the problem.5
Deterioration
When elements of a solution set at a given time are dominated by a solution set the algorithm maintained at some earlier time.5
Differential evolution
Evolutionary algorithm designed to optimize problems over continuous domains.5
Direct policy search
Dominance resistance
The inability of an algorithm to produce offspring that dominate poorly performing, non-dominated members of the population. (See also Pareto dominance).8
DTLZ problems
A suite of representative test problems for MOEAs for which analytical solutions are known. The acronym combines the first letters of the creators’ last names (Deb, Thiele, Laumanns, Zitzler). DTLZ problems have been used for benchmarking and diagnostics when evaluating the performance of MOEAs.
Elitism
The guarantee that, as evolution progresses, non-dominated solutions will not be lost in subsequent generations.
Epsilon (ε) dominance
When dominance is determined by use of a user-specified precision to simplify sorting. Pareto epsilon (ε) optimality and Pareto epsilon (ε) front are defined accordingly. (See also Pareto dominance).5
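A simplified sketch of the ε-box form of this idea for minimization problems (function names are illustrative; algorithms such as Borg additionally resolve ties between solutions falling in the same box, which this sketch ignores):

```python
def eps_box(objectives, eps):
    """Index of the epsilon-box containing an objective vector
    (minimization): objective i is discretized with precision eps[i]."""
    return tuple(int(v // e) for v, e in zip(objectives, eps))

def eps_dominates(a, b, eps):
    """True if a epsilon-dominates b: a's box is no worse in every
    objective and the two boxes differ."""
    box_a, box_b = eps_box(a, eps), eps_box(b, eps)
    return box_a != box_b and all(x <= y for x, y in zip(box_a, box_b))

eps = (0.1, 0.1)
print(eps_dominates((0.11, 0.19), (0.25, 0.31), eps))  # True: box (1,1) beats (2,3)
print(eps_dominates((0.11, 0.19), (0.15, 0.12), eps))  # False: same box (1,1)
```

Coarser ε values merge more solutions into the same box, which simplifies sorting and bounds the size of the archive.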
Epsilon (ε) dominance archive
Epsilon (ε)-box dominance archive
Epsilon (ε)-MOEA
A steady-state MOEA, the first to incorporate ε-dominance archiving into its search process.
Epsilon (ε) indicator
Additive ε-indicator (ε+-indicator) (performance metric)
The smallest distance ε that the approximation set must be translated by in order to completely dominate the reference set.4
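For minimization, this works out to taking, for each reference point, the best approximation point under a worst-objective shift, and then the worst such value over the reference set. A minimal sketch (names illustrative):

```python
def additive_epsilon(approx, reference):
    """Additive epsilon-indicator (minimization): the smallest shift
    epsilon such that the approximation set, translated by epsilon in
    every objective, weakly dominates the reference set."""
    return max(
        # best achievable worst-objective shift for this reference point
        min(max(a_i - r_i for a_i, r_i in zip(a, r)) for a in approx)
        for r in reference
    )

ref = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
approx = [(0.1, 1.1), (0.6, 0.6), (1.0, 0.2)]
print(additive_epsilon(approx, ref))  # 0.2, driven by the point (1.0, 0.2)
```

A value of zero means the approximation set already weakly dominates the reference set; larger values mean the set is farther from it.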
Epsilon (ε) progress
Evolutionary algorithms
A class of search and optimization algorithms inspired by processes of natural evolution.5
Multi-objective evolutionary algorithms (MOEAs)
Evolutionary algorithms used for solving multi-objective problems.5
Evolutionary operators
They operate on the population of an evolutionary algorithm, attempting to generate solutions of higher and higher fitness.5
Mutation evolutionary operators
They perturb the decision variables of a single solution to look for improvement in its vicinity.
Recombination evolutionary operators
They combine decision variables from two or more solutions to create new solutions.
Selection evolutionary operators
They determine which solutions are allowed to proceed to the next cycle.
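The three operator families above can be sketched for real-valued decision variables as follows (Gaussian mutation, uniform crossover, and tournament selection are just one common choice of each; all names are illustrative):

```python
import random

def mutate(solution, sigma=0.1, rate=0.5):
    """Gaussian mutation: perturb each decision variable with probability rate."""
    return [x + random.gauss(0, sigma) if random.random() < rate else x
            for x in solution]

def recombine(parent_a, parent_b):
    """Uniform crossover: each variable is inherited from a random parent."""
    return [random.choice(pair) for pair in zip(parent_a, parent_b)]

def tournament_select(population, fitness, k=2):
    """Tournament selection: the fittest of k random candidates wins
    (here, lower fitness is better)."""
    candidates = random.sample(population, k)
    return min(candidates, key=fitness)

# One generation step for a single offspring, minimizing sum of squares:
population = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(10)]
sphere = lambda s: sum(v * v for v in s)
child = mutate(recombine(tournament_select(population, sphere),
                         tournament_select(population, sphere)))
```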
Executioner
A cross-language automation tool for running models. (See also Project Platypus).
Exploratory Modeling and Analysis (EMA)
A research methodology that uses computational experiments to analyze complex and uncertain systems.9
Data-driven exploratory modeling
Used to reveal implications of a data set by searching through an ensemble of models for instances that are consistent with the data.
Question-driven exploratory modeling
Searches over an ensemble of models believed to be plausible to answer a question of interest or illuminate policy choices.
Model-driven exploratory modeling
Investigates the properties of an ensemble of models without reference to a data set or policy question. It is rather a theoretical investigation into the properties of a class of models.
Feasible space
The set of all decision variables in the decision space that are feasible (i.e. satisfy all constraints).5
Generational algorithms
A class of MOEAs that replace the entire population during each full mating, mutation, and selection iteration of the algorithm.5 (See also Steady-state algorithms).
Generational distance (performance metric)
The average distance from every solution in the approximation set to the nearest solution in the reference set.4
Gini index
A generalization of the binomial variance used in Classification and Regression Trees (CART). (See also Classification and Regression Trees (CART)).
High performance computing
Hypervolume (performance metric)
The volume of the space dominated by the approximation set.4
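For two minimization objectives, the hypervolume reduces to the area enclosed between the approximation set and a user-chosen reference point, computable with a simple sweep (names illustrative; the general n-objective case needs considerably more involved algorithms):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a 2-objective (minimization) approximation set,
    bounded by the reference point ref. Dominated and out-of-bounds
    points contribute nothing."""
    # Keep only points that improve on the reference point, sorted by f1.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    area, best_y = 0.0, ref[1]
    for i, (x, y) in enumerate(pts):
        next_x = pts[i + 1][0] if i + 1 < len(pts) else ref[0]
        best_y = min(best_y, y)          # best f2 achieved so far
        area += (next_x - x) * (ref[1] - best_y)
    return area

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4)))  # 6.0
```

Larger hypervolume is better: it rewards both closeness to the true front and good coverage of it.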
Inverted generational distance (performance metric)
The average distance from every solution in the reference set to the nearest solution in the approximation set.4
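Generational distance and its inverted counterpart differ only in the direction of the nearest-neighbor comparison, so one helper covers both. A minimal sketch for minimization with Euclidean distance (names illustrative):

```python
import math

def mean_nearest_distance(from_set, to_set):
    """Average distance from each point in from_set to its nearest
    point in to_set."""
    return sum(min(math.dist(p, q) for q in to_set)
               for p in from_set) / len(from_set)

def generational_distance(approx, reference):
    return mean_nearest_distance(approx, reference)

def inverted_generational_distance(approx, reference):
    return mean_nearest_distance(reference, approx)

ref = [(0.0, 1.0), (1.0, 0.0)]
approx = [(0.0, 1.5), (1.0, 0.0)]
print(generational_distance(approx, ref))           # 0.25
print(inverted_generational_distance(approx, ref))  # 0.25
```

GD measures convergence only, while IGD also penalizes poor coverage: an approximation set containing a single well-converged point can score a perfect GD but a poor IGD.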
J3
A free desktop application for producing and sharing high-dimensional, interactive scientific visualizations. (See also Project Platypus).
Kernel density estimation
Latin Hypercube Sampling (LHS)
A stratified sampling technique used to generate samples of parameter values.
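The idea can be sketched as follows (names illustrative): each parameter’s range is split into n equal strata, exactly one value is drawn per stratum, and the columns are shuffled independently so the strata are paired at random.

```python
import random

def latin_hypercube(n_samples, bounds):
    """Latin hypercube sample: for each parameter, one value is drawn
    from each of n_samples equal strata of its range, then the columns
    are shuffled independently and zipped into sample points."""
    columns = []
    for lo, hi in bounds:
        width = (hi - lo) / n_samples
        strata = [lo + (i + random.random()) * width for i in range(n_samples)]
        random.shuffle(strata)
        columns.append(strata)
    return list(zip(*columns))

samples = latin_hypercube(5, bounds=[(0.0, 1.0), (10.0, 20.0)])
```

Compared with purely random sampling, this guarantees that every portion of each parameter’s range is represented, which is why LHS is a common default for generating states of the world in exploratory modeling.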
Method of moments
MOEA Framework
A free and open source Java library for developing and experimenting with multi-objective evolutionary algorithms and other general-purpose optimization algorithms.4
NSGA-II
The Non-dominated Sorting Genetic Algorithm-II. MOEA featuring elitism, efficient non-domination sorting, and parameter-free diversity maintenance.10
Epsilon (ε)-NSGA-II
A generational algorithm that uses ε-dominance archiving, adaptive population sizing, and time continuation.
Number of function evaluations (NFE)
Objectives
The criteria used to compare solutions in an optimization problem.
OMOPSO
A particle swarm optimization algorithm, the first to include ε-dominance as a means to solve many-objective problems.11
Optimization
The process of identifying the best solution (or a set of best solutions) among a set of alternatives.
Multi-objective optimization
Optimization that employs two or more criteria to identify the best solution(s) among a set of alternatives.
Pareto dominance
The notion that one solution is superior to another only when it is at least as good in every objective and strictly better in at least one. A dominating solution is no worse than another solution in all objectives and strictly better in at least one; the solution it is compared against is then dominated. Two solutions are mutually non-dominated when each is better than the other in at least one objective.
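For minimization, the dominance check and the resulting non-dominated filter can be sketched as (names illustrative):

```python
def dominates(a, b):
    """Pareto dominance (minimization): a dominates b if a is no worse
    in every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated(solutions):
    """Filter a set of objective vectors down to its non-dominated members."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions)]

pts = [(1, 3), (2, 2), (3, 1), (2.5, 2.5)]
print(nondominated(pts))  # [(1, 3), (2, 2), (3, 1)]
```

Here (2.5, 2.5) is dominated by (2, 2), while the remaining three points trade off against each other and survive the filter.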
Pareto front
Contains the objective values of all non-dominated solutions (in the objective function space).
Pareto optimal set
Contains the decision variables of all non-dominated solutions (in the decision variable space).
Particle swarm optimization
Population-based stochastic optimization technique where the potential solutions, called particles, fly through the problem space by following the current optimum particles.
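A minimal single-objective sketch minimizing the sphere function f(x) = Σ xᵢ²; the inertia and attraction coefficients (w, c1, c2) are common textbook values, and all names are illustrative:

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimization minimizing f(x) = sum(x_i^2)."""
    f = lambda x: sum(v * v for v in x)
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # each particle's best position
    gbest = min(pbest, key=f)                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # inertia + pull toward personal best + pull toward global best
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = pso_sphere()  # best should land very close to the origin
```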
Patient Rule Induction method (PRIM)
A rule induction algorithm that searches for rectangular regions (boxes) of the input space within which the response variable is notably higher or lower than its average.
Performance metrics
Procedures used to compare the performance of approximation sets.
Population
The set of encoded solutions that are manipulated and evaluated during the application of an evolutionary algorithm.5
Principle Component Analysis (PCA)
Project Platypus
A collection of free and open source Python libraries for multiobjective optimization, robust decision making, and data visualization. For more information see: https://github.com/Project-Platypus
Radial basis function
Reference set
The set of globally optimal solutions in an optimization problem.
Rhodium
Python library for robust decision making and exploratory modeling. (See also Project Platypus).
Robust Decision Making (RDM)
An analytic framework that helps identify potential robust strategies for a particular problem, characterize the vulnerabilities of such strategies, and evaluate trade-offs among them.12
Multi-objective robust decision making (MORDM)
An extension of Robust Decision Making (RDM) to explicitly include the use of multi-objective optimization to discover robust strategies and explore the trade-offs among multiple competing performance objectives.13
OpenMORDM
An open source implementation of MORDM with the tools necessary to perform a complete MORDM analysis.14 For more information see: https://github.com/OpenMORDM
Safe operating space
Spacing (performance metric)
The uniformity of the spacing between the solutions in an approximation set.
SPEA2
MOEA that assigns a fitness value to each solution based on the number of competing solutions it dominates.
State of the world
A fundamental concept in decision theory which refers to a feature of the world that the agent/decision maker has no control over and is the origin of the agent’s uncertainty about the world.
Steady-state algorithms
A class of MOEAs that replace only one solution in the population during each full mating, mutation, and selection iteration of the algorithm. (See also Generational algorithms).
Time continuation
The injection of new solutions into the population to reinvigorate search.
Tournament
The set of candidate solutions selected randomly from a population.
Visual analytics
The rapid analysis of large datasets using interactive software that enables multiple connected views of planning problems.
More information on the concepts
- Haario, H., Saksman, E. & Tamminen, J. An adaptive Metropolis algorithm. Bernoulli 7, 223–242 (2001).
- Akaike, H. Akaike’s information criterion. in International Encyclopedia of Statistical Science 25–25 (Springer, 2011).
- Vrugt, J. A. & Robinson, B. A. Improved evolutionary optimization from genetically adaptive multimethod search. Proc. Natl. Acad. Sci. 104, 708–711 (2007).
- Hadka, D. Beginner’s Guide to the MOEA Framework. (CreateSpace Independent Publishing Platform, 2016).
- Coello, C. A. C., Lamont, G. B. & Van Veldhuizen, D. A. Evolutionary algorithms for solving multi-objective problems. 5, (Springer, 2007).
- Hadka, D. & Reed, P. Borg: An Auto-Adaptive Many-Objective Evolutionary Computing Framework. Evol. Comput. 21, 231–259 (2012).
- Breiman, L. Classification and Regression Trees. (Wadsworth International Group, 1984).
- Reed, P. M., Hadka, D., Herman, J. D., Kasprzyk, J. R. & Kollat, J. B. Evolutionary multiobjective optimization in water resources: The past, present, and future. Adv. Water Resour. 51, 438–456 (2013).
- Bankes, S. Exploratory Modeling for Policy Analysis. Oper. Res. 41, 435–449 (1993).
- Deb, K., Pratap, A., Agarwal, S. & Meyarivan, T. A fast and elitist multiobjective genetic algorithm: NSGA-II. Evol. Comput. IEEE Trans. On 6, 182–197 (2002).
- Sierra, M. R. & Coello, C. C. Improving PSO-based multi-objective optimization using crowding, mutation and e-dominance. in Evolutionary multi-criterion optimization 3410, 505–519 (Springer, 2005).
- Lempert, R. J., Groves, D. G., Popper, S. W. & Bankes, S. C. A general, analytic method for generating robust strategies and narrative scenarios. Manag. Sci. 52, 514–528 (2006).
- Kasprzyk, J. R., Nataraj, S., Reed, P. M. & Lempert, R. J. Many objective robust decision making for complex environmental systems undergoing change. Environ. Model. Softw. (2013). doi:10.1016/j.envsoft.2012.12.007
- Hadka, D., Herman, J., Reed, P. & Keller, K. An open source framework for many-objective robust decision making. Environ. Model. Softw. 74, 114–129 (2015).