This three-part series is an overview of the algorithm diagnostics I performed in my Lake Problem study, with the hope that readers can apply the steps to any problem of interest. All of the source code for my study, including the scripts used for the diagnostics, can be found at https://github.com/VictoriaLynn/Lake-Problem-Diagnostics.

The first step in using the MOEAFramework for comparative algorithm diagnostics was to create the simulation model on which I would assess algorithm performance. The Lake Problem was written in C++. The executable alone could be used for optimization with Borg, and I created a Java stub to connect the problem to the MOEAFramework (https://github.com/VictoriaLynn/Lake-Problem-Diagnostics/blob/master/Diagnostic-Source/myLake4ObjStoch.java). Additional information on this aspect of a comparative study can be found in examples 4 and 5 for the MOEAFramework (http://moeaframework.org/examples.html) and in Chapter 5 of the manual. I completed the study using version 2.1, which was the newest at the time. I used the all-in-one executable rather than the source code, although I compiled my simulation code within the examples subfolder of the source code.

Once I had developed an appropriate simulation model to represent my problem, I could begin the diagnostic component of my study. I first chose algorithms of interest and determined the parameter ranges from which I would sample. To determine parameter ranges, I consulted Table 1 of the 2013 AWR article by Reed et al.

Reed, P., et al. Evolutionary Multiobjective Optimization in Water Resources: The Past, Present, and Future. (Editor Invited Submission to the 35th Anniversary Special Issue), Advances in Water Resources, 51:438-456, 2013.

Example parameter files and the ones I used for my study can be found at https://github.com/VictoriaLynn/Lake-Problem-Diagnostics/tree/master/Diagnostic-Source/params. Once I had established parameter files for sampling, I found Chapter 8 of the MOEAFramework manual to be incredibly useful. Below I walk through the steps I took in generating approximations of the Pareto optimal front for my problem across multiple seeds, algorithms, and parameterizations. All of the commands have been consolidated into the file Lake_Problem_Comparative_Study.sh on GitHub, but I had many separate files during my study, which will be separated into steps here. It may have been possible to automate the whole process, but I liked breaking it up into separate scripts so I could check that the output made sense after each step.

**Step 1: Generate Parameter Samples** To generate parameter samples for each algorithm, I used the following code, which I kept in a file called sample_parameters.sh. I ran all .sh scripts using the general command *sh script_name.sh*.

```bash
NSAMPLES=500
METHOD=Latin
PROBLEM=myLake4ObjStoch
ALGORITHMS=(Borg MOEAD eMOEA NSGAII eNSGAII GDE3)
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"

# Generate the parameter samples
echo -n "Generating parameter samples..."
for ALGORITHM in ${ALGORITHMS[@]}
do
    java ${JAVA_ARGS} \
        org.moeaframework.analysis.sensitivity.SampleGenerator \
        --method ${METHOD} --n ${NSAMPLES} --p ${ALGORITHM}_params.txt \
        --o ${ALGORITHM}_${METHOD}
done
```
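Each `${ALGORITHM}_params.txt` file read by the SampleGenerator lists one parameter per line, giving its name and the lower and upper bounds to sample between. As a rough sketch, such a file might look like the following; the parameter names here are standard MOEAFramework names, but the ranges are illustrative only, not the values from my study (see the params folder linked above for those):

```bash
# Write an illustrative parameter file for NSGAII.
# Each line: parameterName lowerBound upperBound
cat > NSGAII_params.txt <<'EOF'
maxEvaluations 10000 200000
populationSize 10 1000
sbx.rate 0.0 1.0
pm.rate 0.0 1.0
EOF
```

The SampleGenerator then produces one column per parameter and one row per Latin hypercube sample.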

**Step 2: Optimize the problem using algorithms of interest** This step had two parts: optimization with Borg and optimization with the MOEAFramework algorithms. To optimize using Borg, one needs to request Borg at http://borgmoea.org/. This is the only step that needs to be completed outside of the MOEAFramework. I then used the following script to generate approximations to the Pareto front for all 500 samples and 50 random seeds. The -l and -u flags indicate the lower and upper bounds, respectively, for the decision variable values. Fortunately, it should soon be possible to type one value and specify the number of variables with that bound instead of typing all 100 values as shown here.

```bash
#!/bin/bash
#50 random seeds
NSEEDS=50
PROBLEM=myLake4ObjStoch
ALGORITHM=Borg
SEEDS=$(seq 1 ${NSEEDS})

for SEED in ${SEEDS}
do
    NAME=${ALGORITHM}_${PROBLEM}_${SEED}
    PBS="\
#PBS -N ${NAME}\n\
#PBS -l nodes=1\n\
#PBS -l walltime=96:00:00\n\
#PBS -o output/${NAME}\n\
#PBS -e error/${NAME}\n\
cd \$PBS_O_WORKDIR\n\
./BorgExec -v 100 -o 4 -c 1 \
-l 0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0 \
-u 0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.1 \
-e 0.01,0.01,0.0001,0.0001 -p Borg_params.txt -i Borg_Latin -s ${SEED} \
-f ./sets/${ALGORITHM}_${PROBLEM}_${SEED}.set -- ./LakeProblem4obj_control"
    echo -e $PBS | qsub
done
```
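Until a repeat-count option is available, the 100-value bound strings can also be built programmatically rather than typed by hand. A minimal sketch (the variable names here are my own, not from the study scripts):

```bash
#!/bin/bash
# Build the comma-separated -l and -u argument strings for 100 decision
# variables. printf repeats its format once per argument, so seq supplies
# 100 dummy arguments and %.0s consumes each without printing it.
NVARS=100
L_ARG=$(printf '0,%.0s' $(seq 1 ${NVARS}))    # "0,0,...,0,"
L_ARG=${L_ARG%,}                              # strip the trailing comma
U_ARG=$(printf '0.1,%.0s' $(seq 1 ${NVARS}))
U_ARG=${U_ARG%,}
echo "-l ${L_ARG} -u ${U_ARG}"
```

These strings could then be substituted into the Borg command as `-l ${L_ARG} -u ${U_ARG}`.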

Optimization with the MOEAFramework allowed me to submit jobs for all remaining algorithms and seeds with one script, as shown below. In my study, I actually submitted the epsilon-dominance algorithms (with the -e flag) and the point-dominance algorithms (without the -e flag) separately; however, it is my understanding that it would have been fine to submit jobs for all algorithms with the epsilon flag, especially since I converted all point-dominance approximations of the Pareto front to epsilon dominance when generating reference sets.

```bash
#!/bin/bash
NSEEDS=50
PROBLEM=myLake4ObjStoch
ALGORITHMS=(MOEAD GDE3 NSGAII eNSGAII eMOEA)
SEEDS=$(seq 1 ${NSEEDS})
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"
set -e

for ALGORITHM in ${ALGORITHMS[@]}
do
    for SEED in ${SEEDS}
    do
        NAME=${ALGORITHM}_${PROBLEM}_${SEED}
        PBS="\
#PBS -N ${NAME}\n\
#PBS -l nodes=1\n\
#PBS -l walltime=96:00:00\n\
#PBS -o output/${NAME}\n\
#PBS -e error/${NAME}\n\
cd \$PBS_O_WORKDIR\n\
java ${JAVA_ARGS} org.moeaframework.analysis.sensitivity.Evaluator \
-p ${ALGORITHM}_params.txt -i ${ALGORITHM}_Latin -b ${PROBLEM} \
-a ${ALGORITHM} -e 0.01,0.01,0.0001,0.0001 -s ${SEED} -o ./sets/${NAME}.set"
        echo -e $PBS | qsub
    done
done
```
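Because I checked output between steps, a quick sanity check along these lines (my own helper, not one of the study scripts) can confirm that every algorithm/seed combination produced a .set file before moving on to the merging in Step 3:

```bash
#!/bin/bash
# Count how many expected .set files are missing across all
# algorithm/seed combinations; a nonzero count means some jobs
# failed or have not finished.
NSEEDS=50
PROBLEM=myLake4ObjStoch
ALGORITHMS=(Borg MOEAD eMOEA NSGAII eNSGAII GDE3)
missing=0
for ALGORITHM in "${ALGORITHMS[@]}"
do
    for SEED in $(seq 1 ${NSEEDS})
    do
        if [ ! -f "./sets/${ALGORITHM}_${PROBLEM}_${SEED}.set" ]; then
            missing=$((missing + 1))
        fi
    done
done
echo "${missing} missing .set files"
```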

**Step 3: Generate combined approximation set for each algorithm and global reference set** Next, I generated a reference set for each algorithm's performance. This was useful for two reasons: it made it easier to generate the global reference set across all algorithms, seeds, and parameterizations, and it allowed me to calculate each algorithm's percent contribution to the global reference set. Below is the script for the algorithm reference sets:

```bash
#!/bin/bash
NSAMPLES=50
NSEEDS=50
METHOD=Latin
PROBLEM=myLake4ObjStoch
ALGORITHMS=(NSGAII GDE3 eNSGAII MOEAD eMOEA Borg)
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"
set -e

# Generate the combined approximation sets for each algorithm
for ALGORITHM in ${ALGORITHMS[@]}
do
    echo -n "Generating combined approximation set for ${ALGORITHM}..."
    java ${JAVA_ARGS} \
        org.moeaframework.analysis.sensitivity.ResultFileMerger \
        -b ${PROBLEM} -e 0.01,0.01,0.0001,0.0001 \
        -o ./SOW4/reference/${ALGORITHM}_${PROBLEM}.combined \
        ./SOW4/sets/${ALGORITHM}_${PROBLEM}_*.set
    echo "done."
done
```

In the same file, I added the following lines to generate the global reference set while running the same script.

```bash
# Generate the reference set from all combined approximation sets
echo -n "Generating reference set..."
java ${JAVA_ARGS} org.moeaframework.util.ReferenceSetMerger \
    -e 0.01,0.01,0.0001,0.0001 -o ./SOW4/reference/${PROBLEM}.reference \
    ./SOW4/reference/*_${PROBLEM}.combined > /dev/null
echo "done."
```
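To illustrate the percent-contribution idea, one simple approach is to count how many of an algorithm's combined-set solutions also appear in the global reference set. The sketch below is self-contained; the two files it creates are dummies standing in for a real .combined file and the global .reference file produced above.

```bash
#!/bin/bash
# Dummy objective files for illustration only: one algorithm's combined
# set and the global reference set (real files come from the mergers).
printf '0.1 0.2\n0.3 0.4\n0.5 0.6\n' > Borg_combined.txt
printf '0.1 0.2\n0.5 0.6\n0.7 0.8\n' > global_reference.txt

total=$(wc -l < global_reference.txt)
# -F: fixed strings, -x: whole-line match, -f: read patterns from file
hits=$(grep -Fxf global_reference.txt Borg_combined.txt | wc -l)
echo "Borg contributed ${hits} of ${total} reference solutions"
```

Dividing `hits` by the size of the global reference set gives the percent contribution for that algorithm.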

If one wants to keep the decision variables associated with the reference-set solutions, org.moeaframework.analysis.sensitivity.ResultFileMerger can instead be run on all of the pertinent .set files.

A final option for reference sets is to generate local reference sets for each parameterization of each algorithm. This was done with the following script:

```bash
#!/bin/bash
NSEEDS=50
ALGORITHMS=(GDE3 eMOEA Borg NSGAII eNSGAII MOEAD)
PROBLEM=myLake4ObjStoch
SEEDS=$(seq 1 ${NSEEDS})

# Merge the seeds for each parameterization of each algorithm
for ALGORITHM in ${ALGORITHMS[@]}
do
    java -cp MOEAFramework-2.1-Demo.jar \
        org.moeaframework.analysis.sensitivity.ResultFileSeedMerger \
        -d 4 -e 0.01,0.01,0.0001,0.0001 \
        --output ./SOW4/${ALGORITHM}_${PROBLEM}.reference \
        ./SOW4/objs/${ALGORITHM}_${PROBLEM}*.obj
done
```

Part 2 of this post walks through my calculation of metrics.
