# Algorithm Diagnostics Walkthrough using the Lake Problem as an example (Part 2 of 3: Calculate Metrics for Analysis)

*Tori Ward*

This post continues from Part 1, which provided examples of using the MOEAFramework to generate Pareto approximate fronts for a comparative diagnostic study.

Once all of the approximate fronts and their respective reference sets have been generated, metrics can be calculated within the MOEAFramework. I calculated metrics for both my local reference sets and all of my individual approximations of the Pareto front. The metrics for the individual approximations were then averaged for each parameterization across all seeds to estimate the expected performance of a single seed.

## Calculate Metrics

### Local Reference Set Metrics

```bash
#!/bin/bash

NSAMPLES=50
NSEEDS=50
METHOD=Latin
PROBLEM=myLake4ObjStoch
ALGORITHMS=( NSGAII GDE3 eNSGAII MOEAD eMOEA Borg )

SEEDS=$(seq 1 ${NSEEDS})
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"
set -e

for ALGORITHM in "${ALGORITHMS[@]}"
do
    NAME=${ALGORITHM}_${PROBLEM}
    # Build a PBS job script as a string and pipe it to qsub.
    # \$PBS_O_WORKDIR is escaped so it expands inside the job, not here.
    PBS="\
#PBS -N ${NAME}\n\
#PBS -l nodes=1\n\
#PBS -l walltime=96:00:00\n\
#PBS -o output/${NAME}\n\
#PBS -e error/${NAME}\n\
cd \$PBS_O_WORKDIR\n\
java ${JAVA_ARGS} \
org.moeaframework.analysis.sensitivity.ResultFileEvaluator \
-b ${PROBLEM} -i ./SOW4/${ALGORITHM}_${PROBLEM}.reference \
-r ./SOW4/reference/${PROBLEM}.reference -o ./SOW4/${ALGORITHM}_${PROBLEM}.localref.metrics"
    echo -e "$PBS" | qsub
done
```

### Individual Set Metrics

```bash
#!/bin/bash

NSAMPLES=50
NSEEDS=50
METHOD=Latin
PROBLEM=myLake4ObjStoch
ALGORITHMS=( NSGAII GDE3 eNSGAII MOEAD eMOEA Borg )

SEEDS=$(seq 1 ${NSEEDS})
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"
set -e

for ALGORITHM in "${ALGORITHMS[@]}"
do
    for SEED in ${SEEDS}
    do
        NAME=${ALGORITHM}_${PROBLEM}_${SEED}
        # One PBS job per (algorithm, seed) pair, evaluated against the
        # overall reference set.
        PBS="\
#PBS -N ${NAME}\n\
#PBS -l nodes=1\n\
#PBS -l walltime=96:00:00\n\
#PBS -o output/${NAME}\n\
#PBS -e error/${NAME}\n\
cd \$PBS_O_WORKDIR\n\
java ${JAVA_ARGS} \
org.moeaframework.analysis.sensitivity.ResultFileEvaluator \
-b ${PROBLEM} -i ./SOW4/sets/${ALGORITHM}_${PROBLEM}_${SEED}.set \
-r ./SOW4/reference/${PROBLEM}.reference -o ./SOW4/metrics/${ALGORITHM}_${PROBLEM}_${SEED}.metrics"
        echo -e "$PBS" | qsub
    done
done
```
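Before averaging, it can be worth confirming that every (algorithm, seed) combination actually produced a non-empty metrics file, since a job that failed or timed out would otherwise silently skew the averages. Below is a small optional check I did not include in the original workflow; the paths mirror the scripts above.

```shell
#!/bin/bash
# Optional sanity check (not part of the original workflow): count missing or
# empty metrics files for each algorithm before averaging across seeds.
NSEEDS=50
PROBLEM=myLake4ObjStoch
ALGORITHMS=( NSGAII GDE3 eNSGAII MOEAD eMOEA Borg )

for ALGORITHM in "${ALGORITHMS[@]}"
do
    MISSING=0
    for SEED in $(seq 1 ${NSEEDS})
    do
        # -s tests that the file exists and is non-empty
        [ -s "./SOW4/metrics/${ALGORITHM}_${PROBLEM}_${SEED}.metrics" ] || MISSING=$((MISSING + 1))
    done
    echo "${ALGORITHM}: ${MISSING} of ${NSEEDS} metrics files missing or empty"
done
```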

### Average Individual Set Metrics Across Seeds for Each Parameterization

```bash
#!/bin/bash
#PBS -l nodes=1:ppn=1
#PBS -N moeaevaluations
#PBS -j oe
#PBS -l walltime=96:00:00

cd "$PBS_O_WORKDIR"

NSAMPLES=50
NSEEDS=50
METHOD=Latin
PROBLEM=myLake4ObjStoch
ALGORITHMS=( NSGAII GDE3 eNSGAII MOEAD eMOEA Borg )

SEEDS=$(seq 1 ${NSEEDS})
JAVA_ARGS="-cp MOEAFramework-2.1-Demo.jar"
set -e

# Average the performance metrics across all seeds
for ALGORITHM in "${ALGORITHMS[@]}"
do
    echo -n "Averaging performance metrics for ${ALGORITHM}..."
    java ${JAVA_ARGS} \
        org.moeaframework.analysis.sensitivity.SimpleStatistics \
        -m average --ignore -o ./metrics/${ALGORITHM}_${PROBLEM}.average ./metrics/${ALGORITHM}_${PROBLEM}_*.metrics
    echo "done."
done
```
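To make the averaging step concrete, here is a toy standalone sketch of what an "average across seeds" amounts to, assuming each seed's metrics file is a whitespace-delimited matrix with matching rows and columns (the file names and values below are invented for the example; this is an illustration, not MOEAFramework itself).

```shell
#!/bin/bash
# Toy illustration: average matching rows and columns across several
# whitespace-delimited metrics files. File names and values are made up.
mkdir -p demo_metrics
printf '0.8 0.1\n0.6 0.3\n' > demo_metrics/seed_1.metrics
printf '0.6 0.3\n0.4 0.1\n' > demo_metrics/seed_2.metrics

# Accumulate each (row, column) entry across files, then divide by the
# number of files to get the per-row mean.
awk '{
    for (c = 1; c <= NF; c++) sum[FNR, c] += $c
    if (FNR > rows) rows = FNR
    cols = NF
} END {
    nfiles = ARGC - 1
    for (r = 1; r <= rows; r++) {
        for (c = 1; c <= cols; c++)
            printf "%s%.3f", (c > 1 ? " " : ""), sum[r, c] / nfiles
        print ""
    }
}' demo_metrics/seed_*.metrics
# Prints:
# 0.700 0.200
# 0.500 0.200
```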

At the end of this script, I also calculated the set contribution I mentioned earlier by including the following lines.

```bash
# Calculate set contribution
echo ""
echo "Set contribution:"
java ${JAVA_ARGS} org.moeaframework.analysis.sensitivity.SetContribution \
    -e 0.01,0.01,0.001,0.01 -r ./reference/${PROBLEM}.reference ./reference/*_${PROBLEM}.combined
```

Part 3 covers using the MOEAFramework for further analysis of these metrics.