In parts 1 and 2 of this tutorial we evaluated the 3-objective DTLZ2 problem. We generated a combined reference set across all runs of Borg and NSGAII and calculated the average metrics for each parameter sample. Now we are ready to analyze our results.

**1) Generating control maps**

Control maps provide insight into algorithmic performance by projecting hypervolume values across different combinations of population size and number of function evaluations (NFE).

Required files:

☑ NSGAII_DTLZ2_3.average and/or NSGAII_DTLZ2_3_LHS.metrics

☑ Borg_DTLZ2_3.average and/or Borg_DTLZ2_3_LHS.metrics

☑ NSGAII_Latin

☑ Borg_Latin

The difference between the .average and the .metrics files is that the former contains the metrics averaged across the 15 random-seed trials for each parameter sample, whereas the latter contains the metrics obtained from the reference set of each parameter sample. You can use these files interchangeably, depending on whether you want to look at reference set performance or average performance.
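As an aside, if you ever need to rebuild an .average file yourself, the computation is just an element-wise average of the per-seed metric tables. The short Python sketch below illustrates the idea on hypothetical toy data (the values and shapes are made up; the real files have one row per parameter sample and one column per metric):

```python
def average_metrics(seed_tables):
    """Element-wise average of several equally-shaped metric tables.

    Each table is a list of rows; each row holds the metric values for
    one parameter sample. Returns one table of the same shape."""
    n_seeds = len(seed_tables)
    averaged = []
    for rows in zip(*seed_tables):  # rows for the same parameter sample
        averaged.append([sum(vals) / n_seeds for vals in zip(*rows)])
    return averaged

# two hypothetical seeds, two parameter samples, two metrics each
seed_a = [[0.40, 0.02], [0.30, 0.05]]
seed_b = [[0.44, 0.04], [0.34, 0.07]]
print(average_metrics([seed_a, seed_b]))
```

This returns the per-sample, per-metric averages across the seeds, which is exactly what the .average file stores for the 15 trials.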

The following MATLAB script generates control maps from your Latin hypercube samples and .metrics or .average files. Here I used the NSGAII average metrics. You can also generate control maps for Borg by changing the file names in the two `load` statements.

```matlab
% Hypervolume control maps: project the hypervolume value
% relative to population size and NFE
clear all
clc

parameters = load('NSGAII_Latin');
metrics    = load('NSGAII_DTLZ2_3.average');

steps = 3; % grid resolution: points are averaged within each cell
totals = zeros(steps+1, steps+1); % renamed from "sum" to avoid shadowing the built-in
count  = zeros(steps+1, steps+1);
entries = min(size(parameters, 1), size(metrics, 1)); % number of parameter samples

lbNFE = 10000;           ubNFE = 200000;
lbPopulationSize = 10;   ubPopulationSize = 1000;

% assign each sample to a cell of the grid and accumulate its hypervolume
for i = 1:entries
    index1 = round(steps * ((parameters(i, 1) - lbPopulationSize) / (ubPopulationSize - lbPopulationSize))) + 1;
    index2 = round(steps * ((parameters(i, 2) - lbNFE) / (ubNFE - lbNFE))) + 1;
    totals(index1, index2) = totals(index1, index2) + metrics(i, 1);
    count(index1, index2)  = count(index1, index2) + 1;
end

% average within each cell
Z = zeros(steps+1, steps+1);
for i = 1:steps+1
    for j = 1:steps+1
        if count(i, j) > 0
            Z(i, j) = totals(i, j) / count(i, j);
        end
    end
end
Z = Z';

% normalize by the reference set hypervolume
refSetHV = 0.46622538877930825;
Z = Z / refSetHV;

% convert the grid indices back into real population size and NFE values
X = zeros(1, steps+1);
Y = zeros(1, steps+1);
for i = 1:steps+1
    X(1, i) = (ubPopulationSize - lbPopulationSize) * ((i-1)/steps) + lbPopulationSize;
    Y(1, i) = (ubNFE - lbNFE) * ((i-1)/steps) + lbNFE;
end

% generate the contour plot
hold on;
cmin = 0; cmax = 1;
[C, h] = contourf(X, Y, Z, 50000); % many levels give a smooth filled contour
caxis([cmin cmax])
set(h, 'LineColor', 'none');
xlabel('Population Size')
ylabel('NFE')
title('NSGAII') % change to match the algorithm whose files you loaded
% flip the colormap so that blue indicates best and values are normalized
cmap = colormap(flipud(jet));
hold on
```

The control map for NSGAII looks like this:

For Borg we obtain something like this:

The axes show the ranges of population size and NFE specified in our Latin hypercube sampling. The color scale depicts the normalized hypervolume obtained for each combination of these two parameters: dark blue means high performance, while dark red means poor performance. We can conclude that both algorithms have good controllability for this test case. A follow-up analysis should use the analytical reference set, as opposed to the set generated by combining our two tested algorithms; now that you know the procedure, you can carry out that analysis on your own.

**2) Generating attainment plots**

Required files:

☑ NSGAII_DTLZ2_3.average and/or NSGAII_DTLZ2_3_LHS.metrics

☑ Borg_DTLZ2_3.average and/or Borg_DTLZ2_3_LHS.metrics

Attainment plots show how effectively an algorithm approximates the best known Pareto front, and how likely it is to attain high levels of performance. The following MATLAB code produces attainment plots of generational distance, epsilon indicator, and hypervolume for NSGAII. You can substitute Borg for NSGAII in the `load` statement to obtain Borg's attainment plots.

```matlab
clc
clear all

m = load('NSGAII_DTLZ2_3.average');

% Best metric values across all parameter samples
BestGD       = 1 - min(m(:, 2)); % best generational distance (normalized)
Bestepsilon  = 1 - min(m(:, 5)); % best epsilon indicator (normalized)
Besthypervol = max(m(:, 1));     % best hypervolume

% Normalize the best hypervolume by the reference set hypervolume
RFHV = 0.48;
NormalizedHV = Besthypervol / RFHV;

k = 1:-0.01:0.0; % attainment thresholds (fractions of the best value)
count = 0; count1 = 0; count2 = 0;
algorithms = 1;
percent_attainment   = zeros(length(algorithms), length(k));
percent_attainmentgd = zeros(length(algorithms), length(k));
percent_attainmentei = zeros(length(algorithms), length(k));

% for each threshold, count the fraction of samples meeting or exceeding it
for l = 1:length(algorithms)
    for n = 1:length(k)
        for i = 1:length(m(:, 2))
            if (m(i, 1) / RFHV) >= k(n)
                count = count + 1;
                percent_attainment(l, n) = count / length(m(:, 1));
            end
            if (1 - m(i, 2)) >= k(n)
                count1 = count1 + 1;
                percent_attainmentgd(l, n) = count1 / length(m(:, 2));
            end
            if (1 - m(i, 5)) >= k(n)
                count2 = count2 + 1;
                percent_attainmentei(l, n) = count2 / length(m(:, 2));
            end
        end
        count = 0; count1 = 0; count2 = 0;
    end
end

percent_attainment = {percent_attainment', percent_attainmentgd', percent_attainmentei'};
Best_metrics = [NormalizedHV, BestGD, Bestepsilon];
Names = {'Hypervolume' 'Generational Distance' 'Epsilon Indicator'};

for i = 1:length(percent_attainment)
    subplot(1, 3, i)
    imagesc(1, k, percent_attainment{i})
    set(gca, 'ydir', 'normal')
    colormap(flipud(gray))
    hold on
    scatter(1, Best_metrics(i), 80, 'k', 'filled') % best overall metric value
    set(gca, 'XTick', [])
    xlabel(Names(i))
    ylabel('Probability of Best Metric Value');
    if i == 2
        title('Attainment probabilities for NSGAII, DTLZ2\_3')
        legend('Best metric value across all runs')
    end
end
```

You should obtain something like this for NSGAII:

and for Borg:

The previous plots give us a sense of how reliable these algorithms are. The black circle on top of each bar is the best overall metric value for a single seed run. The gray shading is the probability of attaining a given fraction of the best metric value; these fractions are shown on the vertical axis. Ideal performance would appear as a completely black bar with a circle at 1.

**3) Reference Set Contribution**

Required files:

☑ DTLZ2_3_combined.pf

☑ NSGAII_DTLZ2_3.reference

☑ Borg_DTLZ2_3.reference

☑ MOEAFramework-2.0-Executable.jar

From your terminal, navigate to your working directory and make sure that you have the required files. Type the following command to obtain the reference set contribution.

```
$ java -cp MOEAFramework-2.0-Executable.jar org.moeaframework.analysis.sensitivity.SetContribution --reference DTLZ2_3_combined.pf NSGAII_DTLZ2_3.reference Borg_DTLZ2_3.reference

NSGAII_DTLZ2_3.reference 0.1696113074204947
Borg_DTLZ2_3.reference 0.8303886925795053
```

Of the solutions in the combined reference set, Borg contributed ≈83% and NSGAII contributed ≈17%. This analysis gets more interesting when we have more algorithms to compare. In upcoming tutorials we'll test more MOEAs and look at ways to visualize their Pareto front approximations and their reference set contributions.
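For intuition, the contribution computation itself is simple: count how many solutions in the combined reference set also appear in each algorithm's individual reference set. The Python sketch below illustrates the idea on a hypothetical toy front (the points and sets are made up; note that the actual SetContribution utility can additionally match solutions using epsilon tolerances):

```python
def set_contribution(combined, candidate, tol=1e-9):
    """Fraction of the combined reference set that also appears in a
    candidate approximation set. A simplified sketch of what
    SetContribution reports, using exact matching within a tolerance."""
    def close(a, b):
        return all(abs(x - y) <= tol for x, y in zip(a, b))
    hits = sum(1 for s in combined if any(close(s, c) for c in candidate))
    return hits / len(combined)

# toy combined front of four points, split between two hypothetical sets
combined = [(0.0, 1.0), (0.3, 0.7), (0.7, 0.3), (1.0, 0.0)]
set_a = [(0.0, 1.0)]                          # contributed 1 of 4 points
set_b = [(0.3, 0.7), (0.7, 0.3), (1.0, 0.0)]  # contributed 3 of 4 points
print(set_contribution(combined, set_a))  # 0.25
print(set_contribution(combined, set_b))  # 0.75
```

The two fractions sum to 1 here because every combined-set point came from exactly one of the two sets; with overlapping sets the contributions can sum to more than 1.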

Go back to ← MOEA diagnostics for a simple test case (Part 2/3)
