# You have a problem integrated into your MOEA, now what?

A lot of the material here has covered the steps you’ll need to “get started” using MOEAs — basic programming, discussions of what an MOEA is and what it does, and so forth.

The purpose of this post is to briefly discuss some additional questions and issues you should think about as you’re generating tradeoffs for your problem.  The discussion focuses on how to do some analysis “by hand” by writing code around the public version of epsilon-NSGAII (or another algorithm).  While the new MOEAFramework coming out of our group will automate some of these processes, it’s still a good idea to know how they work and what you need to do when you’re running a study.

# Random Seed Analysis

The initial population and operators (crossover, mutation) depend on random number generators.  Therefore, there’s a chance you could “get lucky” and generate really good individuals in the first generation.  Or it just so happens that the crossover and mutation cause you to find great solutions really quickly.  How can you ensure that your algorithm results are not just dependent on “luck”?  Replicate the experiment!

Most random number generators allow you to “seed” them: given the same initial seed, they produce the same predictable but “random”-looking sequence of numbers.  This is great for testing, since a repeated model run will give exactly the same results every time, provided the random seed is the same and the random number generator controls all stochasticity in the model.

# Scripting to support random seed analysis, experimentation

Oh great, you’ve told me to replicate my entire optimization run!  How do I do this efficiently without going crazy over so many output files?  I highly recommend writing scripts in your favorite language (bash, C++, python, etc.) to automate the process of copying files for random seed analysis or parameterization.  Some frameworks will do this for you automatically.  But if they don’t, here’s pseudocode for what you need to do:

• Input number of seeds or number of algorithm runs etc.  Say we’re doing 50 replicates of the same MOEA test
• Create 50 folders in your job folder.  In C++, loop over i = 1 to 50 with something like: command_string = "mkdir " + std::to_string(i); system(command_string.c_str());
• Now, copy the files you need to each of these folders.  Again, do this in a loop.
• If you need to, modify the input files for each run in each folder.  For example, you can replace the random seed number by your loop iterator (i) in each file.
• Finally, submit each job!  One note about using the system command in C++ in a Linux environment: system actually spawns a new shell every time you call it, so if you need to use “cd”, place the cd in the same string as the command you need to run.  More simply, here’s an example:
This will not work:

```cpp
system("cd myFolder");
system("qsub myScript.sh");
```

because after the first call returns, the shell that ran the “cd” is gone.

This, however, will work:

```cpp
system("cd myFolder; qsub myScript.sh");
```
If you do 50 random seed replicates of the algorithm, you’ll have 50 approximations to the Pareto set as an output of the process.  You can sort the results together to get a reference set for your problem.  More on that in a later post.

# How long do I run the algorithm?

For an analytical test problem, it’s easy to see “when you’re done” because you know the optimal value or tradeoff surface.  Real-world problems often don’t have set stopping points, though, so we need to think creatively about how long to run the solver.  Animations using AeroVis or other software can help with this.  Basically, look at each generation, or at generations sampled at some interval, to see search “progress” as a function of time.  Does the algorithm stall?  Or do you quickly find better performance than you’ve ever seen?  Is the progress meaningful?  For example, if you’re solving for cost and your budget is 100 units, but the difference in costs across the results is only, say, 3, you may not find that performance improvement significant.

# Should I change my formulation?

In Kasprzyk et al. [2012], we extended work in operations research showing that problem formulations constantly change as we solve problems and learn more about our systems.  In a very basic way, solving a problem with an initial formulation helps you figure out which objectives are important and which ones you may want to remove.  Some questions:

• Do all objectives conflict with one another?  If there is a “collapse” in dimensions, you can satisfy two or more objectives with the same point.
• But, that point may perform really poorly with respect to another objective.  You may not necessarily want to get rid of objectives that collapse, but you could use this as a lesson for which objectives are most important in your system.
• Should you add objectives to increase fidelity in the system?  For example, if you use capital cost, you may also want to consider operating cost, or some other measure of efficiency for the system.

# Parallelizing the algorithm

To use an example we’re working on currently: there is an infrastructure model that takes about 7 seconds to run per evaluation.  This will severely limit our ability to do long searches unless we parallelize the function evaluations.

Using our example as a point of discussion, some thoughts:

• We’re using a very loosely coupled simulation model within the algorithm.  The algorithm generates an input file, spawns a system shell that runs an executable model, and then reads objectives from the executable at each function evaluation.
• Parallelization strategies often hinge on whether memory is shared between worker nodes.  MPI (the Message Passing Interface) is typically used for scientific codes in which each worker node has its own memory and nodes exchange data by passing messages.
• But on most parallel systems the filesystem is shared.  So it’s difficult to lock the ASCII input files, since any processor could theoretically access them at any time.
• A simple way to handle this is to give each processor its own “scratch” space in which it can create new input files, run its own copy of the executable, let the executable create output files, and then read those output files back.  The traditional MPI evolutionary algorithm can still run; the only difference is that each worker node has its own copy of the data to work from.
• We can create folders with the data in it, and each folder is then named with the index of the worker processor that will be using the data.