Transitioning from R to Python with Spyder

I am currently in the midst of transitioning from R, my preferred programming language, over to Python. While the syntax of the two languages is similar, I was quickly overwhelmed by the choices of Integrated Development Environments (IDEs) and text editors for Python. Choosing an IDE is a deeply personal choice, and the one you choose depends on your skill level and programming needs. I have tried out many different IDEs and, for the purpose of making a smooth, intuitive transition from R to Python, the best one for me was Spyder (Scientific Python Development Environment). In this blog post, I will give an overview of the Spyder environment and step through some of its functionalities.

Installation

The easiest way to install Spyder is through a Python Scientific Distribution found here. There are three options, but I chose to install Anaconda which gives you the core Python language, over 100 main Python libraries, and Spyder. It is an incredibly efficient way to get everything you need in just one download and works for both Windows and Mac. Once this is installed, you can open Spyder immediately.

Environment

Figure 1: Spyder Environment

The first aspect that I like about Spyder is how similar it looks to RStudio and Matlab, as shown in Figure 1. This made the transition very easy for me. As shown in Figure 2, the Spyder environment is composed of a collection of panes, which can be repositioned by dragging if a different layout is more intuitive to the user. To see which panes are open, click View -> Panes. The most useful panes will already be open by default. You can choose to keep either the regular console or the IPython console; this is a matter of preference, and I chose the regular console.

Figure 2: Choosing Panes

At the top of the screen is your working directory, which, by default, is set to the folder that contains Anaconda. You can change it to your preferred location on your computer by clicking the folder icon next to the drop-down arrow.

Editor

The leftmost pane is the editor which is where code can be written. The Spyder editor has features such as syntax coloring and real-time code analysis. By default, a temporary script, temp.py, will be open. Go ahead and save this in your current directory. Make sure that the file shown in the gray bar matches your directory (shown in Figure 3).

Figure 3: Setting a Directory

Let’s write a simple script to test out the environment (shown in Figure 4).
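The script in the figure is not reproduced here, but a minimal stand-in along these lines (a few variables and a print statement; numpy ships with Anaconda) is enough to exercise the editor, the console, and, later, the variable explorer:

# A small test script: create a few variables to inspect later
import numpy as np

x = np.arange(10)      # integers 0 through 9
y = x ** 2             # element-wise square
total = int(y.sum())   # a plain scalar for the variable explorer

print("Sum of squares: " + str(total))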

Figure 4: Sample Script and Run Settings

Click the green arrow at the top of the screen to run the script. A box will pop up with the Run Settings. Make sure the working directory is correct and click “Run.” If you just want to run a certain section of the script, highlight that section and click the second green arrow (the one with the blue and orange box).

Console

The results from the script will appear in the console, which is the bottom right pane in my layout. You can also execute commands directly in this console.

Figure 5: Console

Object Inspector/Variable Explorer/File Explorer

The last major aspect of the environment is the top right pane, which is comprised of three tabs. The first tab is the object inspector, which is analogous to RStudio’s “Help” tab. You can search for information on libraries, functions, modules, and classes.

Figure 6: Object Inspector

The second tab is the variable explorer, which is the same as RStudio’s “Environment” tab. This tab conveniently shows the type, size, and value of your variables. The results from our test script are shown in Figure 7.

Figure 7: Variable Explorer

Finally, the last tab is a file explorer which lists all of the files and folders in your current directory.

Debugging with Spyder

The Python debugger, pdb, is partly integrated into Spyder. The debugging tools are the blue buttons adjacent to the green “run” buttons. By double-clicking specific lines in the code, the user can set breakpoints where the debugger will stop, and output from the debugger is displayed in the console.

Figure 8: Debugging Tools

 

Those are the main components of Spyder! As you can see, it is a fairly uncomplicated and intuitive IDE. Hopefully this overview will make the transition from R or Matlab to Python much easier. Go forth and conquer!

 

Jupyter Notebook: A “Hello World” Overview

Jupyter Notebook: Overview

When first learning Python, I was introduced to Jupyter Notebook as an extremely effective IDE for group-learning situations. I’ve since used this browser-based interactive shell for homework assignments, data exploration and visualization, and data processing. The functionality of Jupyter Notebook extends well past simple development and showcasing of code as it can be used with almost any Python library (except for animated figures right before a deadline). Jupyter Notebook is my go-to tool when I am writing code on the go.

As a Jupyter Notebook martyr, I must point out that Jupyter Notebooks can be used for almost anything imaginable. It is great for code-oriented presentations that allow for running live code, for timing lines of code and other magic functions, or even just for sifting through data for processing and visualization. Furthermore, if documented properly, a Jupyter Notebook can be used as an easy guide for stepping people through lessons. For example, check out the structure of this standalone tutorial for NumPy—download and open it in Jupyter Notebook for the full experience. In a classroom setting, Jupyter Notebook can utilize nbgrader to create quizzes and assignments that can be automatically graded. Alas, I am still trying to figure out how to make it iron my shirt.
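As a quick illustration of the timing magics mentioned above, a cell along these lines (my own example, not taken from the presentation below) reports how long a statement takes to run:

# Illustrative cell: %timeit runs the statement many times and reports the average execution time
import numpy as np
%timeit np.sort(np.random.rand(10000))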


A Sample Jupyter Notebook Presentation (credit: Matthew Speck)

One feature of Jupyter Notebook is that it can be run as a web application in a client-server structure, allowing users to interact with it remotely via ssh or http. In one example, shown here, you can run Julia on the website even if it is not installed locally. Furthermore, you can use the Jupyter Notebook Viewer to share notebooks online. However, I have not delved into these areas yet.

For folks familiar with Python libraries through the years, Jupyter Notebook evolved from IPython and has overtaken its niche. Notably, it can be used with over 40 languages, including Python, R, and C++ (the original intent was to create an interface for Julia, Python, and R, hence Ju-Pyt-R). However, I have only used it for Python, and each notebook kernel runs a single native language (although untested workarounds exist).

Installing and Opening the Jupyter Notebook Dashboard

While Jupyter Notebook comes standard with Anaconda, you can easily install it via pip or by checking out this link.

To open and run Jupyter Notebook, navigate in your terminal (e.g. Command Prompt in Windows, Terminal in macOS) to the directory you want to work out of (in this case, I created a directory in my username folder titled ‘Example’) and run the command ‘jupyter notebook’.


Opening the Command Prompt

Once run, a few startup log lines appear in your terminal, but they are relatively unimportant. The most important part is being patient and waiting for the dashboard to open in your default web browser—all mainstream web browsers are supported, but I personally use Chrome.

If at any time you want to exit Jupyter Notebook, press Ctrl + C twice in your terminal to immediately shut down all running kernels (Windows and MacOS). Note that more than one instance of Jupyter Notebook can be running by utilizing multiple terminals.

Creating a Notebook

Once Jupyter Notebook opens in your browser, you will encounter the dashboard. All files and subdirectories will be visible on this page and can generally be opened or examined.


Initial Notebook Dashboard Without Any Files

If you want to create a shiny new Notebook to work in, click on ‘New’ and select a new Notebook in the language of your choice (shown below). In this case, only Python 3 has been installed and is the only option available. Find other language kernels here.


Opening a New Notebook

Basic Operations in Jupyter Notebook

Once opened, you will find an untitled, empty notebook. To edit the title, simply left-click on ‘Untitled’ and enter your name of choice.


Blank Jupyter Notebook

Writing code here is the same as writing a regular Python script in any text editor, except that you can divide your code into separate cells that are run independently instead of rerunning the entire script. Note, however, that a cell that imports a library must be run before any cell that uses that library.

To run code, simply press Shift + Enter while the caret—the blinking text cursor—is in the cell.
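As a minimal sketch (not the exact cells in the screenshot below), a first notebook might contain two cells like these, run one after the other with Shift + Enter:

# Cell 1: imports must be run before any cell that uses them
import numpy as np

# Cell 2: a classic hello world plus a quick computation
print("Hello, world!")
print(np.mean([1, 2, 3]))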


Jupyter Notebook with Basic Operations

After running any code in a notebook, the file is automatically backed up in a hidden folder (.ipynb_checkpoints) in your working directory. Note that you cannot open the notebook (an IPYNB file) directly by double-clicking on it. Rather, you must reopen Jupyter Notebook and access it through the dashboard.


Directory where Sample Jupyter Notebook Has Been Running

As shown below, you can easily generate and graph data inline. This is very useful when you want to visualize data and then tweak the graphic (e.g. changing labels or colors). By default these graphics are not rendered at the same DPI as a saved image or a GUI window, but this can be changed by modifying matplotlib’s rcParams.
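A sketch of such a cell is below; the histogram in the figure was not generated with this exact code, and the rcParams line is just one way to raise the inline resolution:

# Illustrative cell: draw an inline histogram and bump the default figure resolution
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

plt.rcParams['figure.dpi'] = 120   # higher DPI for inline rendering

data = np.random.normal(size=1000)
plt.hist(data, bins=30)
plt.xlabel('value')
plt.ylabel('count')
plt.show()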


Example Histogram in Jupyter Notebook

Conclusion

At this point, there are plenty of directions you can proceed. I would highly suggest exploring some of the widgets available which include interesting interactive visualizations. I plan to explore further applications in future posts, so please feel free to give me a yell if you have any ideas.

Directed search with the Exploratory Modeling workbench

This is the third blog in a series showcasing the functionality of the Exploratory Modeling workbench. Exploratory modeling entails investigating the way in which uncertainty and/or policy levers map to outcomes. To investigate these mappings, we can use either sampling-based strategies (open exploration) or optimization-based strategies (directed search). In the first blog, I gave a general overview of the workbench and showed briefly how both investigation strategies can be carried out. In the second blog, I demonstrated the use of the workbench for open exploration in substantially more detail. In this third blog, I will demonstrate in more detail how to use the workbench for directed search. As in the previous two blog posts, I will use the DPS version of the lake problem.

For optimization, the workbench relies on platypus. You can easily install the latest version of platypus from GitHub using pip:

pip install git+https://github.com/Project-Platypus/Platypus.git

By default, the workbench will use epsilon-NSGAII, but all the other algorithms available within platypus can be used as well.

Within the workbench, optimization can be used in three ways:
* Search over decision levers for a reference scenario
* Robust search: search over decision levers for a set of scenarios
* Worst case discovery: search over uncertainties for a reference policy

Searching over decision levers or over uncertainties relies on the direction (minimize or maximize) specified for each outcome of interest defined on the model. Only ScalarOutcome objects can be used for optimization.
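As a rough sketch (using two of the lake problem outcomes from the earlier blogs; the full model defines four), the directions are attached to the outcomes themselves:

from ema_workbench import ScalarOutcome

# each outcome of interest carries its own optimization direction
model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE),
                  ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE)]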

Search over levers

Directed search is most often used to search over the decision levers in order to find good candidate strategies. This is for example the first step in the Many Objective Robust Decision Making process. This is straightforward to do with the workbench using the optimize method.

from ema_workbench import MultiprocessingEvaluator, ema_logging

ema_logging.log_to_stderr(ema_logging.INFO)

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=10000, searchover='levers', 
                                 epsilons=[0.1,]*len(model.outcomes),
                                 population_size=50)

The result from optimize is a DataFrame with the decision variables and outcomes of interest. The latest version of the workbench comes with a pure Python implementation of the parallel coordinates plot, built on top of matplotlib and designed with the matplotlib and seaborn APIs in mind. We can use this to quickly visualize the optimization results.

import matplotlib.pyplot as plt
import seaborn as sns

from ema_workbench.analysis import parcoords

paraxes = parcoords.ParallelAxes(parcoords.get_limits(results), rot=0)
paraxes.plot(results, color=sns.color_palette()[0])
paraxes.invert_axis('max_P')
plt.show()

Note how we can flip an axis using the invert_axis method. This eases interpretation of the figure because the ideal solution in this case would be a straight line for the four outcomes of interest at the top of the figure.


Specifying constraints

In the previous example, we showed the most basic way of using the workbench to perform many-objective optimization. However, the workbench also offers support for constraints and for tracking convergence. Constraints are an attribute of the optimization problem, rather than an attribute of the model as in Rhodium. Thus, we pass a list of constraints to the optimize method. A constraint can apply to the model input parameters (both uncertainties and levers) and/or to the outcomes. A constraint is essentially a function that should return the distance from the feasibility threshold; the distance should be 0 if the constraint is met.

As a quick demonstration, let’s add a constraint on the maximum pollution. This constraint applies to the max_P outcome. The example below specifies that the maximum pollution should be below 1.

from ema_workbench import MultiprocessingEvaluator, ema_logging, Constraint

ema_logging.log_to_stderr(ema_logging.INFO)

constraints = [Constraint("max pollution", outcome_names="max_P",
                          function=lambda x:max(0, x-1))]

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=1000, searchover='levers', 
                                 epsilons=[0.1,]*len(model.outcomes),
                                 population_size=25, constraints=constraints)

Tracking convergence

To track convergence, we need to specify which metric(s) we want to use and pass these to the optimize method. At present the workbench comes with three options: hypervolume, epsilon progress, and a class that writes the archive at each iteration to a separate text file for later processing. If convergence metrics are specified, optimize will return both the results and the convergence information.

from ema_workbench import (MultiprocessingEvaluator, ema_logging,
                           Constraint)
from ema_workbench.em_framework.optimization import (HyperVolume,
                                                     EpsilonProgress)

ema_logging.log_to_stderr(ema_logging.INFO)

# because of the constraint on pollution, we can specify the 
# maximum easily
convergence_metrics = [HyperVolume(minimum=[0,0,0,0], maximum=[1,1,1,1]),
                       EpsilonProgress()]
constraints = [Constraint("max pollution", outcome_names="max_P",
                          function=lambda x:max(0, x-1))]

with MultiprocessingEvaluator(model) as evaluator:
    results_ref1, convergence1 = evaluator.optimize(nfe=25000, searchover='levers', 
                                    epsilons=[0.05,]*len(model.outcomes),
                                    convergence=convergence_metrics,
                                    constraints=constraints,
                                    population_size=100)

We can visualize the results using parcoords as before, while the convergence information is returned in a DataFrame, which makes it easy to plot as well.

fig, (ax1, ax2) = plt.subplots(ncols=2, sharex=True)
ax1.plot(convergence1.epsilon_progress)
ax1.set_xlabel('nr. of generations')
ax1.set_ylabel(r'$\epsilon$ progress')
ax2.plot(convergence1.hypervolume)
ax2.set_ylabel('hypervolume')
sns.despine()
plt.show()


Changing the reference scenario

Up till now, we have performed the optimization for an unspecified reference scenario. Since the lake model function takes default values for each of the deeply uncertain factors, these values have been implicitly assumed. It is however possible to explicitly pass a reference scenario that should be used instead. In this way, it is easy to apply the extended MORDM approach suggested by Watson and Kasprzyk (2017).

To see the effects of changing the reference scenario on the lever values found through optimization, while ensuring a fair comparison with the previous results, we use the same convergence metrics and constraints as in the previous optimization. Because the constraints are in essence only functions and do not retain any optimization-specific state, we can simply reuse them. The convergence metrics, in contrast, do retain state, so we need to re-instantiate them.

from ema_workbench import Scenario

reference = Scenario('reference', **dict(b=.43, q=3,mean=0.02, 
                                         stdev=0.004, delta=.94))
convergence_metrics = [HyperVolume(minimum=[0,0,0,0], maximum=[1,1,1,1]),
                       EpsilonProgress()]

with MultiprocessingEvaluator(model) as evaluator:
    results_ref2, convergence2 = evaluator.optimize(nfe=25000, searchover='levers', 
                                  epsilons=[0.05,]*len(model.outcomes),
                                  convergence=convergence_metrics,
                                  constraints=constraints,
                                  population_size=100, reference=reference)

Comparing results for different reference scenarios

To demonstrate the parcoords plotting functionality in some more detail, let’s combine the results from the optimizations for the two different reference scenarios and visualize them in the same plot. To do this, we need to first figure out the limits across both optimizations. Moreover, to get a better sense of which part of the decision space is being used, let’s set the limits for the decision levers on the basis of their specified ranges instead of inferring the limits from the optimization results.

import pandas as pd

columns = [lever.name for lever in model.levers]
columns += [outcome.name for outcome in model.outcomes]
limits = {lever.name: (lever.lower_bound, lever.upper_bound) for lever in
          model.levers}
limits = dict(**limits, **{outcome.name: (0, 1) for outcome in model.outcomes})
limits = pd.DataFrame.from_dict(limits)
# reorder the limits to match the column order produced by the optimization
limits = limits[columns]

paraxes = parcoords.ParallelAxes(limits, rot=0)
paraxes.plot(results_ref1, color=sns.color_palette()[0], label='ref1')
paraxes.plot(results_ref2, color=sns.color_palette()[1], label='ref2')
paraxes.legend()
paraxes.invert_axis('max_P')
plt.show()


Robust Search

The workbench also comes with support for many-objective robust optimization. In this case, each candidate solution is evaluated over a set of scenarios, and the robustness of its performance over this set is calculated. This requires specifying two new pieces of information:
* the robustness metrics
* the scenarios over which to evaluate the candidate solutions

The robustness metrics are simply a collection of ScalarOutcome objects. For each one, we have to specify which model outcome(s) it uses, as well as the actual robustness function. For demonstrative purposes, let’s use robustness functions based on descriptive statistics: we want to maximize the 10th percentile performance for reliability, inertia, and utility, while minimizing the 90th percentile performance for max_P.

We can specify our scenarios in various ways. The simplest would be to pass the number of scenarios to the robust_optimize method. In this case for each generation a new set of scenarios is used. This can create noise and instability in the optimization. A better option is to explicitly generate the scenarios first, and pass these to the method. In this way, the same set of scenarios is used for each generation.

If we want to specify a constraint, this can easily be done. Note, however, that in the case of robust optimization, the constraints apply to the robustness metrics instead of the model outcomes. They can of course still apply to the decision variables as well.

import functools

import numpy as np

from ema_workbench import (Constraint, MultiprocessingEvaluator,
                           ScalarOutcome, ema_logging)
from ema_workbench.em_framework.optimization import (HyperVolume,
                                                     EpsilonProgress)
from ema_workbench.em_framework.samplers import sample_uncertainties

ema_logging.log_to_stderr(ema_logging.INFO)

percentile10 = functools.partial(np.percentile, q=10)
percentile90 = functools.partial(np.percentile, q=90)

MAXIMIZE = ScalarOutcome.MAXIMIZE
MINIMIZE = ScalarOutcome.MINIMIZE
robustnes_functions = [ScalarOutcome('90th percentile max_p', kind=MINIMIZE, 
                             variable_name='max_P', function=percentile90),
                       ScalarOutcome('10th percentile reliability', kind=MAXIMIZE, 
                             variable_name='reliability', function=percentile10),
                       ScalarOutcome('10th percentile inertia', kind=MAXIMIZE, 
                             variable_name='inertia', function=percentile10),
                       ScalarOutcome('10th percentile utility', kind=MAXIMIZE, 
                             variable_name='utility', function=percentile10)]

def constraint(x):
    return max(0, percentile90(x)-10)

constraints = [Constraint("max pollution", 
                          outcome_names=['90th percentile max_p'],
                          function=constraint)]
convergence_metrics = [HyperVolume(minimum=[0,0,0,0], maximum=[10,1,1,1]),
                       EpsilonProgress()]
n_scenarios = 10
scenarios = sample_uncertainties(model, n_scenarios)

nfe = 10000

with MultiprocessingEvaluator(model) as evaluator:
    robust_results, convergence = evaluator.robust_optimize(robustnes_functions, 
                            scenarios, nfe=nfe, constraints=constraints,
                            epsilons=[0.05,]*len(robustnes_functions),
                            convergence=convergence_metrics,)

fig, (ax1, ax2) = plt.subplots(ncols=2)
ax1.plot(convergence.epsilon_progress.values)
ax1.set_xlabel('nr. of generations')
ax1.set_ylabel(r'$\epsilon$ progress')
ax2.plot(convergence.hypervolume)
ax2.set_ylabel('hypervolume')
sns.despine()
plt.show()


paraxes = parcoords.ParallelAxes(parcoords.get_limits(robust_results), rot=45)
paraxes.plot(robust_results)
paraxes.invert_axis('90th percentile max_p')
plt.show()


Search over uncertainties: worst case discovery

Up till now, we have focused on optimizing over the decision levers. However, the workbench can also be used for worst case discovery (Halim et al., 2016). In essence, the only change is to specify that we want to search over uncertainties instead of over levers. Constraints and convergence work just as in the previous examples.

When reusing the foregoing setup, however, we should flip the direction of optimization for the outcomes. We are no longer interested in finding the best possible outcomes; instead, we want to find the worst possible outcomes.

# change outcomes so direction is undesirable
minimize = ScalarOutcome.MINIMIZE
maximize = ScalarOutcome.MAXIMIZE

for outcome in model.outcomes:
    if outcome.kind == minimize:
        outcome.kind = maximize
    else:
        outcome.kind = minimize

We can reuse the reference keyword argument to perform worst case discovery for one of the policies found before. Below, we select solution number 9 from the Pareto-approximate set, turn it into a dict, and instantiate a Policy object.

from ema_workbench import Policy

policy = Policy('9', **{k:v for k, v in results_ref1.loc[9].items()
                        if k in model.levers})

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=1000, searchover='uncertainties', 
                                 epsilons=[0.1,]*len(model.outcomes),
                                 reference=policy)

Visualizing the results is straightforward using parcoords.

paraxes = parcoords.ParallelAxes(parcoords.get_limits(results), rot=0)
paraxes.plot(results)
paraxes.invert_axis('max_P')
plt.show()


Closing remarks

This blog showcased the functionality of the workbench for applying search based approaches to exploratory modelling. We specifically looked at the use of many-objective optimization for searching over the levers or uncertainties, as well as the use of many-objective robust optimization. This completes the overview of the functionality available in the workbench. In the next blog, I will put it all together to show how the workbench can be used to perform Many Objective Robust Decision Making.

Animations (1/2)


In the first part of this two-part post, we’ll learn to use different tools and techniques to visualize and save animations (second part here). This first part focuses on VisIt, an open-source visualization software developed by the Lawrence Livermore National Laboratory – and many others (…that’s an advantage of being open-source). There are a lot of cool things about VisIt, and you can read about them here or here, and probably in future posts. But for today, we’ll just learn how to use VisIt to play and save animations. VisIt supports over a hundred file formats, and this post focuses on reading spatially distributed and time-varying data stored in the netCDF format (more about netCDF here).

Let us assume we have VisIt installed and the VisIt path has been added to our bash profile. Then we just need to cd into the directory where our data are stored, and type visit in the command line to launch the program. VisIt comes with two windows, one for manipulating files and the data stored in them, and another where plots are drawn (it is possible to draw more plots in additional windows, but in this tutorial we’ll stick to one window). Both windows are visible in the screenshot below.


It may be difficult to see, but the left-hand window is the space for managing files and plots, and the active “Plot” is also displayed there. Here the active file is “air.2m.2000.nc”, where nc is the extension for netCDF files. The plot is a “Pseudocolor” of that data, the equivalent of “contourf” in Matlab or Python’s Matplotlib, and it is drawn in the right-hand window: the worldwide distribution of daily air temperature (in Kelvin) at a 1 degree resolution for January 1, 2000 (data from NCEP). Note the toolbar above the plot: it contains “play / stop” buttons that allow us to play the 365 other days of year 2000 (the cursor is on “stop” in the screenshot). We are going to save the animation of the whole year with the VisIt movie wizard.

But first, let us change the specifications of our plot. In the left-hand window, we go to PlotAtts -> Pseudocolor (recall that is the plot type we have). We can change the colorbar, set minima and maxima so that the colorbar does not change for each day of the year when we save the movie, and even skew the scale so we can more easily see the differences in temperature in non-polar regions where people live (in the image above, the temperature in the middle of the colorbar is 254.9 K, which corresponds to -18.1C or roughly -1F). The screenshot below shows our preferences. We click “Apply” for the changes to take effect and exit the window.


Now we can save the movie. We go to File -> Save Movie and follow the VisIt Movie Wizard, which you will find very intuitive and easy. We save a “New Simple Movie” in DivX (other formats are available), with 20 frames (i.e. 20 days) per second. The resulting movie is produced within 2 minutes, but since WordPress does not support DivX unless I am willing to pay for premium access, I had to convert the result to a GIF. To be honest, this is a huge letdown, as the quality suffers compared to a video (unless I want to upload a 1 GB monster). But the point of this post is that VisIt is really easy to use, not that you should convert its results into GIFs.


In this animation it is easy to see the outline of land masses as they get either warmer or cooler than the surrounding oceans! Videos would look neater: you can try them for yourself! (at home, wherever you want).

It is actually possible to command VisIt using Python scripts, but I haven’t mastered that yet so it will be a tale for another post.

Using the Exploratory Modelling Workbench

Over the last 7 years, I have been working on the development of an open source toolkit for supporting decision-making under deep uncertainty. This toolkit is known as the exploratory modeling workbench. The motivation for this name is that, in my opinion, all model-based deep uncertainty approaches are forms of exploratory modeling, as first introduced by Bankes (1993). The design of the workbench has undergone various changes over time, but it started to stabilize in the fall of 2016. This summer, I published a paper detailing the workbench (Kwakkel, 2017). There is an in-depth example in the paper, but in a series of blogs I want to showcase the functionality in some more detail.

The workbench is readily available through pip, but it requires ipyparallel and mpld3 (both available through conda), SALib (via pip), and optionally platypus (pip install directly from the GitHub repo).

Adapting the DPS example from Rhodium

As a starting point, I will use the Direct Policy Search example that is available for Rhodium (Quinn et al 2017). I will adapt this code to work with the workbench. In this way, I can explain the workbench, as well as highlight some of the main differences between the workbench and Rhodium.

import math

import numpy as np
from scipy.optimize import brentq as root  # aliased so the call to root() below resolves

# A function for evaluating our cubic DPS. This is based on equation (12)
# from [1].
def evaluateCubicDPS(policy, current_value):
    value = 0
    for i in range(policy["length"]):
        rbf = policy["rbfs"][i]
        value += rbf["weight"] * abs((current_value - rbf["center"]) / rbf["radius"])**3
    value = min(max(value, 0.01), 0.1)
    return value

# Construct the lake problem
def lake_problem(policy, # the DPS policy
                 b = 0.42, # decay rate for P in lake (0.42 = irreversible)
                 q = 2.0, # recycling exponent
                 mean = 0.02, # mean of natural inflows
                 stdev = 0.001, # standard deviation of natural inflows
                 alpha = 0.4, # utility from pollution
                 delta = 0.98, # future utility discount rate
                 nsamples = 100, # monte carlo sampling of natural inflows
                 steps = 100): # the number of time steps (e.g., days)
    Pcrit = root(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))
    average_daily_P = np.zeros((steps,))
    reliability = 0.0
    utility = 0.0
    inertia = 0.0

    for _ in range(nsamples):
        X[0] = 0.0

        natural_inflows = np.random.lognormal(
                math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
                math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
                size=steps)

        for t in range(1,steps):
            decisions[t-1] = evaluateCubicDPS(policy, X[t-1])
            X[t] = (1-b)*X[t-1] + X[t-1]**q/(1+X[t-1]**q) + decisions[t-1] + natural_inflows[t-1]
            average_daily_P[t] += X[t]/float(nsamples)

        reliability += np.sum(X < Pcrit)/float(steps) 
        utility += np.sum(alpha*decisions*np.power(delta,np.arange(steps)))
        inertia += np.sum(np.diff(decisions) > -0.01)/float(steps-1)

    max_P = np.max(average_daily_P)
    reliability /= float(nsamples)
    utility /= float(nsamples)
    inertia /= float(nsamples)

    return (max_P, utility, inertia, reliability)

The formulation of the decision rule assumes that policy is a dict, composed of a set of variables generated either through sampling or through optimization. This is relatively straightforward to do in Rhodium, but not so easy in the workbench. In the workbench, a policy is a composition of policy levers, where each policy lever is either a range of real values, a range of integers, or an unordered set of categories. To adapt the DPS version of the lake problem to the workbench, we first have to replace the policy dict with the separate underlying variables.

def get_antropogenic_release(xt, c1, c2, r1, r2, w1):
    '''
    Parameters
    ----------
    xt : float
         pollution in lake at time t
    c1 : float
         center rbf 1
    c2 : float
         center rbf 2
    r1 : float
         radius rbf 1
    r2 : float
         radius rbf 2
    w1 : float
         weight of rbf 1

    note:: w2 = 1 - w1

    '''
    # cubic radial basis functions of the normalized distance to each center
    rule = w1*(abs((xt-c1)/r1))**3 + (1-w1)*(abs((xt-c2)/r2))**3
    at = min(max(rule, 0.01), 0.1)
    return at

Next, we need to adapt the lake_problem function itself to use this adapted version of the decision rule. This requires two changes: replace policy in the function signature of the lake_model function with the actual underlying parameters c1, c2, r1, r2, and w1, and use these when calculating the anthropogenic pollution release.

def lake_model(b=0.42, q=2.0, mean=0.02, stdev=0.001, alpha=0.4, delta=0.98,
               c1=0.25, c2=0.25, r1=0.5, r2=0.5, w1=0.5, nsamples=100,
               steps=100):
    Pcrit = root(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))
    average_daily_P = np.zeros((steps,))
    reliability = 0.0
    utility = 0.0
    inertia = 0.0

    for _ in range(nsamples):
        X[0] = 0.0

        natural_inflows = np.random.lognormal(
                math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
                math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
                          size=steps)

        for t in range(1,steps):
            decisions[t-1] = get_antropogenic_release(X[t-1], c1, c2, r1, r2, w1)
            X[t] = (1-b)*X[t-1] + X[t-1]**q/(1+X[t-1]**q) + decisions[t-1] + natural_inflows[t-1]
            average_daily_P[t] += X[t]/float(nsamples)

        reliability += np.sum(X < Pcrit)/float(steps)
        utility += np.sum(alpha*decisions*np.power(delta,np.arange(steps)))
        inertia += np.sum(np.diff(decisions) > -0.01)/float(steps-1)

    max_P = np.max(average_daily_P)
    reliability /= float(nsamples)
    utility /= float(nsamples)
    inertia /= float(nsamples)

    return (max_P, utility, inertia, reliability)

This version of the code can already be combined with the workbench. However, we can clean it up a bit more if we want to. Note how there are two for loops in the lake model. The outer loop generates stochastic realizations of the natural inflow, while the inner loop calculates the dynamics of the system given a stochastic realization. The workbench can be made responsible for this outer loop.

A quick note on terminology is in order here. I have a background in transport modeling. Here we often use discrete event simulation models. These are intrinsically stochastic models. It is standard practice to run these models several times and take descriptive statistics over the set of runs. In discrete event simulation, and also in the context of agent based modeling, this is known as running replications. The workbench adopts this terminology and draws a sharp distinction between designing experiments over a set of deeply uncertain factors, and performing replications of each experiment to cope with stochastic uncertainty.

Some other notes on the code:
* To aid in debugging functions, it is good practice to make a function deterministic. In this case we can quite easily achieve this by including an optional argument for setting the seed of the random number generation.
* I have slightly changed the formulation of inertia, which is closer to the mathematical formulation used in the various papers.
* I have changed the for loop over t to get rid of virtually all of the t-1 indexing.

 

from __future__ import division # python2
import math
import numpy as np
from scipy.optimize import brentq

def lake_model(b=0.42, q=2.0, mean=0.02, stdev=0.001, alpha=0.4,
               delta=0.98, c1=0.25, c2=0.25, r1=0.5, r2=0.5,
               w1=0.5, nsamples=100, steps=100, seed=None):
    '''runs the lake model for 1 stochastic realisation using specified
       random seed.

    Parameters
    ----------
    b : float
    decay rate for P in lake (0.42 = irreversible)
    q : float
    recycling exponent
    mean : float
    mean of natural inflows
    stdev : float
    standard deviation of natural inflows
    alpha : float
    utility from pollution
    delta : float
    future utility discount rate
    c1 : float
    c2 : float
    r1 : float
    r2 : float
    w1 : float
    steps : int
    the number of time steps (e.g., days)
    seed : int, optional
    seed for the random number generator
    '''
    np.random.seed(seed)

    Pcrit = brentq(lambda x: x**q/(1+x**q) - b*x, 0.01, 1.5)
    X = np.zeros((steps,))
    decisions = np.zeros((steps,))

    X[0] = 0.0

    natural_inflows = np.random.lognormal(
                math.log(mean**2 / math.sqrt(stdev**2 + mean**2)),
                math.sqrt(math.log(1.0 + stdev**2 / mean**2)),
                size=steps)

    for t in range(steps-1):
        decisions[t] = get_antropogenic_release(X[t], c1, c2, r1, r2, w1)
        X[t+1] = (1-b)*X[t] + X[t]**q/(1+X[t]**q) + decisions[t] + natural_inflows[t]

    reliability = np.sum(X < Pcrit)/steps
    utility = np.sum(alpha*decisions*np.power(delta,np.arange(steps)))

    # note that I have slightly changed this formulation to retain
    # consistency with the equations in the papers
    inertia = np.sum(np.abs(np.diff(decisions)) < 0.01)/(steps-1)
    return X, utility, inertia, reliability

Now we are ready to connect this model to the workbench. This is fairly similar to how you would do it with Rhodium. We have to specify the uncertainties, the outcomes, and the policy levers. For the uncertainties and the levers, we can use real valued parameters, integer valued parameters, and categorical parameters. For outcomes, we can use either scalar, single valued outcomes or time series outcomes. For convenience, we can also explicitly control constants in case we want to have them set to a value different from their default value.

In this particular case, we are running the replications with the workbench. We still have to specify the descriptive statistics we would like to gather over the set of replications. For this, we can pass a function to an outcome. This function will be called with the results over the set of replications.

import numpy as np
from ema_workbench import (RealParameter, ScalarOutcome, Constant,
                           ReplicatorModel)

model = ReplicatorModel('lakeproblem', function=lake_model)
model.replications = 150

#specify uncertainties
model.uncertainties = [RealParameter('b', 0.1, 0.45),
                       RealParameter('q', 2.0, 4.5),
                       RealParameter('mean', 0.01, 0.05),
                       RealParameter('stdev', 0.001, 0.005),
                       RealParameter('delta', 0.93, 0.99)]

# set levers
model.levers = [RealParameter("c1", -2, 2),
                RealParameter("c2", -2, 2),
                RealParameter("r1", 0, 2),
                RealParameter("r2", 0, 2),
                RealParameter("w1", 0, 1)]

def process_p(values):
    values = np.asarray(values)
    values = np.mean(values, axis=0)
    return np.max(values)

#specify outcomes
model.outcomes = [ScalarOutcome('max_P', kind=ScalarOutcome.MINIMIZE,
                                function=process_p),
                  ScalarOutcome('utility', kind=ScalarOutcome.MAXIMIZE,
                                function=np.mean),
                  ScalarOutcome('inertia', kind=ScalarOutcome.MINIMIZE,
                                function=np.mean),
                  ScalarOutcome('reliability', kind=ScalarOutcome.MAXIMIZE,
                                function=np.mean)]

# override some of the defaults of the model
model.constants = [Constant('alpha', 0.41),
                   Constant('steps', 100)]

Open exploration

Now that we have specified the model with the workbench, we are ready to perform experiments on it. We can use evaluators to distribute these experiments either over multiple cores on a single machine, or over a cluster using ipyparallel. Using any form of parallelization is an advanced topic, in particular if you are on a Windows machine. The code as presented here will run fine in parallel on a Mac or Linux machine. If you are trying to run this in parallel using multiprocessing on a Windows machine from within a Jupyter notebook, it won’t work. The solution is to move lake_model and get_antropogenic_release to a separate Python module and import the lake model function into the notebook.
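A minimal sketch of that workaround, assuming a hypothetical module file named lake_problem_model.py sitting next to the notebook, looks like this:

# lake_problem_model.py (hypothetical file name) contains the model code:
#     def get_antropogenic_release(xt, c1, c2, r1, r2, w1): ...
#     def lake_model(b=0.42, q=2.0, ...): ...
#
# In the notebook, import the function instead of defining it there, so the
# multiprocessing workers can import it as well:
from lake_problem_model import lake_model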

Another common practice when working with the exploratory modeling workbench is to turn on the logging functionality that it provides. This will report on the progress of the experiments, as well as provide more insight into what is happening in particular in case of errors.

If we want to perform experiments on the model we have just defined, we can use the perform_experiments method on the evaluator, or the stand alone perform_experiments function. We can perform experiments over the uncertainties and/or over the levers. Any policy is evaluated over each of the scenarios. So if we want to use 100 scenarios and 10 policies, this means that we will end up performing 100 * 10 = 1000 experiments. By default, the workbench uses Latin hypercube sampling for both sampling over levers and sampling over uncertainties. However, the workbench also offers support for full factorial, partial factorial, and Monte Carlo sampling, as well as wrappers for the various sampling schemes provided by SALib.

from ema_workbench import (MultiprocessingEvaluator, ema_logging,
                           perform_experiments)
ema_logging.log_to_stderr(ema_logging.INFO)

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.perform_experiments(scenarios=10, policies=10)

Directed Search

Similarly, we can easily use the workbench to search for a good candidate strategy. This requires that platypus is installed. If platypus is installed, we can simply use the optimize method. By default, the workbench will use $\epsilon$-NSGAII. The workbench can be used to search over the levers in order to find a good candidate strategy as is common in Many-Objective Robust Decision Making. The workbench can also be used to search over the uncertainties in order to find for example the worst possible outcomes and the conditions under which they appear. This is a form of worst case discovery. The optimize method takes an optional reference argument. This can be used to set the scenario for which you want to find good policies, or for setting the policy for which you want to find the worst possible outcomes. This makes implementing the approach suggested in Watson & Kasprzyk (2017) very easy.

with MultiprocessingEvaluator(model) as evaluator:
    results = evaluator.optimize(nfe=1000, searchover='levers',
                                 epsilons=[0.1,]*len(model.outcomes))

Robust optimization

A third possibility is to perform robust optimization. In this case, the search still takes place over the levers, but a given policy is then evaluated for a set of scenarios, and its performance is defined over this set. To do this, we need to explicitly define robustness. For this, we can use the outcome objects we have used before. In the example below, we define robustness using the worst tail of the distribution over scenarios: the 10th percentile for the outcomes we maximize, and the 90th percentile for max_P, which we minimize. We need to pass a variable_name argument to explicitly link outcomes of the model to the robustness metrics.

import functools

percentile10 = functools.partial(np.percentile, q=10)
percentile90 = functools.partial(np.percentile, q=90)

MAXIMIZE = ScalarOutcome.MAXIMIZE
MINIMIZE = ScalarOutcome.MINIMIZE
robustnes_functions = [ScalarOutcome('90th percentile max_p', kind=MINIMIZE,
                                     variable_name='max_P', function=percentile90),
                       ScalarOutcome('10th percentile reliability', kind=MAXIMIZE,
                                     variable_name='reliability', function=percentile10),
                       ScalarOutcome('10th percentile inertia', kind=MAXIMIZE,
                                     variable_name='inertia', function=percentile10),
                       ScalarOutcome('10th percentile utility', kind=MAXIMIZE,
                                     variable_name='utility', function=percentile10)]

Given the specification of the robustness function, the remainder is straightforward and analogous to normal optimization.

from ema_workbench.em_framework.samplers import sample_uncertainties

n_scenarios = 200
scenarios = sample_uncertainties(model, n_scenarios)
nfe = 100000

with MultiprocessingEvaluator(model) as evaluator:
    robust_results = evaluator.robust_optimize(robustnes_functions, scenarios,
                                               nfe=nfe, epsilons=[0.05,]*len(robustnes_functions))

This blog has introduced the exploratory modeling workbench and has shown its basic functionality for sampling or searching over uncertainties and levers. In subsequent blogs, I will take a more in-depth look at this functionality, as well as demonstrate how the workbench facilitates the entire Many-Objective Robust Decision Making process.

Simple tricks to make your C/C++ code run faster

We engineers with no formal computer science training tend, for a myriad of good reasons, to favor friendly programming languages such as Python over evil C/C++. However, when our application is performance sensitive and we cannot wait in our chairs for as long as a Python code would require, we are sometimes forced to use C/C++. Why not make the most of it when that is the case?

Here are some simple tips to make your C/C++ code run even faster, and how to get some advice about further performance improvements. The last idea (data locality) is transferable to Python and other languages.

Most important trick

Improve your algorithm. Thinking about whether there is a simpler way of doing what you coded may reduce your algorithm’s complexity (say, from n^3 to n*log(n)), which would:

  • yield a great speed-up when you have lots of data or need to run a computation several times in a row, and
  • make you look smarter.

Compiler flags

First and foremost, let those who know what they are doing — compiler developers — do the work for you by calling the compiler with the appropriate flags. There is an incredible number of flags you can pass, but I would say that the ones you should have on whenever possible are -O3 and -march=native (e.g. gcc -O3 -march=native my_code.c -o my_code).

The optimization flags (-O1 to -O3, the latter more aggressive than the former) will perform a series of modifications on your code behind the scenes to speed it up, sometimes by more than an order of magnitude. The issue is that these modifications may occasionally make your code behave differently than you expected, so it’s always good to do a few smaller runs with -O0 and -O3 and compare their results before getting into production mode.

The -march=native flag will make the compiler fine-tune your code to the processor it is being compiled on (conversely, -march=haswell would fine-tune it to the Haswell architecture, for example). This is great if you are only going to run your code on your own machine or on another machine known to have a similar architecture, but if you try to run the binary on a machine with a different architecture, especially an older one, you may end up with illegal instruction errors.

Restricted pointer array

When declaring a pointer array which you are sure will not be subject to pointer aliasing (namely, there will be no other pointer pointing to the same memory address), you can declare that pointer as a restrict pointer, as below:

  • GCC: double* __restrict__ my_pointer_array
  • Intel compiler: double* restrict my_pointer_array

This lets the compiler know that it can change the order of certain operations involving my_pointer_array to make your code faster, without having to preserve a read/write order that might otherwise change your results. If you want to use the restrict qualifier with the Intel compiler, the flag -restrict must be passed as an argument to the compiler.

Aligned pointer array

By aligning an array, you are making sure the data lies in the ideal location in memory for the processor to fetch it and perform the calculations it needs based on that array. To help your compiler optimize your data alignment, you need to (1) align your array by a specific number of bytes when it is declared and (2) tell the compiler the array is aligned when using it to perform calculations — the compiler has no idea whether or not arrays received as arguments to a function are aligned. Below are examples in C, although the same ideas apply to C++ as well.

GCC

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double *__restrict__ a, double *__restrict__ b, double *__restrict__ c, int n) {
    a = (double*) __builtin_assume_aligned(a, ALIGN);
    b = (double*) __builtin_assume_aligned(b, ALIGN);
    c = (double*) __builtin_assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    double a[SIZE_ARRAY] __attribute__((aligned(ALIGN )));
    double b[SIZE_ARRAY] __attribute__((aligned(ALIGN )));
    double c[SIZE_ARRAY] __attribute__((aligned(ALIGN )));

    for (int i = 0; i < SIZE_ARRAY; ++i) {
        a[i] = 5.;
        b[i] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(a, b, c, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;
    printf("%0.6fs", time);
}

Intel compiler

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double* restrict a, double* restrict b, double* restrict c, int n) {
    __assume_aligned(a, ALIGN);
    __assume_aligned(b, ALIGN);
    __assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    __declspec(align(ALIGN )) double a[SIZE_ARRAY];
    __declspec(align(ALIGN )) double b[SIZE_ARRAY];
    __declspec(align(ALIGN )) double c[SIZE_ARRAY];

    for (int i = 0; i < SIZE_ARRAY; ++i) {
        a[i] = 5.;
        b[i] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(a, b, c, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Edit: In a comment to this post, Krister Walfridsson not only caught an issue with my GCC code, for which I thank him, but he also shows the differences in machine code generated with and without alignment.

Data Locality

Computers are physical things, which means that data is physically stored and needs to be physically moved around in memory, and between cache and processor, in order to be used in calculations. This means that, if your data is stored all over the place in memory (e.g. in multiple pointer arrays in different parts of memory), the processor will have to reach out to several parts of memory to fetch all your data before performing any computations. By laying the data out intelligently in memory, you ensure that all the data required for each computation is stored close together and sits in cache at the same time, which becomes even more important if your code uses too much data to fit in the cache at once.

In order to make your processor’s life easier, it is a good idea to ensure that all data required for a calculation step is close together. For example, if a given computation requires three arrays of fixed sizes, it is always a good idea to merge them into one long array, as in the example below for the Intel compiler.

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64

void sum(double* restrict a, double* restrict b, double* restrict c, int n) {
    __assume_aligned(a, ALIGN);
    __assume_aligned(b, ALIGN);
    __assume_aligned(c, ALIGN);
    for (int i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

int main(void) {

    __declspec(align(ALIGN )) double abc[3 * SIZE_ARRAY];

    for (int i = 0; i < SIZE_ARRAY; ++i) {
        abc[i] = 5.;                  /* the "a" block: first third of abc */
        abc[SIZE_ARRAY + i] = 2.;     /* the "b" block: second third of abc */
    }
    double start_time = omp_get_wtime();
    sum(abc, abc + SIZE_ARRAY, abc + 2 * SIZE_ARRAY, SIZE_ARRAY);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Or even better: since c[i] depends only on a[i] and b[i], we can interleave the values of a, b, and c to ensure that all computations are performed on data that sits right next to each other in memory:

#include <stdio.h>
#include <omp.h>

#define SIZE_ARRAY 100000
#define ALIGN 64
#define STRIDE 3

void sum(double* restrict abc, int n, int stride) {
    __assume_aligned(abc, ALIGN);

    for (int i = 0; i < n; i += stride)
        abc[i+2] = abc[i] + abc[i+1];
}

int main(void) {
    __declspec(align(ALIGN )) double abc[3 * SIZE_ARRAY];

    for (int i = 0; i < 3 * SIZE_ARRAY; i += STRIDE) {
        abc[i] = 5.;
        abc[i+1] = 2.;
    }
    double start_time = omp_get_wtime();
    sum(abc, 3 * SIZE_ARRAY, STRIDE);
    double time = omp_get_wtime() - start_time;

    printf("%0.6fs", time);
}

Conclusion

According to a class project in which we had to write C code to perform matrix multiplication, the improvements suggested here may speed up your code by a factor of 5 to 10. Also, the idea of data locality can be transferred to other languages, such as Python.

Generating Interactive Visuals in R

Visuals vs. Visual Analytics 

How do visuals differ from visual analytics? In a scientific sense, a visual is a broad term for any picture, illustration, or graph that can be used to convey an idea. However, visual analytics is more than just generating a graph of complex data and handing it to a decision maker. Visual analytic tools help create graphs that allow the user to interact with the data, whether that involves manipulating a graph in three-dimensional space or allowing users to filter or brush for solutions that match certain criteria. Ultimately, visual analytics seeks to help in making decisions as fast as possible and to “enable learning through continual [problem] reformulation” (Woodruff et al., 2013) by presenting large data sets in an organized way so that the user can better recognize patterns and make inferences.

My goal with this blog post is to introduce two R libraries that are particularly useful to develop interactive graphs that will allow for better exploration of a three-dimensional space. I have found that documentation on these libraries and potential errors was sparse, so this post will consolidate my hours of Stack Overflow searching into a step-by-step process to produce beautiful graphs!

R Libraries

Use rgl to create a GIF of a 3D graph

Spinning graphs can be especially useful for visualizing a 3D Pareto front, and they make a nice visual for a PowerPoint presentation. I will be using an example three-objective Pareto set from Julie’s work on the Red River Basin for this tutorial. The script has been broken down and explained in the following sections.


#Set working directory
setwd("C:/Users/Me/Folder/Blog_Post_1")

#Read in csv of pareto set
data=read.csv("pareto.csv")
data=read.csv("pareto.csv")

#Create three vectors for the three objectives
hydropower=data$WcAvgHydro
deficit=data$WcAvgDeficit
flood=data$WcAvgFlood

In this first block of code, the working directory is set, the data set is imported from a CSV file, and each column of the data frame is saved as a vector that is conveniently named. Now we will generate the plot.


#call the rgl library
library(rgl)

#Adjust the size of the window
par3d(windowRect=c(0,0,500,500))

If the rgl package isn’t installed on your computer yet, simply type install.packages(“rgl”) into the console. Otherwise, use the library function in line 2 to call the rgl package. The next line of code is used to adjust the window that the graph will pop up in. The default window is very small and as such, the movie will have a small resolution if the window is not adjusted!


#Plot the set in 3D space
library(RColorBrewer)

plot3d(hydropower,deficit,flood,col=brewer.pal(8,"Blues"), size=2, type='s', alpha=0.75)

Let’s plot these data in 3D space. The first three arguments of the plot3d function are the x, y, and z vectors respectively. The rest of the parameters are subject to your personal preference. I used Color Brewer (install the package “RColorBrewer”) to color the data points in different blue gradients: the first value is the number of colors that you want, and the second value is the color set. Color Brewer sets can be found here: http://www.datavis.ca/sasmac/brewerpal.html. My choice of colors is arbitrary, so I opted not to create a color scale. Creating a color scale is more involved in rgl; one option is to split your data into classes and use legend3d and the cut function to divide your legend into color levels. Unfortunately, there simply isn’t an easy way to create a color scale in rgl. Finally, I wanted my data points to be spheres, of size 2, that were partially transparent, which is specified with the type, size, and alpha arguments respectively. Plot3d will open a window with your graph, and you can use your mouse to rotate it.

Now, let’s make a movie of the graph. The movie3d function requires that you install ImageMagick, software that creates a GIF by stitching together multiple pictures. ImageMagick also has cool functionalities like editing, resizing, and layering pictures. It can be installed onto your computer through R using the first two lines of code below. Make sure not to re-run these lines once ImageMagick is installed. Note that ImageMagick doesn’t have to be installed in your directory, just on your computer.


require(installr)
install.ImageMagick()

#Create a spinning movie of your plot
movie3d(spin3d(axis = c(0, 0, 1)), duration = 20,
 dir = getwd())

Finally, the last line of code is used to generate the movie. I have specified that I want the plot to spin about the z axis, specified a duration (you can play around with the number to see what suits your data), and that I want the movie to be saved in my current working directory. The resulting GIF is below. If the GIF has stopped running, reload the page and scroll down to this section again.


I have found that creating the movie can be a bit finicky, and the last step is where errors usually occur. When you execute your code, make sure that you keep the plot window open while ImageMagick stitches together the snapshots; otherwise you will get an error. If you run into errors, please feel free to share them, because I most likely had them at one point and was ultimately able to fix them.

Overall, I found this package to be useful for a quick overview of the 3D space, but I wasn’t pleased with the way the axis values and titles sometimes overlap when the graph spins. The way to work around this is to set the labels and title to NULL and insert your own non-moving labels and title when you add the GIF to a PowerPoint presentation.

Use plotly to create an interactive scatter

I much prefer the plotly package to rgl for its aesthetic value, the ease of creating a color scale, and the ability to mouse over points to obtain coordinate values in a scatter plot. Plotly is an open source JavaScript graphing library with an R API. The first step is to create a Plotly account at https://plot.ly/. Once you have confirmed your email address, head to https://plot.ly/settings/api to get an API key. Click the “regenerate key” button and you’ll get a 20-character key that will be used to create a shareable link to your chart. Perfect, now we’re ready to get started!

setwd("C:/Users/Me/Folder/Blog_Post_1")

library(plotly)
library(ggplot2)

#Set environment variables

Sys.setenv("plotly_username"="rg727")
Sys.setenv("plotly_api_key"="insert key here")

#Read in pareto set data

pareto=read.csv("ieee_synthetic_thinned.csv")

Set the working directory, load the relevant libraries, set the environment variables, and read in the data set. Be sure to insert your API key. You will need to regenerate a new key every time you make a new graph. Also, note that your data must be in the form of a data frame for plotly to work.

#Plot your data, add markers, and add axes titles
plot = plot_ly(pareto, x = ~WcAvgHydro, y = ~WcAvgDeficit, z = ~WcAvgFlood,
               color = ~WcAvgFlood, colors = c('#00d6f7', '#ebfc2f')) %>%
  add_markers() %>%
  layout(title = "Pareto Set",
         scene = list(xaxis = list(title = 'Hydropower'),
                      yaxis = list(title = 'Deficit'),
                      zaxis = list(title = 'Flood')))

#call the name of your plot to make it appear in the viewer
plot

To correctly use the plotly command, the first input needed is the data frame, followed by the column names of the x,y, and z columns in the data frame. Precede each column name with a “~”.

I decided that I wanted the colors to scale with the value of the z variable. The colors were defined using color codes available at http://www.color-hex.com/. Use the layout function to add a main title and axis labels. Finally, call the name of your plot and you will see it appear in the viewer at the lower right of your screen. If your viewer shows up blank with only the color scale, click on the viewer or click “zoom”. Depending on how large the data set is, it may take some time for the graph to load.


#Create a link to your chart and call it to launch the window
chart_link = api_create(plot, filename = "public-graph")
chart_link

Finally, create the chart link using the first line of code above; the next line will launch the graph in Plotly. Copy and save the URL, and anyone with it can access your graph, even if they don’t have a Plotly account. Play around with the cool capabilities of my Plotly graph, like mousing over points, rotating, and zooming!

https://plot.ly/~rg727/1/

 

Sources: 

http://www.sthda.com/english/wiki/a-complete-guide-to-3d-visualization-device-system-in-r-r-software-and-data-visualization

https://plot.ly/r/3d-scatter-plots/

Woodruff, M.J., Reed, P.M. & Simpson, T.W. Struct Multidisc Optim (2013) 48: 201. https://doi.org/10.1007/s00158-013-0891-z

James J. Thomas and Kristin A. Cook (Ed.) (2005). Illuminating the Path: The R&D Agenda for Visual Analytics National Visualization and Analytics Center.