Get your research workflow on

I have done some research on research workflows, which included interviewing some of my peers at Cornell grad school to get a sense of what increases their productivity and what strategies they use to accomplish long-term research goals.  In addition, I gathered good advice from my PI, who is the most ultra-efficient human that I know.  Had I taken the following advice, I would’ve written this blog post a week ago.

The Get your research workflow on series consists of two parts:

Part 1 covers general research workflow tips, and Part 2 covers setting up your technical workflow for people training with the Decision Analytics crew (a.k.a. the best crew in town).

General research workflow tips

Disclosure: some of the contents of this list may be easier said than done.

First of all, a research workflow is very personal and definitely tailored to each person’s requirements, personality, and interests, but here are some general categories that I think every researcher can relate to:

Taking notes, organizing and reflecting on ideas

I was gifted with the memory of a tuna fish, so I need to take notes on everything.  Unfortunately, taking notes in paper notebooks resulted in disaster for me in the past: it was very hard to keep information organized, and occasionally my notebooks would either disappear or get coffee stains all over them.  Luckily, my office mate Dave introduced me to the best application for note taking ever: Evernote.  This app keeps your notes categorized, so the information you need stays indexed and searchable across every single platform you have; that means you can keep your notes synchronized between your smartphone, laptop, desktop, etc., and have them accessible anywhere you go.

In addition, the Evernote web clipper tool allows you to save and categorize articles or webpages within your notes and make annotations on them.  You can also tag your notes, which is useful when a note could fit into multiple notebooks.  You can share notes, invite people to edit them, and connect Evernote with Google Drive.  I would probably still flock to Google Docs or Dropbox Paper for collaborative documents, but for personal notes I prefer the Evernote interface.  There’s no limit on the number of notebooks you can have.  I’ve found this app very useful for brainstorming and developing ideas, and I also use it to keep a research log to track my research progress.

Reading journal papers and reference management 

Keeping up with the scientific literature can be very challenging, especially with the overwhelming number of journal papers out there, but you can make things manageable for yourself if you find a reference manager that lets you build a library in which it is easy to find, add, organize, read, prioritize, and annotate papers that you can later cite.  Additionally, you may want to set up smart notifications about new papers on topics that interest you and get notified via e-mail.  A couple of popular free and open-source reference managers that allow you to do the above are Zotero and Mendeley; there is also EndNote Basic, which is free, but you would need to upgrade to EndNote Desktop for unlimited storage.  These reference managers also let you export BibTeX files for integration with LaTeX.  You can check out the Grand Reference Management Comparison table for all the reference management software available out there.

In addition to reference manager software, a couple of popular subscription-based multidisciplinary databases are Web of Science and Scopus.  They differ from Google Scholar in that they are human-curated: their contents are selected by literature review committees according to scholarly and quality criteria, and they let you build connections between topics.

Finally, I came across this article on How to keep up with the scientific literature, in which a number of scientists were interviewed on the subject.  They all agree that it can be overwhelming, but staying up to date with the literature is key, as it’s the only way to contextualize your work and identify the knowledge gaps.  The article also provides advice on how to prioritize what to read despite the overwhelming amount of scientific literature.

Time management and multi-tasking

This is my Achilles’ heel, and it’s a skill that requires a lot of practice and discipline.  Sometimes progress in research can seem hard to accomplish, especially when you are involved in several projects, dealing with hard deadlines, taking many classes, TA-ing, or simply busy being a socialite, but there are several tricks for staying on top of research while avoiding getting overwhelmed in a multi-tasking world.  Some, or most, of these tips came from a time-management mastermind:

Tip #1. Schedule everything and time everything

Schedule everything, from hard set-in-stone deadlines to casual meetings; that way you’ll know for sure how much time you have to spare on different projects, and you can block out time for those projects on a weekly basis.  Keep track of the time that you spend on different projects and tasks.  There’s an app, very popular among three economists, Julie’s brother, and my friend Justyna, called Be Focused that allows you to manage tasks and time them.  You can use it to keep track, for instance, of the time it takes you to read a paper; some people use it to time how long it takes them to write a paper to completion; right now I’m tracking the amount of time it’s taking me to write this blog post.  Timing everything will let you get better at predicting the time it will take to accomplish something and reflect on how you can improve.  I always tend to underestimate my timings, but this app is giving me a reality check... very annoying.

Tip #2. Different mindsets for different time slots

When your schedule is full of small time gaps, fill them with tasks that require less concentration, such as reading, answering e-mails, or organizing yourself, and leave the larger time slots for the most creative and challenging parts of your work.

Also, the general recommendation on multi-tasking is: don’t do it.  Trying to do multiple things at once can hurt your productivity; instead, block out time to carry out specific tasks, where you focus on that one thing, and then move on to the next.  Remember to prioritize and tackle the most important thing first.

Tip #3. Visualize long-term research goals and work backwards

Picture what you want to accomplish in a year or in a semester, and work your way backwards, refining the accomplishments of each month, each week, and each day needed to hit your long-term target.  Set up to-do lists for the week and for the day.

Tip #4. Set aside time for new skills that you want to acquire

Even if you set aside just one or two hours a week for the skill you want to develop, it will pay off: you’ll have come a long way by the end of the year.  Challenge yourself and continue to develop new skills.

Tip #5. Don’t leave e-mails sitting in your inbox

There are a couple of strategies for this: you can either allocate time each day specifically for replying to e-mails, or you can tackle each e-mail as it comes.  If something will require more of your time, move it to a special list; if it’s a meeting, put it in your calendar; if it’s for reference, save it.  No matter what your strategy is, take action on an e-mail as soon as you read it.

Collaborative work

Some tools for collaborative work:

Overleaf – for writing LaTeX documents collaboratively and visualizing the changes live; the platform has several journal templates and can track changes easily.

GitHub – a platform for collaborative code development and management.

Slack – organize conversations with teams and streamline your collaborative workflow.

A final recommendation is to keep a consistent and intuitive organization of your research.  Document everything, and keep your code reproducible.  If you got hit by a bus and your colleagues could continue your research where you left off in less than a week, then you’re in good shape, organization-wise.

I hope this helps.  Let me know if there are crucial topics that I missed; I can always come back and edit.

Special thanks to all of my grad/postdoc friends that participated in the brief research workflow interview.

Water Programming Blog Guide (3)

This is the final post on the Water Programming Blog Guide.  I encourage you to also look at the Water Programming Blog Guide (Part I) and Water Programming Blog Guide (Part 2) for a different set of topics.  In this post, the following topics will be covered:

  1. Visualization and figure editing
  2. Tutorials
  3. Training videos
  4. LaTeX
  5. Reference management
  6. NetCDF
  7. Miscellaneous topics: Optimization and Statistics, Hydrologic Modeling, Global change assessment model

1.  Visualization and figure editing

Visualization is very important in our field; it’s a good way to analyze tradeoffs and even interact with the system at hand.  We want to represent multi-dimensional data sets in an informative and appealing way, and the methods for doing so are well documented in our blog.  We even have a brief historical overview of visualizing multi-dimensional data, along with useful resources for creating either quick interactive plots or publication-ready ones:

Parallel coordinate plots

Creating parallel axes plots

New web-based multi-objective data interactive visualization tools

Saving d3.parcoords to SVG

The Linked Parallel Coordinate Plot with Linked Heatmap

Alluvial Plots

Multidimensional data and color palettes

Visualization strategies for multidimensional data

Colorbrewer: Color palettes for your figures

Aerovis

AeroVis Documentation

AeroVis: Reproducing the DTLZ1 Animation

AeroVis: Turning a Glyph Plot Into a Figure

Code Sample: Modify AeroVis files for Matlab

Figure editing

Scientific figures in Illustrator

9 basic skills for editing and creating vector graphics in Illustrator

Other Resources

Data Visualization Resources

Plotting Code added to Lake Problem Diagnostics Repository

2. Tutorials

In this section you can find a series of training tutorials.  They consist of short test studies that emulate some of the larger studies that our group members have published, and they will give you a good idea of how to use some of the tools so you can implement them on your own.  I also recommend looking at the Glossary of commonly used terms to get familiar with the terminology of multi-objective optimization, decision analytics, systems analysis, and water resources, plus some other lingo that we use in the research group that may not be evident to new members.

MOEA runtime analysis

Runtime dynamics with serial Borg

Random Seed Analysis for the Borg MOEA using DTLZ2, 3 objective instance

MOEA diagnostics

MOEA diagnostics for a simple test case (Part 1/3) ,  (Part 2/3) , (Part 3/3)

Algorithm Diagnostics Walkthrough using the Lake Problem as an example (Part 1 of 3: Generate Pareto approximate fronts) ,  (Part 2 of 3: Calculate metrics for Analysis) , (Part 3 of 3: Metrics-based analysis of algorithm performance)

Determining Whether Differences in Diagnostic Results are Statistically Significant (Part 1) , (Part 2)

3. Training videos

The following videos were developed mainly for MOEAFramework training.  They are not necessarily in sequence, but they cover key topics that will enable you to use MOEAFramework’s capabilities on your own problems.

Training Video: External problems in MOEAFramework

Training video: MOEAFramework GUI

Training video: Java file for external problems

Training video: Elements of Problem Formulation

Training video: MOEAframework Sobol Sensitivity Analysis

Training video: MOEAFramework, submitting multiple random seeds

Training video: Cluster job submission basics

4.  LaTeX

LaTeX improves the quality and aesthetics of documents; as a researcher, LaTeX also gives you the opportunity to focus on the parts of your work that really matter, since it saves you the headache of formatting.  In our blog we have a variety of posts that will help you set up LaTeX, keep fonts consistent across the figures and text in your document, use online platforms to work on LaTeX documents collaboratively, and easily track changes from one document to another; this last feature is extremely handy for revisions.

Beginner’s LaTeX Guide

Setup for LaTeX and Sublime

Embedding figure text into a Latex document

Overleaf: LaTeX Collaboration and Publication

latexdiff: “track changes” for LaTeX

5. Reference management

Reference management can be a graduate student’s best friend.  It plays an important role in increasing your productivity by maintaining a good record of the articles that you commonly use and by keeping track of the literature you encounter.  In our blog you can find useful resources and tips for taking advantage of reference managers and for efficiently populating your references.

PDFExtract: Get a list of BibTeX references from a scholarly PDF

RSS Feeds for Water Resources Journals

Writing a Paper in Markdown Using Pandoc

Zotero introduction (video)

Web-based Free Options for Bibliography Management and LaTeX Editing

6. NetCDF

The NetCDF website describes this data form better than I can:

NetCDF (network Common Data Form) is a set of interfaces for array-oriented data access and a freely distributed collection of data access libraries for C, Fortran, C++, Java, and other languages. The netCDF libraries support a machine-independent format for representing scientific data. Together, the interfaces, libraries, and format support the creation, access, and sharing of scientific data.

Source: NetCDF

We also have plenty of documentation on handling NetCDF from previous group members:

Using HDF5/zlib Compression in NetCDF4

Use python cf and esgf-python-client package to interact with ESGF data portal

Basic NetCDF processing in Matlab

Reducing size of NetCDF spatial data with list representation

Plotting a map of NetCDF data with Matplotlib/Basemap

7.  Miscellaneous topics

Finally, this is a compilation of interesting topics that contributors to the blog have come across and have used in their own research and/or classes.

Fitting Multivariate Normal Distributions

Solving non-linear problems using linear programming

Hydrologic modeling

Updated rainfall-runoff models

Diffusion in a 2D Box

Global Change Assessment Model

Porting GCAM onto a UNIX cluster

Dynamic memory allocation in C++

To succeed in creating C++ programs, it’s essential to have a good understanding of the workings and implementation of dynamic memory, which is the “art” of performing manual memory management.  When developing code in C++, you may not always know how much memory you will need before your program runs.  For instance, the size of an array may be unknown until you execute the program.

Introduction to pointers

In order to understand dynamic memory allocation, we must first talk about pointers.  A pointer is essentially a variable that stores the memory address of another variable.  A pointer’s value is a long hexadecimal number representing a memory address.  The address of a variable can be obtained using the ampersand (&), as in the following snippet:

//Accessing a pointer:
int age=43;
cout << &age << endl;
// The output would be something like: 0x7fff567ecb68

Since a pointer is also a variable, it requires declaration before being used.  The declaration consists of giving it a name, hopefully following good code style conventions, and declaring the data type that it “points to” using an asterisk (*), like so:

//Declaring pointers of different data types:
int *integerPointer;
double *doublePointer;
float *floatPointer;
char *charPointer;

The pointer operators

In summary, the two pointer operators are the address-of operator (&), which returns the memory address of a variable, and the contents-of operator (*), which returns the value of the variable located at that address.

// Example of pointer operators:
float variable=25.6;
float *pointer;
pointer= &variable;
cout << variable << endl; //outputs 25.6, the variable’s value
cout << pointer << endl; //outputs 0x7fff5a774b68, the variable’s location in memory
cout << *pointer << endl; // outputs 25.6, value of the variable stored in that location

This last operator is also called the dereference operator; it enables you to directly access the variable the pointer points to, which you can then use in regular operations:

float width = 5.0;
float length = 10.0;
float area;
float *pWidth = &width;
float *pLength = &length;

//Both of the following operations are equivalent
area = *pWidth * *pLength;
area = width * length;
//the output for both would be 50.

Dereferencing the pointers (*pWidth and *pLength) yields exactly the same values as the variables width and length, respectively.

Memory allocation in C++

Now that you have some basic understanding of pointers, we can talk about memory allocation in C++.  Memory in C++ is divided into two parts: the stack and the heap.  Variables declared inside a function use memory from the stack, whereas memory that can be allocated dynamically at runtime comes from the heap.

You may not know in advance how much memory you need to store information in a variable.  You can allocate memory on the heap for a variable of a given data type using the new operator, like so:

new float;

This operator allocates the memory necessary for storing a float on the heap and returns its address.  This address can be stored in a pointer, which can then be dereferenced to access the variable:

float *pointer = new float; // request memory
*pointer = 12.0; // store a value
cout << *pointer << endl; // use the value
delete pointer; // free up the memory
// pointer is now a dangling pointer
pointer = new float; // reuse it for a new address

Here, the pointer is stored on the stack as a local variable and holds the allocated heap address as its value.  The value 12.0 is then stored at that address on the heap.  You can then use this value in other operations.  Finally, the delete statement releases the memory when it’s no longer needed; however, it does not delete the pointer itself, since the pointer lives on the stack.  Pointers that point to released, non-existent memory are called dangling pointers; they should not be dereferenced, but they can be reused by assigning them a new address.

Dynamic memory and arrays

Perhaps the most common use of dynamic memory allocation is arrays.  Here’s a brief example of the syntax:

int *pointer = NULL; // initialize the pointer
pointer = new int[10]; // request memory for 10 ints
delete[] pointer; // delete the array pointed to by pointer

A NULL pointer has a value of zero; you can declare a null pointer when you do not yet have an address to assign.

Finally, to allocate and release memory for multi-dimensional arrays, you essentially use an array of pointers to arrays.  It sounds confusing, but you can do it with the following sample method:

int row = 3;
int col = 4;
double **p = new double*[row]; // Allocate memory for the rows

// Then allocate memory for the columns of each row
for(int i = 0; i < row; i++) {
    p[i] = new double[col];
}

// Release memory
for(int i = 0; i < row; i++) {
    delete[] p[i];
}
delete[] p;

I hope this quick overview provides a starting point for tackling your C++ memory allocation challenges.

Water Programming Blog Guide (Part 2)

Water Programming Blog Guide (Part 1)

This second part of the blog guide will cover the following topics:

  1. Version control using git
  2. Generating maps and working with spatial data in python
  3. Reviews on synthetic streamflow and synthetic weather generation
  4. Conceptual posts

1. Version Control using git

If you are developing code, it’s worth the time to gain familiarity with git to maintain reliable and stable development.  Git allows a group of people to work together on large projects while minimizing the chaos when multiple people are editing the same files.  It is also valuable for individual projects, as it allows you to keep multiple versions of a project, show the changes that you have made over time, and undo those changes if necessary.  For a quick introduction to git terminology and functionality, check out Getting Started: Git and GitHub.  The Intro to git Part 1: Local version control and Intro to git Part 2: Remote Repositories posts will guide you through your first git project (local or remote) while providing a set of useful commands.  Other specialized tips can be found in Git branch in bash prompt and GitHub Pages.  And if you are wondering how to use git with PyCharm, you’ll find this couple of posts useful: A Guide to Using Git in PyCharm – Part 1 and A Guide to Using Git in PyCharm – Part 2.

2. Generating maps and working with spatial data in python

To learn more about Python’s capabilities on this subject, this lecture summarizes key Python libraries relevant for spatial analysis.  Also, Julie and the Jons have documented their efforts working with spatial data and with Python’s Basemap, leaving us with some valuable examples:

Working with raster data

Python – Extract raster data value at a point

Python – Clip raster data with a shapefile

Using arcpy to calculate area-weighted averages of gridded spatial data over political units (Part 1) , (Part 2)

Generating maps

Making Watershed Maps in Python

Plotting geographic data from geojson files using Python

Generating map animations

Python makes the world go ’round

Making Movies of Time-Evolving Global Maps with Python

3. Reviews on synthetic streamflow and weather generation

We are lucky to have thorough reviews on synthetic weather and synthetic streamflow generation written by our experts Julie and Jon L.  The series on synthetic weather generation consists of five parts: Part I and Part II cover parametric and non-parametric methods, respectively; Part III covers multi-site generation; Part IV discusses how to modify both parametric and non-parametric methods to simulate weather under climate change projections; and finally, Part V covers how to simulate weather with seasonal climate forecasts:

Synthetic Weather Generation: Part I , Part II , Part III , Part IV , Part V

The synthetic streamflow review provides a historical perspective while answering key questions: “Why do we care about synthetic streamflow generation?”, “Why do we use it in water resources planning and management?”, and “What are the different methods available?”

Synthetic streamflow generation

4.  Conceptual posts

Multi-objective evolutionary algorithms (MOEAs)

We frequently use multi-objective evolutionary algorithms due to their power and flexibility to solve multi-objective problems in water resources applications, so you’ll find sufficient documentation in the blog on basic concepts, applications and performance metrics:

MOEAs: Basic Concepts and Reading

You have a problem integrated into your MOEA, now what?

On constraints within MOEAs

MOEA Performance Metrics

Many Objective Robust Decision Making (MORDM) and Problem framing

The first post below discusses the MORDM framework, which combines many-objective evolutionary optimization, robust decision making, and interactive visual analytics to frame and solve many-objective problems under uncertainty.  It is valuable reading, along with the references within.  The second post provides a systematic way of thinking about problem formulation and defines the key components of a many-objective problem:

Many Objective Robust Decision Making (MORDM): Concepts and Methods

“The Problem” is the Problem Formulation! Definitions and Getting Started

Econometric analysis and handling multi-variate data

To close this second part of the blog guide, I leave you with a couple of selected topics from the Econometrics and Multivariate Statistics courses at Cornell, documented by Dave Gold:

A visual introduction to data compression through Principle Component Analysis

Dealing With Multicollinearity: A Brief Overview and Introduction to Tolerant Methods

 

Water Programming Blog Guide (Part I)

The Water Programming blog continues to expand collaboratively through contributors’ learning experiences and their willingness to share their knowledge.  It now covers a wide variety of topics, ranging from quick programming tips to comprehensive literature reviews pertinent to water resources research and multi-objective optimization.  This post intends to provide guidance to new, and probably current, users by bringing to light what’s available in the blog and by developing a categorization of topics.

This first post will cover:

Software requirements

1. Programming languages and IDEs

2. Frameworks for optimization, sensitivity analysis and decision support

3. The Borg MOEA

4. Parallel computing

Part II will focus on version control, spatial data and maps, conceptual posts and literature reviews.  Finally, Part III will cover visualization and figure editing, LaTeX and literature management, tutorials, and miscellaneous research and programming tricks.

*Note to my fellow bloggers: feel free to disagree and make suggestions on the categorization of your posts; your thoughts on facilitating easier navigation through the blog are also very welcome.  For current contributors, it’s always suggested that you tag and categorize your posts; you can also follow the guidelines in Some WordPress Tips to enable a better organization of our blog.  Also, if you see a 💡, it means that a blog post idea has been identified.

Software Requirements

If you are new to the group and would like to know what kind of software you need to get started with research, Joe summed it up pretty well in his New Windows install? Here’s how to get everything set up post, where he points out all the installations that you should have on your desktop.  Additionally, you can find some guidance in Software to Install on Personal Computers and Software to Install on Personal Computers – For Mac Users!.  You may also want to check out What do you need to learn? if you are entering the group.  These posts are subject to update 💡, but they are a good starting point.

1. Programming languages and Integrated Development Environments (IDEs)

Dave Hadka’s Programming Language Overview provides a summary of the key differences between C, C++, Python and Java.  The programming tips found in the blog cover Python, C, C++, R and Matlab; there are also some specific instances where Java is used, which will be discussed in section 2.  I’ll give some guidance on how to get started with each of these programming languages and point out some useful resources in the blog.

1.1. Python

Python is a very popular programming language in our group, so there’s sufficient guidance and resources available in the blog.  Download is available here; some online tutorials that I really recommend to get you up to speed with Python are learn python the hard way, python for everybody, and codeacademy.  Additionally, stackoverflow is a useful resource for specific tasks.  The Python resources available in our blog are outlined as follows:

Data analysis and organization

Data analysis libraries that you may want to check out are scipy, numpy, and pandas.  Here are some related posts:

Importing, Exporting and Organizing Time Series Data in Python – Part 1 and  Part 2

Comparing Data Sets: Are Two Data Files the Same?

Using Python IDEs

The use of an integrated development environment (IDE) can ease code development and make the debugging process easier.  Tom has done a good amount of development in PyCharm, so he has generated a sequence of posts that provide guidance on how to take better advantage of it:

A Guide to Using Git in PyCharm – Part 1 , Part 2

Debugging in Python (using PyCharm) – Part 1 , Part 2 and Part 3

PyCharm as a Python IDE for Generating UML Diagrams

Josh also provides instructions for setting up PyDev in Eclipse in his Setting up Python and Eclipse post; another Python IDE that you may want to check out is Spyder.

Plotting

The main plotting library for Python is matplotlib.  Some of the examples found in the blog will provide guidance on importing and using the library.  Matt put together a GitHub repository with several Matlab and Matplotlib Plotting Examples; you can also find guidance on generating more specialized plots:

Customizing color matrices in matplotlib

Easy vectorized parallel plots for multiple data sets

Interactive plotting basics in matplotlib

Python Data Analysis Part 1a: Borg Runtime Metrics Plots (Preparing the Data) , Part 1b: Setting up Matplotlib and Pandas , Part 2: Pandas / Matplotlib Live Demo.

Miscellaneous  Python tips and tricks

Other applications of Python that my fellow bloggers have found useful relate to machine learning: Basic Machine Learning in Python with Scikit-learn, Solving systems of equations: Root finding in MATLAB, R, Python and C++, and using Python’s template class.

1.2. Matlab

Matlab, with its powerful toolkit, easy-to-use IDE and high-level language, can be used for quick development as long as you are not concerned about speed.  A major disadvantage of this software is that it is not free … fortunately, I have a generous boss paying for it.  Here are examples of Matlab applications available in the blog:

A simple command for plotting autocorrelation functions in Matlab

Plotting Probability Ellipses for Bivariate Normal Distributions

Solving Analytical Algebra/Calculus Expressions with Matlab

Generating .gifs from Matlab Figures

Code Sample: Stacked Bars and Lines in Matlab

1.3. C++

I have heard C++ receive extremely unflattering nicknames lately.  It is a low-level language, which means that you need to worry about everything, even memory allocation; but the fact is that it is extremely fast and powerful, and it is widely used in the group for modeling, simulation and optimization purposes that would take forever in other languages.

Getting started

If you are getting started with C++, there are some online tutorials, and you may want to check out the following material available in the blog:

Setting up Eclipse for C/C++

Getting started with C and C++

Matt’s Thoughts on C++

Training

Here is some training material that Joe put together:

C++ Training: Libraries, C++ Training: Valgrind, C++ Training: Makefiles, C++ Training: Exercise 1, C++ Training: Using gprof, Compiling Code using Makefiles

Debugging

If you are developing code in C++, it is probably a good idea to install an IDE.  I recently started using CLion, following Bernardo’s and Dave’s recommendation, and I am not displeased with it.  Here are other posts available within this topic:

Quick testing of code online

Debugging the NWS model: lessons learned

Sample code

If you are looking for sample code for commonly used operations in C++, such as defining vectors and arrays, generating output files, and timing functions, here are some examples:

C++: Vectors vs. Arrays

A quick example code to write data to a csv file in C++

Code Sample: Timing Functions for C++

1.4. R

R is another free, open-source environment widely used for statistics.  Joe recommends a reading in his Programming language R is gaining prominence in the scientific community post.  Downloads are available here.  If you happen to use an R package for your research, here’s some guidance on How to cite packages in R.  R also supports a very nice graphics package, and the following posts provide plotting examples:

Survival Function Plots in R

Easy labels for multi-panel plots in R

R plotting examples

Parallel plots in R

1.5. Command line / Linux

Getting familiar with the command line and the Linux environment is essential for performing many of the examples and tutorials available in the blog.  Please check out Terminal basics for the truly newbies if you want an introduction to terminal basics and requirements; also take a look at Using gdb, and notes from the book “Beginning Linux Programming”.  Also check out some useful commands:

Using linux “cut” , Using linux “split” , Using linux “grep”

Useful Linux commands to handle text files and speed up work

Using Linux input redirection in a C++ code test

Emacs in Cygwin

2. Frameworks for optimization, sensitivity analysis, and decision support

We use a variety of free open source libraries to perform commonly used analysis in our research.  Most of the libraries that I outline here were developed by our very own contributors.

2.1. MOEAFramework

I have personally used this framework for most of my research.  It has great functionality and speed. It is an open source Java library that supports several multi-objective evolutionary algorithms (MOEAs) and provides  tools to statistically test their performance.  It has other powerful capabilities for sensitivity and data analysis.   Download and documentation material are available here.  In addition to the documentation and examples provided on the MOEAFramework site, other useful resources and guidance can be found in the following posts:

Setup guidance

MOEAframework on Windows

How to specify constraints in MOEAframework (External Problem)

Additional information on MOEAframework Setup Guide

Extracting data

Extracting Data from Borg Runtime Files

Runtime metrics for MOEAFramework algorithms, extracting metadata from Borg runtime, and handling infinities

Parameter Description Files for the MOEAFramework algorithms and for the Borg MOEA

Other uses

Running Sobol Sensitivity Analysis using MOEAFramework

Speeding up algorithm diagnosis by epsilon-sorting runtime files

2.2. Project Platypus

This is the newest Python framework, developed by Dave Hadka, and it supports a collection of libraries for optimization, sensitivity analysis, data analysis and decision making.  It’s free to download from the Project Platypus github repository.  The repository comes with its own documentation and examples.  We are barely beginning to document our experiences with this platform 💡, but it is very intuitive and user friendly.  Here is the documentation available in the blog so far:

A simple allocation optimization problem in Platypus

Rhodium – Open Source Python Library for (MO)RDM

Using Rhodium for RDM Analysis of External Dataset
2.3. OpenMORDM

This is an open source R library for many-objective robust decision making (MORDM).  For more details and documentation on both MORDM and the library’s use, check out the following post:

Introducing OpenMORDM

2.4. SALib

SALib is a Python library developed by Jon Herman that supports commonly used methods for sensitivity analysis.  It is available here.  Aside from the documentation in the github repository, you can also find guidance on some of the available methods in the following posts:

Method of Morris (Elementary Effects) using SALib

Extensions of SALib for more complex sensitivity analyses

Running Sobol Sensitivity Analysis using SALib

SALib v0.7.1: Group Sampling & Nonuniform Distributions
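In practice SALib does the heavy lifting, but the elementary-effects idea behind the Method of Morris can be sketched in a few lines of plain Python.  This is an illustrative toy, not SALib’s actual implementation; the model function and bounds below are made up:

```python
import random

def elementary_effects(f, bounds, delta=0.1, reps=20, seed=1):
    # Crude one-at-a-time screening: for each input i, average the
    # scaled effect |f(x + delta*range_i * e_i) - f(x)| / delta over
    # several random base points. Large values flag influential inputs.
    random.seed(seed)
    k = len(bounds)
    mu_star = [0.0] * k
    for _ in range(reps):
        # draw the base point so the perturbed point stays inside the bounds
        x = [lo + random.random() * (hi - lo) * (1 - delta) for lo, hi in bounds]
        fx = f(x)
        for i in range(k):
            xp = list(x)
            xp[i] += delta * (bounds[i][1] - bounds[i][0])
            mu_star[i] += abs(f(xp) - fx) / delta
    return [m / reps for m in mu_star]

# Toy model: output depends strongly on x0, weakly on x1, not at all on x2.
effects = elementary_effects(lambda x: 10*x[0] + 0.1*x[1], [(0, 1)] * 3)
print(effects)  # roughly [10.0, 0.1, 0.0]
```

For anything beyond a quick screen, use SALib’s implementations, which handle proper trajectory sampling and confidence intervals.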

There’s also an R package for sensitivity analysis: Starting out with the R Sensitivity Package.  Since we are on the subject, Jan Kwakkel provides guidance on Scenario discovery in Python as well.

2.5. Pareto sorting function in python (pareto.py)

This is a non-dominated sorting function for multi-objective problems in Python, available in Matt’s github repository.  You can find more information about it in the following posts:

Announcing version 1.0 of pareto.py

Announcing pareto.py: a free, open-source nondominated sorting utility
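To get a feel for what a nondominated sorting utility does, here is a minimal pure-Python sketch of the core idea (pareto.py itself is far more capable, with epsilon-nondomination and large-file handling):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective,
    # strictly better in at least one.
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(solutions):
    # Simple O(n^2) sweep: keep a solution only if nothing dominates it.
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

pts = [(1, 5), (2, 2), (3, 1), (4, 4)]
print(nondominated(pts))  # → [(1, 5), (2, 2), (3, 1)]; (4, 4) is dominated by (2, 2)
```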

3.  Borg MOEA

The Borg Multi-objective Evolutionary Algorithm (MOEA), developed by Dave Hadka and Pat Reed, is widely used in our group due to its ability to tackle complex many-objective problems.  We have plenty of documentation and references on our blog so you can get familiar with it.

3.1. Basic Implementation

You can find a brief introduction and basic use in Basic Borg MOEA use for the truly newbies (Part 1/2) and (Part 2/2).  If you want to link your own simulation model to the optimization algorithm, you may want to check: Compiling, running, and linking a simulation model to Borg: LRGV Example.  Here are other Borg-related posts in the blog:

Basic implementation of the parallel Borg MOEA

Simple Python script to create a command to call the Borg executable

Compiling shared libraries on Windows (32 bit and 64 bit systems)

Collecting Borg’s operator dynamics

3.2. Borg MOEA Wrappers

There are Borg MOEA wrappers available for a number of languages.   Currently the Python, Matlab and Perl wrappers are  documented in the blog.  I believe an updated version of the Borg Matlab wrapper for OSX documentation is required at the moment 💡.

Using Borg in Parallel and Serial with a Python Wrapper – Part 1

Using Borg in Parallel and Serial with a Python Wrapper – Part 2

Setting Borg parameters from the Matlab wrapper

Compiling the Borg Matlab Wrapper (Windows)

Compiling the Borg Matlab Wrapper (OSX/Linux)

Code Sample: Perl wrapper to run Borg with the Variable Infiltration Capacity (VIC) model

4. High performance computing (HPC)

With HPC we can handle and analyse massive amounts of data at high speed: tasks that would normally take months can be done in days or even minutes, and it can help us tackle very complex problems.  In addition, here are some Thoughts on using models with a long evaluation time within a Parallel MOEA framework from Joe.

In the group we have a healthy availability of HPC resources; however, there are some logistics involved when working with computing clusters.  Luckily, most of our contributors have experience using HPC and have documented it in the blog.  Also, I am currently using the MobaXterm interface to facilitate file transfer between my local and remote directories; it also lets you easily navigate and edit files in your remote directory.  It is used by our collaborators at the Politecnico di Milano, who recommended it to Julie, who then recommended it to me.  Moving on, here are some practical resources for working with remote clusters:

4.1. Getting started with clusters and key commands

Python for automating cluster tasks: Part 1, Getting started and Part 2, More advanced commands

The Cluster and Basic UNIX Commands

Using a local copy of Boost on your cluster account

4.2. Submission scripts in  Bash

Some ideas for your Bash submission scripts

Key bindings for Bash history-search

4.3. Making bash sessions more enjoyable

Speed up your Bash sessions with “alias”

Get more screens … with screen

Running tmux on the cluster

Making ssh better

4.4. Portable Batch System (PBS)

Job dependency in PBS submission

PBS Job Submission with Python

PBS job chaining

Common PBS Batch Options

4.5. Python parallelization and speedup

Introduction to mpi4py

Re-evaluating solutions using Python subprocesses

Speed up your Python code with basic shared memory parallelization

Connecting to an iPython HTML Notebook on the Cluster Using an SSH Tunnel

NumPy vectorized correlation coefficient

4.6. Debugging

Debugging: Interactive techniques and print statements

Debugging MPI By Dave Hadka

4.7. File transfer

Globus Connect for Transferring Files Between Clusters and Your Computer
9 basic skills for editing and creating vector graphics in Illustrator

This post intends to provide guidance for editing and creating vector graphics in Adobe Illustrator.  The goal is to cover some of the commonly used features to help you get started on your vectorized journey.  Be it a conceptual diagram, a logo, or cropping people out of your photo, these 9 features (and a fair amount of googling) will help you do the job.  Before we begin, it is worth distinguishing the main differences between a raster and a vector graphic.  A raster image is composed of a collection of squares or pixels, while vector graphics are based on mathematical formulas that define geometric forms (i.e. polygons, lines, curves, circles and rectangles), which makes them independent of resolution.

The three main advantages of using vector graphics over raster images are illustrated below:

1. Scalability: Vector graphics scale infinitely without losing any image quality. Raster images guess the colors of missing pixels when sizing up, whereas vector graphics simply use the original mathematical equation to create a consistent shape every time.

scalability-01.png

2. Editability: Vector files are not flattened; that is, the original shapes of an image exist separately on different layers.  This provides flexibility to modify individual elements without impacting the entire image.

edibility.png

3. Reduced file size: A vector file only requires four data points to recreate a square, whereas a raster image needs to store many small squares.
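To make the “four data points” idea concrete, here is what a square looks like as vector data.  The snippet below (plain Python, names arbitrary) emits a minimal SVG, a common vector format, in which a single rect element fully describes the square at any scale:

```python
# A vector square is just x, y, width and height; rendering at any size
# reuses these same four numbers, so nothing is lost when scaling.
svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
    '<rect x="10" y="10" width="80" height="80" fill="steelblue"/>'
    '</svg>'
)
print(svg)
```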

reduced_file_size.png
9 key things to know when getting started with Adobe Illustrator:

1. Starting a project

You can start a new project simply by clicking File> New, and the following window will appear.  You can provide a number of specifications for your document before starting, but you can also customize your document at any stage by clicking File> Document setup (shortcut Alt+Ctrl+P).

pic1.PNG

2. Creating basic shapes

Lines & Arrows

Simply use the line segment tool (A) and remember to press the shift key to create perfectly straight lines.  Arrows can be added using the stroke window (Window> Stroke) (B); there’s a variety of arrow styles that you can select from and scale (C).  Finally, in the line segment tool you can provide the exact length of your line.

Slide1.PNG

Polygons 

Some shapes are already built into Illustrator (e.g. rectangles, stars and circles (A)), but many others, such as triangles, need to be specified through the polygon tool.  To draw a triangle, I specify the number of sides = 3, as shown in (B).

Slide1.PNG

Curvatures

To generate curvatures, you can use the pen tool (A).  Specify two points with the pen, hold the click at the second point, and a handle will appear; this handle allows you to shape the curve.  If you want to add more curvatures, draw another point (B) and drag the handle in the opposite direction of the curve.  You can then select the color (C) and the width (D) of your wave.

Slide1.PNG

3. Matching colors

If you need to match the color of an image (A) there are a couple of alternatives:

i) Using the Eyedropper tool (B).  Select the component of the image that you want to match, then select the Eyedropper tool and click on the desired color (C).

Slide1.PNG

ii) Using the color picker panel.  Select the image component with the desired color, then double click on the color picker (highlighted in red) and the following panel should appear.  You can see the exact color code and you can copy and paste it on the image that you wish to edit.

Slide1.PNG

4. Extracting exact position and dimensions

In the following example, I want the windows of my house to be perfectly aligned.  First, in (A), I click on one of the windows and the control panel automatically provides its x and y coordinates, as well as its width and height.  Since I want to align the two windows horizontally, I copy the y coordinate of the first window onto that of the second window, as shown in (B).  The same procedure applies if you want to copy the dimensions from one figure to another.

twohouses.png

5. Free-style drawing and editing 

The pencil tool (A) is one of my favorite tools in Illustrator, since it corrects my shaky strokes and allows me to paint free style.  Once I added color and filled in the blob that I drew, it started to resemble a tree top (B).  You can edit your figure by right clicking it; a menu will appear enabling you to rotate, reflect, shear and scale it, among other options.  I only wanted to tilt my tree, so I specified a mild rotation (C).

Slide1.PNG

Slide1.PNG

6. Cropping:

Cropping in Illustrator requires clipping masks, and I will show a few examples using Bone Bone, a fluffy celebrity cat.  Once a .png image is imported into Illustrator, it can be cropped using one of the following three methods:

Method 1.  Using the direct selection tool

Slide1.PNG

Method 2. Using shapes

Slide2.PNG

Method 3. Using the pen tool for a more detailed crop

Slide3.PNG

To revert to the original image, select Object> Clipping mask> Release or Alt+Ctrl+7.

7. Customize the art-board size

If you want your image to be saved without extra white space (B), you can adapt the size of the canvas with the Art-board tool (or Shift+8) (A).

Slide1.PNG

8. Using layers:

Layers can help you organize artwork, especially when working with multiple components in an image.  If the Layers panel is not already among your tools, you can access it through Window> Layers or the F7 shortcut, and a panel like the one below should appear.  You can rename a layer by double clicking on it; give each layer a descriptive name.  Note that you can toggle the visibility of each layer on or off, and you can lock a layer if you want to protect it from further change, like the house layer in the example below.  Each layer is also color-coded: my current working layer is coded in red, so when I select an element in that layer it is highlighted in red.  Layers can also have sub-layers to store individual shapes, as in the house layer, which is comprised of a collection of rectangles and a triangle.

layer.png

closelayer.png

9.  Saving vector and exporting raster

Adobe Illustrator naturally allows you to save images in many vector formats, but you can also export raster images such as .png, .jpeg, .bmp, etc.  To export a raster image, do File> Export, and something like the blurry panel below should show up.  You can specify the resolution and the background color.  I usually like a transparent background, since it gives you more flexibility when using your image in programs such as PowerPoint.

background.png

There are many more features available in Illustrator, but these are the ones I find myself using most often.  Also, you probably won’t have to generate images from scratch; there are many resources available online, and you can download svg images for free, which you can later customize in Illustrator.  You can also complement this post by reading Jon Herman’s Scientific figures in Illustrator.

A simple allocation optimization problem in Platypus

For those of you who are not familiar with Project Platypus, it’s a repository that supports a collection of Python libraries for multi-objective optimization, decision making and data analysis.  All the libraries are written in a very intuitive way and it is just so slick.

In this post I will focus exclusively on the platypus library, which supports a variety of multi-objective evolutionary algorithms (MOEAs), such as NSGA-II, NSGA-III, MOEA/D, IBEA, EpsMOEA, SPEA2, GDE3, OMOPSO and SMPSO, along with a number of analysis tools and performance metrics (which we will discuss in subsequent posts).

First you can install the entire framework by typing the following commands on your terminal:

git clone https://github.com/Project-Platypus/Platypus.git
cd Platypus
python setup.py develop

If you have trouble with this first step, please feel free to report it.  The library is still under development and it might be infested with some minor bugs.

I wanted to start this post with a classical allocation problem to illustrate how you would implement your own function and optimize it using platypus.  This example, by the way, was inspired by Professor Loucks’ public systems modeling class.

We have the following problem:

  1. We want to allocate coconuts to three different markets.
  2. The goal is to find the allocations that maximize the total benefits.
  3. We have only 6 coconut trucks to distribute to the three markets.

So, let’s find the best allocations using the NSGAII algorithm supported by platypus:


from platypus.algorithms import NSGAII
from platypus.core import Problem
from platypus.types import Real

class allocation(Problem):

    def __init__(self):
        super(allocation, self).__init__(3, 3, 1)
        self.types[:] = [Real(-10, 10), Real(-10, 10), Real(-10, 10)]
        self.constraints[:] = "<=0"

    def evaluate(self, solution):
        x = solution.variables[0]
        y = solution.variables[1]
        z = solution.variables[2]
        solution.objectives[:] = [6*x/(1+x), 7*y/(1+1.5*y), 8*z/(1+0.5*z)]
        solution.constraints[:] = [x+y+z-6]

algorithm = NSGAII(allocation())
algorithm.run(10000)

for solution in algorithm.result:
    print(solution.objectives)

Assuming that you already have Platypus installed, you should be able to import the classes specified in lines 1-3 of the code above.  The first line, as you can tell, is where you import the MOEA of your choosing; make sure it’s supported by platypus.  In the second line, we import the Problem class that allows you to define your own function.  You also need to import the type class to specify your decision variables’ type; the library supports Real, Binary, Integer, etc.  I define the allocation class in line 5.  In line 8, I specify the number of decision variables = 3, the number of objectives = 3 and the number of constraints = 1.  The bounds of my decision variables are defined in line 9.  The function in line 12 evaluates and stores the solutions.

Note that for solution.objectives, each of the objective functions is specified in a vector, separated by commas.  You set the constraint in line 17; if you have more than one constraint, they go in the same vector with comma separation.  In line 19, the problem is handed to the NSGAII algorithm, and in line 20 you set the number of function evaluations; I used 10,000 here, but the right number really depends on the difficulty of your problem and your choice of algorithm.  Finally, I print the objective values in line 23, but you can print the decision variables as well if you wish.  As you can see, setting up your problem can be extremely easy.  The names of the classes and functions in the library are very intuitive, so you can focus entirely on your problem formulation.
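To see the quantities the algorithm is juggling, you can evaluate the benefit functions and the truck constraint by hand for a candidate allocation (the even split below is just an illustration):

```python
def benefits(x, y, z):
    # the three market benefit functions from the problem above
    return [6*x/(1 + x), 7*y/(1 + 1.5*y), 8*z/(1 + 0.5*z)]

def constraint(x, y, z):
    # feasible when x + y + z - 6 <= 0 (only six coconut trucks available)
    return x + y + z - 6

print(benefits(2, 2, 2))    # → [4.0, 3.5, 8.0]
print(constraint(2, 2, 2))  # → 0, i.e. exactly uses the six trucks
```

The diminishing returns in each market (each benefit curve flattens as its allocation grows) are what make the trade-off between markets interesting for a multi-objective algorithm.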