Milton Friedman’s thermostat and sensitivity analysis of control policies

If you’re reading this blog, you probably already know that correlation does not necessarily imply causation. However, does a lack of correlation necessarily imply a lack of causation? Despite a widespread misconception to the contrary, shared even by Nobel Laureates, it does not. This is especially true for controlled systems, as explained in Milton Friedman’s thermostat example. The act of controlling a system for stability (e.g., a thermostat that expends energy to stabilize indoor air temperature) will tend to remove correlations between variables that are, in a certain sense, causally related (e.g., between energy expenditure and indoor temperature). More generally, the control of a dynamical system can introduce complex, path-dependent behavior that complicates the analysis of observed data using standard statistical techniques. This blog post will introduce these issues in the context of reservoir operations. I will then connect this to recent developments in using sensitivity analysis to improve understanding of how complex control policies respond to changing information over time, and demonstrate the benefits of using simulated state trajectories rather than random sampling approaches when analyzing controlled systems.

A Jupyter Notebook containing the Python code used for this analysis can be found in this GitHub repository.

Reservoir storage stabilization policy

First, consider a reservoir with a capacity of 20 MG, and daily inflows that average 10 MGD, with moderate persistence. Let’s assume the reservoir operator follows a “storage stabilization” policy that attempts to keep the storage constant at 10 MG. This simply requires the operator to release (S – S’ + I*) each day, where S is the current storage, S’ is the storage target, and I* is the projected inflow over the next time step. How will this system evolve over time? With a perfect forecast, we have a situation analogous to Milton Friedman’s thermostat: the control policy keeps storage constant over time, leaving it uncorrelated with both inflows and releases. Meanwhile, because storage is unvarying, the release depends only on inflow, in a perfect linear relationship.
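To make this concrete, below is a minimal Python sketch of the storage stabilization policy under a perfect forecast. The AR(1) inflow parameters (persistence of 0.7, noise standard deviation of 2 MGD) and the random seed are illustrative assumptions, not the values used in the notebook linked above.

import numpy as np

rng = np.random.default_rng(1)
T, CAPACITY, S_TARGET = 1000, 20.0, 10.0     # days, MG, MG

# AR(1) inflows with mean 10 MGD and moderate persistence (assumed parameters)
inflow = np.empty(T)
inflow[0] = 10.0
for t in range(1, T):
    inflow[t] = max(0.0, 10.0 + 0.7 * (inflow[t - 1] - 10.0) + rng.normal(0.0, 2.0))

storage, release = np.empty(T), np.empty(T)
S = S_TARGET
for t in range(T):
    I_proj = inflow[t]                 # perfect forecast: projection equals actual
    R = S - S_TARGET + I_proj          # storage stabilization rule
    R = min(max(R, S + inflow[t] - CAPACITY, 0.0), S + inflow[t])  # stay feasible
    S = S + inflow[t] - R
    storage[t], release[t] = S, R

# storage stays flat at the target, so release tracks inflow exactly
print(np.corrcoef(inflow, release)[0, 1])    # ~1.0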

Figure 1: Time series of state variables for storage stabilization policy with perfect forecast.
Figure 2: Pairwise relationships between state variables for storage stabilization policy with perfect forecast.

The fact that inflow is uncorrelated with storage may appear paradoxical at first, but it makes sense once we recognize that the storage variable contributes no useful information to the system, since it is unchanging over time.

With imperfect forecasts, the operator sometimes needs to increase the release to avoid overtopping the reservoir, or decrease the release to avoid negative storage. This adds noise to the relationships above, but the reservoir releases remain only minimally correlated with storage.
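Continuing the sketch above, the feasibility correction and an imperfect forecast can be expressed as follows; the forecast error model (zero-mean Gaussian noise on the projected inflow) is an assumption for illustration.

def feasible_release(R, S, I, capacity=20.0):
    # clip the planned release so storage stays within [0, capacity]
    R_min = max(0.0, S + I - capacity)   # increase release to avoid overtopping
    R_max = S + I                        # decrease release to avoid negative storage
    return min(max(R, R_min), R_max)

# inside the simulation loop: plan with a noisy inflow projection,
# then correct once the actual inflow is realized
I_proj = inflow[t] + rng.normal(0.0, 2.0)    # assumed forecast error
R = feasible_release(S - S_TARGET + I_proj, S, inflow[t])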

Figure 3: Pairwise relationships between state variables for storage stabilization policy with imperfect forecasts.

Power production policy

The reservoir storage stabilization policy and Milton Friedman’s thermostat example demonstrate just one particular aspect of a much broader point: the act of controlling a dynamical system will fundamentally alter the trajectories of the system through state-space. This has important implications for how observed data from these systems should be understood.

But first, consider an alternative reservoir control policy that attempts to take advantage of power price information when making releases for the purpose of hydropower production. The power price has a mean of 10 and lower persistence than the inflow distribution. We assume the following simple set of rules: (1) when price is below 8, release S/4 MG; (2) when price is between 8 and 12, release S/2; (3) when price is above 12, release (S + I*/2). Again, we also adjust the release to avoid overtopping the reservoir or incurring negative storage. Despite the simplicity of this policy, it produces surprisingly complex dynamics.
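A minimal sketch of this policy, continuing the simulation above; the AR(1) price persistence of 0.3 and the reading of rule (3) as a release of S + I*/2 are my assumptions.

# AR(1) power price with mean 10 and lower persistence than inflow (assumed 0.3)
price = np.empty(T)
price[0] = 10.0
for t in range(1, T):
    price[t] = 10.0 + 0.3 * (price[t - 1] - 10.0) + rng.normal(0.0, 2.0)

def power_policy_release(S, I_proj, price_t):
    # three-tier price-responsive release rule
    if price_t < 8:
        return S / 4
    elif price_t <= 12:
        return S / 2
    else:
        return S + I_proj / 2            # reading rule (3) as S + I*/2

S = S_TARGET                             # restart from the storage target
for t in range(T):
    R = feasible_release(power_policy_release(S, inflow[t], price[t]), S, inflow[t])
    S = S + inflow[t] - R
    storage[t], release[t] = S, R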

Figure 4: Time series of state variables for power production policy with perfect forecast.
Figure 5: Pairwise relationships between state variables for power production policy with imperfect forecasts.

In the bottom row of Figure 5, we see interesting and highly non-linear relationships driving the release policy as a function of the projected inflow, storage, and power price. Additionally, the relationships between the state variables themselves (i.e., power price and storage) can display non-linear and thresholding-type behavior. Comparing the storage stabilization policy and the power production policy underscores the importance of the control process in dictating the dynamics of the system and the regions of state-space that are explored.

Figure 6: Comparison of state-space inhabited by storage stabilization policy and power production policy.

Implications for sensitivity analysis of control policies

So what does any of this have to do with sensitivity analysis? Recent advances in adaptive control (e.g. Direct Policy Search, Reinforcement Learning) have allowed for the development of complex operating policies for managing water resource systems under uncertainty by adapting to evolving conditions over time (see recent reviews by Giuliani et al., 2021, and Herman et al., 2020). These operating policies can be built from Artificial Neural Networks, Radial Basis Functions, Decision Trees, or other functional forms, and can exhibit highly non-linear behavior. This complicates efforts to understand how they work “under the hood,” leading to skepticism from some researchers and decision-makers who prefer simpler and more transparent operating policies. Recent work has attempted to “open the black box” by combining sensitivity analysis and visualization in order to illuminate how different operating policies monitor and respond to different sources of information (e.g., Quinn et al., 2019; Hamilton et al., 2022).

In most applications of global sensitivity analysis, researchers want to understand the impacts of uncertain model parameters (e.g., soil permeability) or inputs (e.g., precipitation) on model outputs (e.g., runoff) (see Pianosi et al., 2016, for a review of sensitivity analysis in environmental models). Importantly, the inputs to the sensitivity analysis are generally treated as exogenous factors and forcings that do not respond to the dynamics of the rest of the system model. For this reason, (quasi)random sampling approaches are often used to generate samples from across the feasible range of each parameter.

However, when sensitivity analysis is applied to the problem of better understanding adaptive control policies (e.g., reservoir releases), the policy inputs may be either exogenous (e.g., inflows, power prices) or endogenous (e.g., storage). The latter, as we have seen, can cause highly non-linear and path-dependent dynamics within the system, resulting in simulated trajectories that are not uniformly distributed across state space. Any non-uniformity in the distribution of states is potentially valuable “information” about the dynamics of the control policy and the broader system, which we would like to capture with our sensitivity analysis. For this reason, it is preferable to use simulated system trajectories as inputs to the sensitivity analysis, rather than randomly generated inputs, in order to ensure that our sensitivity analysis reflects the actual data-generating process of the system.
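In code, the distinction is simply which input matrix is handed to the sensitivity analysis. Reusing the names from the sketches above (the input ranges here are hypothetical):

# Option A: uniform random samples across each input's feasible range
# (ignores the path dependence induced by the control policy)
N = 1000
X_unif = np.column_stack([
    rng.uniform(0.0, 30.0, N),           # projected inflow (assumed range)
    rng.uniform(0.0, 20.0, N),           # storage
    rng.uniform(0.0, 25.0, N),           # power price (assumed range)
])
Y_unif = np.array([feasible_release(power_policy_release(S, I, p), S, I)
                   for I, S, p in X_unif])

# Option B: the states actually visited during simulation
# (preserves the joint distribution generated by the controlled system)
X_sim = np.column_stack([inflow, storage, price])
Y_sim = release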

Figure 7: Comparison of state-space inhabited by storage stabilization policy, using randomly generated input data vs. simulated state trajectories.
Figure 8: Comparison of state-space inhabited by power production policy, using randomly generated input data vs. simulated state trajectories.

Due to the non-linear, non-independent, non-normally distributed nature of these simulated data, many common variance-based global sensitivity analysis methods may not be appropriate. However, moment-independent methods such as information-theoretic sensitivity analysis and the delta moment-independent method can help overcome some of these challenges. See Hamilton et al., 2022, Hadjimichael et al., 2020, and references therein, as well as this blog post by Keyvan Malek, for more discussion of these issues.
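As one concrete option, SALib’s delta module implements the delta moment-independent method on given data, so simulated trajectories can be passed in directly. In this sketch the bounds and the synthetic placeholder data are assumptions; in practice X and Y would be the simulated policy inputs and releases.

import numpy as np
from SALib.analyze import delta

problem = {
    'num_vars': 3,
    'names': ['projected_inflow', 'storage', 'power_price'],
    'bounds': [[0.0, 30.0], [0.0, 20.0], [0.0, 25.0]],   # assumed ranges
}

# placeholder data standing in for simulated trajectories
rng = np.random.default_rng(0)
X = rng.uniform([0.0, 0.0, 0.0], [30.0, 20.0, 25.0], size=(1000, 3))
Y = 0.1 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, 1000)

Si = delta.analyze(problem, X, Y, print_to_console=True)  # delta and S1 per input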

For the sake of simplicity, I use R-squared as a simple indicator of variance-based sensitivity rather than more sophisticated measures such as Sobol sensitivity. I also apply a discretized version of information-theoretic sensitivity analysis (ITSA). Both indices range between 0 and 1, but they cannot be directly compared to each other (e.g., R2 is often higher than ITSA given the same data). Tables 1 and 2 show the sensitivity indices for the two operating policies; a sketch of how these two indicators can be computed follows the tables.

                   ITSA_sim   R2_sim   ITSA_unif   R2_unif
Projected inflow     0.52      0.95      0.26       0.69
Storage              0.06      0.04      0.14       0.29
Power price          0.05      0.02      0.07       0.00
Table 1: Comparison of sensitivity indices for policy inputs governing reservoir releases, under the storage stabilization policy.
                   ITSA_sim   R2_sim   ITSA_unif   R2_unif
Projected inflow     0.29      0.48      0.14       0.30
Storage              0.41      0.45      0.17       0.31
Power price          0.09      0.01      0.12       0.21
Table 2: Comparison of sensitivity indices for policy inputs governing reservoir releases, under the power production policy.
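For concreteness, here is one way these two indicators could be computed. The histogram discretization (20 bins) and the normalization of mutual information by the output entropy are my assumptions; the exact ITSA formulation used for the tables may differ.

import numpy as np

def r_squared(x, y):
    # squared Pearson correlation: a simple variance-based sensitivity indicator
    return np.corrcoef(x, y)[0, 1] ** 2

def itsa(x, y, bins=20):
    # discretized information-theoretic index: mutual information I(X;Y),
    # normalized by the output entropy H(Y) so the index lies in [0, 1]
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    mi = np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return mi / hy

# e.g., applied to the simulated trajectories from the sketches above:
# r_squared(storage, release), itsa(storage, release)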

The most obvious contrast is between the uniformly sampled data and the simulated data; the former tends to estimate significantly lower sensitivity. This is due to the missing information on the path-dependent relationships between states within the system, as a result of using randomly sampled input data rather than actual simulated trajectories. Another interesting comparison is ITSA_sim vs R2_sim for the power production policy. While R2 classifies projected inflow and storage as having roughly equivalent influence on release decisions, ITSA finds substantially more influence from storage. This finding makes sense when comparing the two relationships in Figure 5; both have similar shapes and variances if you squint, but the storage-release relationship has more well-defined fine structure that cannot be discerned by variance-based approaches.

References

Giuliani, M., Lamontagne, J. R., Reed, P. M., & A. Castelletti. (2021). A State-of-the-Art Review of Optimal Reservoir Control for Managing Conflicting Demands in a Changing World. Water Resources Research, 57, e2021WR029927. https://doi.org/10.1029/2021WR029927

Hadjimichael, A., Quinn, J., & P. Reed. (2020). Advancing diagnostic model evaluation to better understand water shortage mechanisms in institutionally complex river basins. Water Resources Research, 56, e2020WR028079. https://doi.org/10.1029/2020WR028079

Hamilton, A. L., Characklis, G. W., & P. M. Reed. (2022). From stream flows to cash flows: Leveraging Evolutionary Multi-Objective Direct Policy Search to manage hydrologic financial risks. Water Resources Research, 58, e2021WR029747. https://doi.org/10.1029/2021WR029747

Herman, J. D., Quinn, J. D., Steinschneider, S., Giuliani, M., & S. Fletcher (2020). Climate adaptation as a control problem: Review and perspectives on dynamic water resources planning under uncertainty. Water Resources Research, 56, e2019WR025502. https://doi.org/10.1029/2019WR025502

Pianosi, F., Beven, K., Freer, J., Hall, J. W., Rougier, J., Stephenson, D. B., & T. Wagener. (2016). Sensitivity analysis of environmental models: A systematic review with practical workflow. Environmental Modelling & Software, 79, 214-232. https://doi.org/10.1016/j.envsoft.2016.02.008

Quinn, J. D., Reed, P. M., Giuliani, M., & A. Castelletti (2019). What is controlling our control rules? Opening the black box of multireservoir operating policies using time-varying sensitivity analysis. Water Resources Research, 55, 5962–5984. https://doi.org/10.1029/2018WR024177

Sensitivity Analysis Tools

Sensitivity analysis (SA) is one of the main themes of the Water Programming Blog. There are several decent blog posts that go over theoretical aspects of sensitivity analysis (for example, here, here, and here). Also, many blog posts explain how to efficiently and elegantly visualize sensitivity analysis results (for example, here and here). In addition, there are many blog posts related to SALib, a widely used Python library developed at Cornell University by former members of Dr. Reed’s research group (for example, here, here, and here).

Recently, I have been trying to put together a comprehensive list of other SA tools, and I thought it might be useful to write a blog post on this topic. I organized the following list based on the platforms I have explored so far, including MATLAB, Python, and R. After that, I will introduce some other open-source and commercial SA tools.

MATLAB

Many MATLAB packages have been developed to perform sensitivity analysis and uncertainty quantification. As the following table shows, they have been created by a variety of universities and research institutes. Several of them cover different sensitivity analysis methods, such as regression-based SA, variance-based SA (e.g., Sobol), and derivative-based SA. All of them support at least two sampling techniques, such as Latin Hypercube Sampling. Many of them are generic (discipline-free) and can be used to answer different types of questions; however, a few of them (e.g., PeTTSy and DyGloSA) have been tailored to specific applications, such as biological models. Also, almost all of them include some post-processing and visualization components.

There are two toolboxes that work in platforms other than MATLAB. The SAFE package developed by Pianosi et al. (2015) has R and Python versions, and the SaSAT package developed at the University of New South Wales works in Microsoft Excel.

Abbreviation | Full Name | Example of Methods Supported | Institution
GSAT | Global Sensitivity Analysis Toolbox | Sobol and FAST | MATLAB
SAFE | Sensitivity Analysis For Everybody | EET (Morris method), RSA, Sobol, FAST, and PAWN | University of Bristol
GSUA | Global Sensitivity and Uncertainty Analysis Toolbox | Sobol | MATLAB
GUI-HDMR | Global Sensitivity Analysis Toolbox | Global sensitivity analysis using HDMR | University of Leeds
DyGloSA | Dynamical Global Sensitivity Analysis Toolbox | Dynamical global parameter sensitivity analysis (GPSA) of ODE models | University of Luxembourg
PeTTSy | Perturbation Theory Toolbox for Systems | Perturbation analysis of complex systems biology models | University of Cambridge
SaSAT | Sampling and Sensitivity Analysis Tools | Regression-based (Pearson, Spearman, and partial rank correlation coefficients) | The University of New South Wales
SensSB | Sensitivity Analysis in Systems Biology models | Local SA, derivative- and variance-based global SA | Process Engineering Group at IIM-CSIC (Vigo, Spain)
SobolGSA | Global Sensitivity Analysis and Metamodeling Software | Morris, Sobol, FAST, and derivative-based | Imperial College London
SUMO | SUrrogate MOdeling Toolbox | Surrogate models, sensitivity analysis | Ghent University
UQLab | The Framework for Uncertainty Quantification | Morris, Kucherenko, ANCOVA, Borgonovo, Sobol | ETH Zurich
FAST: Fourier Amplitude Sensitivity Testing
EET: Elementary Effects Test
RSA: Regional Sensitivity Analysis 

Python

Interestingly, I was not able to find many Python libraries, and most of the ones that I did find were developed for specific applications. Please leave a comment if you are aware of any other packages that have not been listed here. Among these packages, SALib seems to be the one that covers the most SA and sampling methods (a minimal usage sketch follows the table). There are two SA and QU packages that have C++ versions (OpenTURNS and UQTk). Also, uncertainpy was originally developed for neuroscience applications.

Abbreviation | Description | Example of Methods Supported | Institution
SALib | Python sensitivity analysis library | Sobol, Morris, FAST, RBD-FAST, Delta Moment-Independent Measure, Derivative-based, Factorial | Cornell University
uncertainpy | Uncertainty quantification and sensitivity analysis library | Sobol | University of Oslo
MATK | Model Analysis ToolKit | FAST, Sobol | Los Alamos National Laboratory
UQTk | Quantification of uncertainty in numerical models | Sobol | Sandia National Lab
OpenTURNS | Open source initiative for the Treatment of Uncertainties | Spearman correlation coefficients, Sobol, ANCOVA, QU | Technical University of Denmark
varsens | Variance Based Sensitivity Analysis | Sobol | Vanderbilt University
FAST: Fourier Amplitude Sensitivity Testing
QU: Quantification of Uncertainty
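As a quick illustration of the SALib workflow (the toy model and sample size here are arbitrary), a Morris screening looks like this:

import numpy as np
from SALib.sample import morris as morris_sample
from SALib.analyze import morris as morris_analyze

problem = {
    'num_vars': 2,
    'names': ['x1', 'x2'],
    'bounds': [[0.0, 1.0], [0.0, 1.0]],
}

X = morris_sample.sample(problem, N=100)   # one-at-a-time trajectories
Y = X[:, 0] + 2.0 * X[:, 1] ** 2           # toy model
Si = morris_analyze.analyze(problem, X, Y, print_to_console=True)  # mu*, sigma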

R

I was able to find about fifty R packages that have sensitivity analysis features. The following table lists the ones that have the most comprehensive SA functionalities. It seems that the rest of them were developed for specific areas of science and have limited SA functionality. I list some of these here (RMut, pksensi, ivmodel, FME, episensr, pse).

Based on what I found, the sensitivity package seems to cover the widest range of SA methods. Readers can refer to this blog post for more information about the sensitivity package.

Name | Example of Methods Supported
sensobol | Third-order Sobol
sensitivity | Sobol, Morris, FAST, RBD-FAST, DELSA, Derivative-based, Factorial
ODEsensitivity | Morris, Sobol
multisensi | SA on models with multivariate outputs
konfound | Robustness and sensitivity of empirical models
fast | FAST
BASS | Sobol
FAST: Fourier Amplitude Sensitivity Testing

Other Platforms

There are many other SA tools that have been developed in other platforms, and the following table lists only a few of them. There are also several commercial SA platforms such as SDI, VISYOND, and SMARTUQ that seem to have nice graphical user interfaces (GUIs), but, because they are not freeware and the source codes are not available, they might have limited applications in academic research.

Abbreviation | Main Applications | Programming Language | Institution
Dakota | Optimization, QU, SA (Sobol, FAST, Morris) | C++ | Sandia National Laboratory
PSUADE | QU, Spearman and Pearson correlation coefficients, Sobol, Morris, FAST | C++ | Lawrence Livermore National Laboratory
SIMLab | Sobol, FAST, Morris | GUI-based | The European Commission’s science and knowledge service
QU: Quantification of Uncertainty
FAST: Fourier Amplitude Sensitivity Testing

Please leave a comment and let me know if you are aware of any other useful tools that I did not list here.

A Python Implementation of grouped Radial Convergence Plots to visualize Sobol Sensitivity Analysis results

TL;DR: A Python implementation of grouped radial convergence plots based on code from the Rhodium library. This script will be added to Antonia’s repository for radial convergence plots.

Radial convergence plots are a useful tool for visualizing the results of Sobol sensitivity analyses. These plots array the model parameters in a circle and plot the first order, total order, and second order Sobol sensitivity indices for each parameter. The first order sensitivity is shown as the size of a closed circle, the total order as the size of a larger open circle, and the second order as the thickness of a line connecting two parameters.

In May, Antonia created a new Python library to generate radial convergence plots in Python; her post can be found here and the GitHub repository here. I’ve been working with the Rhodium library a lot recently and found that it contains a radial convergence plotting function with the ability to plot grouped output, a functionality that is not present in Antonia’s repository. This function produces the same plots as Calvin’s R package. Adding a grouping functionality allows the user to color code the visualization to improve the interpretability of the results. In the code below I’ve adapted the Rhodium function into a standalone Python script that can create visualizations from the raw output of the SALib library. When used on a policy for the Lake Problem, the code generates the plot shown in Figure 1.

Figure 1: Example radial convergence plot for the Lake Problem reliability objective. Each of the points on the plot represents a sampled uncertain parameter in the model. The size of the filled circle represents the first order Sobol Sensitivity Index, the size of the open circle represents the total order Sobol Sensitivity Index, and the thickness of lines between points represents the second order Sobol Sensitivity Index.

import numpy as np
import itertools
import matplotlib.pyplot as plt
import seaborn as sns
import math
sns.set_style('whitegrid', {'axes.linewidth': 0, 'axes.edgecolor': 'white'})

def is_significant(value, confidence_interval, threshold="conf"):
    if threshold == "conf":
        return value - abs(confidence_interval) > 0
    else:
        return value - abs(float(threshold)) > 0

def grouped_radial(SAresults, parameters, radSc=2.0, scaling=1, widthSc=0.5, STthick=1, varNameMult=1.3, colors=None, groups=None, gpNameMult=1.5, threshold="conf"):
    # Derived from https://github.com/calvinwhealton/SensitivityAnalysisPlots
    fig, ax = plt.subplots(1, 1)
    color_map = {}
    
    # initialize parameters and colors
    if groups is None:
        
        if colors is None:
            colors = ["k"]
        
        for i, parameter in enumerate(parameters):
            color_map[parameter] = colors[i % len(colors)]
    else:        
        if colors is None:
            colors = sns.color_palette("deep", max(3, len(groups)))
        
        for i, key in enumerate(groups.keys()):
            #parameters.extend(groups[key])
            
            for parameter in groups[key]:
                color_map[parameter] = colors[i % len(colors)]
    
    n = len(parameters)
    angles = radSc*math.pi*np.arange(0, n)/n
    x = radSc*np.cos(angles)
    y = radSc*np.sin(angles)
    
    # plot second-order indices
    for i, j in itertools.combinations(range(n), 2):
        #key1 = parameters[i]
        #key2 = parameters[j]
        
        if is_significant(SAresults["S2"][i][j], SAresults["S2_conf"][i][j], threshold):
            # atan2 avoids a divide-by-zero warning when two nodes share an x coordinate
            angle = math.atan2(y[j]-y[i], x[j]-x[i])
                
            line_hw = scaling*(max(0, SAresults["S2"][i][j])**widthSc)/2
                
            coords = np.empty((4, 2))
            coords[0, 0] = x[i] - line_hw*math.sin(angle)
            coords[1, 0] = x[i] + line_hw*math.sin(angle)
            coords[2, 0] = x[j] + line_hw*math.sin(angle)
            coords[3, 0] = x[j] - line_hw*math.sin(angle)
            coords[0, 1] = y[i] + line_hw*math.cos(angle)
            coords[1, 1] = y[i] - line_hw*math.cos(angle)
            coords[2, 1] = y[j] - line_hw*math.cos(angle)
            coords[3, 1] = y[j] + line_hw*math.cos(angle)

            ax.add_artist(plt.Polygon(coords, color="0.75"))
        
    # plot total order indices
    for i, key in enumerate(parameters):
        if is_significant(SAresults["ST"][i], SAresults["ST_conf"][i], threshold):
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["ST"][i]**widthSc)/2, color='w'))
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["ST"][i]**widthSc)/2, lw=STthick, color='0.4', fill=False))
    
    # plot first-order indices
    for i, key in enumerate(parameters):
        if is_significant(SAresults["S1"][i], SAresults["S1_conf"][i], threshold):
            ax.add_artist(plt.Circle((x[i], y[i]), scaling*(SAresults["S1"][i]**widthSc)/2, color='0.4'))
           
    # add labels
    for i, key in enumerate(parameters):                
        ax.text(varNameMult*x[i], varNameMult*y[i], key, ha='center', va='center',
                rotation=angles[i]*360/(2*math.pi) - 90,
                color=color_map[key])
        
    if groups is not None:
        for i, group in enumerate(groups.keys()):
            group_angle = np.mean([angles[j] for j in range(n) if parameters[j] in groups[group]])
            
            ax.text(gpNameMult*radSc*math.cos(group_angle), gpNameMult*radSc*math.sin(group_angle), group, ha='center', va='center',
                rotation=group_angle*360/(2*math.pi) - 90,
                color=colors[i % len(colors)])
            
    ax.set_facecolor('white')
    ax.set_xticks([])
    ax.set_yticks([])
    plt.axis('equal')
    plt.axis([-2*radSc, 2*radSc, -2*radSc, 2*radSc])
    #plt.show()

    
    return fig

The code below implements this function, using SALib to conduct a Sobol sensitivity analysis on the Lake Problem and produce Figure 1.

import numpy as np
import itertools
import matplotlib.pyplot as plt
import math
from SALib.sample import saltelli
from SALib.analyze import sobol
from lake_problem import lake_problem
from grouped_radial import grouped_radial

# Define the problem for SALib
problem = {
	'num_vars': 5,
	'names': ['b', 'q', 'mean', 'stdev', 'delta'],
	'bounds': [[0.1, 0.45],
			   [2.0, 4.5],
			   [0.01, 0.05],
			   [0.001, 0.005],
			   [0.93, 0.99]]
}

# generate Sobol samples
param_samples = saltelli.sample(problem, 1000)

# extract each parameter for input into the lake problem
b_samples = param_samples[:,0]
q_samples = param_samples[:,1]
mean_samples = param_samples[:,2]
stdev_samples = param_samples[:,3]
delta_samples = param_samples[:,4]


# run samples through the lake problem using a constant policy of .02 emissions
pollution_limit = np.ones(100)*0.02

# initialize arrays to store responses
max_P = np.zeros(len(param_samples))
utility = np.zeros(len(param_samples))
inertia = np.zeros(len(param_samples))
reliability = np.zeros(len(param_samples))

# run model across Sobol samples
for i in range(0, len(param_samples)):
	print("Running sample " + str(i) + ' of ' + str(len(param_samples)))
	max_P[i], utility[i], inertia[i], reliability[i] = lake_problem(pollution_limit,
																	b=b_samples[i],
																	q=q_samples[i],
																	mean=mean_samples[i],
																	stdev=stdev_samples[i],
																	delta=delta_samples[i])

# Get Sobol indices for each response
SA_max_P = sobol.analyze(problem, max_P, print_to_console=False)
SA_reliability = sobol.analyze(problem, reliability, print_to_console=True)
SA_inertia = sobol.analyze(problem, inertia, print_to_console=False)
SA_utility = sobol.analyze(problem, utility, print_to_console=False)

# define groups for parameter uncertainties
groups={"Lake Parameters" : ["b", "q"],
        "Natural Pollution" : ["mean", "stdev"],
        "Discounting" : ["delta"]}


fig = grouped_radial(SA_reliability, ['b', 'q', 'mean', 'stdev', 'delta'], groups=groups, threshold=0.025)
plt.show()

Radial convergence diagram (aka chord diagram) for sensitivity analysis results and other inter-relationships between data

TL;DR: Python script for radial convergence plots that can be found here.

You might have encountered this type of graph before; it is usually used to present relationships between different entities/parameters/factors, and it typically looks like this:

From https://datavizcatalogue.com/methods/non_ribbon_chord_diagram.html

In the context of our work, I have seen them used to present sensitivity analysis results, where we are interested in both the individual significance of a model parameter and the extent of its interaction with others. For example, in Butler et al. (2014) they were used to present first, second, and total order parameter sensitivities as produced by a Sobol’ Sensitivity Analysis.

From Butler et al. (2014)

I set out to write a Python script to replicate them. Calvin Whealton has written a similar script in R, and the same functionality also exists within Rhodium. I just wanted something with a bit more flexibility, so I wrote this script that produces two types of these graphs, one with straight lines and one with curved (which are prettier IMO). The script takes dictionary items as inputs, either directly from SALib and Rhodium (if you are using it to display Sobol results), or by importing them (to display anything else). You’ll need one package to get this to run: NetworkX. It facilitates the organization of the nodes in a circle and it’s generally a very stable and useful package to have.

Graph with straight lines
Graph with curved lines

I made these graphs to display the results of a Sobol analysis I performed on the model parameters of a system I am studying (a, b, c, d, h, K, m, sigmax, and sigmay). The node size indicates the first order index (S1) per parameter, the node border thickness indicates the total order index (ST) per parameter, and the thickness of the line between two nodes indicates the second order index (S2). The colors, thicknesses, and sizes can be easily changed to fit your needs. The script for these can be found here, and I will briefly discuss what it does below.

After loading the necessary packages (networkx, numpy, itertools, and matplotlib) and data, there are some settings that can be adapted for figure generation. First, we can define a significance value for the indices (here set to 0.01). To keep all values, just set it to 0. Then we have some stylistic variables that define the thicknesses and sizes for the lines and nodes. They can be changed to get the look of the graph to your liking.

# Set min index value, for the effects to be considered significant
index_significance_value = 0.01
node_size_min = 15 # Max and min node size
node_size_max = 30
border_size_min = 1 # Max and min node border thickness
border_size_max = 8
edge_width_min = 1 # Max and min edge thickness
edge_width_max = 10
edge_distance_min = 0.1 # Max and min distance of the edge from the center of the circle
edge_distance_max = 0.6 # Only applicable to the curved edges

The rest of the code should just do the work for you (a minimal sketch of the core idea appears after this list). It basically does the following:

  • Define basic variables and functions that help draw circles and curves, get angles and distances between points
  • Set up graph with all parameters as nodes and draw all second order (S2) indices as lines (edges in the network) connecting the nodes. For every S2 index, we need a Source parameter, a Target parameter, and the Weight of the line, given by the S2 index itself. If you’re using this script for other data, different information can be mapped to the line thickness, or all lines can be given the same weight.
  • Draw nodes and lines in a circular shape and adjust node sizes, borders, and line thicknesses to show the relative importance/weight. Also, annotate text labels on each node and adjust their location accordingly. This produces the graph with the straight lines.
  • For the graph with the curved lines, define function that will generate the x and y coordinates for them, and then plot using matplotlib.
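As promised above, here is a minimal sketch of the straight-line version of this idea using NetworkX’s circular layout. The three-parameter Sobol indices are made up for illustration; the full script linked above additionally handles significance filtering per index, scaling options, and the curved edges.

import matplotlib.pyplot as plt
import networkx as nx

# hypothetical Sobol indices for three parameters (illustration only)
names = ['a', 'b', 'c']
S1 = {'a': 0.30, 'b': 0.15, 'c': 0.05}
ST = {'a': 0.45, 'b': 0.35, 'c': 0.10}
S2 = {('a', 'b'): 0.10, ('a', 'c'): 0.02, ('b', 'c'): 0.05}

index_significance_value = 0.01
G = nx.Graph()
G.add_nodes_from(names)
for (u, v), w in S2.items():
    if w >= index_significance_value:      # keep only significant interactions
        G.add_edge(u, v, weight=w)

pos = nx.circular_layout(G)                # arrange the parameters on a circle

# node size ~ S1, node border thickness ~ ST, edge width ~ S2
nx.draw_networkx_edges(G, pos, width=[60 * G[u][v]['weight'] for u, v in G.edges()],
                       edge_color='0.75')
nx.draw_networkx_nodes(G, pos, nodelist=names,
                       node_size=[3000 * S1[n] for n in names],
                       node_color='lightsteelblue', edgecolors='k',
                       linewidths=[10 * ST[n] for n in names])
nx.draw_networkx_labels(G, pos)
plt.axis('equal')
plt.axis('off')
plt.show()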

I would like to mention this script by Enrico Ubaldi, based on which I developed mine.