There have been several blogposts on sensitivity analysis (SA) on this blog, focusing primarily on tools to perform it (e.g., SALib) and visualize outputs. Today I’ll be providing some more information on how to decide which factors are most important in affecting our output and which are largely inconsequential. Picking what is actually important for what we care about is obviously largely subjective and case-dependent, but this post is meant to provide some support to that exercise. I will be performing a *Global Sensitivity Analysis* of a system, resulting in a rank-ordering of the most important factors driving variability in the output (i.e., *factor prioritization*), which can be used to decide which are the least influential factors that can be fixed to simplify the model (i.e., *factor fixing*) [1].

The scripts I’ll be using can be found **here**, and I’ll be demonstrating with a fishery model, a simplified representation of a socio-ecological system we’re trying to manage. The procedure I’ll be following is based on the work found in [2-4].

**The idea is this:** I generate 1000 samples of the uncertain factors that might be driving variability in my outcome (let’s call this **Set 1**). I apply a certain SA method on the samples and the outcomes and get sensitivity indices for each of my factors, ranking them from most important to least.

*Where do I draw the line between important and not important?*

We can create a **Set 2**, using only the T most important factors from our **Set 1** sample, and fixing all other factors to their default values.

We can also create a **Set 3**, now fixing the T most important factors to defaults and using the sampled values of all other factors from **Set 1**.

**If we classified our important and unimportant factors correctly, then the correlation coefficient between the model outputs of Set 2 and Set 1 should approximate 1 (since we’re fixing all the factors that don’t matter), and the correlation coefficient between the outputs of Set 3 and Set 1 should approximate 0 (since the factors we sampled are inconsequential to the output).**

Here’s how it’s done using SALib and the Delta method (in the interest of space I’ll only share the most important snippets of code; you’ll need the full scripts, found in this repository, to make it run):

First, we set up our problem using SALib nomenclature, generate 1000 samples using all factors (which will be our Set 1), and run the model for all 1000 samples. Finally, we analyze our output using the Delta method. (This should take a couple of minutes to run on your personal computer.)

```
import numpy as np
from SALib.sample import latin
from SALib.analyze import delta
from fish_game import fish_game  # model function from the repository

# Set up dictionary with system parameters
problem = {
    'num_vars': 9,
    'names': ['a', 'b', 'c', 'd', 'h',
              'K', 'm', 'sigmaX', 'sigmaY'],
    'bounds': [[0.002, 2],
               [0.005, 1],
               [0.2, 1],
               [0.05, 0.2],
               [0.001, 1],
               [100, 5000],
               [0.1, 1.5],
               [0.001, 0.01],
               [0.001, 0.01]]
}
defaultvalues = np.array([0.005, 0.5, 0.5, 0.1, 0.1, 2000, 0.7, 0.004, 0.004])

# Generate samples with Latin hypercube sampling (this is Set 1)
nsamples = 1000
X_Set1 = latin.sample(problem, nsamples)

# Run model for all samples
output = [fish_game(*X_Set1[j, :]) for j in range(nsamples)]

# Perform Delta Moment-Independent analysis
results = delta.analyze(problem, X_Set1, np.asarray(output), print_to_console=True)
```

This will produce output like the below, telling us the Delta indices of each of the sampled parameters and their confidence intervals, as well as the first-order Sobol indices of the parameters and their respective confidence intervals.

```
Parameter   delta     delta_conf   S1        S1_conf
a           0.102206  0.021648     0.052453  0.033510
b           0.139056  0.018379     0.065019  0.022922
c           0.090550  0.016505     0.006749  0.007823
d           0.076542  0.005375     0.003923  0.009140
h           0.097057  0.016910     0.021070  0.009275
K           0.267461  0.020434     0.190670  0.057397
m           0.252351  0.040149     0.315562  0.031664
sigmaX      0.076175  0.014001     0.005930  0.005333
sigmaY      0.075390  0.015346     0.004970  0.011557
```

Without further analysis, one simple way of determining whether a parameter is unimportant is to check whether the confidence interval of its index overlaps 0 (i.e., whether *delta* minus *delta_conf* is negative). For our particular results, this doesn’t seem to be the case for any of our delta values, though it does happen for some of the S1 values (*c*, *d*, *sigmaY*). You can refer to this post for discussion on what this might mean.
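This check is easy to script directly from the printed results; here’s a minimal sketch, with the index arrays copied from the table above:

```python
import numpy as np

names = ['a', 'b', 'c', 'd', 'h', 'K', 'm', 'sigmaX', 'sigmaY']
delta = np.array([0.102206, 0.139056, 0.090550, 0.076542, 0.097057,
                  0.267461, 0.252351, 0.076175, 0.075390])
delta_conf = np.array([0.021648, 0.018379, 0.016505, 0.005375, 0.016910,
                       0.020434, 0.040149, 0.014001, 0.015346])
S1 = np.array([0.052453, 0.065019, 0.006749, 0.003923, 0.021070,
               0.190670, 0.315562, 0.005930, 0.004970])
S1_conf = np.array([0.033510, 0.022922, 0.007823, 0.009140, 0.009275,
                    0.057397, 0.031664, 0.005333, 0.011557])

# A parameter's interval overlaps 0 if the index minus its confidence
# half-width is at or below zero
overlap_delta = [n for n, v in zip(names, delta - delta_conf) if v <= 0]
overlap_S1 = [n for n, v in zip(names, S1 - S1_conf) if v <= 0]
print(overlap_delta)  # []
print(overlap_S1)     # ['c', 'd', 'sigmaY']
```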

Looking at the delta values, we can clearly see two factors coming out on top (*K* and *m*), followed by *b*, with *a* closely behind. The importance of the remaining parameters decreases in small decrements after that. So where *should* we draw the line? Another simple way is to use a threshold (say, 0.1) as a cutoff value [3], but one could argue over including *a* and not *h*, given how close their indices are and the wider confidence interval of *a* (see also the appendix below on this).
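The threshold rule is one line of code; a small sketch using the delta values from the table above (the 0.1 cutoff follows [3]):

```python
import numpy as np

names = ['a', 'b', 'c', 'd', 'h', 'K', 'm', 'sigmaX', 'sigmaY']
delta = np.array([0.102206, 0.139056, 0.090550, 0.076542, 0.097057,
                  0.267461, 0.252351, 0.076175, 0.075390])

# Keep any parameter whose delta index meets or exceeds the cutoff
threshold = 0.1
important = [n for n, d in zip(names, delta) if d >= threshold]
print(important)  # ['a', 'b', 'K', 'm']
```

Note how this keeps *a* (0.102) but drops *h* (0.097), even though their confidence intervals overlap, which is exactly the kind of borderline call mentioned above.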

But, let’s continue with our analysis. What I am doing below is the following. First, I sort the factors from most to least important based on my results for the delta indices. Then, I create my Sets 2 and 3 on which I’ll be iteratively replacing the values of important factors with either those from Set 1 or with defaults. Finally, I loop through all possible numbers of important factors (1 to 9), generate Sets 2 and 3, calculate outputs for all samples in each, and calculate their correlation with the outputs from Set 1. (This should take 20-30 minutes to run on your personal computer.)

```
# Sort factors from most to least important based on their delta indices
factors_sorted = np.argsort(results['delta'])[::-1]

# Set up array of default values to use for the experiment
X_defaults = np.tile(defaultvalues, (nsamples, 1))

# Create initial Sets 2 and 3
X_Set2 = np.copy(X_defaults)
X_Set3 = np.copy(X_Set1)

for f in range(1, len(factors_sorted) + 1):
    ntopfactors = f
    for i in range(ntopfactors):  # Loop through the f most important factors
        X_Set2[:, factors_sorted[i]] = X_Set1[:, factors_sorted[i]]      # Use sampled values for important factors
        X_Set3[:, factors_sorted[i]] = X_defaults[:, factors_sorted[i]]  # Fix important factors to defaults
    # Run model for all samples
    output_Set2 = [fish_game(*X_Set2[j, :]) for j in range(nsamples)]
    output_Set3 = [fish_game(*X_Set3[j, :]) for j in range(nsamples)]
    # Calculate coefficients of correlation with the Set 1 outputs
    coefficient_S1_S2 = np.corrcoef(output, output_Set2)[0][1]
    coefficient_S1_S3 = np.corrcoef(output, output_Set3)[0][1]
```

I can also plot the outputs from each iteration, which should look something like this (this is animated to show all figures, in the interest of space):
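In case the animated figure doesn’t render here, the side-by-side scatter plots can be reproduced with something like the sketch below. The arrays are synthetic stand-ins (in practice `output`, `output_Set2`, and `output_Set3` come from the fish_game runs above):

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so this runs as a script
import matplotlib.pyplot as plt

# Synthetic stand-ins for the model outputs, just to illustrate the plot
rng = np.random.default_rng(1)
output = rng.normal(size=1000)                       # Set 1 outputs
output_Set2 = output + rng.normal(0, 0.3, 1000)      # correlated, like a good reduced model
output_Set3 = rng.normal(size=1000)                  # uncorrelated, like fixing the important factors

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, y, label in zip(axes, [output_Set2, output_Set3], ['Set 2', 'Set 3']):
    r = np.corrcoef(output, y)[0, 1]                 # annotate each panel with its R
    ax.scatter(output, y, s=5, alpha=0.5)
    ax.set_xlabel('Set 1 outputs')
    ax.set_ylabel(label + ' outputs')
    ax.set_title('R = %.3f' % r)
fig.savefig('set_comparison.png', dpi=150)
```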

The figures above tell us the following:

If we choose one important factor (*K*) and fix all other parameters, our outputs don’t really capture the variability of outcomes produced when considering all nine (this is also a case against one-at-a-time analyses). The correlation coefficient between Sets 1 and 2 is pretty low (R=0.44), suggesting we’re still missing important parameters. We do a better job by instead fixing our most important parameter and varying all others (figure on the right, with R=0.763).

Adding the second most important factor (*m*) shifts things significantly in the right direction, increasing our coefficient on the left and reducing the one on the right to R=0.203.

There is only a slight improvement with the addition of the third factor (*b*), but with the inclusion of the fourth (*a*), our reduced model is already looking very close to the full one, with R=0.94. Our counter model excluding these four factors (on the right) also has a very low coefficient of R=0.025.

One could consider this performance sufficient, with the model reduced to four parameters instead of nine. Adding parameter *h* and then *c* would improve the correlations to a near-perfect match between Set 2 and Set 1, but this is where subjectivity takes over, depending on the cost of including these variables and how much we care about fidelity in this case.

It also appears safe to fix the last three parameters, as in this case they don’t have any consequential effects on our outcomes.

References:

[1] A. Saltelli, M. Ratto, T. Andres, F. Campolongo, J. Cariboni, D. Gatelli, M. Saisana, and S. Tarantola, *Global Sensitivity Analysis: The Primer*. Chichester, UK: Wiley, 2008.

[2] T. H. Andres, “Sampling methods and sensitivity analysis for large parameter sets,” *Journal of Statistical Computation and Simulation*, vol. 57, no. 1–4, pp. 77–110, Apr. 1997, doi: 10.1080/00949659708811804.

[3] Y. Tang, P. Reed, T. Wagener, and K. van Werkhoven, “Comparing sensitivity analysis methods to advance lumped watershed model identification and evaluation,” *Hydrology and Earth System Sciences*, vol. 11, no. 2, pp. 793–817, Feb. 2007, doi: 10.5194/hess-11-793-2007.

[4] J. Nossent, P. Elsen, and W. Bauwens, “Sobol’ sensitivity analysis of a complex environmental model,” *Environmental Modelling & Software*, vol. 26, no. 12, pp. 1515–1525, Dec. 2011, doi: 10.1016/j.envsoft.2011.08.010.

Appendix:

Another way to identify a threshold of importance for classifying parameters is to add a dummy parameter to your model that does nothing. Repeating my SA for this same system, now including the dummy, produces this:

```
Parameter   delta     delta_conf   S1        S1_conf
a           0.105354  0.019236     0.040665  0.020949
b           0.144955  0.023576     0.050471  0.014810
c           0.075516  0.009578     0.003889  0.006113
d           0.081177  0.011604     0.004186  0.007235
h           0.101583  0.010008     0.032759  0.021343
K           0.261329  0.022876     0.174340  0.038246
m           0.258345  0.024750     0.325690  0.052234
sigmaX      0.071862  0.008620     0.001681  0.006720
sigmaY      0.077337  0.009344     0.003131  0.006918
dummy       0.072546  0.008313     0.004176  0.009567
```

Even though the dummy does absolutely nothing in our model, the analysis still assigned it a non-zero delta index (0.07). One could use this as the cutoff value of non-importance and choose to fix parameters *c*, *sigmaX*, and *sigmaY*.
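The exact comparison rule isn’t spelled out above, but one plausible reading, treating the upper confidence bound of the dummy’s delta index as the cutoff, reproduces the stated selection:

```python
import numpy as np

# Delta indices from the dummy-augmented run above
names = ['a', 'b', 'c', 'd', 'h', 'K', 'm', 'sigmaX', 'sigmaY']
delta = np.array([0.105354, 0.144955, 0.075516, 0.081177, 0.101583,
                  0.261329, 0.258345, 0.071862, 0.077337])
dummy_delta, dummy_conf = 0.072546, 0.008313

# Anything whose delta falls below the upper confidence bound of the
# dummy's index is indistinguishable from noise and a candidate to fix
cutoff = dummy_delta + dummy_conf
fixable = [n for n, d in zip(names, delta) if d < cutoff]
print(fixable)  # ['c', 'sigmaX', 'sigmaY']
```

Note that *d* (0.0812) sits just above this cutoff (0.0809), which is why it survives while the other three are fixed.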


Thank you for posting this blog! It is very helpful. What confused me a lot is how we can add a dummy parameter. The parameters in the fish_game model are fixed and have their unique physical meaning. I checked the scripts on GitHub but there is no code related to this point. Thanks.

Hi, thanks for reading! To add a dummy parameter I did the following: I redefine the problem dictionary item with another parameter (i.e., change ‘num_vars’ to 10 and add another name in ‘names’ and another array in ‘bounds’ – it doesn’t matter what); in my model function fish_game I add another parameter after all the others when the function is defined, but I never use the parameter to do anything within the function itself. So the function can receive it as input but does nothing with it. I then repeat all steps as normal and see what the indices for that dummy are. Does that make more sense?
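To make the reply above concrete, here is a minimal sketch of that setup (the `make_dummy_model` wrapper is my own illustrative helper, not code from the repository):

```python
# Problem definition extended with a tenth, do-nothing parameter
problem_dummy = {
    'num_vars': 10,
    'names': ['a', 'b', 'c', 'd', 'h', 'K', 'm', 'sigmaX', 'sigmaY', 'dummy'],
    'bounds': [[0.002, 2], [0.005, 1], [0.2, 1], [0.05, 0.2], [0.001, 1],
               [100, 5000], [0.1, 1.5], [0.001, 0.01], [0.001, 0.01],
               [0, 1]]  # the dummy's bounds are arbitrary
}

def make_dummy_model(model):
    """Wrap a model so it accepts (and ignores) one trailing dummy argument."""
    def wrapped(*args):
        return model(*args[:-1])  # drop the dummy before calling the real model
    return wrapped

# Example with a toy model standing in for fish_game:
toy = make_dummy_model(lambda a, b: a + b)
print(toy(1, 2, 0.5))  # 3 -- the dummy value has no effect on the output
```

The sampling and analysis steps then proceed exactly as before, just with `problem_dummy` and the wrapped model.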

Hi Antonia, thanks for your interesting posts. Your approach for selecting important parameters is almost perfect! I have used a modified version of your code in an article, and I would like to refer the readers to your original work. Could you please let me know if you have a doi for your repository (or a published paper) that I can cite? Cheers, Enayat

Hi Enayat! Thank you for the kind words. I created a DOI for the repository, which you can cite https://doi.org/10.5281/zenodo.4030955 Thanks!

Thanks heaps!