This is the second part of a two-part blog series on fitting hidden Markov models (HMMs). In Part I, I explained what HMMs are, why we might want to use them to model hydro-climatological data, and the methods traditionally used to fit them. Here I will show how to apply these methods with the Python package hmmlearn to annual streamflows in the Colorado River basin at the Colorado/Utah state line (USGS gage 09163500). First, note that to use hmmlearn on a Windows machine, I had to install it on Cygwin as a Python 2.7 library.

For this example, we will assume the state each year is either wet or dry, and that the distribution of annual streamflows under each state is modeled by a Gaussian distribution. More states and other distributions could be considered, but we will use a two-state, Gaussian HMM here for simplicity. Since streamflow is strictly positive, it makes sense to first log-transform the annual flows at the state line so that the Gaussian models won’t generate negative streamflows, and that’s what we do here.
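To see why the log transform guards against negative flows, here is a minimal numpy sketch (the mean and standard deviation are made-up illustrative values, not the fitted ones): samples drawn from a Gaussian in log space are strictly positive once exponentiated back to flow space.

```python
import numpy as np

rng = np.random.default_rng(42)

# hypothetical log-space parameters (illustrative only, not fitted values)
mu_log, sigma_log = 7.5, 0.3

# sample log-flows from the Gaussian, then back-transform with exp
log_samples = rng.normal(mu_log, sigma_log, size=1000)
flows = np.exp(log_samples)  # strictly positive by construction

print(flows.min() > 0)  # True
```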

After installing hmmlearn, the first step is to load the Gaussian hidden Markov model class with `from hmmlearn.hmm import GaussianHMM`. The `fit` function of this class requires as inputs the number of states (*n_components*, here 2 for wet and dry), the number of iterations of the Baum-Welch algorithm described in Part I (*n_iter*; I chose 1000), and the time series to which the model is fit (here a column vector, Q, of the annual or log-transformed annual flows). You can also set initial parameter estimates before fitting the model, using the *init_params* argument to specify which parameters the algorithm should initialize itself; any parameter left out of that string must be set manually before fitting. It is a string of characters in which ‘s’ stands for startprob (the probability of being in each state at the start), ‘t’ for transmat (the probability transition matrix), ‘m’ for means (the mean vector), and ‘c’ for covars (the covariance matrix). As discussed in Part I, it is good practice to test several different initial parameter estimates to prevent convergence to a local optimum. For simplicity, I use the default estimates here, but this tutorial shows how to pass your own. I call the fitted model `model`.

Among other attributes and methods, `model` will have associated with it the means (`means_`) and covariances (`covars_`) of the Gaussian distributions fit to each state, the state probability transition matrix (`transmat_`), the log-likelihood of the model (`score`), and methods for simulating from the HMM (`sample`) and predicting the states of observed values with the Viterbi algorithm described in Part I (`predict`). The `score` method can be used to compare the performance of models fit with different initial parameter estimates.

It is important to note that which state (wet or dry) is assigned a 0 and which state is assigned a 1 is arbitrary, and different assignments may be made on different runs of the algorithm. To avoid confusion, I choose to reorganize the vectors of means and variances and the transition probability matrix so that state 0 is always the dry state and state 1 is always the wet state. This is done at the end of the fitHMM function below if the mean of state 0 is greater than the mean of state 1.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fitHMM(Q, nSamples):
    # fit Gaussian HMM to Q
    model = GaussianHMM(n_components=2, n_iter=1000).fit(np.reshape(Q, [len(Q), 1]))

    # classify each observation as state 0 or 1
    hidden_states = model.predict(np.reshape(Q, [len(Q), 1]))

    # find parameters of Gaussian HMM
    mus = np.array(model.means_)
    sigmas = np.array(np.sqrt(np.array([np.diag(model.covars_[0]), np.diag(model.covars_[1])])))
    P = np.array(model.transmat_)

    # find log-likelihood of Gaussian HMM
    logProb = model.score(np.reshape(Q, [len(Q), 1]))

    # generate nSamples from Gaussian HMM
    samples = model.sample(nSamples)

    # re-organize mus, sigmas and P so that first row is lower mean (if not already)
    if mus[0] > mus[1]:
        mus = np.flipud(mus)
        sigmas = np.flipud(sigmas)
        P = np.fliplr(np.flipud(P))
        hidden_states = 1 - hidden_states

    return hidden_states, mus, sigmas, P, logProb, samples

# load annual flow data for the Colorado River near the Colorado/Utah state line
AnnualQ = np.loadtxt('AnnualQ.txt')

# log transform the data and fit the HMM
logQ = np.log(AnnualQ)
hidden_states, mus, sigmas, P, logProb, samples = fitHMM(logQ, 100)
```
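To make the `sample` method less of a black box, the same generative process can be sketched with plain numpy: each year, draw a log-flow from the current state’s Gaussian, then draw next year’s state from the transition probabilities. All parameter values below are illustrative, not the fitted ones.

```python
import numpy as np

def simulate_hmm(P, mus, sigmas, nYears, rng):
    """Manually simulate a 2-state Gaussian HMM in log space."""
    states = np.empty(nYears, dtype=int)
    logQ = np.empty(nYears)
    state = rng.choice(2)  # assume a uniform initial state for simplicity
    for t in range(nYears):
        logQ[t] = rng.normal(mus[state], sigmas[state])
        states[t] = state
        state = rng.choice(2, p=P[state])  # transition to next year's state
    return states, np.exp(logQ)  # back-transform to (positive) flows

rng = np.random.default_rng(1)
P = np.array([[0.68, 0.32], [0.35, 0.65]])  # made-up transition matrix
states, flows = simulate_hmm(P, mus=[7.2, 7.8], sigmas=[0.2, 0.2], nYears=100, rng=rng)
print(flows.min() > 0)  # True
```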

Okay great, we’ve fit an HMM! What does the model look like? Let’s plot the time series of hidden states. Since we made the lower mean always represented by state 0, we know that `hidden_states == 0` corresponds to the dry state and `hidden_states == 1` to the wet state.

```python
from matplotlib import pyplot as plt
import seaborn as sns
import numpy as np

def plotTimeSeries(Q, hidden_states, ylabel, filename):
    sns.set()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    xs = np.arange(len(Q)) + 1909
    masks = hidden_states == 0
    ax.scatter(xs[masks], Q[masks], c='r', label='Dry State')
    masks = hidden_states == 1
    ax.scatter(xs[masks], Q[masks], c='b', label='Wet State')
    ax.plot(xs, Q, c='k')
    ax.set_xlabel('Year')
    ax.set_ylabel(ylabel)
    fig.subplots_adjust(bottom=0.2)
    handles, labels = plt.gca().get_legend_handles_labels()
    fig.legend(handles, labels, loc='lower center', ncol=2, frameon=True)
    fig.savefig(filename)
    fig.clf()
    return None

plt.switch_backend('agg')  # turn off display when running with Cygwin
plotTimeSeries(logQ, hidden_states, 'log(Flow at State Line)', 'StateTseries_Log.png')
```

Wow, looks like there’s some persistence! What are the transition probabilities?

```python
print(P)  # the re-organized transition matrix returned by fitHMM
```

Running that we get the following:

```
[[ 0.6794469   0.3205531 ]
 [ 0.34904974  0.65095026]]
```

When in a dry state, there is a 68% chance of transitioning to a dry state again in the next year, while in a wet state there is a 65% chance of transitioning to a wet state again in the next year.
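One way to interpret those diagonal probabilities is through expected spell lengths: for a first-order Markov chain, the expected run length of state i is 1/(1 − pᵢᵢ). A quick check with the fitted transition matrix:

```python
import numpy as np

P = np.array([[0.6794469,  0.3205531],
              [0.34904974, 0.65095026]])

# expected consecutive years spent in each state once entered
expected_run = 1.0 / (1.0 - np.diag(P))
print(expected_run)  # roughly 3.12 years for dry spells, 2.86 for wet
```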

What does the distribution of flows look like in the wet and dry states, and how do these compare with the overall distribution? Since the probability distributions of the wet and dry states are Gaussian in log-space, and each state has some probability of being observed, the overall probability distribution is a mixed, or weighted, Gaussian distribution in which the weight on each of the two Gaussian models is the unconditional probability of being in its respective state. These probabilities make up the stationary distribution, π, which is the vector solving the equation π = π**P**, where **P** is the probability transition matrix. As briefly mentioned in Part I, this can be found using the method described here: π = (1/Σᵢ *e*ᵢ)**e**, in which **e** is the eigenvector of **P**ᵀ corresponding to an eigenvalue of 1, and *e*ᵢ is the i-th element of **e**. The overall distribution for our observations is then Y ~ π₀*N*(μ₀, σ₀²) + π₁*N*(μ₁, σ₁²). We plot this distribution and the component distributions on top of a histogram of the log-space annual flows below.

```python
from scipy import stats as ss

def plotDistribution(Q, mus, sigmas, P, filename):
    # calculate stationary distribution
    eigenvals, eigenvecs = np.linalg.eig(np.transpose(P))
    one_eigval = np.argmin(np.abs(eigenvals - 1))
    pi = eigenvecs[:, one_eigval] / np.sum(eigenvecs[:, one_eigval])

    x_0 = np.linspace(mus[0] - 4*sigmas[0], mus[0] + 4*sigmas[0], 10000)
    fx_0 = pi[0] * ss.norm.pdf(x_0, mus[0], sigmas[0])

    x_1 = np.linspace(mus[1] - 4*sigmas[1], mus[1] + 4*sigmas[1], 10000)
    fx_1 = pi[1] * ss.norm.pdf(x_1, mus[1], sigmas[1])

    x = np.linspace(mus[0] - 4*sigmas[0], mus[1] + 4*sigmas[1], 10000)
    fx = pi[0] * ss.norm.pdf(x, mus[0], sigmas[0]) + \
        pi[1] * ss.norm.pdf(x, mus[1], sigmas[1])

    sns.set()
    fig = plt.figure()
    ax = fig.add_subplot(111)
    ax.hist(Q, color='k', alpha=0.5, density=True)
    l1, = ax.plot(x_0, fx_0, c='r', linewidth=2, label='Dry State Distn')
    l2, = ax.plot(x_1, fx_1, c='b', linewidth=2, label='Wet State Distn')
    l3, = ax.plot(x, fx, c='k', linewidth=2, label='Combined State Distn')
    fig.subplots_adjust(bottom=0.15)
    handles, labels = plt.gca().get_legend_handles_labels()
    fig.legend(handles, labels, loc='lower center', ncol=3, frameon=True)
    fig.savefig(filename)
    fig.clf()
    return None

plotDistribution(logQ, mus, sigmas, P, 'MixedGaussianFit_Log.png')
```
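As a sanity check on the eigenvector calculation, the resulting stationary distribution should sum to 1 and satisfy π = π**P**. Here it is verified numerically with the fitted transition matrix:

```python
import numpy as np

P = np.array([[0.6794469,  0.3205531],
              [0.34904974, 0.65095026]])

# pi is the right eigenvector of P-transpose with eigenvalue 1, normalized to sum to 1
eigenvals, eigenvecs = np.linalg.eig(np.transpose(P))
one_eigval = np.argmin(np.abs(eigenvals - 1))
pi = eigenvecs[:, one_eigval] / np.sum(eigenvecs[:, one_eigval])

print(np.allclose(pi @ P, pi))  # True: pi is unchanged by a transition
print(pi)  # roughly [0.521, 0.479]: unconditional probability of dry and wet
```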

Looks like a pretty good fit; a Gaussian HMM seems to be a decent model of log-transformed annual flows in the Colorado River at the Colorado/Utah state line. Hopefully you can find relevant applications for your work too. If so, I’d recommend reading through this hmmlearn tutorial, from which I learned how to do everything I’ve shown here.