Hydro-climatological variables often exhibit long-term persistence caused by regime-shifting behavior in the climate, such as the El Niño-Southern Oscillation (ENSO). One popular way of modeling this long-term persistence is with hidden Markov models (HMMs) [Thyer and Kuczera, 2000; Akintug and Rasmussen, 2005; Bracken et al., 2014]. What is an HMM? Recall from my five blog posts on weather generators that the occurrence of precipitation is often modeled by a (first order) Markov model, in which the probability of rain on a given day depends only on whether or not it rained on the previous day. A (first order) hidden Markov model is similar in that the climate “state” (e.g., wet or dry) at a particular time step depends only on the state from the previous time step, but the state in this case is “hidden,” i.e. not observable. Instead, we only observe a random variable (discrete or continuous) that was generated under a particular state, but we don’t know what that state was.

For example, imagine you are a doctor trying to diagnose when an individual has the flu. On any given day, this person is in one of two states: sick or healthy. These states are likely to exhibit great persistence; when the person gets the flu, he/she will likely have it for several days or weeks, and when he/she is healthy, he/she will likely stay healthy for months. However, suppose you don’t have the ability to test the individual for the flu virus and can only observe his/her temperature. Different (overlapping) distributions of body temperatures may be observed depending on whether this person is sick or healthy, but the state itself is not observed. In this case, the person’s temperature can be modeled by an HMM.

So why are HMMs useful for describing hydro-climatological variables? Let’s go back to the example of ENSO. Maybe El Niño years in a particular basin tend to be wetter than La Niña years. Normally we can observe whether or not it is an El Niño year based on SST anomalies in the tropical Pacific, but suppose we only have paleodata of tree ring widths. We can infer from the tree ring data (with some error) what the total precipitation might have been in each year of the tree’s life, but we may not know what the SST anomalies were those years. Or even if we do know the SST anomalies, maybe there is another more predictive regime-shifting teleconnection we haven’t yet discovered. In either case, we can model the total annual precipitation with an HMM.

What is the benefit of modeling precipitation in these cases with an HMM as opposed to, say, an autoregressive model? Well, often the year-to-year correlation of annual precipitation may not actually be that high, yet several consecutive wet or consecutive dry years are observed [Bracken et al., 2014]. Furthermore, paleodata suggest that precipitation often exhibits greater persistence (e.g., megadroughts) than autoregressive models would predict [Ault et al., 2013; Ault et al., 2014]. This is where HMMs may come in handy.
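To see what this persistence looks like, here is a minimal sketch that simulates annual precipitation from a hypothetical two-state (dry/wet) Gaussian HMM; all parameter values are purely illustrative, chosen only to show multi-year runs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical transition matrix: each state strongly persists.
P = np.array([[0.9, 0.1],   # dry -> dry, dry -> wet
              [0.2, 0.8]])  # wet -> dry, wet -> wet
mus = np.array([40.0, 70.0])    # illustrative mean annual precip (cm) in dry/wet states
sigmas = np.array([5.0, 8.0])   # illustrative standard deviations

T = 100
states = np.zeros(T, dtype=int)
for t in range(1, T):
    # Next state depends only on the current state (first-order Markov).
    states[t] = rng.choice(2, p=P[states[t - 1]])

# Observations are drawn from the Gaussian belonging to each year's hidden state.
precip = rng.normal(mus[states], sigmas[states])

# Compute run lengths: persistent states produce multi-year wet and dry spells
# even though each year's precipitation is conditionally independent given the state.
run_lengths = []
run = 1
for t in range(1, T):
    if states[t] == states[t - 1]:
        run += 1
    else:
        run_lengths.append(run)
        run = 1
run_lengths.append(run)
```

With persistent transition probabilities like these, the simulated series tends to show long stretches of wet and dry years even when the lag-1 correlation of the observations themselves is modest.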

Here I will explain how to fit HMMs generally, and in Part II I will show how to apply these methods using the Python package hmmlearn. To understand how to fit HMMs, we first need to define some notation. Let *Y_t* be the observed variable at time *t* (e.g., annual streamflow). The distribution of *Y_t* depends on the state at time *t*, *X_t* (e.g., wet or dry). Let’s assume for simplicity that our observations can be modeled by Gaussian distributions. Then f(*Y_t* | *X_t* = *i*) ~ N(*μ_i*, *σ_i^2*) and f(*Y_t* | *X_t* = *j*) ~ N(*μ_j*, *σ_j^2*) for a two-state HMM. The state at time *t*, *X_t*, depends on the state at the previous time step, *X_{t-1}*. Let **P** be the state transition matrix, where each element *p_{ij}* represents the probability of transitioning from state *i* at time *t* to state *j* at time *t+1*, i.e. *p_{ij}* = P(*X_{t+1}* = *j* | *X_t* = *i*). **P** is an *n* x *n* matrix, where *n* is the number of states (e.g., 2 for wet and dry). In all Markov models (hidden or not), the unconditional probability of being in each state, π, satisfies the equation π = π**P**, where π is a 1 x *n* vector in which each element *π_i* represents the unconditional probability of being in state *i*, i.e. *π_i* = P(*X_t* = *i*). π is also called the stationary distribution and can be calculated from **P** as described here. Since we have no prior information on which to condition the first set of observations, we assume the initial probability of being in each state is the stationary distribution.
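As a quick sketch of that calculation, the stationary distribution π can be computed as the left eigenvector of **P** associated with eigenvalue 1 (equivalently, an eigenvector of **P** transposed), normalized to sum to 1. The transition matrix below is hypothetical:

```python
import numpy as np

# Hypothetical 2-state transition matrix (each row sums to 1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# pi = pi @ P means pi is a left eigenvector of P with eigenvalue 1,
# i.e. an (ordinary) eigenvector of P.T with eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))  # locate the eigenvalue closest to 1
pi = np.real(eigvecs[:, idx])
pi = pi / pi.sum()                      # normalize so the probabilities sum to 1

print(pi)  # for this P, pi = [2/3, 1/3]
```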

In fitting a two-state Gaussian HMM, we therefore need to estimate the following vector of parameters: *θ* = [*μ_0*, *σ_0*, *μ_1*, *σ_1*, *p_{00}*, *p_{11}*]. Note *p_{01}* = 1 – *p_{00}* and *p_{10}* = 1 – *p_{11}*. The most common approach to estimating these parameters is through the Baum-Welch algorithm, an application of Expectation-Maximization built off of the forward-backward algorithm. The first step of this process is to set initial estimates for each of the parameters. These estimates can be random or based on an informed prior. We then begin with the forward step, which computes the joint probability of observing the first *t* observations and ending up in state *i* at time *t*, given the initial parameter estimates: P(*X_t* = *i*, *Y_1* = *y_1*, *Y_2* = *y_2*, …, *Y_t* = *y_t* | *θ*). This is computed for all *t* ϵ {1, …, *T*}. Then in the backward step, the conditional probability of observing the remaining observations after time *t*, given the state at time *t*, is computed: P(*Y_{t+1}* = *y_{t+1}*, …, *Y_T* = *y_T* | *X_t* = *i*, *θ*). Using Bayes’ theorem, it can be shown that the product of the forward and backward probabilities is proportional to the probability of ending up in state *i* at time *t* given all of the observations, i.e. P(*X_t* = *i* | *Y_1* = *y_1*, …, *Y_T* = *y_T*, *θ*). This is derived below:

1) P(*X_t* = *i* | *Y_1* = *y_1*, …, *Y_T* = *y_T*, *θ*) = P(*Y_1* = *y_1*, …, *Y_T* = *y_T* | *X_t* = *i*, *θ*) P(*X_t* = *i* | *θ*) / P(*Y_1* = *y_1*, …, *Y_T* = *y_T* | *θ*)

2) = P(*Y_1* = *y_1*, …, *Y_t* = *y_t* | *X_t* = *i*, *θ*) P(*Y_{t+1}* = *y_{t+1}*, …, *Y_T* = *y_T* | *X_t* = *i*, *θ*) P(*X_t* = *i* | *θ*) / P(*Y_1* = *y_1*, …, *Y_T* = *y_T* | *θ*)

3) = P(*X_t* = *i*, *Y_1* = *y_1*, …, *Y_t* = *y_t* | *θ*) P(*Y_{t+1}* = *y_{t+1}*, …, *Y_T* = *y_T* | *X_t* = *i*, *θ*) / P(*Y_1* = *y_1*, …, *Y_T* = *y_T* | *θ*)

4) ∝ P(*X_t* = *i*, *Y_1* = *y_1*, …, *Y_t* = *y_t* | *θ*) P(*Y_{t+1}* = *y_{t+1}*, …, *Y_T* = *y_T* | *X_t* = *i*, *θ*)

The first equation is Bayes’ Theorem. The second equation is derived from the conditional independence of the observations up to time *t* (*Y_1*, *Y_2*, …, *Y_t*) and the observations after time *t* (*Y_{t+1}*, *Y_{t+2}*, …, *Y_T*), given the state at time *t* (*X_t*). The third equation follows from the definition of conditional probability, and the fourth recognizes the denominator as a normalizing constant.

Why do we care about the probability of ending up in state *i* at time *t* given all of the observations (the left hand side of the above equations)? In fitting an HMM, our goal is to find a set of parameters, *θ*, that maximize this probability, i.e. the likelihood function of the state trajectories given our observations. This is therefore equivalent to maximizing the product of the forward and backward probabilities. We can maximize this product using Expectation-Maximization, a two-step process for maximum likelihood estimation when the likelihood function cannot be computed directly, for example, because some of its variables are hidden, as in an HMM. The first step is to calculate the expected value of the log likelihood function with respect to the conditional distribution of *X* given *Y* and *θ* (the left hand side of the above equations, or proportionally, the right hand side of equation 4). The second step is to find the parameters that maximize this function. These parameter estimates are then used to re-run the forward-backward algorithm, and the process repeats iteratively until convergence or a specified number of iterations. It is important to note that the maximization step is a local optimization around the current best estimate of *θ*. Hence, the Baum-Welch algorithm should be run multiple times with different initial parameter estimates to increase the chances of finding the global optimum.
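As a rough sketch of the expectation step described above (not the full Baum-Welch parameter update), a scaled forward-backward pass for a Gaussian HMM might look like the following. The function names and the scaling scheme are my own; the scaling factors guard against numerical underflow on long series:

```python
import numpy as np

def gauss_pdf(y, mu, sigma):
    """Gaussian density, evaluated elementwise with broadcasting."""
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def forward_backward(y, P, pi, mus, sigmas):
    """Scaled forward-backward pass. Returns gamma[t, i] = P(X_t = i | y_1..y_T, theta),
    i.e. the normalized product of the forward and backward probabilities,
    along with the log-likelihood of the observations."""
    T, n = len(y), len(pi)
    b = gauss_pdf(y[:, None], mus[None, :], sigmas[None, :])  # emission probs, shape (T, n)

    # Forward step: joint prob of the first t observations and state i at time t,
    # rescaled at each step so alpha sums to 1.
    alpha = np.zeros((T, n))
    c = np.zeros(T)                      # scaling factors (avoid underflow)
    alpha[0] = pi * b[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ P) * b[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward step: prob of the remaining observations given state i at time t,
    # rescaled by the same factors.
    beta = np.zeros((T, n))
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = (P @ (b[t + 1] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta                 # proportional to forward * backward
    gamma /= gamma.sum(axis=1, keepdims=True)
    return gamma, np.log(c).sum()        # log-likelihood is the sum of log scaling factors
```

In the M-step (omitted here), the posteriors `gamma` would be used to re-estimate the means, variances, and transition probabilities before repeating the pass.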

Another interesting question beyond fitting HMMs to observations is diagnosing which states the observations were likely to have come from given the estimated parameters. This is often performed using the Viterbi algorithm, which employs dynamic programming (DP) to find the most likely state trajectory. In this case, the “decision variables” of the DP problem are the states at each time step, *X_t*, and the “future value function” being optimized is the probability of observing the true trajectory, (*Y_1*, …, *Y_T*), given the alternative possible state trajectories. For example, let the probability that the first state was *k* be *V_{1,k}*. Then *V_{1,k}* = P(*Y_1* = *y_1* | *X_1* = *k*) *π_k*. For future time steps, *V_{t,k}* = P(*Y_t* = *y_t* | *X_t* = *k*) max_i(*p_{ik}* *V_{t-1,i}*), where the maximum is taken over the possible states *i* at the previous time step. Thus, the Viterbi algorithm finds the state trajectory (*X_1*, …, *X_T*) maximizing *V_{T,k}*.

Now that you know how HMMs are fit using the Baum-Welch algorithm and decoded using the Viterbi algorithm, read Part II to see how to perform these steps in practice in Python!
