This post is a bit off topic for this blog, but I think it should be interesting to our readers. The current procedure used by Federal agencies (FEMA, the Bureau of Reclamation, the USGS, the Army Corps of Engineers, etc.) to assess flood risk at gauged sites is described in Bulletin 17B. That procedure is used to estimate flood risk for things like FEMA flood maps, to set flood insurance rates, and to design riparian structures like levees. Given how important the procedure is, many people are surprised to learn that it has not been updated since 1982, despite improvements in statistical techniques and computational resources, and the additional data that is now available. This post won't speculate as to why change has taken so long; instead, I will briefly describe the old procedures and the proposed updates.

**Bulletin 17B Procedure:**

Dr. Veronica Webster has written an excellent brief history of the evolution of national flood frequency standards in the US, dating back to the 1960s. For this post, we focus on the procedures adopted in 1982. In short, Bulletin 17B recommends fitting the log-Pearson Type III distribution to the annual peak flow series at a gauged location, using the log-space method of moments. In other words, one takes the logarithms of the flood series and then uses the mean, variance, and skew coefficient of the transformed series to fit a Pearson Type III distribution. The Pearson Type III distribution is a shifted gamma distribution. When the population skew coefficient is positive, the distribution is bounded on the low end; when the population skew is negative, the distribution is bounded on the high end. When the population skew is zero, the Pearson Type III distribution reduces to a normal distribution. For those wanting more information, Veronica and Dr. Jery Stedinger have written a nice three-part series on the log-Pearson Type III, starting with a paper on the distribution's characteristics.
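To make the fitting step concrete, here is a minimal sketch of a log-space method-of-moments quantile estimate. The function names are mine, and I use the Wilson-Hilferty approximation for the Pearson Type III frequency factor rather than the Bulletin's K tables, so treat this as an illustration of the idea, not a reference implementation:

```python
import math
from statistics import NormalDist

def sample_moments(x):
    """Mean, standard deviation, and (bias-corrected) skew coefficient."""
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    g = (n / ((n - 1) * (n - 2))) * sum(((v - m) / s) ** 3 for v in x)
    return m, s, g

def lp3_quantile(flows, p, skew=None):
    """Flood quantile with non-exceedance probability p from a log-Pearson
    Type III fit by the log-space method of moments.  Uses the
    Wilson-Hilferty approximation for the frequency factor K."""
    logs = [math.log10(q) for q in flows]
    m, s, g = sample_moments(logs)
    if skew is not None:   # e.g., a weighted sample/regional skew
        g = skew
    z = NormalDist().inv_cdf(p)  # standard normal quantile
    if abs(g) < 1e-8:
        k = z                    # zero skew: normal in log space
    else:
        k = (2.0 / g) * ((1.0 + g * z / 6.0 - g * g / 36.0) ** 3 - 1.0)
    return 10.0 ** (m + k * s)
```

Usage is then `lp3_quantile(annual_peaks, 0.99)` for the 100-year flood, since the 100-year event has a 99% non-exceedance probability in any given year.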

Unfortunately, data are not always well behaved, and the true distribution of floods is certainly not log-Pearson Type III, so the Bulletin takes measures to make the procedure more robust. One such measure is the use of a regional skew coefficient to supplement the sample skew coefficient computed from the transformed flood record. The idea here is that estimating the skew coefficient from short samples is difficult because the sample skew can be noisy, so one instead uses a weighted average of the sample skew and a regional skew. The relative weights are inversely proportional to the mean squared error of each skew estimate, so the more accurate estimate is given more weight. The Bulletin recommends obtaining a regional skew from an average of nearby and hydrologically similar records, from a model of skew based on basin characteristics (like drainage area), or from the National Skew Map (see below), based on a 1974 study.
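The weighting step itself is just an MSE-weighted average. A minimal sketch (function and argument names are mine, not from the Bulletin):

```python
def weighted_skew(g_station, mse_station, g_regional, mse_regional):
    """Weighted skew estimate: each skew is weighted by the mean squared
    error of the *other* estimate, i.e., weights are inversely
    proportional to each estimate's own MSE."""
    return (mse_regional * g_station + mse_station * g_regional) / (
        mse_station + mse_regional)
```

Note the symmetry: with equal mean squared errors the result is a simple average, and as the regional MSE grows the weighted skew approaches the station skew.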

A second concern is that small observations in a flood record might exert undue influence on the analysis and distort the estimates of the likelihood of the large floods of interest. Often such small observations are caused by different hydrologic processes than those that cause the largest floods. In my own experience, I’ve encountered basins wherein the maximum annual discharge is zero, or nearly zero, in some years. The Bulletin recommends censoring such observations, meaning that one removes them from the computation of the sample moments, then applies a probability adjustment to account for their presence. The Bulletin provides an objective outlier detection test to identify potential nuisance observations, but ultimately leaves censoring decisions to the subjective judgement of the analyst. The proposed objective test is the Grubbs-Beck test, which is a 10% significance test of whether the smallest observation in a normal sample is unusually small.

**Updates:**

The Hydrologic Frequency Analysis Work Group (HFAWG) is charged with updating the Bulletin. Their recommendations can be found online, as can a testing report which compares the new methods to the old Bulletin. Bulletin 17**C** is currently being written. Like Bulletin 17B, the new recommendations fit the log-Pearson Type III distribution to the annual maximum series from a gauged site, but with a new fitting technique: the expected moments algorithm (EMA). This method is related to the method of moments estimators previously used, but allows for the incorporation of historical/paleoflood information, censored observations, and regional skew information in a unified, systematic methodology.

Historical information might include non-systematic records of large floods: “The town hall has been there since 1760 and has only been flooded once,” thus providing a threshold flow which has only been crossed once in 256 years. EMA can include that sort of valuable community experience about flooding in a statistically rigorous way! For the case that no historical information is available, no observations are censored, and no regional skew is used, the EMA moment estimators are exactly the same as the Bulletin 17B method of moments estimators. See Dr. Tim Cohn’s website for EMA software.
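One way to picture how EMA accommodates such records is that every year becomes an interval on peak flow rather than a point value. The sketch below is my own illustrative encoding (it is not the input format of any particular EMA software): gauged years are exact, the one historical flood is known only to exceed the perception threshold, and the remaining historical years are known only to fall below it.

```python
import math

def ema_intervals(systematic_peaks, historical_span_years, threshold,
                  historical_floods):
    """Encode a flood record as (lower, upper) flow intervals, in the
    spirit of EMA's treatment of censored and historical data.

    systematic_peaks:      exact annual peaks from the gauge -> (q, q)
    historical_span_years: length of the pre-gauge historical period
    threshold:             perception threshold (e.g., town-hall flood stage)
    historical_floods:     known exceedances -> (threshold, inf)
    All other historical years are only known to be below the
    threshold -> (0, threshold).
    """
    intervals = [(q, q) for q in systematic_peaks]
    intervals += [(threshold, math.inf) for _ in historical_floods]
    n_below = historical_span_years - len(historical_floods)
    intervals += [(0.0, threshold)] * n_below
    return intervals
```

The town-hall example then becomes 256 historical intervals, 255 of which are censored below the threshold, alongside the exact gauged record.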

The EMA methodology also comes with corrected quantile confidence intervals (confidence intervals for the 100-year flood, etc.), which are more accurate than those used in Bulletin 17B.

Another update to the Bulletin involves the identification of outlier observations, which are now termed Potentially Influential Low Flows (PILFs). A concern with the old detection test was that it rarely identified multiple outliers, even when several very small observations were present. In fact, the Grubbs-Beck test, as used in the Bulletin, is designed to test only the smallest observation, not the second, third, or kth smallest. Instead, the new Bulletin adopts a generalization of the Grubbs-Beck test designed to test for multiple PILFs (see this proceedings, full paper to come). The new test identifies PILFs more often and will identify more PILFs than the previous test, but we’ve found that use of a reasonable outlier test actually improves quantile efficiency when fitting log-Pearson Type III data with EMA (again, full paper in review).
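For reference, here is a sketch of the classic single-low-outlier screen that the new test generalizes. The `K_n` formula is the polynomial approximation to the 10%-significance critical values that appears in the Bulletin 17B literature (valid for roughly 10 to 149 observations); function names are mine:

```python
import math

def grubbs_beck_threshold(log_flows):
    """Approximate 10%-level Grubbs-Beck low-outlier threshold in log
    space: mean - K_n * standard deviation, with K_n from the polynomial
    approximation to the one-sided critical values."""
    n = len(log_flows)
    m = sum(log_flows) / n
    s = math.sqrt(sum((v - m) ** 2 for v in log_flows) / (n - 1))
    k_n = -0.9043 + 3.345 * math.sqrt(math.log10(n)) - 0.4046 * math.log10(n)
    return m - k_n * s

def low_outliers(flows):
    """Flows whose log falls below the Grubbs-Beck threshold."""
    logs = [math.log10(q) for q in flows]
    t = grubbs_beck_threshold(logs)
    return [q for q in flows if math.log10(q) < t]
```

Because the threshold is computed once from the full sample, a cluster of several small observations inflates the sample variance and can mask all but the very smallest of them, which is exactly the motivation for the multiple-PILF generalization.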

The final major revision is the retirement of the skew map (see above) and the adoption of language recommending more modern techniques, like Bayesian generalized least squares (BGLS). In fact, Dr. Andrea Veilleux, along with her colleagues at the USGS, has been busy using iterations of BGLS to generate models of regional skew across the country, including California, Iowa, the Southeast, the Northwest, Missouri, Vermont, Alaska, Arizona, and California rainfall floods. My master’s work was in this area, and I’d be happy to write further on Bayesian regression techniques for hydrologic data, if folks are interested!

If folks are interested in software that implements the new techniques, the USGS has put out a package with good documentation.

That’s it for now!

Jon