
1987 | Book

Hydrologic Frequency Modeling

Proceedings of the International Symposium on Flood Frequency and Risk Analyses, 14–17 May 1986, Louisiana State University, Baton Rouge, U.S.A.

Editor: Vijay P. Singh

Publisher: Springer Netherlands


About this book

Floods constitute a persistent and serious problem throughout the United States and many other parts of the world. They are responsible for losses amounting to billions of dollars and scores of deaths annually. Virtually all parts of the nation--coastal, mountainous and rural--are affected by them. Two aspects of the problem of flooding that have long been topics of scientific inquiry are flood frequency and risk analyses. Many new and improved techniques have recently been developed for performing these analyses. Nevertheless, actual experience points out that a so-called 100-year flood, instead of being encountered on the average once in one hundred years, may be encountered as often as once in 25 years. It is therefore appropriate to pause and ask where we are, where we are going and where we ought to be going with regard to the technology of flood frequency and risk analyses. One way to address these questions is to provide a forum where people from all quarters of the world can assemble, discuss and share their experience and expertise pertaining to flood frequency and risk analyses. This is what constituted the motivation for organizing the International Symposium on Flood Frequency and Risk Analyses held May 14-17, 1986, at Louisiana State University, Baton Rouge, Louisiana.

Table of Contents

Frontmatter

Flood Frequency Analysis

Hydrological and Engineering Relevance of Flood Frequency Analysis

The mainstream of contemporary flood frequency analysis (FFA) qualifies neither as a hydrological science nor as an engineering discipline. It is not a hydrological science because it does not analyze the frequencies of floods — it merely postulates that flood records are random samples from simple probability distributions. It is not an engineering discipline because it pays most attention to, and exerts most energy on, formal polishing of concepts which are crude by their very nature and whose basic assumptions, which have an overriding influence on design parameters, are arbitrary and are dictated by expediency rather than scientific knowledge. At best, much of flood frequency analysis is just a part of small sample theory in disguise, the term “flood” being used merely as a name for the numbers employed; at worst, it is a pretentious game draining resources both from hydrology and engineering research, and a cheap opportunity to satisfy the need of academics to publish papers and supply easy topics for graduate students who know little beyond elementary statistics, probability theory, and computer programming.

V. Klemeš
Statistical Flood Frequency Analysis — An Overview

The paper critically examines various issues involved in statistical flood frequency analysis, with emphasis on areas where further research is required. It is highlighted that the Water Resources Council's modified report of 1981 largely reiterated its earlier findings, thereby ignoring the criticism levelled by various researchers at its recommendations. In the light of recent studies, it is advocated that the U.S. WRC recommendations need re-evaluation.

Arun Kumar, Subhash Chander
Deterministic Nature of Flood Frequencies: Some Observations

The subject of the paper is the deterministic role that physical characteristics of watersheds play in the transformation of probabilistic rainfall inputs into probabilistic peak runoffs. The methods of investigation were an analytical solution of kinematic wave flow over a single runoff plane and experimental hydrographs from a laboratory catchment. Results indicate that the dispersion and skewness of peak runoff series increase with increasing resistance to surface runoff and with decreasing bifurcation ratio, mean basin slope, and duration of rainfall. The results also indicate that when the rainfall intensity distribution is of a two-parameter type, such as the normal or gamma distribution, with a rigid relationship between its coefficients of skewness and variation, the peak runoff distribution does not retain that rigidity.

Ivan Muzik

Empirical Flood Frequency Models

Review of Statistical Models for Flood Frequency Estimation

The relation between flood frequency estimates and the economic decision-making process is briefly discussed, and selected developments in frequency analysis are traced in tabular form. The main types of frequency analysis procedures and models are surveyed, including at-site, at-site/regional and regional-only cases. The at-site/regional cases include the Index Flood, Bayesian and TCEV methods. Criteria for selecting a frequency analysis procedure are discussed under two headings, descriptive ability and predictive ability. The former relates to a model's ability to preserve the statistics of observed flood series, while the latter relates to the ability to estimate quantiles robustly. The relative merits of different types of flood quantile estimators are discussed, beginning with at-site estimators and the search for a robust at-site estimator. This is followed by the principal results available about at-site/regional quantile estimators. Further topics considered are regional homogeneity of flood statistical behaviour, the effect of spatial and temporal interdependence of flood magnitudes, and the detection and use of historical floods.

Conleth Cunnane
Estimation of Flood Frequencies by a Nonparametric Density Procedure

Annual flood risk is usually estimated by fitting an a priori assumed probability distribution function to the observed annual extreme peak series (parametric method). The main shortcomings of such a procedure are: the selection of a distribution, reliability of parameters (especially for skewed data with short record length), inability to analyze multimodal distributions resulting from flooding due to snowmelt versus thunderstorm activity, and treatment of outliers. A new methodology has been developed based on nonparametric concepts for estimation of the probability density function. The nonparametric method overcomes some of the limitations inherent in the parametric method, and is developed on the basis of a few very mild assumptions. Based on numerical results using real data and Monte Carlo simulation, it was found that the nonparametric flood estimates are accurate and suitable for multimodal densities.
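The nonparametric idea above can be sketched with a fixed-bandwidth Gaussian kernel; this is not the authors' exact variable-kernel procedure, and the peak series and bandwidth below are invented for illustration:

```python
import math

def gaussian_kde(sample, h):
    """Gaussian kernel density estimate with fixed bandwidth h."""
    n = len(sample)
    c = n * h * math.sqrt(2.0 * math.pi)
    def f(x):
        return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in sample) / c
    return f

def exceedance_prob(f, q, hi, steps=2000):
    """P(X > q) by trapezoidal integration of the density over [q, hi]."""
    dx = (hi - q) / steps
    ys = [f(q + i * dx) for i in range(steps + 1)]
    return dx * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Invented annual peak series (m^3/s) mixing snowmelt and thunderstorm floods
peaks = [120, 135, 150, 160, 180, 210, 390, 410, 430, 460]
f = gaussian_kde(peaks, h=40.0)
p = exceedance_prob(f, 300.0, hi=1000.0)   # risk of exceeding 300 m^3/s
```

Because the density is built directly from the sample, a bimodal series like this one is represented without forcing a unimodal parametric form on it.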

Kaz Adamowski, Charles Labatiuk
Statistical Models for Flood Frequency Estimation of the Mississippi and Yazoo Rivers

This paper investigates the application of the theory of extreme values to predict the magnitudes of annual floods for the Mississippi and Yazoo rivers, based on maximum annual levels of the rivers between 1925 and 1975. First, the data were tested to establish an initial probability distribution of the yearly extreme values. It was found by means of goodness-of-fit tests that both the normal distribution and the log-normal distribution adequately describe the extreme probability distributions for both rivers. Then, using techniques developed by Cramér, the expectations and variances of the m-th extremes were calculated. From the estimates of these two parameters, probability predictions of the level of floods for any return period may be obtained. The paper presents predicted values of floods for return periods of 100, 200, and 300 years.

Edward Nissan
Partial Duration Series with Log-Normally Distributed Peak Values

The T-year estimate in a partial duration (PD) series with log-normally distributed peak exceedances is analysed for parameter estimation by the maximum likelihood method and the method of moments, respectively. Approximate expressions for the bias and variance of the T-year estimate are developed, and it is found that the sample variance of the log-normal distribution parameters is the main source of uncertainty of the T-year estimate, whereas the sample variance of the intensity of the process is of minor importance. Providing smaller bias (in absolute terms) and smaller variance, the maximum likelihood method is found to be superior to the method of moments. The two methods produce bias of different signs: positive bias in the case of maximum likelihood estimation and negative in the case of estimation by the method of moments. The methodology is illustrated by an application to significant wave heights, where the purpose of the investigation was to assess the 50-year significant wave height, together with the standard deviation of the estimate, on the basis of 17 years of hindcast wave data.
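For the PD-series setup analysed above (Poisson exceedance arrivals, log-normal peak exceedances), the T-year estimate has the standard closed form x_T = exp(mu + sigma * z) with z the normal quantile at 1 - 1/(lam*T); the parameter values below are invented, and the bias/variance expressions of the paper are not reproduced:

```python
import math
from statistics import NormalDist

def t_year_estimate(lam, mu, sigma, T):
    """T-year event in a partial duration series: Poisson exceedance
    arrivals at rate lam (events/year), log-normally distributed peak
    exceedances with log-space parameters mu, sigma.
    Solves lam * (1 - F(x_T)) = 1/T for x_T."""
    z = NormalDist().inv_cdf(1.0 - 1.0 / (lam * T))
    return math.exp(mu + sigma * z)

# Invented parameters: on average 3 exceedances per year
x100 = t_year_estimate(lam=3.0, mu=4.0, sigma=0.5, T=100)
```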

Dan Rosbjerg
Estimation of a Prior Distribution for the Bayesian Estimation of Pearson III Skews

The Pearson III distribution is often used as a model for the annual maximum streamflow process. Estimates of flood quantiles and of the probabilities of exceedance of flood magnitudes, using this distribution and the small samples usually available, are rather biased. A number of investigators have shown that the sampling distributions of, and the bias in, the estimates of the Pearson III parameters, and consequently the flood quantiles and probabilities, depend only on the magnitude of the skew γ of the distribution and the sample size n. The importance of unbiased estimates of skew is thus recognized. The development of such unbiased estimators, using regional and/or site information, has met with limited success. This has contributed to the controversy surrounding the use of the Pearson III distribution to model the flood frequency process. The usefulness of Bayesian estimation techniques in combining regional and site-specific information has been recognized and stressed in the literature. One problem faced by the Bayesian estimator is the specification of an appropriate prior distribution to describe skews in a region of interest; the lack of this development has inhibited the use of Bayesian techniques. This paper presents a general framework for the estimation of a prior distribution for skews for a region where some data on annual maximum streamflows are available. Sampling distributions ϕ1n(g∣γ), for sample skew g estimated from a sample of size n, where the parent population skew γ is known, are available in the literature for a number of values of n and γ. Sampling distributions ϕ2n(g), for sample skew g estimated from a sample of size n, using flood flow data from regional stations, where the parent population skew γ is unknown, can be derived for a number of sample sizes n through resampling and subsampling techniques.
The objective is then the determination of a prior distribution π(γ) that yields the observed sampling distributions ϕ2n(g) upon convolution with the sampling distributions ϕ1n(g∣γ), over some admissible set for γ and for all sample sizes n. A discrete set of population skews γk (e.g. -6(0.5)6) is considered. The prior distribution π(γ) is then approximated as a set of probabilities pk associated with each of the admissible γk. The strategy for the minimum-error estimation of π(γ) then entails its determination through the solution of a constrained least squares problem, where the objective is to minimize the sum of squared errors in predicting the quantiles of the sampling distributions ϕ2n(g) through the convolution of π(γ) and the quantiles of the sampling distributions ϕ1n(g∣γ), over all sample sizes n for which data are available. The prior probability distribution derived for regional skew can be used to develop unbiased Bayesian estimators of skew using site data. A framework is thus provided for the use of regional information in developing at-site skew estimators. The framework can be extended to the estimation of flood quantiles and probabilities with uncertain skew. The techniques developed are applied to data from 314 U.S. gaging stations that have no diversion or regulation of flow and have at least 60 years of annual maximum data.

Upmanu Lall
Comparison of Flood-Frequency Estimates Based on Observed and Model-Generated Peak Flows

Comparisons were made of flood-frequency estimates from a rainfall/runoff model and from observed annual peak data to determine if these estimates exhibited the same statistical variability. The analysis was based on data at 173 stations in 10 States where there were 20 or more years of observed annual peak data and a calibrated rainfall/runoff model. Paired t-tests and analysis-of-variance techniques indicated that there were statistically significant differences between the two sets of flood-frequency estimates. These differences existed for different recurrence-interval floods (2-, 10-, 50-, and 100-year), different watershed sizes, and different States. They appeared to be related primarily to three infiltration parameters in the rainfall/runoff model and, to a lesser extent, to the observed record length at the station. Furthermore, the ratio of the model-generated 100-year flood discharge to the 2-year flood discharge was 3.98, whereas the same ratio based on observed peak flows was 5.72. Additional research is needed to identify what factors cause flood estimates from the rainfall/runoff model to exhibit less variability than flood estimates based on observed peak-flow records.
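The paired t-test used in the comparison above can be sketched as follows; the station values are invented, not the study's data:

```python
import math

def paired_t(a, b):
    """Paired t statistic and degrees of freedom for differences a_i - b_i."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)   # sample variance of d
    return mean / math.sqrt(var / n), n - 1

# Invented 100-year flood estimates (m^3/s) at five stations
observed = [5200.0, 3100.0, 8800.0, 1500.0, 4400.0]
modelled = [4100.0, 2600.0, 7100.0, 1300.0, 3500.0]
t, df = paired_t(observed, modelled)   # t > 0: observed estimates run larger
```

The statistic is then compared with a Student-t critical value at df degrees of freedom to judge whether the two sets of estimates differ significantly.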

Wilbert O. Thomas Jr.
Flood Frequency Analysis Using Box-Cox Transformation Based Gumbel EV-I Distribution

The Box-Cox transformation is a powerful procedure for transforming a data series to near-normality and has been employed in flood frequency analysis. In this study, an attempt has been made to develop a methodology for transforming the annual peak flood series to the form of the Gumbel EV-I distribution using the Box-Cox transformation. The exponent λ of the Box-Cox transformation has been estimated by trial and error using the method of maximum likelihood (MML) and the method of probability weighted moments (MPWM), so as to obtain nearly the same estimates of the log-likelihood function by both methods. This methodology has been applied to 1000 samples of various sizes of randomly generated synthetic “flood” series which follow the Pearson type III distribution. The statistical estimates of the reduced variates of the Gumbel EV-I distributed transformed series, viz. the mean and standard deviation, have been found to be close to 0.5772 and 1.2825, as required from theoretical considerations, thus verifying the applicability of the proposed methodology for transformation to the Gumbel EV-I distribution.
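The two constants cited above are the theoretical mean (Euler's constant, ≈0.5772) and standard deviation (π/√6 ≈ 1.2825) of the Gumbel reduced variate. A minimal numerical check of those constants, together with the Box-Cox transform itself, might look like this; the trial-and-error search for λ by matching MML and MPWM log-likelihoods is not reproduced:

```python
import math
import random

def box_cox(x, lam):
    """Box-Cox transform; lam = 0 reduces to the natural logarithm."""
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

EULER = 0.5772156649                 # theoretical mean of the reduced variate
SD_EV1 = math.pi / math.sqrt(6.0)    # ~1.2825, its standard deviation

# Draw from a Gumbel (EV-I) with location u and scale a via the inverse CDF,
# then check that the reduced variates (y - u)/a reproduce the two constants.
random.seed(1)
u, a = 10.0, 2.0
ys = [u - a * math.log(-math.log(random.random())) for _ in range(20000)]
red = [(y - u) / a for y in ys]
mean = sum(red) / len(red)
sd = math.sqrt(sum((r - mean) ** 2 for r in red) / (len(red) - 1))
```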

M. Perumal, R. D. Singh, S. M. Seth
Flood-Frequency Analysis with Historical Data in China

The U.S. Geological Survey has participated with the People's Republic of China since 1981 in a scientific and technical exchange of hydrologic data and analytical techniques. The Chinese have been documenting historical flood elevations at thousands of sites in China since about 1950. Information has been found at a few sites to establish flood elevations for extraordinary events occurring as much as 2,000 years ago. Peak discharges for many of the historical flood events have been quantified by using stage-discharge relations and hydraulic models. The Chinese have used the historical data extensively in their analyses of flood frequency for design purposes. Frequency curves are developed by a combination of analytical and graphical methods, with the Pearson Type III distribution and Weibull plotting positions. The upper parts of the curves are strongly influenced by the historical data.
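The Weibull plotting position mentioned above assigns exceedance probability m/(n+1) to the m-th largest flood in an n-year record. A simplified sketch of how historical floods extend the effective record length (the discharges are invented, and real practice weighs historical and systematic data more carefully):

```python
def weibull_positions(peaks, period_years=None):
    """Weibull plotting positions m/(n+1) for the ranked floods; pass
    period_years when the peaks are the largest known floods of a longer
    historical period rather than a complete gauged record."""
    n = period_years if period_years is not None else len(peaks)
    ranked = sorted(peaks, reverse=True)
    return [(q, m / (n + 1.0)) for m, q in enumerate(ranked, start=1)]

# Two historical peaks known to be the largest in a 200-year period
hist = weibull_positions([9800.0, 7600.0], period_years=200)
# A short systematic record of four gauged annual peaks
syst = weibull_positions([3100.0, 2500.0, 2200.0, 2000.0])
```

The historical peaks plot far out on the frequency curve (exceedance probability near 1/201), which is why they dominate its upper end.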

Vernon B. Sauer

Mixed Distributions

Development of a Versatile Flood Frequency Methodology and Its Application to Flood Series from Different Countries

Many observed annual flood series exhibit reverse curvature when plotted on lognormal probability paper. The occurrence of reverse curvature may be attributed to factors such as dominance of within-channel or floodplain flow, seasonal variation in flood-producing storm types, variability in antecedent soil moisture and cover conditions, and a mixture of flood populations. The proposed methodology models the probability distribution of an observed flood series as a mixture of the probabilities of two lognormal distributions. Distribution parameters can be significantly biased if the observed flood series contains outliers and inliers. The distributions of outliers and inliers for various sample sizes and at various probability or significance levels have been developed from extensive Monte Carlo experiments. The objective detection and modification of any outliers and inliers in an observed flood series is an integral part of the versatile flood frequency methodology. Results presented for flood series from seven basins in different countries show the versatility and superiority of the proposed methodology.

Krishnan P. Singh
Hydroclimatically-Defined Mixed Distributions in Partial Duration Flood Series

The possibility of mixed distributions in flood series is widely recognized, and attempts to separate records into homogeneous subsets have been encouraged. Climate is often proposed as a source of mixed distributions, but the separation of flood records into climatic subgroups has tended to focus on seasonal divisions or on a partitioning of the flood series into rainfall- and snowmelt-generated events. In this study, a detailed hydroclimatic analysis is used to categorize floods on the basis of the synoptic weather patterns which produced them. This procedure represents the first attempt to classify the flood-generating mechanism responsible for each event. The technique identifies mixed distributions by using physically based information that is independent of the runoff series, rather than by defining subpopulations on the basis of the shape of the parent distribution itself.

Katherine K. Hirschboeck
Mixed Flood Distributions in Wisconsin

Traditional flood frequency analysis is predicated on the assumption that the annual flood series can be considered a sample from a single population. In Wisconsin this is not a valid assumption: Wisconsin floods are of two types, which are hydrologically and statistically distinct. An analysis of the time of occurrence of Wisconsin floods indicates two dominant flood seasons, spring and summer, with late May as the boundary between them. Using May 20 as the date of separation, we constructed parallel spring and summer flood series for all Wisconsin partial flood series with ten or more years of record (29 series). (A seasonal series is the series of largest spring floods from each year of record.) Based on the two-sample Kolmogorov-Smirnov test, the hypothesis that spring and summer floods are identically distributed was rejected for 18 of the 29 pairs of flood series at the 5 percent significance level and for 23 pairs at the 10 percent significance level. We interpret this as convincing evidence that spring and summer floods in Wisconsin generally have distinct statistical distributions. An analysis of runoff data for eight of the 29 partial-series gages demonstrates that the spring and summer floods in Wisconsin are also hydrologically distinct: spring floods show relatively high runoff/precipitation ratios. Future work is needed, both to better understand the hydrology of spring and summer floods and to determine if and how this understanding can improve quantile estimation.
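The two-sample Kolmogorov-Smirnov statistic used above is the maximum vertical distance between the two empirical CDFs; a sketch with invented seasonal series and the standard large-sample critical value:

```python
import bisect
import math

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum vertical
    distance between the two empirical CDFs."""
    a, b = sorted(a), sorted(b)
    def ecdf(s, x):                       # fraction of s that is <= x
        return bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

def ks_critical(n, m, c_alpha=1.358):
    """Asymptotic critical value; c_alpha = 1.358 for alpha = 0.05."""
    return c_alpha * math.sqrt((n + m) / (n * m))

# Invented spring vs. summer annual-maximum series (m^3/s)
spring = [310, 290, 350, 400, 380, 330, 360, 420, 300, 340]
summer = [150, 500, 120, 610, 180, 90, 200, 130, 160, 550]
D = ks_two_sample(spring, summer)
reject = D > ks_critical(len(spring), len(summer))   # distinct at 5 percent?
```

With samples this small the asymptotic critical value is only approximate; exact tables are preferable for n, m near 10.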

Tim Diehl, Kenneth W. Potter
Flood Data, Underlying Distribution, Analysis, and Refinement

Annual flood peak data must conform to certain standards of quality control, sample size, and homogeneity for use in flood-frequency analysis. Sometimes the values for the highest observed floods of an annual flood series are much higher or much lower than expected; these values are designated as outliers and inliers, respectively. When the values for the lowest observed floods are much higher or lower than expected, these are designated inliers and outliers, respectively. Analyses of storms producing high floods perceived as outliers and droughts containing low floods perceived as outliers provide a physical basis for their being outliers. However, these outliers and inliers need to be detected and properly modified to derive unbiased design flood estimates. Test statistics have been developed for objective detection of any outliers/inliers in a given flood series after converting it to a normally distributed series with the power transformation. A design flood estimation methodology is presented.

Krishan P. Singh

Rainfall Frequency Analysis

Very Low Probability Precipitation-Frequency Estimates — A Perspective

Attempts to quantitatively evaluate the level of risk associated with nuclear power plants have led to an increasing desire to assign probabilities to rare precipitation events. Because it was felt that purely statistical estimates of events with return periods greatly in excess of the period of record were susceptible to potentially unacceptable levels of uncertainty, an approach that combined statistical analysis with meteorological interpretation was taken to determine a 24-hr point precipitation amount judged to be associated with a 0.001 annual exceedance probability (1000-yr event). The paper also considers the possibility of using the joint probabilities associated with storm centering, depth-area reduction, and storm intensity to determine the probability of average depths of precipitation over larger areas.

Frank Richards, Rex G. Wescott
SQRT-Exponential Type Distribution of Maximum

The authors present the square-root exponential type distribution of the maximum, or SQRT-ET-max distribution, as the distribution to be applied to the annual maximum series of the total amount (or depth) of a single rainstorm. The presented distribution is theoretically derived and is as simple in expression as conventional ones. It has only two parameters and thus estimates them more stably than distributions with three or more parameters. Furthermore, it is as good as or superior to conventional three-parameter distributions in overall fitness to the frequency distribution of annual maximum depths of single rainstorms and of rainfalls of about 24 hours.
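The SQRT-ET-max distribution is commonly written in the two-parameter form below; a sketch of its CDF and of T-year quantile recovery by bisection, with invented parameter values not taken from the paper:

```python
import math

def sqrt_et_max_cdf(x, k, beta):
    """SQRT-ET-max CDF, commonly written
    F(x) = exp(-k * (1 + sqrt(beta*x)) * exp(-sqrt(beta*x))), x >= 0."""
    s = math.sqrt(beta * x)
    return math.exp(-k * (1.0 + s) * math.exp(-s))

def quantile(p, k, beta, lo=0.0, hi=1.0e6, iters=200):
    """Invert the CDF by bisection; for an annual-maximum series the
    T-year value corresponds to p = 1 - 1/T."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sqrt_et_max_cdf(mid, k, beta) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Invented parameters; 100-year annual-maximum rainstorm depth (mm)
x100 = quantile(1.0 - 1.0 / 100.0, k=50.0, beta=0.5)
```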

Takeharu Etoh, Akira Murota, Masanori Nakanishi
On the Probabilistic Characteristics of Point and Areal Rainfall

The temporal and spatial concentration of heavy rainfall results in differences between point and areally averaged rainfall. The rainfall required for flood control design in a drainage system should be discussed in terms of its areal properties. If the probabilistic relationships between point and areal rainfall are clearly established, the probabilistic characteristics of areal rainfall can be estimated from the probability density function of point rainfall at a key gauging station. From this point of view, the present paper aims to establish the relationship between the probabilistic characteristics of point and areal rainfalls. For this purpose, a simulation technique for the rainfall event over a certain region has been used, after discussing the basic properties of rainfall. It is then demonstrated that a shift of the probability density function of point rainfall towards lower rainfall intensity may give a good approximation to that of areal rainfall. The factors which govern the shift are also discussed.

U. Matsubayashi, F. Takagi
Comparison of Three Methods of Estimating Rainfall Frequency Parameters According to the Duration of Accumulation

The ‘gradex’ method estimates high-return-period discharges by assuming that the two marginal increases in rainfall and flood volumes are equal for return periods greater than a hundred years. This requires an asymptotically exponential decay of the rainfall distribution, the gradex being the gradient of this exponential. Among several statistical relationships satisfying this requirement, the Gumbel distribution is widely acknowledged to properly fit the extreme value distribution of observed daily rainfalls. However, difficulties may arise for small time steps (from less than one hour up to twelve hours). Among the many reasons for these difficulties, seasonal variation of the distribution parameters and the mixing of rainstorms issuing from different weather conditions are the most frequent. Rainfall distributions should therefore be analyzed using data from a homogeneous season. However, the mixing of weather patterns remains whatever the season considered, and it is important to find a way to estimate the gradex parameter even when the experimental distribution of extreme values does not allow a good Gumbel model fit. One solution is to fit a two-component negative exponential distribution to the complete data set. Such a theoretical distribution is asymptotically parallel to a Gumbel distribution, thus providing an estimate of the gradex. In this communication, three gradex estimators are compared from the standpoints of bias and efficiency: the moment and maximum likelihood methods used to fit a Gumbel distribution to an extreme value data set, and the moment method used to fit a two-component exponential distribution to the complete data set of observed rainfalls. This comparison can be used to determine the most suitable estimator for subsequent regional regression and mapping of rainfall frequency parameters (see: linear relations between rainfall and morphometric parameters, by Slimani and Obled, this symposium).

M. Slimani, T. Lebel
Frequency Analysis of Australian Rainfall Data as used for Flood Analysis and Design

The procedures used to obtain accurate, temporally and spatially consistent intensity-frequency-duration (IFD) design rainfall data for Australia are discussed. These IFD design curves are used by design engineers and scientists as input to a wide range of design flood models and other environmental studies. The basic annual maximum rainfall data for durations of 6 minutes to 72 hours are fitted using a log-Pearson Type III distribution with a small positive regional skewness of up to 0.7; most of Australia is close to zero skewness, i.e. a lognormal distribution. Due to the sparsity of recording raingauges, various regression techniques were used to estimate short-duration data at daily-read raingauge sites. These procedures are discussed, along with the production of six master charts of rainfall intensity for various durations and average recurrence intervals (ARI) covering all of Australia. From these six charts, plus a map of regionalized skewness, a full set of IFD curves can be obtained for any location using appropriate extrapolation and interpolation procedures. The IFD design curves extend from six minutes to 72 hours and ARIs from one year to 100 years. Comparisons are made with USA work in this field. The paper also outlines the steps taken to automate this development work and produce CDIRS (Computerized Design IFD Rainfall System). CDIRS allows automatic determination of a full set of IFD curves (also in tabular form) for any location simply by supplying its latitude and longitude.

R. P. Canterford, N. R. Pescod, H. J. Pearce, L. H. Turner, R. J. Atkinson
Stochastic Formulation of Storm Pattern and Rainfall Intensity-Duration Curve for Design Flood

A stochastic single-storm pattern, which preserves the stochastic properties of actual storm rainfall, is theoretically derived from Freund's bivariate probability density function. Two typical design storm patterns, namely the last-peaked and the central-peaked types, are defined by three parameters: the reduced variate yp of the peak rainfall intensity, the autocorrelation index k related to the autocorrelation coefficient of the rainfall intensities, and the conditional probability F. Integration of the given stochastic design hyetograph gives a new ‘conditional probability’ intensity-duration formula. Furthermore, a practical method for estimating the three parameters k, F, and yp is clearly shown. The conditional probability intensity-duration curve is demonstrated by using actual hourly rainfall data. Design intensities for durations shorter than 1 hr can be easily estimated from the available hourly data.

Michio Hashino
Rainfall Frequency Studies for Central Saudi Arabia

A frequency analysis of rainfall data for six stations in Central Saudi Arabia is presented. The Gumbel type-1 distribution is used to predict design storms for particular durations (10 min to 12 hr) and return periods (2 to 50 yr). Analysis of data for each individual station as well as regional analysis is carried out using annual series. The area under study is divided into two zones by considering rainfall and topographical characteristics, and depth-duration-frequency relationships are derived for each zone. These relationships are recommended for the design of hydraulic structures (e.g. storm sewers and drainage channels) in the area.

Uygur Sendil, Abdin M. A. Salih
Analysis of Flood Occurrence through Characterization of Precipitation Patterns

This study aims to investigate the occurrence of floods by analyzing the monthly precipitation time sequence utilizing an ordinary Kalman filter (OKF) and an adaptive Kalman filter (AKF). The OKF identifies abnormal precipitation periods in the sequence by comparing the observed and average precipitation patterns. The AKF detects changes in the precipitation pattern by associating them with abrupt changes in the parameters of the periodic model of the precipitation time sequence. The 92-year precipitation record at Fukuoka City, Japan shows three types of abnormal precipitation periods, which exhibit different degrees of possibility of flood occurrence. In addition, it shows nine precipitation epochs with different precipitation patterns. The model parameters estimated by the AKF in one epoch characterize its precipitation pattern and describe the occurrences of the abnormal precipitation periods, revealing whether the risk of flood occurrence is high in that epoch.
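A scalar sketch of the ordinary-Kalman-filter idea described above: the state is a slowly varying precipitation level, and abnormal periods are flagged by large standardized innovations. The paper's periodic model and adaptive variant are not reproduced, and all numbers are invented:

```python
def kalman_1d(obs, q=1.0, r=4.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter with a random-walk state and noisy
    observations (process variance q, observation variance r).
    Returns filtered states and standardized innovations."""
    x, p = x0, p0
    states, innov = [], []
    for z in obs:
        p = p + q                     # predict: state drifts as a random walk
        s = p + r                     # innovation variance
        innov.append((z - x) / s ** 0.5)
        k = p / s                     # Kalman gain
        x = x + k * (z - x)           # update state with the observation
        p = (1.0 - k) * p
        states.append(x)
    return states, innov

# Invented monthly precipitation anomalies with one abnormal month
series = [0.1, -0.2, 0.3, 0.0, 8.0, 0.2, -0.1]
states, innov = kalman_1d(series)
abnormal = [i for i, nu in enumerate(innov) if abs(nu) > 2.0]
```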

A. Kawamura, K. Jinno, T. Ueda, R. R. Medina
The Sampling Theory of the Binary Random Field and the Point-to-Area Relation of Extreme Rainstorms

This paper studies the theoretical relation of the rainstorm point-area coefficient and proposes a correlation-function sampling theorem. For band-limited low-frequency white noise and for moving-average stochastic processes, the conclusions of the sampling theorem for the correlation function and for the cut-off frequency are entirely the same. But the cut-off frequency does not exist for autoregressive models and the like, and the theorem therefore cannot be applied to such random processes. The sampling interval obtained from the sampling theorem of the correlation function makes the variance of the mean computed from the sampled values approach the true variance. In this paper, on the basis of this sampling principle, the statistical-average theoretical relation of the rainstorm point-area coefficient is derived with the help of the correlation function of the rainfall field.

Zheng-guo Pei

Entropy in Flood Frequency Analysis

Some New Perspectives on Maximum Entropy Techniques in Water Resources Research

After a brief expository account of the Shannon-Jaynes principle of maximum entropy (POME) for discrete and continuous variables, we give here an account of some recent research work which: (i) introduces a “histogram” method to contrast the discrete and continuous modes of computation, and the role of the histogram in actual practice when dealing with continuous probability distributions. (ii) The idea of the mean logarithmic decrement associated with a probability distribution is introduced and is shown to be related to the concept of differential entropy. The mean, with respect to an arbitrary probability distribution, of the logarithm of the ratio of that distribution's density to the probability density function of an exponential distribution is discussed in the context of hydrological investigations. Unlike the entropy of a continuous probability distribution introduced in (i), this quantity, which is an example of the Kullback-Leibler (KL) information, is always positive and invariant under coordinate transformation. (iii) The constraints entering into POME, as well as minimum KL information, are identified as a class of sufficient statistics which determine the unknown parameters in the probability density functions that occur in the most commonly used hydrological models. (iv) An example of (iii), where only the first two moments in a semi-infinite domain are given, is discussed to shed light on the limitations of POME, recently recognized by Wragg and coworkers, and is made relevant to the work of Sonuga on the rainfall-runoff relationship. Finally, (v) a method of generating probability distributions, starting from one basic distribution and employing coordinate transformations, is given. This, in conjunction with (iii), leads to the notions of “physical constraints” in contrast to “mathematical constraints” in examining parameter estimation.

A. K. Rajagopal, S. Teitler, Vijay P. Singh
Entropy and Probability Distributions

Entropy was employed to investigate probability distribution functions and the estimation of their parameters. Two curve-fitting methods, one of which is based on entropy, were compared using Monte Carlo simulation. A procedure to compare different distributions using entropy was suggested.

Y. Li, Vijay P. Singh, S. Cong

Parameter Estimation

An Evaluation of Seven Methods for Estimating Parameters of EV1 Distribution

Extreme value type 1 distribution parameters and quantiles were estimated by the methods of moments, maximum likelihood estimation, probability weighted moments, entropy, mixed moments, least squares and incomplete means for Monte Carlo samples generated for two sampling cases: a purely random process and a serially correlated process. The performance of these estimators was statistically evaluated. The methods of maximum likelihood estimation and entropy provided the most efficient quantile estimates. The methods of moments and probability weighted moments were comparable in efficiency of estimating the quantiles for small samples. The methods of mixed moments and incomplete means resulted in poor estimation of the parameters and quantiles.
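As a hedged illustration of the simplest of these estimators, the sketch below fits the EV1 (Gumbel) parameters by the method of moments and reads off a T-year quantile. The seed, sample size and true parameters are arbitrary choices, not values from the study:

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def ev1_moments_fit(x):
    """Method-of-moments EV1 (Gumbel) fit, using
    mean = u + gamma*a and std = pi*a/sqrt(6)."""
    a = np.std(x, ddof=1) * np.sqrt(6.0) / np.pi
    u = np.mean(x) - EULER_GAMMA * a
    return u, a

def ev1_quantile(u, a, T):
    """T-year quantile: x_T = u - a * ln(-ln(1 - 1/T))."""
    return u - a * np.log(-np.log(1.0 - 1.0 / T))

# Synthetic EV1 sample with known parameters (illustrative only).
rng = np.random.default_rng(0)
sample = rng.gumbel(loc=100.0, scale=25.0, size=5000)
u_hat, a_hat = ev1_moments_fit(sample)
q100 = ev1_quantile(u_hat, a_hat, 100.0)  # 100-year event estimate
```

The entropy and maximum likelihood estimators the study found most efficient replace the two moment equations with likelihood or POME constraint equations, but are used in exactly this fit-then-invert pattern.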

Kishore Arora, Vijay P. Singh
Fitting Log Pearson Type 3 Distribution by Maximum Likelihood

Problems encountered in finding a maximum likelihood solution for the log Pearson type 3 distribution are investigated. For a given sample at least two solutions are obtained, one with an upper bound and the other with a lower bound. In most cases the best solution is obvious from a comparison of sample statistics with the maximum likelihood estimates of the statistical parameters, mean and variance. When these estimates differed greatly from the sample statistics, either the sample contained outliers or the solution was not satisfactory from a practical standpoint. Monte Carlo tests have shown that, in general, the maximum likelihood method may not give a satisfactory log Pearson fit for small samples.
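A minimal sketch of the fitting procedure, assuming the common practice of applying a Pearson type 3 maximum likelihood fit to log-transformed flows; the synthetic lognormal data and the scipy usage are illustrative, not the author's code:

```python
import numpy as np
from scipy import stats

def fit_lp3(flows):
    """Fit a log Pearson type 3 by Pearson-3 maximum likelihood on log10 flows.
    Returns the (skew, loc, scale) of the fitted log-space distribution."""
    y = np.log10(flows)
    return stats.pearson3.fit(y)

def lp3_quantile(params, T):
    """T-year flood: invert the fitted log-space CDF at 1 - 1/T, undo the log."""
    skew, loc, scale = params
    return 10.0 ** stats.pearson3.ppf(1.0 - 1.0 / T, skew, loc=loc, scale=scale)

# Synthetic positively skewed flows (illustrative, not observed data).
rng = np.random.default_rng(3)
flows = rng.lognormal(mean=3.0, sigma=0.4, size=1000)
params = fit_lp3(flows)
q10, q100 = lp3_quantile(params, 10.0), lp3_quantile(params, 100.0)
```

The multiple-solution problem the abstract describes shows up in practice as the numerical optimizer converging to different (skew, loc, scale) triples from different starting points, so comparing the fitted mean and variance against the sample statistics, as the author suggests, is a useful sanity check.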

Donthamsetti Veerabhadra Rao
Estimating the Parameters of the Generalized Gamma Distribution by Mixed Moments

Having three parameters but many possible forms, the generalized gamma (GG) distribution can be a good candidate for flood frequency analysis. In this study, four methods of parameter estimation were introduced, and sampling variances and covariances of the parameter estimators were analytically derived along with the variance of the T-year flood event. For 45 sets of annual flood data taken from different sources, the GG distribution was found to provide good fits when the above methods were employed. Moreover, it was also found that although the sampling variances of the estimators were high, the percent standard error of the T-year flood was relatively small. Thus use of the GG distribution and these methods would provide good design flood magnitudes at appropriately determined return periods.

Huynh Ngoc Phien, T. V. Van Nguyen, Juang-Hua Kuo
Entropy Principle in the Estimation of Gumbel Parameters

The performance of the principle of maximum entropy (POME) in the estimation of Gumbel parameters is compared with existing procedures, viz., the method of moments, the method of maximum likelihood, the method of least squares and probability weighted moments (PWM). Simulation studies are performed to compare parameter and flood quantile bias. It is shown that POME compares favourably with PWM and performs better than the other methods. The potentialities of POME in hydrological analysis are highlighted.
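For reference, the PWM estimator of the Gumbel parameters, one of the procedures compared above, can be sketched as follows (the sample size and true parameters are arbitrary illustrations):

```python
import numpy as np

EULER_GAMMA = 0.5772156649

def gumbel_pwm_fit(x):
    """PWM fit of the Gumbel distribution. With x sorted ascending,
    b0 = mean and b1 = (1/n) * sum_i ((i-1)/(n-1)) * x_(i); then
    scale a = (2*b1 - b0) / ln 2 and location u = b0 - gamma * a."""
    xs = np.sort(np.asarray(x, dtype=float))
    n = xs.size
    i = np.arange(1, n + 1)
    b0 = xs.mean()
    b1 = np.sum((i - 1.0) / (n - 1.0) * xs) / n
    a = (2.0 * b1 - b0) / np.log(2.0)
    u = b0 - EULER_GAMMA * a
    return u, a

# Synthetic Gumbel sample with known parameters (illustrative only).
rng = np.random.default_rng(1)
u_hat, a_hat = gumbel_pwm_fit(rng.gumbel(loc=50.0, scale=10.0, size=5000))
```

POME replaces the two probability-weighted moment equations with constraints derived from maximizing entropy, but both methods reduce to solving two equations for the location u and scale a.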

Arun Kumar, B. P. Parida, Rema Devi

Selection of Flood Frequency Models

Assessment of Use of At-site and Regional Flood Data for Flood Frequency Estimation

The standard errors of flood quantiles estimated by three different methods were obtained by simulation. The methods were based on (a) at-site data, (b) at-site and regional data combined, and (c) regional data alone. Data were generated from GEV distributions. Since method (c) involves a regional regression relation, an element of physical realism was preserved by basing the generating distributions on those estimated from real data of rivers in S. E. England and by using the corresponding catchment characteristic values in the regional regression relations. The generated regional data set included 17 sites with 20 years of record and one site of variable record length n = 1, 3, 6, 10 years. Three sites (low, medium, high Cv) were investigated separately. The at-site/regional method (b) is considerably better than either (a) or (c), especially at high return periods. For large T, (c) can be better than (a), especially if n is small. In general, the mean annual flood obtained by regional regression is less precise than $$\overline {\text{Q}} $$ obtained from one year of record.
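The at-site leg (a) of such a simulation can be sketched as below: draw many short GEV records, refit each, and measure the spread of the resulting T-year quantile estimates. The GEV parameters, record lengths and replication count are illustrative, and maximum likelihood stands in for whatever fitting method the study used:

```python
import numpy as np
from scipy import stats

def atsite_quantile_se(c, loc, scale, n, T, nrep=120, seed=0):
    """Monte Carlo standard error of the at-site T-year quantile: draw nrep
    GEV records of length n, refit each by maximum likelihood, and collect
    the estimated quantiles (scipy's genextreme shape c is minus the usual k)."""
    rng = np.random.default_rng(seed)
    q = np.empty(nrep)
    for r in range(nrep):
        sample = stats.genextreme.rvs(c, loc=loc, scale=scale,
                                      size=n, random_state=rng)
        c_hat, loc_hat, scale_hat = stats.genextreme.fit(sample)
        q[r] = stats.genextreme.ppf(1.0 - 1.0 / T, c_hat,
                                    loc=loc_hat, scale=scale_hat)
    return q.mean(), q.std(ddof=1)

# Illustrative comparison: short (n=25) versus long (n=100) at-site records.
_, se_short = atsite_quantile_se(0.1, 100.0, 30.0, n=25, T=50)
_, se_long = atsite_quantile_se(0.1, 100.0, 30.0, n=100, T=50)
```

The regional legs (b) and (c) would add a cross-site regression of the index flood on catchment characteristics, which is where the study's physical-realism step enters.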

C. S. Hebson, C. Cunnane
An Empirical Study of Probability Distributions of Annual Maximum Floods

The Wakeby distribution has been proposed for flood frequency analysis because of its attractive properties. Algorithms have been developed and tested to estimate the parameters of the Wakeby distribution from observed data, and considerable theoretical and developmental work has been conducted on it. Besides, its parameters can be easily estimated. However, not many studies have been reported in the literature about fitting the Wakeby distribution to empirical data and comparing the results to those obtained from other well known distributions, such as the extreme value or the log Pearson type III distribution, in the analysis of observed data. In view of these considerations, annual maximum data from twenty-five watersheds have been analyzed by using the log Pearson type III, mixture of extreme values and Wakeby distributions, and the results are discussed in this paper. The data are from watersheds in the United States and Europe and cover a variety of climatic conditions. The parameters of the distributions are estimated by the maximum likelihood method or the method of moments, and the fitted distributions are compared to the histograms of observed data by using goodness of fit tests. In several instances, the log Pearson type III distribution estimates failed to converge. In other instances the fit of the mixture distribution to histograms of observed data was much poorer than that of the log Pearson type III or the Wakeby distribution. The performance of the Wakeby distribution was in general as good as that of the log Pearson type III distribution and in some cases superior. In view of these considerations, it is concluded that the Wakeby distribution should be strongly considered for flood frequency analysis, especially since it is so easy to use.
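The "attractive properties" stem largely from the Wakeby distribution being defined by an explicit quantile function, which also makes sampling trivial via the probability integral transform. A sketch with arbitrary parameter values:

```python
import numpy as np

def wakeby_quantile(F, xi, alpha, beta, gamma, delta):
    """Wakeby quantile function, its defining form:
    x(F) = xi + (alpha/beta)(1-(1-F)^beta) - (gamma/delta)(1-(1-F)^(-delta))."""
    F = np.asarray(F, dtype=float)
    return (xi + (alpha / beta) * (1.0 - (1.0 - F) ** beta)
            - (gamma / delta) * (1.0 - (1.0 - F) ** (-delta)))

# Sampling by the probability integral transform: x = Q(U), U ~ Uniform(0,1).
# The parameter values below are illustrative, chosen only to keep Q increasing.
rng = np.random.default_rng(0)
flows = wakeby_quantile(rng.uniform(size=10_000),
                        xi=0.0, alpha=5.0, beta=2.0, gamma=1.0, delta=0.1)
```

Because the CDF has no closed form, fitting proceeds through the quantile function (typically by probability weighted moments), which is why parameter estimation for the Wakeby is described above as easy despite its five parameters.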

A. Ramachandra Rao, P. S. Arora
Comparison of Some Flood Frequency Distributions Using Empirical Data

The Gumbel, 3-parameter lognormal, Pearson type 3, log-Pearson type 3, and Boughton distributions as well as power transformation were evaluated and compared using 55 flood data sets from various sizes of drainage basins representing various parts of the U.S.A. No distribution was found to fit all flood data sets accurately. Based on empirical criteria, the log-Pearson type 3 distribution emerged as the best distribution.

D. Jain, Vijay P. Singh
Use of Historical Data in Flood-Frequency Analysis

An assessment is presented of several flood frequency procedures for the estimation of the 10,000-year flood that take into account historical information, for example when the maximum flow in a period of 100 years is known. Monte Carlo experiments are employed. The conclusion favours the use of the exponential distribution.

Jorge Machado Damazio, Jerson Kelman
The 1983 Iguaçu River Flood: Effect of a Rare Flood on Frequency Analyses

Methods of flood frequency analysis commonly used in engineering studies are evaluated and criticized on the basis of actual hydrological data which include a recent extraordinary flood. The analysis is restricted to the 67,300 km2 Iguaçu river basin, almost entirely situated in Brazilian territory. The criteria for selection of the best theoretical frequency distribution are analysed and verified to be clearly dependent on the length of the series and on subjective decisions such as the selection of class intervals. Design flood estimates are found to be dramatically influenced by the length of the hydrological series or even by the methods used to define the parameters of the best-fit theoretical probability distribution function. The relativity of such a concept as the 10,000 yr design flood, whether due to limitations of the existing sample or to the subjectivity of the methodology used, has become evident.

Heinz D. Fill, Martha R. v. Borstel Sugai, Nelson L. S. de Pinto
Analysis of Flood Frequencies in the Cauvery Valley

Annual peak flood records from ten subbasins in the upper reaches of the river Cauvery in south India were analysed. It was found that floods of longer return periods correlated well with the drainage area of the subbasin, while the correlation for floods of short return periods was poor. Correlation with normal annual rainfall in the subbasin was also found to be poor. Return periods for design floods based on empirical formulae, used in the region for small reservoirs (called ‘tanks’), were calculated using the fitted distributions. These are much too high in relation to the expected life of the tanks in all cases but one. This would suggest that while the majority of the tank spillways are overdesigned, some would be underdesigned. This conclusion is also supported by the fact that breaching due to inadequate spillway capacity is rare.

M. Ramesh, M. C. Srinivasa Murthy, Rama Prasad

Multivariate Stochastic Models

A Multivariate Stochastic Flood Analysis Using Entropy

The principle of maximum entropy (POME) was used to derive a multivariate stochastic model for flood analysis. By specifying appropriate constraints in terms of covariances, variances, and cross-covariances, multivariate Gaussian and exponential distributions were derived. As a special case, the bivariate process of flood peaks and volumes was investigated for three cases: (1) the peaks and volumes are independent and occur the same number of times; (2) the number of peaks is greater than that of volumes in the same time interval; and (3) peaks and volumes exhibit dependence. Special emphasis was given to the structure of the matrix of Lagrange multipliers in the model. Marginal distributions of flood characteristics were obtained, first with no restrictions imposed, and then with assumptions of independent occurrences and a high threshold value. The conditional distribution of flood volume given the peak was then discussed. This multivariate stochastic model was related to maximum entropy spectral analysis (MESA). This relationship was shown by deriving the power spectrum from marginal distributions of flood characteristics and the cross-spectrum from the bivariate distribution of peaks and volumes. This connection has two useful practical applications: use of the derived distributions in statistical inference and use of spectral analysis for prediction and reconstruction of historical records.

P. F. Krstanovic, Vijay P. Singh
Multivariate Partial Duration Series in Flood Risk Analysis

Flood frequency and risk analysis play a crucial role in the design of many hydraulic structures and water resources systems. Traditionally only the annual flood peak discharges are considered in the flood risk assessment process. An alternative approach is the use of partial duration series. This approach has proved to be particularly suitable when short records are available. In this method not only the annual maxima are considered but all the peaks over a given threshold. When using this method it is very attractive to extend the results presented in the literature to include the flood volume and flood duration in the risk assessment process, instead of considering only the flood peak discharges. In this paper this extension is attempted and its merits are discussed. Data from twelve Portuguese river basins are used to document the applicability of multivariate partial duration series and its limitations.

Francisco Nunes Correia
Another Look at the Joint Probability of Rainfall and Runoff

Considerable discussion exists in the literature dealing with the relationship between the return period of a runoff event and the return period of the rainfall that produced the runoff. In most instances the assumption is made that the two return periods are the same when a design flow is estimated. This assumption has been widely criticized but continues to be used because of the lack of a suitable alternative. Differences in the return periods of runoff and rainfall can be attributed to variation in the antecedent conditions that exist on a catchment at the time of a storm. Thus a procedure that can simultaneously account for rainfall probability and antecedent condition probability is needed. This paper develops a procedure for estimating the magnitude of a flow event for a given return period that incorporates the joint probability of rainfall depths and antecedent soil water conditions. Rainfall probabilities are determined from the extreme value distribution, and soil water probabilities are based on 22 years of data from an experimental watershed near Stillwater, OK. The resulting flow estimates for a given return period are compared with similar estimates based on the assumption of equality of return periods for rainfall and runoff.
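The core idea, weighting conditional runoff exceedance probabilities by antecedent-condition probabilities via total probability, can be sketched with entirely hypothetical numbers; the classes, probabilities and exponential tails below are illustrative stand-ins, not the Stillwater data:

```python
import numpy as np

# Hypothetical antecedent soil-water classes with illustrative probabilities.
ANTECEDENT_PROB = {"dry": 0.3, "average": 0.5, "wet": 0.2}
RUNOFF_SCALE = {"dry": 10.0, "average": 20.0, "wet": 40.0}  # assumed tails

def runoff_exceedance(q, condition):
    """Illustrative conditional exceedance P(Q > q | antecedent condition):
    an exponential tail whose scale grows with catchment wetness."""
    return np.exp(-q / RUNOFF_SCALE[condition])

def total_exceedance(q):
    """Total probability: P(Q > q) = sum_j P(Q > q | S_j) P(S_j)."""
    return sum(p * runoff_exceedance(q, c) for c, p in ANTECEDENT_PROB.items())

return_period_60 = 1.0 / total_exceedance(60.0)  # return period of a 60-unit flow
```

Because wet antecedent states dominate the mixture at large q, the unconditional return period of a given flow differs from the return period of the rainfall that produced it, which is exactly the discrepancy the paper sets out to quantify.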

C. T. Haan, B. N. Wilson
A Bivariate Flood Model and Its Application

In this paper, a bivariate model is presented, using flood peak and flood volume of direct runoff as the two characteristic variables of a flood event. The model can also be applied to other bivariate problems in hydrology. The bivariate normal distribution was chosen as the parent bivariate distribution function, which seems to be suitable for the practical user. It is necessary to transform the marginal distributions of both samples into normal distributions. The theoretical bi-normal distribution is fitted and tested by using the equi-lines of the probability density function (ISO-PDF lines). The model offers various possibilities of probability interpretation. ISO-PDF lines show a range within which a certain percentage of events lie. The conditional distribution of one variable can be computed for a constant value of the other variable, which gives the probability by which the variable is reached or exceeded for the other, fixed variable. A useful calculation is given by the so-called “probability of quadrants”. For instance, the “upper-right probability” of a certain pair of values defines the probability of events with both characteristics being higher than those of the presumed values. Applying this method to the planning of flood protection projects, design flood events of chosen probabilities are determined by a certain equi-line of quadrant probability.
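The "upper-right probability" has a simple closed form in terms of the marginal and joint CDFs, P(X > x, Y > y) = 1 − F_X(x) − F_Y(y) + F_XY(x, y). A sketch for standardized (already normal-transformed) peak and volume, with an assumed correlation of 0.7:

```python
import numpy as np
from scipy import stats

def upper_right_probability(x, y, mean, cov):
    """'Upper-right probability' of a bivariate normal:
    P(X > x, Y > y) = 1 - F_X(x) - F_Y(y) + F_XY(x, y)."""
    fx = stats.norm.cdf(x, loc=mean[0], scale=np.sqrt(cov[0][0]))
    fy = stats.norm.cdf(y, loc=mean[1], scale=np.sqrt(cov[1][1]))
    fxy = stats.multivariate_normal(mean=mean, cov=cov).cdf([x, y])
    return 1.0 - fx - fy + fxy

# Standardized peak and volume with an assumed correlation of 0.7.
p_joint = upper_right_probability(1.0, 1.0, mean=[0.0, 0.0],
                                  cov=[[1.0, 0.7], [0.7, 1.0]])
```

Positive correlation between peak and volume makes this joint exceedance larger than the product of the marginal exceedances, which is precisely why treating the two characteristics independently understates design risk.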

B. Sackl, H. Bergmann
Analysis and Simulation of Three-Component Floods in the Ohio River Basin

Flood series including three components, namely the occurrence time, the discharge volume, and the flood duration, are formed from observed daily streamflows. The intensity function describing the flood occurrence rate is derived assuming a nonhomogeneous Poisson process. An equation is fitted to the estimated intensities by the least squares method. The thinning method, based on controlled deletion of points in a Poisson process, uses the fitted intensity function to generate the flood occurrence times, which are expressed as a sequence of time intervals. The procedure of the marked point process is introduced to link the flood duration and the discharge volume, which are modeled respectively by exponential distributions, to the flood occurrence time. The simulated result therefore consists of three-component flood series including the occurrence time, the flood duration and the discharge volume, which can be used for environmental pollution protection and flood-control purposes. Statistical comparisons between the generated and the observed series show reasonable agreement.
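The thinning step can be sketched as follows (Lewis-Shedler thinning; the sinusoidal intensity function is an illustrative stand-in for the fitted seasonal flood-occurrence rate):

```python
import numpy as np

def thinning(intensity, lam_max, t_end, rng):
    """Lewis-Shedler thinning for a nonhomogeneous Poisson process:
    generate candidate arrivals at the constant rate lam_max, then keep
    each candidate t with probability intensity(t) / lam_max."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam_max)   # next candidate arrival
        if t > t_end:
            break
        if rng.uniform() < intensity(t) / lam_max:
            times.append(t)
    return np.asarray(times)

# Illustrative seasonal occurrence rate (events/year over a 1-year cycle);
# lam_max must bound the intensity from above (3.5 >= 2.0 + 1.5).
intensity = lambda t: 2.0 + 1.5 * np.sin(2.0 * np.pi * t)
rng = np.random.default_rng(42)
events = thinning(intensity, lam_max=3.5, t_end=200.0, rng=rng)
```

The marks (duration, volume) would then be drawn from exponential distributions attached to each accepted occurrence time, completing the three-component series.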

Tiao J. Chang
A Probabilistic Model for Flooding Downstream of the Junction of Two Rivers

Flood frequency analysis downstream of the junction of two rivers, when flood information is available upstream of the confluence, is the main subject of this paper. For this purpose, it is assumed that the joint distribution function of the floods at two upstream gaging stations follows a bivariate extreme value distribution. The distribution at the junction is obtained as that of the sum of two random variables. The model is applied successfully to an actual case study.
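A Monte Carlo sketch of the junction idea: here a Gaussian copula with Gumbel margins stands in for the paper's bivariate extreme value model (all parameter values are assumed for illustration), and the downstream quantile is read off the simulated sum:

```python
import numpy as np
from scipy import stats

def junction_quantile(T, rho=0.6, n=200_000, seed=0):
    """Monte Carlo flood quantile downstream of a junction: upstream floods
    have Gumbel margins joined by a Gaussian copula with correlation rho
    (an assumed stand-in for a bivariate extreme value distribution);
    the downstream flood is taken as the sum of the two upstream floods."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    u = stats.norm.cdf(z)                                    # copula uniforms
    x1 = stats.gumbel_r.ppf(u[:, 0], loc=100.0, scale=20.0)  # station 1
    x2 = stats.gumbel_r.ppf(u[:, 1], loc=80.0, scale=15.0)   # station 2
    return float(np.quantile(x1 + x2, 1.0 - 1.0 / T))

q10, q100 = junction_quantile(10.0), junction_quantile(100.0)
```

The dependence structure matters here: the stronger the upstream correlation, the heavier the tail of the sum, so ignoring it would understate the downstream design flood.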

J. A. Raynal, J. D. Salas
Concurrent Flooding Probabilities

Concurrent flooding, the flooding at a location due to two or more causative factors, has not been adequately addressed in the literature. The estimation of stage probabilities is important to hydrologic engineers and becomes complex where a location can be inundated due to flooding of two or more rivers. A probability concept of concurrent flooding is developed to determine flooding probabilities on a tributary river where the stage-flow relationship on the tributary is affected by backwater due to a flooding event on a nearby main river. A systematic approach for determining concurrent flooding probabilities, called the “critical combination” method, was developed and applied to the Meramec River near its confluence with the Mississippi River in Missouri. A “critical combination” is a tributary river flow and main river stage pair that could combine to cause a yearly peak stage at a point on the tributary. The stage resulting from each critical combination, computed using a backwater program, produces an annual stage series to which a probability distribution can be fitted. This procedure was used in the Meramec-Mississippi case and shown to be a valid approach.

Charles D. Morris, Lloyd Chris Wilson
Bivariate Analysis of Concurrent Flooding

This study investigates the probability of concurrent flooding near and around the confluence of the Meramec and Mississippi Rivers in Missouri. A new general approach called the “bivariate probability method” was developed as a result of this investigation. The “bivariate probability method” utilizes the fit of a bivariate probability distribution to describe the probability relationship between tributary river flow and main river stage. A backwater program is used for the conversion of tributary river flows to stages. The method also implements the theory of total probability so that all flow/stage combinations which could have created a given stage on the tributary river are accounted for in the probability computations. An important aspect of the proposed methodology is its applicability to basins which have limited periods of record. This application is accomplished by using relationships developed between the short-term gage and a nearby longer-term gage to estimate the parameters required to fit the bivariate probability distribution.

C. D. Morris, S. J. Calise
Backmatter
Metadata
Title
Hydrologic Frequency Modeling
Editor
Vijay P. Singh
Copyright Year
1987
Publisher
Springer Netherlands
Electronic ISBN
978-94-009-3953-0
Print ISBN
978-94-010-8253-2
DOI
https://doi.org/10.1007/978-94-009-3953-0