COMPARING MODELS TO DATA FOR NATURAL HAZARDS
Landslides and floods play essential roles in the evolution of landforms. They also constitute severe natural hazards. However, modelling approaches to the two phenomena differ greatly, with empirical statistical models primarily used for flood hazard estimates and deterministic slope stability models dominating landslide hazard estimates.
The primary approach to the flood hazard is to carry out statistical flood-frequency studies. The application of a particular statistical distribution to flood-frequency forecasting is strictly empirical, since no generally applicable physical model has been proposed; instead, a variety of statistical distributions are fitted to the available flood records. In the USA the log Pearson type III distribution has been adopted as the standard for federal flood-frequency estimates; it is a thin-tailed (exponential) distribution. We discuss the problems associated with using annual-maximum versus partial-duration series and argue that a fractal (power-law, fat-tailed) distribution is preferable and provides more conservative estimates of the flood hazard.
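To make the contrast concrete, the following is a minimal sketch, in Python, of fitting both a log Pearson type III distribution and a power-law (fractal) relation to an annual-maximum flood series and comparing the resulting 100-year flood estimates. The discharge values are hypothetical, the plotting-position and regression choices are illustrative assumptions, and the sketch is not the fitting procedure used in any particular federal guideline.

```python
# Sketch: log Pearson III vs power-law fit to a hypothetical annual-maximum series.
import numpy as np
from scipy import stats

# Hypothetical annual peak discharges (m^3/s), one value per year (not real gauge data).
q = np.array([410., 520., 330., 760., 290., 640., 1120., 480.,
              390., 850., 560., 300., 970., 440., 700., 1500.])

# Empirical return periods from Weibull plotting positions.
q_sorted = np.sort(q)[::-1]            # largest first
n = len(q_sorted)
rank = np.arange(1, n + 1)
T_emp = (n + 1) / rank                 # return period in years

# Log Pearson III: fit a Pearson III distribution to log10(discharge).
skew, loc, scale = stats.pearson3.fit(np.log10(q))

# Power-law (fractal) alternative: log10(Q) linear in log10(T).
slope, intercept, *_ = stats.linregress(np.log10(T_emp), np.log10(q_sorted))

# Compare 100-year flood estimates from the two models.
T = 100.0
q_lp3 = 10 ** stats.pearson3.ppf(1 - 1.0 / T, skew, loc=loc, scale=scale)
q_pow = 10 ** (intercept + slope * np.log10(T))
print(f"100-yr flood, log Pearson III: {q_lp3:8.0f} m^3/s")
print(f"100-yr flood, power law:       {q_pow:8.0f} m^3/s")
```

Because the power-law relation does not fall off exponentially at long return periods, it typically gives larger, and therefore more conservative, extrapolated discharges than the thin-tailed fit.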
Studies of the landslide hazard are dominated by deterministic analyses of slope stability. Statistical frequency-magnitude studies of landslides have been carried out only recently; they show that medium and large landslides are well approximated by power-law distributions in a wide range of environments. We argue that a statistical approach is valid and that it yields conclusions of direct relevance to policy makers.
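As an illustration of how such a frequency-magnitude exponent can be estimated, the following sketch applies a maximum-likelihood (Hill) estimator to landslide areas above a lower cutoff. The areas are synthetic, drawn from an assumed Pareto law; the cutoff a_min and the exponent used to generate the data are assumptions for illustration, not values from any landslide inventory.

```python
# Sketch: maximum-likelihood estimate of a power-law exponent for landslide areas.
import numpy as np

rng = np.random.default_rng(0)
beta_true = 2.4          # assumed exponent used to generate the synthetic catalogue
a_min = 1.0e3            # assumed lower cutoff of the power-law regime (m^2)

# Draw synthetic landslide areas from a Pareto law by inverse-transform sampling.
u = rng.random(5000)
areas = a_min * (1.0 - u) ** (-1.0 / (beta_true - 1.0))

# Hill estimator for the noncumulative exponent beta, using areas >= a_min.
tail = areas[areas >= a_min]
beta_hat = 1.0 + len(tail) / np.sum(np.log(tail / a_min))
print(f"estimated exponent beta = {beta_hat:.2f} (n = {len(tail)})")
```

With a real inventory the same estimator would be applied only to the medium and large landslides, since small events are typically under-counted and fall below the power-law regime.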
Finally, we compare the resulting frequency-size statistics of natural hazards with cellular-automata models that have been proposed as exhibiting self-organized criticality (SOC). In the case of landslides the relevant model is the sandpile model. Other simple cellular-automata models that have been proposed for natural hazards include the slider-block model for earthquakes and the forest-fire model for forest fires. We discuss the strengths and limitations of these comparisons.
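For reference, the following is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile automaton, the model most often compared with landslide statistics. The grid size, number of grain additions, and toppling threshold of four are standard textbook choices rather than values tied to any field data; the noncumulative avalanche-size statistics it produces are approximately power-law over intermediate sizes.

```python
# Sketch: Bak-Tang-Wiesenfeld sandpile automaton and its avalanche-size statistics.
import numpy as np

rng = np.random.default_rng(1)
L = 32
grid = np.zeros((L, L), dtype=int)
avalanche_sizes = []

for _ in range(20000):
    # Drop one grain on a randomly chosen site.
    i, j = rng.integers(0, L, size=2)
    grid[i, j] += 1
    size = 0
    # Relax: any site holding 4 or more grains topples, sending one grain to
    # each of its four neighbours; grains pushed off the grid edge are lost.
    while True:
        unstable = np.argwhere(grid >= 4)
        if unstable.size == 0:
            break
        for x, y in unstable:
            grid[x, y] -= 4
            size += 1
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < L and 0 <= ny < L:
                    grid[nx, ny] += 1
    if size > 0:
        avalanche_sizes.append(size)

# Noncumulative frequency-size statistics of the avalanches.
sizes, counts = np.unique(avalanche_sizes, return_counts=True)
print(sizes[:10], counts[:10])
```

Comparisons of this kind are suggestive rather than mechanistic: the automaton reproduces the power-law form of the frequency-size statistics, but its redistribution rule is not a physical model of slope failure.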