2005 Salt Lake City Annual Meeting (October 16–19, 2005)

Paper No. 15
Presentation Time: 1:30 PM-5:30 PM


DUNCAN, Ian J., Texas Bureau of Economic Geology, Univ of Texas, University Station Box X, Austin, TX 78713 and CAMPBELL, Elizabeth V., DMR, 900 Natural Resources Dr, Charlottesville, VA 22903, ian.duncan@beg.utexas.edu

Many end-users of digital geologic map data are not concerned with estimates of map error; rather, they want data with sufficient reliability to support sound decision making. The international standard CEN/TC 287 defines the reliability of spatial data as “the meta-quality element describing the likelihood that a sample of a geographic subset is representative of the whole subset”. This study takes a different approach, adopting the definition of reliability used in quality engineering and surveying. In these fields reliability is associated with the relative occurrence rate of blunders, or gross errors, in the data set. Blunders not only destroy the confidence of the end-user in the data, they also distort any attempt at statistical analysis of the data. The standard approach in geology of correcting blunders, while not revealing to end-users any information on the blunder rate, fails to help end-users assess reliability. Examples of blunders in regional map compilations from Virginia illustrate both of these effects. In compilations, blunder detection can be accomplished by duplicating the compilation process. Determining the blunder rate in geologic mapping through redundant observation may not be practical except as a research project. A qualitative proxy for the relative reliability of the original geologic maps can instead be developed from factors such as the data density, an evaluation of the quality portion of the metadata, and an estimate of the credibility of the mapper and of the map publisher's quality control processes. Any meaningful estimate of reliability can be useful to end-users in making informed decisions based on geologic map data.
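The qualitative proxy described above could be operationalized as a simple weighted score. The sketch below is illustrative only: the factor names, the [0, 1] scaling, and the weights are assumptions, not values from the study.

```python
def reliability_proxy(data_density, metadata_quality, mapper_credibility,
                      qc_credibility, weights=(0.3, 0.3, 0.2, 0.2)):
    """Return a relative reliability score in [0, 1].

    Each factor is assumed to be pre-scaled to [0, 1]. The four factors
    mirror those named in the abstract (data density, metadata quality,
    mapper credibility, publisher QC); the weights are hypothetical.
    """
    factors = (data_density, metadata_quality,
               mapper_credibility, qc_credibility)
    if not all(0.0 <= f <= 1.0 for f in factors):
        raise ValueError("each factor must be scaled to [0, 1]")
    # Weighted average; weights sum to 1, so the score stays in [0, 1].
    return sum(w * f for w, f in zip(weights, factors))

# Example: a densely mapped quadrangle with strong metadata but an
# unvetted publisher QC process.
score = reliability_proxy(0.9, 0.8, 0.7, 0.4)
```

A score like this would only support relative rankings between source maps, not an absolute blunder-rate estimate, which the abstract notes would require redundant observation.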