GETTING BACK TO BASICS: REAFFIRMING TWO IMPORTANT THEORETICAL UNDERPINNINGS RELEVANT TO OSL DATA ANALYSIS
Underpinning 2: When attempting to evaluate the statistical properties of an infinitely large data population, the more samples you obtain from that population, the better you can approximate the population's true properties. Quite simply, large data sets are better. We now have the experimental capability to collect large, statistically robust OSL data sets with ease, yet numerous laboratories have opted for minimal data set sizes. Why?
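As a simple illustration of this underpinning (not drawn from the presentation; the population parameters below are hypothetical), the following Python sketch draws samples of increasing size from a simulated dose population and shows the estimated mean and standard deviation converging on the true values as sample size grows.

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical dose population: mean 25 Gy, standard deviation 5 Gy.
true_mean, true_sd = 25.0, 5.0

for n in (10, 50, 500, 5000):
    sample = rng.normal(true_mean, true_sd, size=n)
    print(f"n={n:5d}  mean={sample.mean():6.2f}  sd={sample.std(ddof=1):5.2f}")

# Larger n yields estimates closer to the true mean (25) and sd (5);
# the standard error of the mean shrinks roughly as sd / sqrt(n).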
There appears to be a philosophical schism. Many OSL laboratories have taken the path of small data sets, with precision weighting guiding the selection of the representative dose for age calculation. This approach does not reflect the actual characteristics of the infinitely large data population: it selects a subpopulation, which, as discussed above, does not necessarily equate to accuracy. The other, less traveled path is to collect large equivalent dose (ED) data sets, to ignore individual aliquot precision (because, with minor exceptions discussed in the presentation, all samples from the unknown population are equally valid), and to let the properties of the data set guide the selection of the representative dose for age calculation. This approach yields a genuine sampling of the unknown population and therefore better statistical approximations of that population.
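To make the contrast concrete, the sketch below is illustrative only: the correlation between aliquot precision and ED is an assumption introduced here for demonstration, not a claim from the presentation. Under that assumption, an inverse-variance (precision-weighted) mean drifts away from the population mean because the weights are not independent of the dose, whereas the plain mean of a large unweighted ED data set stays close to it.

import numpy as np

rng = np.random.default_rng(7)

true_mean, true_sd = 25.0, 5.0                 # hypothetical ED population (Gy)
ed = rng.normal(true_mean, true_sd, size=500)

# Assumption (for illustration only): higher-ED aliquots happen to measure
# more precisely, so precision weights correlate with the dose itself.
sigma = np.clip(10.0 - 0.25 * ed, 0.5, None)   # per-aliquot uncertainty (Gy)
measured = ed + rng.normal(0.0, sigma)         # noisy ED measurements
weights = 1.0 / sigma**2                       # inverse-variance weights

weighted = np.average(measured, weights=weights)   # precision-weighted mean
unweighted = measured.mean()                       # plain large-data-set mean

print(f"true mean      : {true_mean:5.2f} Gy")
print(f"weighted mean  : {weighted:5.2f} Gy")
print(f"unweighted mean: {unweighted:5.2f} Gy")

In this contrived case the weighted mean is pulled toward the high-precision (here, high-dose) aliquots, while the unweighted mean of the full data set remains an unbiased estimate of the population mean.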