DISCRETE EVENT PROCESS MODELS AND MUSEUM CURATION
The Texas Natural Science Center uses SimPy, a Python-based, open-source discrete event simulation package, to model a project to develop a web-enabled digital database of the NPL type and figured collection (approximately 22,000 specimens). The project is broken down into a series of processes, each of which can be modeled independently as a series of discrete steps. The model treats specimens and products (images, data records, etc.) as inputs and outputs of the system; staff and equipment as resources; and the individual activities (cleaning, photography, data input, etc.) as processing events. In its simplest form, a process step takes an input, requires certain resources for processing, and consumes some amount of time and resources (defined by probability distributions) to produce an output. The individual steps can be linked, the output of one step becoming the input of the next, or can run in parallel. The probability distributions are estimated by timing the various process steps as they are performed on sample input sets, and by sampling subsets of the input domain (e.g., counting the number of specimens in a random drawer). Model adjustment and verification are ongoing as the project proceeds.
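The following is a minimal sketch of such a linked process chain, written against the modern SimPy (3+) API, which differs from earlier versions of the package. The step names, triangular distributions, durations, and resource counts are illustrative assumptions based on the description above, not the Center's measured values or actual model.

```python
# Minimal sketch of one linked process chain: cleaning -> photography -> data entry.
# Durations and capacities are assumed for illustration only.
import random
import simpy


def specimen(env, name, technician, camera, done_times):
    """One specimen flowing through cleaning, photography, and data entry."""
    # Cleaning: needs a technician; duration drawn from an assumed triangular
    # distribution (low, high, mode), as estimated by timing sample specimens.
    with technician.request() as req:
        yield req
        yield env.timeout(random.triangular(2, 10, 5))

    # Photography: needs a camera station; the cleaned specimen is its input.
    with camera.request() as req:
        yield req
        yield env.timeout(random.triangular(3, 8, 5))

    # Data entry: needs a technician again; produces the final data record.
    with technician.request() as req:
        yield req
        yield env.timeout(random.triangular(4, 15, 7))

    done_times.append(env.now)


def run_model(num_specimens=100, num_technicians=2, num_cameras=1, seed=42):
    """Run the chain on a batch of specimens; return total elapsed model time."""
    random.seed(seed)
    env = simpy.Environment()
    technician = simpy.Resource(env, capacity=num_technicians)
    camera = simpy.Resource(env, capacity=num_cameras)
    done_times = []
    for i in range(num_specimens):
        env.process(specimen(env, f"specimen-{i}", technician, camera, done_times))
    env.run()
    return max(done_times)


if __name__ == "__main__":
    print(f"Batch completed in {run_model():.1f} model minutes")
```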
This methodology yields a standardized description of all the individual processes involved, from the initial handling of individual specimens to final publication on the Web, along with their resource requirements and the variability in performance time. The strength of simulation is the ability to perform “what-if” experiments, manipulating the resources assigned to individual steps and the connections between them. Proposed changes to the project can be tested for benefit, and methods to eliminate bottlenecks can be developed, without otherwise impacting the performance of the project itself.
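As a hedged illustration of such a “what-if” experiment, the snippet below reuses the run_model function from the sketch above to sweep staffing levels and camera stations and compare total batch time, exposing which resource is the bottleneck. The parameter ranges are assumptions, not the project's actual resource options.

```python
# Sweep resource levels and compare batch completion times.
# Assumes run_model from the previous sketch is defined in the same script.
if __name__ == "__main__":
    for num_technicians in (1, 2, 3):
        for num_cameras in (1, 2):
            total = run_model(num_specimens=200,
                              num_technicians=num_technicians,
                              num_cameras=num_cameras)
            print(f"technicians={num_technicians}, cameras={num_cameras} "
                  f"-> batch time {total:.1f} min")
```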