2010 GSA Denver Annual Meeting (31 October–3 November 2010)
Paper No. 287-7
Presentation Time: 3:00 PM-3:15 PM
DISCRETE EVENT PROCESS MODELS AND MUSEUM CURATION
ZACHOS, Louis G., Non-Vertebrate Paleontology Laboratory, Texas Natural Science Center, J.J. Pickle Research Campus, 10100 Burnet Road, Austin, TX 78758-4445, email@example.com and MOLINEUX, Ann, Non-vertebrate Paleontology Laboratory, Texas Natural Science Center, The University of Texas at Austin, Austin, TX 78758
Museum curation activities often include projects with ill-defined variables (staffing levels, numbers of specimens, equipment needs) that are performed over extended time periods. This is particularly problematic when applying for funding to support these activities. Such projects do, however, have one advantage: they can be subdivided into a series of discrete steps or subactivities. By defining these steps within a computational discrete event model, it is possible to simulate realistic scenarios of actual and proposed projects.
The Texas Natural Science Center uses SimPy, an open-source, Python-based discrete event simulation package, to model a project to develop a web-enabled digital database of the NPL type and figured collection (approximately 22,000 specimens). The project is broken down into a series of processes, each of which can be modeled independently as a series of discrete steps. The model considers specimens and products (images, data records, etc.) as inputs and outputs to the system; staff and equipment as resources; and the individual activities (cleaning, photography, data input, etc.) as processing events. In its simplest form, a process step consists of an input requiring certain resources for processing and consumes some amount of time and resources (defined by probability distributions) to result in an output. The individual steps can be linked, the output of one step comprising the input of the next, or can run in parallel. The probability distributions are estimated by timing the various process steps as they are performed on sample input sets, and by sampling subsets of the input domain (e.g., counting the number of specimens in a random drawer). Model adjustment and verification are ongoing processes as the project proceeds.
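The process-step structure described above (inputs, resources with limited capacity, stochastic processing times, linked steps) can be sketched without SimPy itself. The following minimal illustration, using only the Python standard library, models a hypothetical two-step pipeline (photography, then data entry); the worker counts, specimen count, and exponential service-time distributions are assumptions chosen purely for illustration, not figures from the project.

```python
import heapq
import random

def run_stage(ready_times, n_workers, mean_minutes, rng):
    """Process items through one stage staffed by n_workers in parallel.

    Each item's processing time is drawn from an exponential distribution
    (a stand-in for the empirically timed distributions described above).
    Returns the completion time of each item.
    """
    servers = [0.0] * n_workers      # time at which each worker next becomes free
    heapq.heapify(servers)
    finish_times = []
    for ready in sorted(ready_times):
        free = heapq.heappop(servers)        # earliest-available worker
        start = max(ready, free)             # wait for both item and worker
        finish = start + rng.expovariate(1.0 / mean_minutes)
        heapq.heappush(servers, finish)
        finish_times.append(finish)
    return finish_times

rng = random.Random(42)                      # seeded for repeatable scenarios
n_specimens = 200
arrivals = [0.0] * n_specimens               # all specimens on hand at t = 0

# Linked steps: the output of photography is the input of data entry.
photographed = run_stage(arrivals, n_workers=2, mean_minutes=6.0, rng=rng)
recorded = run_stage(photographed, n_workers=1, mean_minutes=4.0, rng=rng)

print(f"simulated project makespan: {max(recorded) / 60:.1f} hours")
```

SimPy expresses the same ideas with `Environment`, `Resource`, and generator-based processes; the hand-rolled event queue here simply makes the mechanics visible in a few lines.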
This methodology results in a standardized description of all the individual processes involved, from the initial handling of individual specimens to the final publication on the Web, as well as of resource requirements and vagaries in performance time. The strength of simulation is the ability to perform "what-if" experiments, manipulating resources and the connections between individual steps. Changes in the project can be tested for benefit, and methods to eliminate bottlenecks can be developed, without otherwise impacting the performance of the project itself.
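A "what-if" experiment of the kind described amounts to rerunning the same simulated workload under different resource levels. The sketch below (standard-library Python, not SimPy, with hypothetical specimen counts and timing figures) compares the makespan of a single process step staffed by one, two, or three workers, using a fixed random seed so that only the staffing level varies between runs.

```python
import heapq
import random

def makespan(n_items, n_workers, mean_minutes, seed):
    """Total elapsed time to push n_items through one stage with n_workers.

    All items are available at t = 0; processing times are exponential
    (an assumed distribution, standing in for empirically timed steps).
    A fixed seed reproduces the same workload across scenarios.
    """
    rng = random.Random(seed)
    servers = [0.0] * n_workers          # time each worker next becomes free
    heapq.heapify(servers)
    finish = 0.0
    for _ in range(n_items):
        free = heapq.heappop(servers)    # next item goes to the earliest-free worker
        t = free + rng.expovariate(1.0 / mean_minutes)
        heapq.heappush(servers, t)
        finish = max(finish, t)
    return finish

# "What-if": identical workload, varying staff level.
for staff in (1, 2, 3):
    hours = makespan(n_items=500, n_workers=staff, mean_minutes=5.0, seed=1) / 60
    print(f"{staff} staff -> {hours:.1f} hours")
```

Because the seed fixes the workload, differences in the printed makespans reflect only the staffing change, which is exactly the kind of controlled bottleneck experiment the abstract describes.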
Presentation Handout (.ppt format, 547.0 kb)
Session No. 287
Geological and Paleobiological Collections: Best Practices for Preservation, Access, and Use in a Changing World II
Colorado Convention Center: Room 603
1:30 PM-5:30 PM, Wednesday, 3 November 2010
Geological Society of America Abstracts with Programs, Vol. 42, No. 5, p. 669
© Copyright 2010 The Geological Society of America (GSA), all rights reserved. Permission is hereby granted to the author(s) of this abstract to reproduce and distribute it freely, for noncommercial purposes. Permission is hereby granted to any individual scientist to download a single copy of this electronic file and reproduce up to 20 paper copies for noncommercial purposes advancing science and education, including classroom use, providing all reproductions include the complete content shown here, including the author information. All other forms of reproduction and/or transmittal are prohibited without written permission from GSA Copyright Permissions.