2006 Philadelphia Annual Meeting (22–25 October 2006)

Paper No. 6
Presentation Time: 2:45 PM

A NEW PARALLEL MULTI-COMPONENT, FINITE ELEMENT MODEL FOR SIMULATING CRUSTAL-SCALE GROUNDWATER FLOW AND HEAT, ISOTOPE, NOBLE GAS, AND SILICA TRANSPORT


PERSON, Mark, Geological Sciences, Indiana University, 1001 E. 10th St, Bloomington, IN 47405, ZHANG, Ye, Geological Sciences, Indiana Univ, 1001 East 10th St, Bloomington, IN 47405-1405, WANG, Peng, University Information Technology Services, Indiana University, Bloomington, IN 47405, GAO, Yongli, Department of Physics, Astronomy, and Geology, East Tennessee State University, Johnson City, TN 37614 and HOFSTRA, Albert, CR Minerals Team, US Geol Survey, Denver Federal Center, Lakewood, CO 80225, maperson@indiana.edu

Pentium-based, multi-processor servers are becoming increasingly widespread and inexpensive. Concurrently, hydrologists are relying more and more on multi-component transport codes that use groundwater residence times, environmental isotopes, solute concentrations, heat transfer, and noble gas composition to test conceptual models of groundwater flow. With support from the US Geological Survey, we have developed a new parallel, cross-sectional hydrothermal model capable of utilizing the resources of multi-processor servers. Traditionally, high-performance computing involves decomposing the coefficient (A) matrix of a large three-dimensional problem across processors. In our approach, one processor is instead allocated to each transport equation. Each dependent variable is found by solving its A matrix with a conventional reduced-bandwidth Gaussian elimination solver. At the beginning of each time step we solve the groundwater flow equation; the resulting velocity field is then used to solve each transport equation in parallel. At the end of each time step, the values of the dependent variable of each transport equation are broadcast to all processors. Benchmark simulations we have conducted on different 4- and 8-processor servers indicate a nearly linear speedup with the number of processors. Implementing this approach required only a handful of calls to the MPI library and about 20 additional lines of code, all in the main program. The main task involved broadcasting and receiving the values of the dependent variables stored on each processor (e.g., hydraulic head, temperature, dissolved silica, noble gas concentration, and oxygen-18 isotopic composition) at the end of each time step. This approach could easily be implemented in reactive transport codes that track dozens of solute species.
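
The time-stepping pattern described above can be illustrated with a minimal MPI sketch. This is not the authors' code (the original model is a finite element hydrothermal code whose source is not shown here); it is written in C against the standard MPI API, and all names (solve_flow, solve_transport, NNODES, NEQNS, the number of time steps) are hypothetical placeholders. It shows the three steps the abstract names: solve the flow equation once per time step, solve each transport equation on its own rank, then broadcast each dependent variable from its owning rank.

    /* Hypothetical sketch of the one-equation-per-processor scheme
     * described in the abstract. Solver bodies are placeholders for
     * the model's reduced-bandwidth Gaussian elimination routines. */
    #include <mpi.h>

    #define NNODES 10000   /* assumed finite element mesh size */
    #define NEQNS  4       /* heat, oxygen-18, noble gas, silica */

    static void solve_flow(double *head, double *vel) { /* ... */ }
    static void solve_transport(int eqn, const double *vel,
                                double *field) { /* ... */ }

    int main(int argc, char **argv)
    {
        int rank, nprocs;
        static double head[NNODES], vel[2 * NNODES];
        static double fields[NEQNS][NNODES]; /* one variable per equation */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (int step = 0; step < 100; ++step) {
            /* 1. Solve the groundwater flow equation (here on rank 0)
             *    and share the velocity field with every rank.       */
            if (rank == 0)
                solve_flow(head, vel);
            MPI_Bcast(vel, 2 * NNODES, MPI_DOUBLE, 0, MPI_COMM_WORLD);

            /* 2. Each rank solves its assigned transport equation,
             *    so the four solves proceed in parallel.             */
            if (rank < NEQNS)
                solve_transport(rank, vel, fields[rank]);

            /* 3. End of time step: broadcast each dependent variable
             *    from its owning rank so all processors hold the
             *    full state needed for the coupled equations.        */
            for (int eqn = 0; eqn < NEQNS && eqn < nprocs; ++eqn)
                MPI_Bcast(fields[eqn], NNODES, MPI_DOUBLE,
                          eqn, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

The per-variable MPI_Bcast loop mirrors the abstract's statement that each dependent variable is broadcast at the end of the time step; a single MPI_Allgather would be an alternative when all fields have the same length. Either way, the parallel logic stays confined to a handful of calls in the main program, consistent with the roughly 20 lines of additional code reported.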