Paper No. 50-16
Presentation Time: 3:35 PM
GEODEEPDIVE: AUTOMATING THE LOCATION AND EXTRACTION OF DATA AND INFORMATION FROM DIGITAL PUBLICATIONS
Modern scientific databases simplify access to data and information, but a large body of knowledge remains locked within the published literature and is therefore difficult to access and leverage at scale in scientific workflows. Recent advances in machine reading and machine learning approaches to converting unstructured text, tables, and figures into structured knowledge bases are promising, but these software tools cannot be deployed for scientific research without access to both new and old publications and to adequate computing resources. Automating such approaches is also necessary to keep pace with the ever-growing scientific literature.

GeoDeepDive bridges the gap between scientists who need to locate and extract information from large numbers of publications and the millions of documents distributed by many different publishers every year. As of August 2018, GeoDeepDive (GDD) had ingested over 7.4 million full-text documents from multiple commercial, professional-society, and open-access publishers. In accordance with GDD-negotiated publisher agreements, original documents and citation metadata are stored locally and prepared for common data mining activities by software tools that parse and annotate their contents linguistically (natural language processing) and visually (optical character recognition).

Terms from domain-specific database vocabularies can be labeled throughout the full text of documents, with results exposed to users via an API. New vocabularies and new versions of the parsing and annotation tools can be deployed rapidly across all original documents using the distributed computing capacity provided by HTCondor. Downloading, storing, and pre-processing original PDF content from distributed publishers and making these data products available to user applications provides new mechanisms for discovering and using information in publications, augmenting existing databases with new information, and reducing time-to-science.
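The abstract does not name the NLP toolkit used in the GDD pipeline. The sketch below uses spaCy purely as a stand-in to illustrate the kind of sentence-level linguistic annotation described above (tokenization, lemmas, part-of-speech tags, dependency relations); the sample text and function names are hypothetical.

```python
# Illustrative only: spaCy stands in for whatever NLP stack GDD actually runs.
import spacy

nlp = spacy.load("en_core_web_sm")  # small English pipeline

def annotate(text):
    """Parse raw document text into per-sentence token annotations."""
    doc = nlp(text)
    for sent_id, sent in enumerate(doc.sents):
        for tok in sent:
            # One row per token: sentence id, surface form, lemma,
            # part-of-speech tag, and dependency relation.
            yield (sent_id, tok.text, tok.lemma_, tok.pos_, tok.dep_)

if __name__ == "__main__":
    sample = "The Bighorn Dolomite is an Ordovician formation in Wyoming."
    for row in annotate(sample):
        print(row)
```

Emitting one row per token in this way yields the kind of tabular, queryable representation of document text that downstream labeling and extraction steps can build on.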
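Similarly hedged, here is a minimal sketch of labeling a domain-specific vocabulary throughout parsed text, using spaCy's PhraseMatcher as an illustrative mechanism; the term list is invented, and the abstract does not describe GDD's actual labeling machinery.

```python
import spacy
from spacy.matcher import PhraseMatcher

nlp = spacy.load("en_core_web_sm")

# Hypothetical vocabulary drawn from a domain database (e.g., stratigraphic names).
terms = ["Bighorn Dolomite", "Ordovician", "stromatolite"]

matcher = PhraseMatcher(nlp.vocab, attr="LOWER")  # case-insensitive matching
matcher.add("STRAT_VOCAB", [nlp.make_doc(t) for t in terms])

doc = nlp("Stromatolites occur in the Bighorn Dolomite of Wyoming.")
for match_id, start, end in matcher(doc):
    span = doc[start:end]
    # Report each labeled term with its character offsets in the document.
    print(span.text, span.start_char, span.end_char)
```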
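The abstract mentions an API but gives no routes. The snippet below assumes, for illustration only, a snippets-style endpoint that returns text windows around a search term; the URL, parameters, and response fields are assumptions, not documented behavior, so consult the GeoDeepDive API documentation for the actual interface.

```python
# Hedged sketch: the route, parameters, and response shape below are
# assumptions for illustration, not documented GDD API behavior.
import requests

API = "https://geodeepdive.org/api/snippets"  # assumed endpoint

resp = requests.get(API, params={"term": "stromatolite"})
resp.raise_for_status()
for hit in resp.json().get("success", {}).get("data", []):
    print(hit.get("pubname"))
    for snippet in hit.get("highlight", []):
        print("  ...", snippet)
```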
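Finally, a sketch of fanning an annotation tool out across the stored corpus with HTCondor's Python bindings, showing only the general submit pattern rather than GDD's actual job configuration; the executable name, resource requests, and document ids are hypothetical, and the `itemdata` form assumes a recent version of the bindings.

```python
import htcondor

# Hypothetical wrapper script that annotates one stored document per job.
job = htcondor.Submit({
    "executable": "annotate_document.sh",   # hypothetical
    "arguments": "$(docid)",
    "output": "logs/$(docid).out",
    "error": "logs/$(docid).err",
    "log": "annotate.log",
    "request_cpus": "1",
    "request_memory": "2GB",
})

# One job per document id; itemdata plays the role of `queue ... from`
# in a classic submit description file.
docids = [{"docid": d} for d in ["doc0001", "doc0002", "doc0003"]]

schedd = htcondor.Schedd()
result = schedd.submit(job, itemdata=iter(docids))
print("submitted cluster", result.cluster())
```

Because the submit description is data rather than a hand-edited file, re-running the entire corpus under a new vocabulary or a new parser version reduces to resubmitting the same job template with a different executable or item list.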