Geoscientists frequently make decisions under uncertainty: data are incomplete, and direct observation is often impossible because geologic processes unfold over vast time spans, leaving evidence lost or buried beneath the Earth's surface. When faced with uncertainty, ordinary people and experts alike rely on prior experience to guide choice and employ heuristics (rules of thumb). Research in cognitive psychology demonstrates that heuristics can lead to systematic errors in decision-making, i.e., biases. Mobile robotic platforms – increasingly used by geoscientists to collect data sets with rich spatial resolution – have great potential to aid geologic decision-making. Robots can "nudge" geoscientists to update or refine hypotheses and research plans based on incoming measurement data, reducing vulnerability to bias and improving scientific predictions. However, to be truly effective, robotic nudges must be explainable, i.e., the robot must be capable of explaining how a decision was generated. If a robot is not transparent about the intentions behind a decision, it can be perceived as untrustworthy and attributed blame for alleged errors. To develop explainable robotic nudges that will be understood and trusted by geoscientists, it is first necessary to determine how geoscientists explain decisions and behavior to each other. In this study, we examine how expert geoscientists constrain sampling decisions with heuristics, in the service of designing explainable robotic nudges for geoscience field work.
Geoscience experts completed a simulated geologic decision-making scenario in which they were asked to evaluate a hypothesis by collecting environmental data using a legged robot. Participants reported an initial sampling strategy that was executed by the robot, and measurement data were provided in real time; at any point, participants could stop the robot to change their sampling strategy or draw a conclusion about the hypothesis. Participants showed strong reliance on heuristics that resulted in biased decision-making; for example, a tendency to take a consistent number of samples at each environmental location, regardless of the measurement variability at that location, led to systematic under- and over-sampling. These biases were robust, occurring across all levels of expertise.
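The statistical cost of the fixed-sample-count heuristic can be illustrated with a simple standard-error calculation. The sketch below is purely hypothetical (the site names, standard deviations, and target-precision rule are illustrative, not drawn from the study): the standard error of a mean scales as sigma/sqrt(n), so a fixed n over-samples low-variability locations and under-samples high-variability ones relative to a rule that sizes n to a target precision.

```python
import math

def required_n(sigma, target_se):
    """Smallest n such that sigma / sqrt(n) <= target_se."""
    return math.ceil((sigma / target_se) ** 2)

# Hypothetical measurement standard deviations at two field locations.
locations = {"low-variability site": 0.5, "high-variability site": 2.0}
fixed_n = 5        # fixed-sample-count heuristic
target_se = 0.5    # desired precision of the estimated mean

for name, sigma in locations.items():
    se_fixed = sigma / math.sqrt(fixed_n)          # precision achieved by fixed n
    n_needed = required_n(sigma, target_se)        # n a variance-aware rule would use
    print(f"{name}: fixed n={fixed_n} -> SE={se_fixed:.2f}; "
          f"n={n_needed} needed for SE<={target_se}")
```

With these illustrative numbers, the low-variability site needs only 1 sample to reach the target precision (fixed n=5 over-samples it), while the high-variability site needs 16 (fixed n=5 under-samples it, leaving SE ≈ 0.89).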