GSA Annual Meeting in Denver, Colorado, USA - 2016

Paper No. 97-12
Presentation Time: 11:15 AM

A TIME FOR NUMERACY: RESOLVING WHETHER OR NOT PEOPLE CAN ACCURATELY SELF-ASSESS THEIR COMPETENCE


NUHFER, Edward, California State University (retired), Niwot, CO 80503, WIRTH, Karl R., Geology Department, Macalester College, Saint Paul, MN 55105, FLEISHER, Steven C., Department of Psychology, California State University Channel Islands, One University Drive, Camarillo, CA 93012, COGAN, Christopher B., Independent Consultant, Camarillo, CA 80503 and GAZE, Eric, Mathematics, Bowdoin College, Brunswick, ME 04011, enuhfer@earthlink.net

The peer-reviewed literature of self-assessment features three hypotheses: 1) measures of self-assessed competence have no meaningful relationship with demonstrated competence; 2) most people are unskilled, unaware of it, and strongly prone to overestimating their abilities; and 3) most people's self-assessed competence is generally in accord with direct measures of their competence. Behavioral scientists currently favor the second hypothesis.

We tested these competing hypotheses by measuring the accuracy of self-assessed competence in understanding science as a way of knowing in 1154 participants, who consisted of novices (e.g., first- and second-year undergraduates), developing experts (upper-level undergraduates and graduate students), and experts (faculty). Our instruments (global queries, the Science Literacy Concept Inventory, and a knowledge survey of the same) delivered data of high reliability, and we confirmed that self-assessment data consist of a mixture of meaningful self-assessment signal and random noise. The noise affects the numerical analyses of the data and the patterns displayed in graphs, and thus the interpretations based on both. We show how the second hypothesis became the favored explanation because researchers repeatedly mistook graphical patterns generated by random noise for patterns portraying the authentic character of self-assessment.
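The way random noise can masquerade as an "unskilled and unaware" pattern can be illustrated with a small simulation. This sketch is not the authors' actual analysis; it only shows the general binning artifact: when actual scores and self-assessments are completely independent random draws, sorting participants into quartiles by actual score still makes the bottom quartile appear to grossly overestimate and the top quartile to underestimate, because the self-assessment means stay flat near 50 while the actual-score means do not.

```python
import random

def quartile_means(n=1154, seed=0):
    """Simulate fully random data: actual scores and self-assessments
    are independent uniform draws on 0-100, so any apparent pattern of
    over- or underestimation is an artifact of binning on a noisy variable."""
    rng = random.Random(seed)
    actual = [rng.uniform(0, 100) for _ in range(n)]
    assessed = [rng.uniform(0, 100) for _ in range(n)]
    pairs = sorted(zip(actual, assessed))  # rank participants by actual score
    q = n // 4
    means = []
    for i in range(4):
        chunk = pairs[i * q:(i + 1) * q] if i < 3 else pairs[3 * q:]
        means.append((sum(a for a, _ in chunk) / len(chunk),
                      sum(s for _, s in chunk) / len(chunk)))
    return means  # (mean actual, mean self-assessed) for each quartile

for i, (a, s) in enumerate(quartile_means(), 1):
    print(f"Quartile {i}: actual {a:5.1f}, self-assessed {s:5.1f}")
```

Even though the simulated participants have no self-assessment skill at all, the printed quartile means reproduce the classic crossing pattern often read as "the unskilled overestimate and the skilled underestimate," which is why analyses must first separate signal from noise.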

Our results show that the relationship between self-assessed competence and demonstrated competence is meaningful and significant. Instead of being "unskilled and unaware of it," our participants furnished data confirming that their self-assessments are more often correct than not. This result refutes the first and second hypotheses and supports the third. Our study indicates that self-assessment skill is measurable, teachable, and valuable. Gains in self-assessment skill increase the capacity for improved learning, problem solving, and decision-making; thus, increased self-assessment skill is a worthy educational outcome for all disciplines. Knowledge surveys, reading reflections, rubrics, exam wrappers, and embedded confidence ratings are practical innovations that offer ways to promote and assess growing skill in self-assessment.

Handouts
  • nuhferetal.pptx (7.8 MB)