A TIME FOR NUMERACY: RESOLVING WHETHER PEOPLE CAN ACCURATELY SELF-ASSESS THEIR COMPETENCE
We tested these competing hypotheses by measuring the accuracy of self-assessed competence in understanding science as a way of knowing in 1,154 participants, consisting of novices (first- and second-year undergraduates), developing experts (upper-level undergraduates and graduate students), and experts (faculty). Our instruments (global queries, the Science Literacy Concept Inventory, and a knowledge survey of the same Inventory) yielded data of high reliability, and we confirmed that self-assessment data consist of a mixture of a meaningful self-assessment signal and random noise. This noise affects numerical analyses, the patterns displayed in graphs, and thus the interpretations based on both. We show how the second hypothesis became the favored explanation because researchers repeatedly mistook graphical patterns generated by random noise for patterns that portrayed the authentic character of self-assessment.
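A minimal sketch in Python can make the noise mechanism concrete. It is not the study's own simulation procedure; the uniform distributions, the seed, and the reuse of the sample size are illustrative assumptions. It draws self-assessed and demonstrated scores independently at random, so the true relationship between them is zero, then groups participants by quartile of demonstrated score, the grouping convention of classic "unskilled and unaware" plots.

    import numpy as np

    rng = np.random.default_rng(42)
    n = 1154  # sample size borrowed from the study purely for scale

    # Pure noise: self-assessed and demonstrated scores are drawn
    # independently, so any signal between them is zero by construction.
    demonstrated = rng.uniform(0, 100, n)
    self_assessed = rng.uniform(0, 100, n)

    # Assign each participant to a quartile of demonstrated competence.
    quartile = np.digitize(demonstrated, np.percentile(demonstrated, [25, 50, 75]))

    print("quartile  mean demonstrated  mean self-assessed")
    for q in range(4):
        mask = quartile == q
        print(f"{q + 1:>8}  {demonstrated[mask].mean():>17.1f}  "
              f"{self_assessed[mask].mean():>18.1f}")

Because the self-assessments are random, every quartile's mean self-assessment sits near the overall mean of about 50, while mean demonstrated scores climb across quartiles. The lowest quartile therefore appears to "overestimate" and the highest to "underestimate," reproducing from noise alone the crossing-lines pattern often read as authentic self-assessment behavior.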
Our results show that the relationship between self-assessed competence and demonstrated competence is meaningful and significant. Instead of being "unskilled and unaware of it," our participants furnished data confirming that their self-assessments are more often correct than not. This result refutes the first and second hypotheses and supports the third. Our study indicates that self-assessment skill is measurable, teachable, and valuable. Gains in self-assessment skill increase the capacity for improved learning, problem solving, and decision-making. Thus, increased self-assessment skill is a worthy educational outcome for all disciplines. Knowledge surveys, reading reflections, rubrics, exam wrappers, and embedded confidence ratings are practical innovations that offer ways to promote and assess growing skill in self-assessment.