Southeastern Section - 66th Annual Meeting - 2017

Paper No. 25-7
Presentation Time: 3:20 PM

ARE OPEN-ENDED QUESTIONS WORTH THE EXTRA TIME? A COMPARISON OF OPEN-ENDED, MULTIPLE CHOICE, AND LIKERT SCALE ITEMS FOR ASSESSING THE GENERAL PUBLIC’S UNDERSTANDING OF EVOLUTION


FORCINO, Frank L., Geosciences & Natural Resources Department, Western Carolina University, 331 Stillwell Building, Cullowhee, NC 28723 and SALTER, Rachel L., Department of Biological Sciences, North Dakota State University, 223 Stevens Hall, Dept. 2715, P.O. Box 6050, Fargo, ND 58108, FLForcino@email.wcu.edu

Before attempting to address deficiencies in science knowledge, researchers and educators must assess the general public’s current understanding of science. Three widely used assessment item types are open-ended, multiple-choice, and Likert-scale items. Each type has strengths and weaknesses. For example, open-ended questions provide a wealth of detail about the test taker’s understanding of the content; however, analyzing the responses is labor intensive because they are longer, individualized, and open to the scorer’s interpretation.

Here, we examine the understanding of evolution content among 300 participants from the general public (recruited through Amazon Mechanical Turk) using open-ended, multiple-choice, and Likert-scale items. The goal is to determine whether these three methods provide the same or disparate feedback on the participants’ understanding of evolution. We used the Cheetah item from the Bishop & Anderson Open Response Items (ORI), six multiple-choice (MC) questions that we created and tested for validity and reliability, and six Likert items from the Measurement of Acceptance of the Theory of Evolution instrument (MATE).

The mean amount of correct content on the ORI was 12%, and the mean MC score was 54%. The mean response on the MATE was 1.9 (out of 5, with 1 being correct and 5 being incorrect). Using a paired t-test, participants scored significantly lower on the ORI than on the MC (p < 0.001). Of the 300 participants, 209 (70%) averaged between 1 and 2 on the MATE. In comparison, only 12 (4%) answered all six MC items correctly, and 193 (64%) answered at least three of the six MC questions correctly. The comparatively high rate of correct responses on the Likert items suggests that Likert questions may be the easiest to answer correctly, even when participants do not know the content. Because answering the MC items correctly requires more content knowledge than agreeing with the Likert statements, the MC items may give a better picture of what people actually know. And when the most in-depth responses are gathered with the ORI, it becomes clear how little the general public understands about the intricacies of evolution (12% correct). Thus, the open-ended item is the most likely of the three to reveal what participants do (or do not) know, compared with the Likert and MC items.
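For readers who wish to run this style of comparison on their own assessment data, a minimal sketch of the paired analysis is given below. The per-participant scores are simulated placeholders chosen only to mimic the reported means (the study data are not reproduced here), and the variable names are illustrative assumptions; the paired t-test itself is carried out by scipy.stats.ttest_rel.

    # Illustrative sketch only: hypothetical per-participant scores, not the study data.
    # A paired t-test compares each participant's ORI score with their MC score.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 300  # number of participants in the study

    # Hypothetical scores as fractions correct (0-1), loosely mimicking the
    # reported means of 12% (ORI) and 54% (MC).
    ori_scores = rng.beta(2, 14, size=n)
    mc_scores = rng.beta(6, 5, size=n)

    # Paired test: the same participants answered both item types.
    t_stat, p_value = stats.ttest_rel(ori_scores, mc_scores)
    print(f"mean ORI = {ori_scores.mean():.2f}, mean MC = {mc_scores.mean():.2f}")
    print(f"paired t = {t_stat:.2f}, p = {p_value:.3g}")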