ARE OPEN-ENDED QUESTIONS WORTH THE EXTRA TIME? A COMPARISON OF OPEN-ENDED, MULTIPLE-CHOICE, AND LIKERT-SCALE ITEMS FOR ASSESSING THE GENERAL PUBLIC'S UNDERSTANDING OF EVOLUTION
Here, we examine the evolution content understanding of 300 participants from the general public (recruited through Amazon Mechanical Turk) using open-ended, multiple-choice, and Likert-scale questions. The goal is to determine whether these three methods provide the same or disparate feedback on participants' understanding of evolution. We used the Cheetah item from the Bishop & Anderson open-response items (ORI), six multiple-choice (MC) questions that we created and tested for validity and reliability, and six Likert items from the Measurement of Acceptance of the Theory of Evolution (MATE) instrument.
The mean proportion of correct content in the ORI responses was 12%, the mean MC score was 54%, and the mean MATE response was 1.9 (on a scale of 1 to 5, with 1 being correct and 5 incorrect). A paired t-test showed that participants scored significantly lower on the ORI than on the MC (p < 0.001). Of the 300 participants, 209 (70%) averaged between 1 and 2 on the MATE, whereas only 12 (4%) answered all six MC items correctly and 193 (64%) answered at least three of the six MC questions correctly. This pattern suggests that Likert items are the easiest to answer correctly, even for participants who do not know the correct answer. Because the MC items require more content knowledge than the Likert items, the MC items may give a better picture of what people actually know. And when the most in-depth responses are gathered using the ORI, it becomes clear how little the general public understands about the intricacies of evolution (12% correct). Thus, the open-ended item is the most likely of the three formats to reveal what participants actually know (or do not know).
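The paired t-test used to compare ORI and MC scores can be sketched directly from its definition (mean of the per-participant score differences divided by the standard error of those differences). The scores below are hypothetical and for illustration only; they are not the study's data.

```python
import math
import statistics

def paired_t(x, y):
    """Paired t statistic: mean of per-pair differences over its standard error."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample standard deviation (n - 1 denominator)
    return mean_d / (sd_d / math.sqrt(n))

# Hypothetical per-participant proportions correct on each instrument:
ori = [0.10, 0.15, 0.05, 0.20, 0.10, 0.12]
mc = [0.50, 0.67, 0.33, 0.67, 0.50, 0.50]

t = paired_t(ori, mc)  # negative t: ORI scores fall below MC scores
```

In practice this comparison would typically be run with a statistics package (e.g. `scipy.stats.ttest_rel`), which also returns the p-value; the sketch only shows where the test statistic comes from.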