Paper No. 10
Presentation Time: 10:35 AM
DEVELOPING A GEOSCIENCE LITERACY EXAM FOR ASSESSING STUDENTS' EARTH, OCEAN, ATMOSPHERIC AND CLIMATE SCIENCE LITERACY
STEER, David N., Department of Geology and Environmental Science, The University of Akron, Akron, OH 44325-4101, IVERSON, Ellen, Science Education Resource Center, Carleton College, 1 North College Street, Northfield, MN 55057 and MANDUCA, Cathryn, Science Education Resource Center, Carleton College, B-SERC, 1 North College St, Northfield, MN 55057, steer@uakron.edu
This research seeks to develop valid and reliable questions that faculty can use to assess student learning from curricula designed to teach geoscience in the context of societal issues across the disciplines. This effort is part of the InTeGrate project, which aims to create a population of college graduates who are poised to use geoscience knowledge in developing solutions to societal challenges. A team of 14 community members developed the questions to probe concepts elucidated in recently released earth, ocean, atmospheric, and climate science literacy documents. When complete, the Geoscience Literacy Exam (GLE) will include a suite of at least 30 single-answer, multiple-choice questions aimed at the understanding or applying cognitive levels (called Level 1 questions). Those questions will address the core content areas typically covered in introductory classes. The exam set will also include the same number of more challenging questions spanning the understanding through analyzing cognitive levels (Level 2). Those questions will include formats such as multiple-correct-answer items, matching exercises, or images as prompts. The more complex questions could be answered by some introductory students but target students who have taken upper-level courses in the content areas. Lastly, a set of 30 short-essay questions is being developed (Level 3). Those analyzing- through creating-level questions could be answered by students at any level, but responses are expected to show significant growth across the curriculum.
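To make the three-tier structure concrete, the following is a minimal sketch of how a question bank along these lines might be modeled; the Python code is purely illustrative, and every name in it (Question, Cognitive, and their fields) is a hypothetical construct for this sketch, not part of the GLE itself.

```python
from dataclasses import dataclass, field
from enum import Enum

class Cognitive(Enum):
    # Revised Bloom's taxonomy levels referenced in the abstract
    UNDERSTANDING = 2
    APPLYING = 3
    ANALYZING = 4
    EVALUATING = 5
    CREATING = 6

@dataclass
class Question:
    qid: str
    level: int                    # GLE Level 1, 2, or 3
    cognitive: Cognitive          # targeted cognitive level
    literacy_doc: str             # "earth", "ocean", "atmospheric", or "climate"
    prompt: str
    choices: list[str] = field(default_factory=list)  # empty for Level 3 essays
    key: set[int] = field(default_factory=set)        # correct choice indices; empty for essays
```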
Once the full exam is available, faculty will be able to assemble sets of questions to track progress toward meeting literacy goals, choosing the questions that best fit the objectives of their program for use as pre/post measures. A typical test design might include 5-8 Level 1 questions, 3-5 Level 2 questions, and 2 Level 3 short essays. Scoring of Level 1 and Level 2 questions can be automated in most course management systems because those questions have known correct responses. Initially, essays will be human-graded using rubrics; research is underway to use those student responses to build databases that allow reliable and accurate computer grading of the same or similar essays answered by future students. We seek partners interested in testing the existing questions and developing a more comprehensive question set.
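As a sketch of how the test-assembly and automated-scoring workflow described above could look in practice, the following reuses the hypothetical Question model from the earlier sketch; it is an illustrative assumption, not an actual InTeGrate or course-management-system implementation.

```python
import random

def assemble_test(bank, n1=6, n2=4, n3=2, seed=0):
    """Draw a test matching the typical design in the abstract:
    5-8 Level 1, 3-5 Level 2, and 2 Level 3 questions (the defaults
    are one illustrative choice within those ranges)."""
    rng = random.Random(seed)
    by_level = {1: [], 2: [], 3: []}
    for q in bank:
        by_level[q.level].append(q)
    return (rng.sample(by_level[1], n1)
            + rng.sample(by_level[2], n2)
            + rng.sample(by_level[3], n3))

def score_objective(test, responses):
    """Auto-score Level 1 and Level 2 items against their known keys,
    as a course management system would; Level 3 essays are skipped
    and left for rubric-based human grading."""
    score = 0
    for q in test:
        # responses maps question id -> set of selected choice indices
        if q.level < 3 and responses.get(q.qid) == q.key:
            score += 1
    return score
```

In this sketch, building the pre- and post-test with the same seed draws an identical question set, supporting the pre/post comparison the abstract describes.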