2015 GSA Annual Meeting in Baltimore, Maryland, USA (1-4 November 2015)

Paper No. 88-6
Presentation Time: 9:15 AM

USING AN ITEM RESPONSE MODELING APPROACH TO ASSESS STUDENTS' PROFICIENCY AROUND ANALOG MODELS IN HIGH SCHOOL EARTH SCIENCE


KASTENS, Kim A., Lamont-Doherty Earth Observatory, Columbia University, 61 Route 9W, Palisades, NY 10964-8000 and RIVET, Ann, Teachers College, Columbia University, New York, NY 10027, kastens@ldeo.columbia.edu

We used item response theory (IRT) (Wilson, 2005) to develop an assessment of students' proficiency with physical (analog) models used in a high-school-level Earth & Space Science course. In developing an IRT-based instrument, one first articulates the competency that one wishes to measure (the "construct"), and specifies how students' proficiency would progress from novice to expert in the form of a "construct map." Our construct map is grounded in analogical reasoning (Gentner, 1983), and has three levels: (a) ability to relate entities in the model to the corresponding phenomena in the real world, e.g., the big ball represents the Earth; (b) ability to relate motions and configurations, e.g., the motion of the small ball around the big ball represents the motion of the moon around the Earth; (c) ability to relate mechanisms, e.g., the geometry by which the illumination of the small ball changes represents the geometry that causes moon phases. The second step is to develop assessment items that elicit observable student behaviors or products indicative of levels of proficiency on the construct. We developed assessment items mapped to the construct in three different contexts: causes of the seasons, phases of the moon, and sedimentary deposition; each instrument referenced a physical model being run by an experimenter at the front of the classroom. The third step is to develop an "outcome space," which categorizes how student responses to each item are to be interpreted in terms of the construct map. As most of our items are free-response, our outcome space takes the form of a scoring rubric, developed iteratively through examination of student responses. The final step is to utilize a measurement model (we employed Rasch analysis) to relate students' item response scores to their demonstrated proficiency on the construct.
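
For reference, the simplest (dichotomous) form of the Rasch model expresses the probability that student p succeeds on item i in terms of the difference between the student's proficiency (θ_p) and the item's difficulty (β_i); this standard formulation is given only as an illustration, since the abstract does not state whether a dichotomous or partial-credit variant was fit to the rubric scores:

\[ P(X_{pi} = 1 \mid \theta_p, \beta_i) = \frac{\exp(\theta_p - \beta_i)}{1 + \exp(\theta_p - \beta_i)} \]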

This technique enabled us to make strong claims from our data: that student proficiency does appear to develop through the trajectory we conjectured; that students tend to be more proficient at mapping correspondences than non-correspondences between model and reality; that students at all levels of initial proficiency improved over the course of instruction; and that professional development improved teachers' ability to support this aspect of modeling practice.