2014 GSA Annual Meeting in Vancouver, British Columbia (19–22 October 2014)

Paper No. 183-13
Presentation Time: 11:20 AM

MULTIPLE DIMENSIONS OF ASSESSMENT IN FIELD COURSE EXERCISES: THE DEVELOPMENT OF RUBRICS FOR VALID AND RELIABLE ASSESSMENT


PYLE, Eric J., Department of Geology & Environmental Science, James Madison University, MSC 6903, Harrisonburg, VA 22807 and WHITMEYER, Steve, Geology & Environmental Science, James Madison University, 395 S. High St, MSC 6903, Harrisonburg, VA 22807

Extensive field learning opportunities are arguably a cornerstone of students’ preparation as geoscientists, often culminating in a field camp experience. These learning experiences are intense and address cognitive, affective, and psychomotor domains. Given the complexity of assessing learning across all of these domains, it is difficult to generate valid and reliable measures of student learning across multiple instructors, multiple tasks, and successive years. To address this problem, the faculty of the James Madison University Field Course in Ireland developed a series of rubrics for field notebooks, maps, cross-sections, lithologic descriptions, and memoirs. To establish content validity, the rubric descriptor statements were vetted by multiple field course faculty members, who selected dimensions appropriate to each of the assignments in the field course. For example, an assignment for a particular site might include field notes, a map, and a cross-section, but not a memoir. Since the rubrics were implemented in 2009, consistency of measurement across the same assignments has provided at least a limited level of reliability, with instructors applying the same rubrics across multiple offerings of the course. Through the application of the rubrics, statistically significant differences between components of assignments have been identified and reported to faculty, demonstrating the sensitivity of the rubrics to student performance and prompting deliberate adjustments to assignments so that learning is improved in subsequent offerings. Furthermore, student performance has been cross-referenced with pre- and post-course self-reports of students’ perceptions of the assignments across domains. Continued work with these rubrics includes establishing a more quantitative basis for reliability and establishing construct validity by comparing student performance with other independent measures.
With reliability and validity strengthened, this suite of rubrics has value beyond the assessment of student learning in the JMU field course and could be adopted or adapted for other field courses of similar format. Solid assessment data have the potential to contribute to the evaluation of field course experiences as a whole, offering educational justification in the face of the resource-intensive nature of field courses.