ORGANIZERS

  • Harvey Thorleifson, Chair
    Minnesota Geological Survey
  • Carrie Jennings, Vice Chair
    Minnesota Geological Survey
  • David Bush, Technical Program Chair
    University of West Georgia
  • Jim Miller, Field Trip Chair
    University of Minnesota Duluth
  • Curtis M. Hudak, Sponsorship Chair
    Foth Infrastructure & Environment, LLC


Paper No. 1
Presentation Time: 9:00 AM-6:00 PM

LOW-STAKES EVALUATION VS. HIGH-STAKES ASSESSMENT: STUDENT PERFORMANCE MOTIVATION FOR PALEOCLIMATE CONTENT IN A GENERAL EDUCATION GEOSCIENCE COURSE


PYLE, Eric J., Department of Geology & Environmental Science, James Madison University, MSC 6903, Harrisonburg, VA 22807 and ST. JOHN, Kristen, Geology and Environmental Science, James Madison University, MSC 6903, Harrisonburg, VA 22807, pyleej@jmu.edu

Field-testing of instructional materials is needed both to meet the demand for high-quality materials and to document their effectiveness. Large field trials of instructional materials require quantitative measures, and online surveys and tests have become an increasingly common way to collect study data efficiently. To ensure that a significant proportion of the target population participates, such studies routinely offer small incentives. Seldom considered, however, is how students’ test-taking effort affects results collected in studies that are high-stakes for the instructional materials development team yet low-stakes for the students. How test-taking context and student motivation shape assessment results is the focus of this evaluation.

As part of the field-testing of an NSF-funded CCLI project, students in a large, introductory general education geoscience course were taught using new paleoclimate instructional materials. Students were asked to complete, for extra credit, an online pre- and post-test of content and skills. Responses were collected across 5 semesters at 4 higher education institutions. Analysis revealed statistically significant improvement in student performance, but overall scores were lower than expected. To test the hypothesis that student motivation to perform was strongly influenced by the stakes associated with test administration, a subset of the same questions was embedded in 2 regular, in-class examinations during the final semester of the project at one of the institutions. Subsequent ANOVA testing revealed not only a significant improvement in student scores between the pre- and post-test administrations, but also a significant difference between the online post-test evaluation scores and the in-class assessment scores: average in-class scores on the question subset were nearly double the online scores.
Multiple factors may partly account for this difference, but the motivational effect of a grade-related stake can be inferred to have the strongest impact on student performance. These results suggest that the evaluation of instructional innovations and the assessment of students should be closely aligned and developmentally linked.