2005 Salt Lake City Annual Meeting (October 16–19, 2005)

Paper No. 11
Presentation Time: 4:30 PM


SMITH, Gary A., Earth and Planetary Sciences, Univ of New Mexico, MSC03 2040, Albuquerque, NM 87131-0001, gsmith@unm.edu

I undertook a retrospective analysis of assessment scores in a lower-division earth history (EH) class and an upper-division environmental geology (EG) class. Short-answer exams (1/3 of the course grade) emphasize authentic testing of higher-order learning and serve as summative assessments of learning performance for course segments. Online quizzes (EH, EG), in-class pair/group assignments (EH, EG), lab exercises (EH), multi-week research projects (EG), and evaluative and researched reading/writing assignments (EG) provide at least twice-weekly formative assessments of student learning, with opportunities for feedback and improvement. I grade these assignments to ensure student participation, but they primarily serve a formative purpose.

EH exam grades do not correlate with lab grades (R²=0.02) or in-class-exercise grades (R²=0.07). Exams only partly cover the learning objectives of the lab exercises, but the lack of correlation may also reflect students not learning equally during group lab work and/or lenient TA grading. In-class exercises cover the same learning objectives as the exams but, likewise, are not completed individually. Individual online quiz grades correlate directly with exam grades (R²=0.86, p<0.0001). Most high-scoring students took advantage of opportunities to reanswer complex quiz questions after receiving instructor feedback.

EG exam grades correlate poorly with in-class-exercise grades (R²=0.4, p=0.12) and online quiz grades (R²=0.5, p=0.04). The poor correlation with quiz grades partly results from much better exam than quiz performance by most of the students who regularly consulted their quiz results or reanswered questions.
EG exam grades correlate directly with grades on research problems and writing assignments (R²=0.99), even though these assignments emphasize synthesis, analysis, and evaluation, with 20–25% of rubric points awarded for writing quality, whereas exams focus subequally on comprehension, application, analysis, and synthesis without directly scoring writing. I tentatively conclude that formative assessments predict overall student learning, as assessed by exams, when the formative assessments are completed individually, equal or exceed the exams in cognitive rigor, and are paired with students using assessment feedback to address learning deficiencies.
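The R² values reported above are squared Pearson correlations between paired per-student scores. As an illustrative sketch only (the student scores below are hypothetical, not the study's data), the computation looks like this:

```python
# Squared Pearson correlation (R²) between two paired score lists,
# the statistic used throughout the abstract. Pure-Python, no dependencies.

def r_squared(x, y):
    """Return R², the squared Pearson correlation of paired scores."""
    n = len(x)
    mx = sum(x) / n                                   # mean of x
    my = sum(y) / n                                   # mean of y
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))  # covariance sum
    sxx = sum((a - mx) ** 2 for a in x)               # variance sum of x
    syy = sum((b - my) ** 2 for b in y)               # variance sum of y
    return (sxy * sxy) / (sxx * syy)

# Hypothetical percent scores for five students (illustration only):
quiz = [72, 85, 90, 65, 78]
exam = [70, 88, 92, 60, 80]
print(round(r_squared(quiz, exam), 2))  # → 0.99
```

A high R² like this one indicates that students' rank order on the formative assessment closely tracks their rank order on the exam; it says nothing about whether the absolute scores agree.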