GSA Annual Meeting in Indianapolis, Indiana, USA - 2018

Paper No. 102-7
Presentation Time: 9:00 AM-6:30 PM

EVALUATEUR: AN INNOVATIVE APPROACH TO FOSTERING AND DOCUMENTING GROWTH IN STUDENT OUTCOMES IN AN UNDERGRADUATE RESEARCH PROGRAM


SINGER, Jill K., Earth Sciences, SUNY-Buffalo State, 1300 Elmwood Avenue, Buffalo, NY 14222, FOX, Sean P., Science Education Resource Center, Carleton College, 1 North College Street, Northfield, MN 55057, WEILER, Daniel, Daniel Weiler Associates, Berkeley, CA 94707, ZIMMERMAN, Bridget, Nautilus Evaluation Services, 4840 County Road 11, Rushville, NY 14544, AMBOS, Elizabeth, Council on Undergraduate Research, 734 15th St NW, Suite 550, Washington, DC 20005 and HEWLETT, James, Biology, Finger Lakes Community College, Canandaigua, NY 14424

EvaluateUR is a unique approach to teaching and learning that seeks to improve student outcomes by making evaluation an integral part of the undergraduate research experience. The project obtains reliable, independent assessments of program impact without imposing a measurement burden, and it uses these assessments to help participating students gain new insight into their academic strengths and weaknesses and a fuller appreciation of the broad range of academic and personal skills for which they should take responsibility. The project is a partnership among Buffalo State, the Council on Undergraduate Research, SERC, and Finger Lakes Community College. With funding from the NSF WIDER program, EvaluateUR is currently being refined and piloted at 14 institutions, with plans to recruit another group of project participants in 2019.

EvaluateUR produces multiple assessments of student knowledge and skills in 11 outcome categories. Each outcome category is defined by several discrete components that spell out what the outcome means, and student accomplishment on each component is assessed on a five-point scale linked to an explanatory rubric. Faculty mentors and students each complete these assessments three times: at the outset, the midpoint, and the end of the research experience. Mentors rate students on each component, and students evaluate their own progress using an identical instrument. After each assessment, the student and mentor meet to discuss the reasons for any significant differences in their respective scores. To facilitate these structured conversations, a score report is generated that highlights components with a score difference of two or more points.

A web-based tool helps site administrators track the progress of student-mentor pairs, and automated messages remind students and mentors when to complete each step in the evaluation. A data analysis and reporting function prepares summary statistics that can be used to provide institutional assessment data.
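
The score-comparison step lends itself to a brief illustration. The sketch below is a minimal, hypothetical rendering in Python of the reporting logic described above; the data layout, component names, and function names are assumptions made for illustration only and are not part of the EvaluateUR web tool. It shows how mentor and student scores on a five-point scale might be paired, how components differing by two or more points could be flagged for discussion, and how simple summary statistics could be rolled up for institutional reporting.

# Illustrative sketch only: field names, data layout, and threshold handling are
# assumptions for demonstration; the abstract describes the behavior, not the code.

from statistics import mean

# Hypothetical scores on a five-point scale for a few components of one
# outcome category, recorded separately by the mentor and the student.
mentor_scores = {"formulates hypotheses": 4, "interprets data": 2, "cites literature": 5}
student_scores = {"formulates hypotheses": 3, "interprets data": 4, "cites literature": 5}

DISCUSSION_THRESHOLD = 2  # per the abstract: flag differences of two or more points

def score_report(mentor, student, threshold=DISCUSSION_THRESHOLD):
    """Pair mentor and student scores and flag components whose scores differ
    by the threshold or more, as prompts for the structured conversation."""
    report = []
    for component in mentor:
        diff = mentor[component] - student[component]
        report.append({
            "component": component,
            "mentor": mentor[component],
            "student": student[component],
            "difference": diff,
            "discuss": abs(diff) >= threshold,
        })
    return report

def summary_statistics(report):
    """Aggregate statistics of the kind an institution might report."""
    return {
        "mean_mentor_score": mean(r["mentor"] for r in report),
        "mean_student_score": mean(r["student"] for r in report),
        "components_flagged": sum(r["discuss"] for r in report),
    }

if __name__ == "__main__":
    report = score_report(mentor_scores, student_scores)
    for row in report:
        flag = "DISCUSS" if row["discuss"] else ""
        print(f'{row["component"]:<22} mentor={row["mentor"]} student={row["student"]} {flag}')
    print(summary_statistics(report))

In this toy run, "interprets data" would be flagged because the mentor and student scores differ by two points, mirroring the kind of discrepancy the generated score report is designed to surface for discussion.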