Section 1.5 The Paradigms Model for Evaluation and Assessment

We understand that people, especially journal reviewers and NSF program reviewers, would like a simple quantitative measure showing that the Paradigms Project is successful. So would we! However, since the Paradigms curriculum is multifaceted and complex, and the number of students in any given year is small (now in the high 30s), no single measure can possibly serve as a scientifically compelling assessment. For example, we have found that D/F/W rates vary wildly across the Paradigms courses. Similarly, our small class sizes made it impossible for us to draw meaningful conclusions from tracking GRE scores.

Nationally validated pre/post assessments for upper-division courses are only now being developed by other research groups. A couple exist for quantum mechanics, but they mostly target the modern physics level and/or do not (yet) respect our spins-first approach. The one such assessment that we have employed is the CUE (developed by the PER group at the University of Colorado), modified to contain content relevant to our paradigm on static vector fields. This study [19] showed that although OSU students' pretest scores were about 15% lower than CU students', their post-test scores showed an equivalent normalized gain. We were also intrigued to see that OSU students were essentially unable to answer two questions. We were able to tie that result to our content rearrangement and address the omission.
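For readers unfamiliar with the term, "normalized gain" here presumably refers to the standard (Hake) normalized gain, which expresses the pre-to-post improvement as a fraction of the improvement that was possible:

```latex
\langle g \rangle = \frac{\langle \text{post} \rangle - \langle \text{pre} \rangle}{100\% - \langle \text{pre} \rangle}
```

Because the gain is normalized by the room left for improvement, two populations with different pretest scores (such as the OSU and CU cohorts above) can show equivalent gains even when their raw post-test scores differ.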

Our grant-funded work has always included rigorous formative and summative evaluation plans appropriate for each phase of the project. Important sources of information that we triangulate for our local formative and (ongoing) summative assessment include exit interviews with seniors, formal departmental assessment of student work, exam and homework performance, classroom observations, and reflections from the teaching team, especially TA/LAs and graders.