Do constructed-response and multiple-choice questions measure the same thing?
Our study empirically investigates the relationship between constructed-response (CR) and multiple-choice (MC) questions using a unique data set compiled from several years of university introductory economics classes. We conclude that CR and MC questions do not measure the same thing. Our main contribution is to show that CR questions contain independent information related to student learning. Specifically, we find that the component of CR scores that cannot be explained by MC responses is positively and significantly related to (i) performance on a subsequent exam in the same economics course, and (ii) academic performance in other courses. Further, we present evidence that CR questions provide information that could not be obtained by expanding the set of MC questions. A final contribution is to demonstrate that empirical approaches relying on factor analyses or Walstad-Becker (1994)-type regressions are unreliable in the following sense: these procedures can lead to the conclusion that CR and MC questions measure the same thing even when the underlying data contain strong evidence to the contrary.
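The two-step logic behind the residual result can be sketched with simulated data. This is a minimal illustration, not the paper's actual data or estimation procedure: all variable names, coefficients, and the data-generating process below are assumptions made for exposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical setup: one latent skill drives MC scores, while a second,
# independent skill component appears only in CR scores and in later
# performance. All magnitudes are illustrative.
skill_mc = rng.normal(size=n)
skill_cr = rng.normal(size=n)  # CR-specific component
mc = skill_mc + 0.5 * rng.normal(size=n)
cr = skill_mc + skill_cr + 0.5 * rng.normal(size=n)
future_exam = skill_mc + skill_cr + 0.5 * rng.normal(size=n)

def ols(y, X):
    """Least-squares fit with an intercept column prepended; returns
    the design matrix and coefficient vector."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return X, beta

# Step 1: isolate the part of CR scores that MC responses cannot
# explain -- the residual from regressing CR on MC.
X1, beta1 = ols(cr, mc)
cr_residual = cr - X1 @ beta1

# Step 2: ask whether that residual predicts later performance
# over and above MC scores.
_, beta2 = ols(future_exam, np.column_stack([mc, cr_residual]))
coef_residual = beta2[2]
print(round(coef_residual, 2))
```

Because the residual is orthogonal to MC scores by construction, a positive coefficient on it in step 2 indicates that CR questions carry predictive content that MC questions do not capture, which is the pattern the abstract describes.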