Linking U.S. school district test score distributions to a common scale
Andrew Ho, Professor of Education, Harvard University
Open to PhD students and faculty engaged in causal inference in education research.
About the speaker:
Andrew Ho is a professor of education at Harvard University. He is a psychometrician interested in educational accountability metrics, working at the intersection of educational statistics and educational policy. His current projects include articulating meaningful contrasts between state growth model approaches and developing new gap, trend, and growth metrics for cross-test comparison and validation.
In the U.S., there is no recent database of district-level test scores that is comparable across states. We construct and evaluate such a database for the years 2009-2013 to support large-scale educational research. First, we derive transformations that link each state test score scale to the scale of the National Assessment of Educational Progress (NAEP). Next, we apply these transformations to a unique nationwide database of district-level means and standard deviations, obtaining estimates of each district's test score distribution expressed on the NAEP measurement scale. We then conduct a series of validation analyses designed to assess key assumptions underlying the methods and to assess the extent to which the districts' transformed distributions match the districts' actual NAEP score distributions (for the small subset of districts where the NAEP assessments are administered). We also examine the correlations of our estimates with district test score distributions on a second "audit test"—the NWEA MAP test, which is administered to student populations in several thousand school districts nationwide. Our linking method yields estimated district means with a root mean square deviation from actual NAEP scores of roughly one-tenth of a standard deviation in any single year or grade. The correlations of our estimates with average district means over years and grades are 0.97-0.98 for NAEP and 0.93 for the NWEA test. We conclude that the linking method is accurate enough to be used in large-scale educational research on national variation in district achievement, but that the small amount of linking error in the methods renders fine-grained distinctions or rankings among districts in different states invalid.
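The core idea of the linking step can be sketched as a linear transformation that maps a district's state-scale mean and standard deviation onto the NAEP scale by matching the state's overall moments to the state's NAEP moments. This is a simplified illustration of that general approach, not the authors' exact method; the function name and all numbers below are invented for demonstration.

```python
# Hypothetical sketch of a linear linking transformation: rescale a
# district's state-test mean and SD onto the NAEP scale using the
# state's overall mean/SD on both tests. All values are made up.

def link_to_naep(district_mean, district_sd,
                 state_mean, state_sd,
                 state_naep_mean, state_naep_sd):
    """Linearly map a district's score distribution to the NAEP scale."""
    a = state_naep_sd / state_sd          # slope of the linking function
    b = state_naep_mean - a * state_mean  # intercept
    linked_mean = a * district_mean + b
    linked_sd = a * district_sd           # SDs rescale by the slope only
    return linked_mean, linked_sd

# Invented example: a state test with mean 350 and SD 40, where the
# state's NAEP results have mean 250 and SD 30.
m, s = link_to_naep(district_mean=370, district_sd=36,
                    state_mean=350, state_sd=40,
                    state_naep_mean=250, state_naep_sd=30)
# m = 250 + 0.75 * (370 - 350) = 265.0 ; s = 0.75 * 36 = 27.0
```

Under this sketch, a district scoring half a state standard deviation above its state mean stays half a standard deviation above the state's NAEP mean after linking, which is the invariance the validation analyses then probe.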
Co-authors: Sean Reardon and Demetra Kalogrides
The objective of the Causal Inference in Education Research Seminar (CIERS) is to engage students and faculty from across the university in conversations around education research using various research methodologies. This seminar provides a space for doctoral students and faculty from the School of Education, Ford School of Public Policy, and the Departments of Economics, Sociology, Statistics, and Political Science to discuss current research and receive feedback on works-in-progress. Discourse between these schools and departments creates a more complete community of education scholars, and provides a networking opportunity for students enrolled in a variety of academic programs who share common research interests.