Brian Jacob on student test scores—how they’re assembled and why that matters

August 16, 2016

In “Student test scores: How the sausage is made and why you should care,” Brian Jacob describes how student ability is measured; how various test makers analyze and report student performance; and why this matters for policymakers and practitioners hoping to improve educational outcomes.

Jacob’s piece appears in Evidence Speaks, a weekly publication of the Brookings Institution’s Center on Children and Families, and is cited by Valerie Strauss in the August 12 Washington Post story, “Student test scores: How they are actually calculated and why you should care.”

“Contrary to popular belief, modern cognitive assessments—including the new Common Core tests—produce test scores based on sophisticated statistical models rather than the simple percent of items a student answers correctly,” writes Jacob. “While there are good reasons for this, it means that reported test scores depend on many decisions made by test designers, some of which have important implications for education policy.”
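The “sophisticated statistical models” Jacob refers to are generally item response theory (IRT) models. As rough intuition for why model-based scores can diverge from percent correct, here is a minimal Python sketch of a two-parameter logistic (2PL) IRT model with invented item parameters; it is a toy illustration, not the scoring procedure behind any actual test:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical 2PL item parameters; real tests estimate these from field data.
discrim = np.array([0.5, 1.0, 1.5, 1.0, 2.0])       # a_i: how sharply an item separates
difficulty = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # b_i: item difficulty (logits)

def ability_mle(responses):
    """Maximum-likelihood ability estimate under a toy 2PL IRT model."""
    def neg_log_lik(theta):
        p = 1.0 / (1.0 + np.exp(-discrim * (theta - difficulty)))  # P(correct | theta)
        return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(-4.0, 4.0), method="bounded").x

# Two students each answer 3 of 5 items correctly -- identical percent correct --
# yet the model scores them differently because *which* items they got right differs.
student_a = np.array([1, 1, 1, 0, 0])  # correct on the three easiest items
student_b = np.array([0, 1, 1, 0, 1])  # misses the easiest item, gets the hardest
print(f"Student A ability: {ability_mle(student_a):+.2f}")
print(f"Student B ability: {ability_mle(student_b):+.2f}")
```

In this toy model, two students with the same percent correct receive different ability estimates because the items they answered correctly differ in difficulty and discrimination, which is why reported scores depend on the designers’ modeling choices.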

In the piece, Jacob describes a variety of statistical methods used by today’s test makers, including “shrunken” test scores, scaling, and standardization, all of which have important implications for researchers and policymakers interested in addressing socioeconomic achievement gaps. He closes with advice for researchers and policymakers and a call for greater transparency about how test scores are generated. “Researchers, policy analysts, and the public need to better understand the tradeoffs embedded in the various decisions underlying test scores,” he writes. “Only then can we have a productive discussion of the direction to take.”
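For readers who want intuition for how two of these choices interact, here is a stylized Python sketch of one form of empirical Bayes “shrinkage” followed by standardization, using invented numbers rather than any test maker’s actual procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stylized setup: two groups of students whose true abilities differ by one
# point; observed scores add measurement noise. All numbers are invented.
n = 10_000
true_a = rng.normal(0.0, 1.0, n)
true_b = rng.normal(-1.0, 1.0, n)
noise_sd = 0.7
obs_a = true_a + rng.normal(0.0, noise_sd, n)
obs_b = true_b + rng.normal(0.0, noise_sd, n)

# "Shrunken" scores (empirical Bayes): pull each noisy score toward its
# group mean in proportion to the test's unreliability.
reliability = 1.0 / (1.0 + noise_sd**2)  # signal variance / total variance
shrunk_a = obs_a.mean() + reliability * (obs_a - obs_a.mean())
shrunk_b = obs_b.mean() + reliability * (obs_b - obs_b.mean())

def standardized_gap(a, b):
    """Gap between group means, in SD units of the pooled distribution."""
    pooled_sd = np.concatenate([a, b]).std()
    return (a.mean() - b.mean()) / pooled_sd

# Shrinkage leaves group means alone but compresses the spread of scores,
# so the same underlying gap looks larger in standardized (SD) units.
print(f"standardized gap, observed scores: {standardized_gap(obs_a, obs_b):.2f}")
print(f"standardized gap, shrunken scores: {standardized_gap(shrunk_a, shrunk_b):.2f}")
```

In this stylized example, shrinkage narrows the score distribution without moving group means, so the identical underlying gap appears larger once scores are re-expressed in standard deviation units, one instance of the tradeoffs Jacob urges researchers and policymakers to understand.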


Brian Jacob is the Walter H. Annenberg Professor of Education Policy and co-director of the Ford School’s Education Policy Initiative (EPI), which engages in applied, policy-relevant education research designed to help improve overall educational achievement and outcomes.