Sometimes finding nothing means more: Recognizing the value of failure, understanding null results

April 27, 2016

In 2014, a promising early-literacy program was implemented in seven Michigan charter schools. Over the next year, Brian Jacob, the Walter H. Annenberg Professor of Education Policy, compared the progress of students who learned reading skills through the new curriculum with that of a control group of students at the same schools who were taught through traditional methods.

Jacob’s hypothesis? That the students in the new early-literacy program would do better than their peers in the control group. But that’s not what the data showed. At the end of the year, Jacob found that the new program had no significant impact on reading performance.

These days, social scientists often use experimentation—formulating a hypothesis and gathering data to see if the evidence supports it—to analyze policies and interventions. Not surprisingly, only a fraction of these hypotheses are borne out by the data. But while we regularly hear about experiments that support their hypotheses, we rarely hear about those that don’t; their findings are often described as “null results.”
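To make the idea concrete, here is a minimal sketch in Python, with entirely invented data (not Jacob’s), of how such a comparison typically plays out: simulate outcomes for a treatment and a control group, run a two-sample t-test, and call the result “null” when the difference isn’t statistically significant.

```python
# A minimal, hypothetical sketch of a null result: compare a simulated
# treatment group against a control group with a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Made-up reading scores; the true treatment effect here is zero.
control = rng.normal(loc=500, scale=50, size=200)
treatment = rng.normal(loc=500, scale=50, size=200)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
if p_value >= 0.05:
    print("No significant difference detected: a 'null result'.")
```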

While no one knows exactly how many experiments end in null results, we do know that the vast majority of published research findings—in the sciences and social sciences—describe experiments that had positive outcomes. There are a number of reasons for this, several of which fall under the umbrella of publication bias.

Publication bias

The first party implicated in publication bias might be the researchers themselves. Senior researchers often have multiple projects underway and perhaps even a backlog of research waiting to be written up and submitted to a journal. Because null results are generally less exciting than positive results, these researchers may simply move on, turning their attention to other projects.


For junior researchers, particularly those trying to earn tenure, the situation may be more fraught. Elisabeth Gerber, the Jack L. Walker, Jr. Collegiate Professor of Public Policy, says that the profession values and rewards theory building. Junior researchers only get credit for null results, she says, if “they have a good explanation, and that explanation advances the science.”

Another source of publication bias stems from academic journals, whose editors or referees may be more inclined to publish research that shows an interesting effect than research that doesn’t.
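The cumulative consequence of these choices is easy to demonstrate. The sketch below, again using invented numbers, simulates many small studies of a weak intervention and “publishes” only the statistically significant ones; the resulting published literature badly overstates the true effect, the classic “file-drawer” problem.

```python
# A hedged illustration of publication bias: if only significant
# studies appear in journals, the published average effect is inflated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
true_effect, sigma, n = 0.1, 1.0, 50      # a weak true effect per study

published = []
for _ in range(10_000):                   # 10,000 hypothetical studies
    treated = rng.normal(true_effect, sigma, n)
    control = rng.normal(0.0, sigma, n)
    _, p = stats.ttest_ind(treated, control)
    if p < 0.05:                          # only significant results "publish"
        published.append(treated.mean() - control.mean())

print(f"true effect:                {true_effect:.2f}")
print(f"mean published effect:      {np.mean(published):.2f}")
print(f"share of studies published: {len(published) / 10_000:.1%}")
```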


In 2010, Dean Yang, a professor of public policy and economics, worked with Emily Beam (PhD ’13) and David McKenzie (a World Bank economist) on a new research project designed to encourage rural Filipinos, many of whom live in deep poverty, to emigrate to countries with better job opportunities.

The interventions they tested, which were designed to remove real and perceived barriers to emigration in hopes of increasing remittances (the money migrants send to friends and family back home), didn’t prove effective.

In pitching the work to journals (eventually with success), Yang argued that the research was worth sharing with scholars and policymakers because the interventions and approach were novel. “It wasn’t a situation where we expected to find results and we didn’t find any,” says Yang. “It was more that no one had any idea what we would find.”

The value of 'failure'


“It’s easy to get null results if you’ve got bad measurements or if your statistical model is not specified correctly,” says Richard Hall, a professor of public policy and political science. “There’s a whole range of things that can go wrong.”
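Hall’s point about measurement is straightforward to illustrate. In the hypothetical sketch below, classical measurement error in an explanatory variable attenuates an ordinary least-squares slope toward zero, so a real effect can masquerade as a null result.

```python
# A minimal sketch of attenuation bias: noisy measurement of x
# shrinks the estimated effect of x on y toward zero. All numbers
# are made up for illustration.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 200
x = rng.normal(0.0, 1.0, n)               # true exposure, cleanly measured
y = 2.0 * x + rng.normal(0.0, 1.0, n)     # the true effect of x on y is 2.0
x_noisy = x + rng.normal(0.0, 3.0, n)     # a badly measured version of x

# OLS slope = cov(x, y) / var(x); measurement error inflates var(x)
slope_clean = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
slope_noisy = np.cov(x_noisy, y)[0, 1] / np.var(x_noisy, ddof=1)
print(f"slope with clean measure: {slope_clean:.2f}")   # close to 2.0
print(f"slope with noisy measure: {slope_noisy:.2f}")   # attenuated toward 0
```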

But if the research, like Yang’s and Jacob’s, is solid and methodologically sound, a null result can be quite interesting, even important, in advancing understanding.

Sometimes, too, null results are so surprising, so counter to popular presumption, that they’re even more important than positive ones.

Hall and Molly Reynolds (PhD ’15), who studied the effects of $200 million in issue advertisements aired during the 2009 debate over the Affordable Care Act, were unable to detect any effect from the ads.

“Everyone says all this money matters [in politics],” says Hall. “Maybe it does. But we can’t show it.”

Last fall, Brian Jacob discussed “the value of ‘failure’” in an article for Brookings’ Evidence Speaks.

He wrote about experiments like his own analysis of the early-literacy program rolled out in Michigan, then discussed 77 educational interventions evaluated through demonstration trials commissioned by the Institute of Education Sciences. Only seven of these 77 studies, he explained, produced a positive result: fewer than one in ten.

“In education research, most things we’ve rigorously tested have not been successful, but there’s value in finding out what doesn’t work,” says Jacob. “That there is so much failure means we should be rethinking what sorts of studies we do and how we develop programs.”

“Given the importance of education as a vehicle of social mobility and driver of economic growth, along with the fact that we spend hundreds of millions of dollars on education research in the U.S. each year,” says Jacob, “it is imperative that we do more to learn from these ‘failures.’”

By Bob Brustman for State & Hill, the magazine of the Gerald R. Ford School of Public Policy

