How Many Friends Do You Have? An Empirical Investigation into Censoring-Induced Bias in Social Network Data | Gerald R. Ford School of Public Policy
Type: Seminar

Speaker

Alan Griffith, PhD Candidate in Economics

Date & time

Apr 12, 2017, 8:30-10:00 am EDT

Location

Open to PhD students and faculty engaged in causal inference in education research.

About CIERS:

The objective of the Causal Inference in Education Research Seminar (CIERS) is to engage students and faculty from across the university in conversations about education research using a variety of research methodologies. The seminar provides a space for doctoral students and faculty from the School of Education, the Ford School of Public Policy, and the Departments of Economics, Sociology, Statistics, and Political Science to discuss current research and receive feedback on works in progress. Discourse among these schools and departments creates a more complete community of education scholars and offers a networking opportunity for students in a variety of academic programs who share common research interests.

Abstract:

In collecting data on network connections, a common practice is to prompt respondents to name up to a certain number of network links, potentially leading to censoring. These censored data are then used to estimate parameters of peer effects models in a wide variety of economic applications. In this paper, I first provide an analytic form of the bias induced by this practice, showing that this bias decreases as the number of observed links increases. I then conduct a series of Monte Carlo experiments to demonstrate the magnitude of the bias, providing suggestive evidence that it may be substantively meaningful. Using network data from Add Health, I show that different censoring rules induce substantially different estimates of peer-effects parameters. After documenting the possible bias, I propose a number of strategies for researchers working with censored network data. These findings and proposed solutions have potentially wide-ranging applications to research on peer effects through networks as well as the practice of collecting network data.
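The censoring mechanism the abstract describes can be illustrated with a minimal simulation (this is not the paper's code; the network model, parameter values, and the cap of five named links are hypothetical choices for illustration): generate a random network, truncate each respondent's reported links at a survey cap, and compare the true mean degree with the reported one.

```python
# Illustrative sketch: a "name up to CAP friends" survey rule truncates
# reported links, biasing the observed degree distribution downward.
import random

random.seed(0)

N = 500    # number of respondents (hypothetical)
P = 0.02   # probability any ordered pair is linked (hypothetical)
CAP = 5    # survey cap: "name up to 5 friends"

# True out-links for each respondent in a simple random directed network
true_links = [
    [j for j in range(N) if j != i and random.random() < P]
    for i in range(N)
]

# Censored report: each respondent may name at most CAP links
reported_links = [links[:CAP] for links in true_links]

true_mean = sum(len(l) for l in true_links) / N
reported_mean = sum(len(l) for l in reported_links) / N

print(f"true mean degree:     {true_mean:.2f}")
print(f"reported mean degree: {reported_mean:.2f}")  # lower whenever CAP binds
```

With an expected true degree of about ten and a cap of five, the cap binds for most respondents, so the reported mean degree falls well below the true one; raising the cap shrinks the gap, consistent with the abstract's claim that the bias decreases as the number of observed links increases.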