Algorithms are making their way into public systems, where they are used to make data-driven, automated decisions. But those computerized decisions do not come without bias or ethical implications. In the Flash Forward episode "CRIME: Can You Sue an Algorithm?", host Rose Eveleth speaks with the Ford School's Shobita Parthasarathy, professor and director of the Science, Technology, and Public Policy Program, about how the biases of the humans who create algorithms extend to the software itself.
The episode explores one of the most controversial uses of algorithms: determining, within the court system, whether to grant parole to prisoners. "We, as human beings, have forever wanted to find ways to simplify human judgment and complex decision making," Parthasarathy remarks. But because algorithms are created by humans, human biases are woven into the data. "As they say, the robots are us. The algorithms are as biased as we are. And by we, I don't mean we as individuals, I mean we as societies." These algorithms were more likely to deny parole to Black prisoners than to white prisoners who had committed similar crimes. "Technology is not neutral. Even when we think about how data is collected and stored and how we measure things, even that in and of itself has a bias."
Data scientists are responsible for creating these algorithms and for the data fed into them in large systematic models. One proposed solution for taming bias is to have regulators review the data for ethical and social biases before it is used. Parthasarathy does not foresee regulators being used in the near future, but went on to explain: "I think that it's important for regulators, for the government, to be actively involved with people, companies, who are developing the algorithms."
Listen to the full episode here.
Shobita Parthasarathy is a professor of public policy. Her research focuses on the comparative and international politics and policy related to science and technology. She is interested in how to develop innovation, and innovation policy, to better achieve public interest and social justice goals. Much of her previous work has focused on the governance of emerging science and technology, particularly technologies that have uncertain environmental, social, ethical, political, and health implications.