Green calls for increased regulation of government use of algorithms

September 22, 2021

Do algorithms help or hinder human decision-making? In a recent op-ed for The Hill, Ben Green, an assistant professor and postdoctoral scholar, argues that policymakers need to consider how humans interact with AI when regulating its use in high-stakes decisions.

"Although it is necessary to increase the scrutiny placed on the quality of government algorithms, this approach fails to fully consider how algorithmic predictions affect policy decisions," Green argues. "Algorithms do not operate autonomously. Instead, they are provided as aids to people who make the final decisions. Most policy decisions require more than a straightforward prediction. Instead, decisionmakers must balance predictions of future outcomes with other, competing goals."

He uses the example of pretrial risk assessments, explaining how algorithmic risk scores are presented to judges who must then decide how to act. "Yet even if pretrial risk assessments could make accurate and fair predictions (which many scholars and reform advocates doubt), this alone would not guarantee that these algorithms improve pretrial outcomes."

The central question, he says, becomes whether these algorithmic risk predictions improve human decision-making. In his recent study, Green finds they do not. "Tools like pretrial risk assessments can generate unexpected and undemocratic shifts in the normative balancing act that is central to decisionmaking in many areas of public policy."

Green calls for expanded regulation of government algorithms and suggests a path forward.

"Prior to deployment, vendors and agencies should run experiments to test how people interact with a proposed algorithm. If an algorithm is operationalized in practice, its use should be continuously monitored to ensure that it generates the intended outcomes," he concludes. 

"Before government agencies implement algorithms, there must be rigorous evidence regarding what impacts these tools are likely to generate and democratic deliberation supporting those impacts."

This op-ed was originally published in The Hill. Read it in its entirety here.