Parthasarathy discusses implications of Large Language Models

November 7, 2022

Large Language Models (LLMs) are artificial intelligence tools that can read, summarize, and translate text, and predict upcoming words in a sentence, allowing them to generate prose that resembles how humans speak and write. Shobita Parthasarathy, professor of public policy and director of the Science, Technology, and Public Policy Program, recently released a report on how LLMs could exacerbate existing inequalities.

"Big companies are all doing it because they assume that there is a very large lucrative market out there. History is often full of racism, sexism, colonialism and various forms of injustice. So the technology can actually reinforce and may even exacerbate those issues," Parthasarathy told Asian Scientist. "They’re all privately driven and privately tested, and companies get to decide what they think a good large language model is. We really need broader public scrutiny for large language model regulation because they are likely to have enormous societal impact."