Ford School professor Shobita Parthasarathy was featured in a Q&A with Nature magazine, which highlighted recent research on Large Language Models (LLMs) by the Science, Technology, and Public Policy program's Technology Assessment Project. Parthasarathy, the article states, "warns that software designed to summarize, translate and write like humans might exacerbate distrust in science."
The Q&A covers the emergence of LLMs and the potential benefits, as well as the pitfalls that could accompany their widespread use, including embedded bias and undue influence on scientific research.
“The algorithmic summaries could make errors, include outdated information or remove nuance and uncertainty, without users appreciating this,” she said.
Nature tweeted the Q&A to its 2.3 million followers, and Scientific American republished the article in its monthly magazine.
You can read the whole Q&A here:
How language-generation AIs could transform science, Nature, April 28, 2022