U-M experts: We need to emphasize AI’s societal impacts over technological advancements

May 12, 2023

Artificial intelligence—AI for short—is all over the news lately. And for good or ill, it has implications for us all.

Two University of Michigan professors—Nigel Melville from the Ross School of Business and Shobita Parthasarathy from the Ford School of Public Policy—have studied AI’s rise and implications across business, society and the culture at large.

They share their insights on the topic, including what they would do if appointed government science and technology policy czars. They discuss the need for and value of declaring a moratorium, as well as imposing regulations and performing risk assessments.

Most importantly, both say we need to be less in awe of the technological advancements and more focused on the societal risks and benefits.

This conversation is excerpted from the podcast Business and Society with Michigan Ross.

What is the first thing you would do to regulate AI if appointed a government science and technology policy czar?


Parthasarathy: Well, after celebrating that I had gotten such an appointment, I would do two things. The first is to issue a moratorium. It’s something that a number of tech giants have been talking about recently, and I think that’s a good idea. And then I would start to develop a risk assessment system.

This is similar to something that I know the European Union has been considering: They’ve been talking about a tiered system that classifies AI by its level of social importance and by where the AI is being used, particularly the kinds of impacts it might have on marginalized communities.

They basically say, if it’s a low-risk technology, one that doesn’t have a significant social impact and isn’t really going to be used significantly among marginalized communities, perhaps you wouldn’t have extensive regulation; you might just have transparency and disclosure requirements. And then you ramp up the level of oversight based on the importance of the technology to society and, in particular, the direct impacts it might have on already marginalized communities.

I like that kind of tiered approach. And I should say, it wouldn’t be the first time that we’ve done something like that in the U.S. We’ve developed regulatory frameworks like that, for example, in the case of biotechnology, so it wouldn’t be wholly new.


Melville: I would think about the question in a slightly different way. Instead of thinking first about regulation, I would think about how we get to regulation that makes sense.

As the program director for Design Science, I would think about design and how we design regulations. I would think about three design principles in particular. One is protecting vulnerable populations first and foremost. The second is holding those responsible for harms liable for the damage they cause. And the third is transparency. So vulnerability, liability, and transparency.

What have you observed or seen that’s good about AI?

Parthasarathy: Specifically when it comes to generative AI, like ChatGPT or GPT-4, I think one of the things that you’re already perhaps starting to see, and I think you could definitely see more of, is that there’s a way in which it continues to democratize expertise.

One thing the internet has done is allow people to access information in all sorts of ways. Clearly that’s been a benefit. And I think these new generative AI technologies have that potential as well: to digest information and present it to the user in a way that’s potentially comprehensible across all sorts of technical areas. And I think that’s empowering. It’s important. It’s wonderful.

The second thing I think is potentially useful, though I have to say we haven’t really gotten there despite the promises, is standardization in decision making. The hope for AI is that it takes human bias out of the equation and ensures standardization across a range of really important decisions, whether in health care, social services or criminal justice.

The problem is that, in reality, it’s really just baking in the biases that already exist and making them difficult for us to root out. So in the abstract, that promise of standardization is great. But we have to think far more carefully about how to actually achieve it than we’re doing right now.

Melville: I would look at two sectors that I think hold great promise for AI, generative AI in particular. One is pharmaceutical and drug development, for example, coming up with new treatments for Alzheimer’s disease. These are monumental problems and challenges facing millions of people throughout the world. AI can address those problems and is starting to do so now, working with scientists, so it’s a collaborative enterprise.

The second is the immense opportunity to help educators, K-12 and beyond, remake the basic idea of learning: How is learning done, how should it be done and to what end? For example, I think it’s a great opportunity to help our students focus first and foremost on critical thinking skills: how to gather data, how to evaluate information for its veracity, and then how to make proper judgments based on that data.

We’ve talked a little bit about policy. Italy has temporarily blocked ChatGPT over data privacy concerns, and EU lawmakers have been negotiating the passage of new rules to limit high-risk AI products. In the United States, the Biden administration has unveiled a set of far-reaching goals aimed at averting harms caused by AI, but stopped short of any enforcement actions. What do you make of all this?

Melville: I think these are positive first steps. I’m not aware of the details of these policies and I’m not a law professor, so put that into context. But from what I am aware of, I do think we might be looking at regulations and moratoriums in an overly narrow way. What I mean is that one of the findings of my research is that we need to think more broadly about what AI really is and get away from technology language.

So in our research, we reframe that and talk about machine capabilities that emulate human capabilities in two areas. One is cognition, which is decision-making and generativity; the other is communication. This is something people don’t think about too often, but machines are doing two kinds of human-like communication. One is that they’re working and collaborating among themselves to enable all these services, including AI. The second is that they’re developing a certain type of relationship with humans. And that has really major implications.

So I think we should think more broadly about this emulation of human capabilities so that we can develop regulations with the appropriate footprint.

I’ll give you one specific example. Recently, a popular social media platform for sharing photos introduced ChatGPT as a friend of everyone on the platform. They pinned it. So one day you look at your phone and you see your friends. You wake up the next morning and you see this new friend that’s pinned, and it’s ChatGPT.

Very little was said about this, but here’s the problem, and this is where the relationship with humans comes in: Some of the founders of what’s called the Center for Humane Technology showed through a real demonstration that if you pretend you are a teenage girl, it would give you harmful advice. This is not right. This is outrageous. It’s completely unacceptable. And so that’s why I say we have to go beyond thinking just about AI and ask what sort of capabilities these machines are emulating and what we can do now. This is happening now.

Parthasarathy: It’s odd to me that the president is talking about how AI could be dangerous as though we don’t know. I have found that really interesting over the last six months or so, as the discussions about ChatGPT have exploded.

ChatGPT emerges and is released to the public, frankly, in a pretty irresponsible way, and, of course, all of the other kinds of large language models are released as consumer technologies. So there’s a lot of interest and curiosity among the public, and people are engaging with it. It’s being framed in the pages of the newspaper and, frankly, at my faculty meetings, as though it’s something new that we have to contend with: “Oh no. Students are gonna cheat and they’re gonna use these technologies.”

But the bottom line is that they’re just one of a suite of technologies of the kind that Nigel talked about. Over the last five to 10 years, we have increasingly seen machine learning algorithms used across social services: in pre-trial risk assessment, in assessing risk for children in foster care, in determining medical leave, and in the famous controversial case in Michigan itself around unemployment fraud, where a crazy algorithm led to people losing their homes.

The set of places where algorithms are being used, including facial recognition technology, which we’ve also done research on, is pervasive, and we know the consequences and, frankly, they’re not good. There’s something perverse about the fact that the conversation about what could be dangerous, or what we don’t know, is restricted to the thing everybody’s paying attention to and not the full suite of technologies.

And in those other cases, it has in fact become mundane. As a colleague of mine said, procurement has become policy. You have state government administrators … confronted with a company offering this great algorithm that looks like it’s going to make their lives easier and the service more effective, when in fact they don’t even know the questions to ask to avoid the kinds of problems we saw in Michigan. That’s a really dangerous situation we’re in, I think, and the construction of novelty on the president’s side is disturbing.

All of that said, I will say that I know that there are in fact experts within the government who are taking these things seriously. The White House Office of Science and Technology Policy has been doing some important work around AI. Last year, they released an AI Bill of Rights.

The Federal Trade Commission has been doing incredibly important work. They have an office of technology now dedicated to this. And in fact, just yesterday, the Federal Trade Commission, the Equal Employment Opportunity Commission, Consumer Financial Protection Bureau, all basically said, “Oh, by the way, you know we have those laws around disparate impact. Don’t think that technology is somehow exempt from that.”

What that means is that the folks who are purveying and using these technologies need to be asking those questions, and they’re going to be held liable: “Even if you unknowingly engage in discriminatory practices, we’re gonna be looking for you.” I’m happy that people are at least starting to think about it, but I think we need to be doing a lot more, a lot more quickly.

We find it incredibly hard to draw ethical lines in the sand, especially when it’s the proverbial frog in the pot of slowly boiling water. I don’t mean to suggest that horrible outcomes are inevitable, because we can chart our future at least to a certain degree, but how do you keep innovation free of unintended side effects? What can you do as the snowball gains velocity going down the hill?

Melville: We need guardrails for AI. We need to decide collectively as a society what the appropriate guardrails are. We’ve always had guardrails. I mean, here we are at the center of the automotive revolution that happened 100-plus years ago. What happened before we had road signs and street signs? What did we do? There were a lot of accidents. What happened in the early days of aviation, before we had the FAA and rules about who goes where?

The industry itself, in the case of aviation, came together and said, “We’ve got a problem here. People are scared to get on our commercial flights because they think they’re gonna fall out of the sky. And it has happened. So we actually should be proposing and developing regulations for safety.” This is the model for AI, in my opinion.

It will only happen if we hold those responsible for harms and damages liable. In my opinion, that is, in large part, the only language that business leaders understand. There are good people who have ethics, yes, but by and large, business leaders are okay with guardrails: a fair, level playing field that applies to everyone.

Parthasarathy: I want to return to this point about responsibility, which I think is really, really important. I think one of the challenges that I see is that tech leaders are punting to government and saying, “Oh yeah, you should regulate some things, you should do something. We are not responsible. We are just producing this technology.”

I have to say I don’t like the language of unintended or unanticipated side effects, because I think at this point, no, you can’t say that. You know that there are these kinds of potential risks and you need to be doing things differently. On the one hand, you have the tech folks who are like, “Oh, not my problem.” And on the other, you have the government folks who are saying, “Oh, that’s technical, it’s not our problem, or at least we don’t know what to do.”

And so here, I’m gonna take the privilege of advertising the Science, Technology and Public Policy Program and what we do. I agree with what Nigel is saying, and I think the language of liability is really important, but I also think education is really important here.

At STPP, we have a graduate certificate program that’s open to any student across the university, and we have scientists and engineers and policy students, and a lot of folks from computer science and engineering and from our School of Information. A lot of folks who are developing these technologies, or will develop them someday. We’re teaching them how values are embedded in design, how political context matters when you’re shaping technologies, what are the ways you can anticipate the implications of technology and then build better technologies, and what kinds of policies you might develop.

I think we need programs like that everywhere. You shouldn’t be able to graduate with a degree in either computer science or public policy without having those sensitivities, because you are going to be increasingly confronted with these questions in your careers. It’s our job as educators, I think, to prepare students for that world.

This Q&A and the podcast were produced by Jeff Karoub of Michigan News.