Bill de Blasio: Are smart cities smart enough? | Gerald R. Ford School of Public Policy
 


October 30, 2023

Former New York City mayor Bill de Blasio explores how urban tech is shaping social policy in "smart cities" like New York and beyond, how we can ensure that emerging technology serves the public interest, and what role local, state, national, and even international policy can play. October 2023.

Transcript:

0:00:00.6 Celeste Watkins-Hayes: Good afternoon. Wonderful to see you all. I am Celeste Watkins-Hayes, the Joan and Sanford Weill Dean of the Gerald R. Ford School of Public Policy here at the University of Michigan. And I'm delighted to welcome all of you this evening to our policy talks at the Ford School event featuring former New York City Mayor Bill de Blasio. Welcome.

[applause]

0:00:31.4 CW: Today's event is presented in partnership with the Science, Technology and Public Policy program housed here in the Ford School, with support from the U of M Urban Technology program at Taubman College and our media partners at Detroit Public Television and PBS Books. Bill de Blasio served as the 109th mayor of New York City from 2014 to 2021. He guided the country's largest city through the pandemic, and among his many other accomplishments, he created a groundbreaking early childhood education initiative which became a nationwide model, financed the construction and preservation of more affordable homes than any other New York City administration, and drove the City Council to pass a New York City Green Deal.

0:01:18.5 Bill de Blasio: Green New Deal.

0:01:20.4 CW: Green New Deal. Make sure I say that. Green New Deal. We are happy to have him this afternoon to look at smart cities. During his administration, the city rolled out a number of smart city solutions and tech education initiatives in every borough, earning New York the best smart city accolade in 2016. In recent months, he's also become increasingly concerned about the rise of AI and other emerging technologies and their impacts. We look forward to hearing about his continued observations and engagement with the topic. Mr. de Blasio will be in conversation with Professor Shobita Parthasarathy, Ford School Professor of Public Policy and Director of the Science, Technology and Public Policy Program. Her research focuses on comparative and international politics and policy related to science and technology, and specifically on how to develop innovation and innovation policy to better achieve public interest and social justice goals. There will be time for questions at the end, so please scan the question card QR code to submit your questions throughout tonight's event. Those tuning in virtually can tweet your questions to #policytalks.

0:02:32.2 CW: And for those of us in this room who don't have access to the QR code, I bet you can tweet there as well: #policytalks. I have two undergraduate students here to help us with facilitating the Q&A: Audrey Melillo from the Ford School and Enzo Mangano from Urban Technology. With that, please join me in welcoming Shobita Parthasarathy and Bill de Blasio.

0:03:04.4 Shobita Parthasarathy: Hi, Bill. How are you? 

0:03:05.7 BB: Hi, Shobita. How are you? 

0:03:07.0 SP: I'm good. You know, it's a very auspicious day, and so I wanted to scramble our agenda a little bit. I know we had a sort of discussion about the discussion, but as many of you may know, today, just an hour ago, President Biden issued an executive order on AI; I was watching it on YouTube. Some of you may have heard about that. It is, for the US, the most comprehensive AI strategy at the national level that we have. We don't have any overarching regulation, and as the dean said, we are certainly very interested, and you are very concerned, about questions of AI. For those of you who haven't looked at exactly what this strategy does, I encourage you to do so if you're interested in questions around AI. It includes things like requiring companies that are going to develop foundation models, i.e., generative AI, to submit information about that to the federal government, and in particular to submit the results of red-teaming, that is, any kinds of potential problems with the technology. It includes guidance on how to address concerns around bias and discrimination. There are provisions around privacy, and around potential misinformation in terms of content and watermarking that content. It sounds fabulous, so...

0:04:45.5 BB: But...

0:04:46.0 SP: What do you think? 

0:04:46.0 BB: But you left it hanging there. It sounds fabulous, but will it be fabulous? Look, I do want to give President Biden a lot of credit, because a lot of times folks who are concerned about technology, concerned about artificial intelligence, make parallels to the reality of the debate and the public policy actions around climate change. Here are two vastly important topics, both with existential ramifications, and we can safely say, vis-a-vis climate change, in my view, the Biden administration has done amazing work, particularly through the Inflation Reduction Act. On AI, this is a good start, because having an AI executive order is a major, major step for the president of the United States to say, I am laying down some law, some structure, some order. I commend him. The "but" is two points. One, as Shobita pointed out, how are you going to enforce requiring the companies effectively to share their secrets? Now, I think it's exactly the right thing to do. I think their secrets, their proprietary information, have a lot to do with our futures, and they are making decisions for us without consulting us, as if they are their own government. That's not acceptable. So I think it's exactly right for the federal government to say, whoa, whoa, whoa, you're going to have to share this information with us, including the things that you see that might be going wrong.

0:06:13.0 BB: The red team analysis, and that information has national security ramifications, as the executive order makes clear, in addition to the impact on our lives day-to-day. But the enforceability: what's the mechanism going to be, what are the consequences going to be? I didn't see so far in the executive order a clear illustration of consequence, because I think we can all say about corporate America that if there is no consequence, good luck getting them to comply. And if there's a modest consequence, corporate America treats that as a cost of doing business: they violate the law, take the hit, and move on with their business. So the consequence question is huge. An executive order, I suspect, inherently has limits on what it can do consequence-wise.

0:06:57.7 BB: And the real action is on the legislative front and part of why this conversation is so important and everything, I want to thank you for all you do and your colleagues do in science, technology and public policy, because we have to have a very different conversation in this country because we have to actually get to legislation if we expect to rein in some of the problems of artificial intelligence and have coherent approaches to the future. That can only happen with legislation. The European Union is way, way, way far ahead of us. I'm glad they are, but we need to catch up. So good that this happened today. Question mark on what it actually will mean and no substitute for the real thing, which is for our Congress to actually pass laws related to AI.

0:07:42.2 SP: So we started out a little bit wonky there.

0:07:45.8 BB: We were so wonky.

0:07:46.2 SP: So, usually that happens in about 45 minutes...

0:07:48.0 BB: Half the audience already fell asleep.

0:07:48.9 SP: I know, they're like, what technology? I think it's nap time. No. So one of the things you talked about there was existential consequences, and I'm wondering maybe if you can talk about this. I mean, we billed this as an event about smart cities, and now we've started out at the million-mile level, so I think it's worth at least starting there. And then eventually I want to get to the consequences at the local level as well. But from your perspective, when you think about AI, what are the benefits? Why are you worried? How did that worry evolve for you?

0:08:29.3 BB: And look, I want to... We will definitely talk about cities and we'll talk about the smart cities concept. But I do appreciate that the title of this talk was Are Smart Cities Really Smart? The reality is, and I think this really gets to the AI question, by definition there are things artificial intelligence can do that will help us all, that can. There are such things. I like to use healthcare as an obvious example. I don't have a question in my mind. I'm not a technologist, I'm not a healthcare expert. I don't have a question in my mind that there will be ways in which AI will improve healthcare. But at the same time, the reason I got more and more deeply involved in the discussion is I looked at this combination of factors. I looked at the absolute and total lack of democracy in decision making. Unlike the vast majority of things that we all care about, where we can identify someone somewhere in public life, someone elected, someone accountable, who has to take some responsibility for the decision, right now the vast majority of decisions related to artificial intelligence are happening in big tech headquarters with no reference point to all of us. They are not asking us, they're not consulting us, they are not going to Washington for permission. They're not going anywhere for permission.

0:09:45.7 BB: For something of this magnitude that is going to reach every corner of our life, the sheer absence of democracy and debate, of any kind of oversight, of any kind of checks and balances, alone should scare us half to death, in my humble opinion. But then let's talk about what this will do. AI unquestionably is baking in biases that already exist in our society, because it's taking in the exact same reality that governs the status quo in our country and in our world; it's feeding those facts into its systems. The folks who are feeding it into the AI databases are replicating our status quo and reinforcing it. That's a problem. AI unquestionably is going to play a role in the displacement of initially millions and ultimately tens of millions of workers in America alone. When you talk about the rest of the world, hundreds of millions over a very short period of time; there is no comparison in history as to how quickly this is going to move. Will those workers be consulted at this rate? No. Will they be compensated? Will they be retrained? Will they have any guarantees today? No. And that's going to affect every corner of America. And then the more damning and ultimate danger: a technology that can easily fall into the wrong hands.

0:11:02.8 BB: I mean, we've already seen, just with our current technological reality, the extraordinary acts of hacking that have disabled major parts of government and the private sector. Imagine, as AI advances, what could be done with it in the wrong hands. Imagine what happens, the level of malfunction, because all technology and machinery malfunctions, but what a malfunction of a very sophisticated AI system can mean. And then, of course, ultimately the giant unanswered question. But the fact that it's unanswered is all you need to know. Could this technology become self-aware and come up with its own choices that could affect the fate of humanity? Well, the technologists developing AI are the first to say, in many cases, yes, that's really a possibility. And then they continue developing it. So everyone in this room is smart. If I said to you, hey, I'm over here developing something in this room that's really interesting, but it might poison everyone in the room, you all would say, hey, hey, hey, wait a minute. Let's stop that right now and find out how to get the interesting part out, but not the poisoning-everyone-in-the-room part. That's not happening with AI. That's why I'm a little worried.

0:12:13.4 SP: A little. So I'm wondering... You sort of talked about this, but for a lot of the people in the room, I mean, this is an event happening in a policy school, and some of the stuff that we've talked about is that these kinds of conversations tend to be happening, in our geographic context, on North Campus, that is, in the engineering school, in the computer science department, and not necessarily in a policy school. In part, I think, because a lot of the folks who are here focus on questions around social policy, or security policy, or housing, criminal justice, environment. And I'm wondering, is there a particular case, something you saw as a non-technologist, that opened your eyes about the connections you were just drawing between AI and technology and these other areas of policy?

0:13:07.1 BB: Yeah, I mean, obviously the worker piece. I ran for mayor on a platform of addressing inequality, and a lot of that was economic inequality. And to me, I tried to use, as you heard, affordable housing and pre-K for all and a number of tools to try and create some greater parity, some greater possibility for economic fairness. Well, when you juxtapose that against a technological change that could cause massive job displacement, it kind of takes us backward, potentially. And that was extremely upsetting to me. And it's like your two-campus analogy. I ran on A Tale of Two Cities in terms of income disparity, and you're talking about a tale of two campuses. Your tale of two campuses plays out nationally. The folks making the decisions are extremely economically secure. If you want to come with me to Silicon Valley, we can ask each of the ones making the ultimate decisions about their net worth; I think we can put them in the category of making more than the rest of us. And so they're fine, but they're making decisions for a lot of people who are barely scraping by. So right there, I'm worried. But then the notion, and I'm being very plain about this: do I trust executives in Silicon Valley to think about the truck driver in Michigan and whether that person is going to have a livelihood once an autonomous vehicle takes over their route?

0:14:41.1 BB: I don't have any reason to trust that tech executive to even be able to think about what the life of that truck driver is like. And that's a tale of two countries, really. That's a tale of two Americas. I imagine the tech executive thinks they're doing something good for the world, because it's right to say there are efficiencies, and there are things that AI will do for us that will better certain parts of our life. I agree with that. What is shocking to me is to think, how do you miss the part about all the pain that could come with it? And I do think it's a very pertinent American problem. It's well documented that our common experience has dissipated. Not so long ago in America, most people did not get a college education, and not so long ago in America, people were much more commonly from pretty humble origins. Now we have this massive skew where, again, that truck driver may well feel that there's no plan B in their life, whereas the tech executive cannot even imagine ever being without resources and a livelihood. So I don't want the tech executive making decisions for any of us. I certainly don't want the tech executive making decisions for someone they cannot possibly understand, nor do I think they particularly care about. And again, we're talking to the tune of millions of jobs.

0:16:06.7 SP: Right.

0:16:06.9 BB: There are 13 million. Take every kind of truck driver in America: there are 13 million truck drivers, and autonomous vehicles are one of the most obvious examples of how AI and automation are going to displace employment. Now, let's say you say, oh no, it's only going to be a quarter of the industry. Okay, 2 million, 3 million people displaced in one industry, and it won't be slow.

0:16:32.6 SP: So what are the solutions from your perspective, for example, particularly to this question, but other questions around AI and its potential impacts? What are the kinds of things that you've seen or the ideas that you've seen that you think that hold promise? 

0:16:47.4 BB: I think it is amazing. And I'm not trying to say what the European Union is doing is perfect, but I think it's amazing that by the end of this year, at least it's projected, they will pass legislation that has a lot of teeth, some of the same things in President Biden's executive order, but much more extensive and with much more teeth. And I'm trying to remember the exact percentage. I think it's 6%, forgive me if I'm wrong. Under the European Union's proposed law, the penalty for corporations that violate it is 6% of global earnings. So not penalties in the thousands or tens of thousands; these are penalties in the billions. Now, that's how you get someone's attention. That's how you get a big company's attention. So I was super impressed at that kind of model. And it takes 27 nations, in consensus, in unity, in the European Union to agree to anything. It's kind of stunning that that's happening. And meanwhile, we are not even to first base in terms of legislation in Washington. So I'm glad that...

0:17:52.8 SP: Do you want to talk a little bit about the framework of the AI act that you're talking about...

0:17:57.9 BB: The simplest way I can say it is some of the same points that Shobita talked about with the Biden executive order, and you'll certainly add whatever I missed. The companies have to divulge their work. They have to be accountable for problems that they find in their work. There are ideas of liability in terms of the impact of the products that are being created. It's very wide-ranging. And I think the way to think about it is that it kind of... I don't want to say it reverses the situation, that wouldn't be fair. But it creates a dynamic where the reckless development of AI is penalized, and it creates guardrails, in my view, where, again, if you get into the corporate mindset, the profit-making mindset...

0:18:52.5 BB: Now they have real reasons to think much more carefully about what they're doing and to feel that they are directly and specifically accountable. So that's the sort of world of difference I see in what's been put together there. But I think the solution in this country, since we all understand that there's a challenge getting anything done in Washington right now, the solution is state and local actions, whether they be legislation, executive orders, whatever might be done through state and local governments, and, by the way, institutions. Every major institution, including universities, has to come up with its own policies governing the use of AI. And every time they put some kind of limitations, some kind of transparency, some kind of oversight in place, that helps the overall trajectory.

0:19:42.0 BB: And the last thing to say is activism. Now, look, I want to just offer this in the spirit of recognizing that everyone in the room has a voice, and everyone in the room can make a difference. There is no singular activist movement that I can identify in the world right now related to artificial intelligence. There are activist movements that touch artificial intelligence, and I think one of the most obvious examples is around privacy and addressing surveillance, government surveillance, police surveillance. Certainly, there are activists working on that issue. I'm not trying to diminish that. Certainly not trying to diminish the folks in the labor movement who are working to ensure that displaced workers are protected. But there is not, if you... And I've tried this, if you go try and look up a movement that is about artificial intelligence and what it will do to all of us, and ensuring the public's voice in that debate, I don't think you'll have an easy time finding anything of any scale.

0:20:42.6 BB: If I said go look up movements and organizations around climate, the list would be long. If I said go look up the rights of women, on whatever side of the abortion issue you want, the list would be long. So how is it that on an issue that will touch every single one of our lives, that has massive unknowns and dangers, and that is guaranteed to displace a huge number of workers, by the admission of the people creating it (they're creating it to displace millions of workers; it's not a mystery), how is there not a social movement? I think the answer is there needs to be a social movement, because social movements and activism ultimately drive policy more than any other factor, and I think people, including in this room, can be part of building that.

0:21:33.2 SP: That's super interesting, and I want to get back to the social movement question in a minute, in part because I think the language of disruption in Silicon Valley is at least 20 years old, if not longer, right? 

0:21:44.5 BB: Yeah.

0:21:45.0 SP: So we've been talking about disruption for a long time, but there are assumptions built into that: that everybody will benefit and there are no risks.

0:21:53.2 BB: I think we need to disrupt their disruption. [laughter]

0:21:55.8 SP: So I want to get back to that in a minute, but I'm wondering if you can talk a little bit more about the state and local level, so policy today, right? Social movements often take a long time to build, a long time to produce change, but what are things that states and localities can do now? Are there states and localities that are passing policies that you think are a step in the right direction?

0:22:25.3 BB: I mean, look, I'm happy in New York, and I'll be transparent about happy and unhappy. I'm happy in New York that we took initial actions to recognize the use of algorithms by government and to try to address some of the ramifications of that, obviously particularly in terms of bias. I'm happy we passed legislation (the first one happened in my administration, the second legislation, the POST Act) to address police surveillance and to create some guardrails and some transparency around that process: a step in the right direction. Since I left office, legislation that puts real limitations on the use of AI in the hiring process. These are steps in the right direction, unquestionably, and I think any time a state or local government did that kind of thing, it is starting to create consequence, it's starting to create limitation, which is what we need. But I think we're only scratching the surface. I look back, and my confession today is, I wish I understood better, I wish I had prioritized more of these issues when I took office almost 10 years ago, because I now see how hugely this will affect our lives, and that we've got to get right down to what we're spending our money on and what we're demanding of these companies.

0:23:49.4 BB: State and local government and institutions need to use their financial power, their economic power, to start to put limitation into the equation. We've got to start grappling with this question around employment. We didn't feel it a lot in the time I was mayor, the displacement in terms of employment, but we could see it over the horizon. I wish we had moved more aggressively then, but there's still time to do it. For example, what states and localities could do is say, "Here are the rules related to when a worker is displaced by automation and artificial intelligence. Here's how they get compensated, here's how they get retrained," and they could put that obligation on a company. We in New York in previous situations have passed worker displacement legislation that requires companies to account for the workers, and think about that for a moment. And I want to blow your collective minds here, imagine a world in which we said a working person and their job is more important than the profit of the company that displaces them.

0:24:56.5 BB: What have we said? That is a societal value. We're not telling the company they can't create their new product. We're not telling them they can't profit. We're saying the old ground rule that the working person is simply cannon fodder or collateral damage in technological progress, we don't accept that ground rule anymore. We actually want that worker to be protected in an ever more unstable world. Because by the way, if you could ever have argued that technological progress and its negative elements were okay because of the thriving American middle class, I mean, who believes there's a thriving American middle class anymore? There's tremendous economic insecurity. And so I think it's a great time to revalue things and say as a matter of policy, if the worker is not protected, then there needs to be a consequence for that company. That's a very different way of thinking. But states and localities have started down that road, and the more that do it, the more possible it becomes for others.

0:25:56.1 SP: So if you don't mind, my disclosing something you said earlier today, to the whole...

0:26:02.1 BB: How dare you? 

0:26:02.6 SP: To the Internet.

0:26:03.1 BB: To the Internet. Oh, my God.

0:26:07.1 SP: One of the things that you said that...

0:26:11.3 BB: That the Detroit Lions would win the Super Bowl? Yes. I admit it. I admit that I said it, so that... I wish you had kept it to us, but...

0:26:19.6 SP: We appreciate the pandering. We absolutely appreciate the pandering. What you said this morning was that when you stepped into the mayoralty, the tech companies came calling, right? That they were there telling you... Singing you sweet lullabies about the fantastic future that the investments in tech would bring, right? And I'm wondering, you're obviously painting a slightly more dystopic picture, but to get to that place, to see the tech in more color, you need capacity. You need different kinds of people at the table. You need different kinds of processes and different kinds of policies. So what can cash-strapped cities do to take off the rose-colored glasses, when from their perspective, the only way they know how to see is through those rose-colored glasses?

0:27:29.9 BB: It's a great question. Look, first of all, to the opening: yes, suddenly all these tech executives wanted to be my friend, and it was not because we shared values. It was because they saw dollar signs, and because they hoped they could get New York City to be a customer, or to regulate in a way that they liked, which actually meant not regulating at all. It quickly became apparent that we had very, very different views of the world, and I had my tangles with Uber and Airbnb and others. But I think your really good underlying point is, for so much of America, for so many cities and states, everyone, we've been told: more technology equals more efficiency, equals more effective government, equals lower costs for the taxpayers. And sometimes that's true. I don't even mind at all saying, "Yeah, when that is the case, that's cool." But there's another side of the rainbow, which is when the price of admission means destroying your regulatory frameworks around things that were actually meant to protect people in communities, or protect working people, or provide revenue for the government so the government could provide services.

0:28:45.3 BB: I mean, basically, the story around a lot of these companies is trying to evade health and safety rules, regulation, rules related to workers' safety and working conditions, wages, and not paying their taxes. That was the model that they thought was a great business model. And the rest of the business community, even folks that I had issues with, were paying their taxes, were abiding by all these other rules, right? So this is where what started as a friendly conversation soured pretty damn quickly. But I think for a lot of America, what we've got to do is say, of course we'll take the efficiencies. No one's trying to throw out the baby with the bathwater. But don't take the poison pill. Don't take the other things that are going to come back to bite you. And this is where, actually, I'm an eternal optimist, I really am. I was mayor of godforsaken New York City, so I'm an optimist after that. [laughter] And I love my city, but I like to say I governed over 9 million highly opinionated people. [laughter]

0:29:47.8 BB: And the most diverse place on earth, and like the strangest things would happen every day, and I'm still an optimist. I do think this kind of one day will not be a red or blue issue. I think one day it will transcend, because what's going to happen is, first of all, liberty and privacy are valued very strongly on left and right. For all of you who want just a kind of beautiful example, the Patriot Act, which was passed after 9/11, and then was later on renewed, and had an underlying value, but also in some ways a lot of civil libertarians felt went too far. The alteration of the Patriot Act, to kind of tone it down, was actually a left-right co-production in the US Senate, where Rand Paul coalesced with some of the most left-wing senators to force changes, because they both were coming from their version of a civil libertarian worldview.

0:30:42.1 BB: Well, I think on this one, I mean, that's child's play compared to what AI is going to do vis-a-vis privacy. So I think there's a left-right possibility there. I think there's another left-right possibility... And I hate to say this, but the displacement of millions of workers is going to be equal opportunity: red state, blue state, rural and urban. I think there's going to be a cause there for people across the spectrum to say, "Wait a minute, this changes the equation," and to demand accountability and compensation of some form from the companies. It's the kind of thing where I could see even more conservative leaders and jurisdictions saying, "We're not going to just take this wholesale."

0:31:17.4 SP: And one thing you just said about efficiency, the role of efficiency, I was thinking... It just sort of popped into my head as you were talking that, for example, in the policy school, we teach students about public management, right? That's one of the required courses, public management. And there was a time, and I think to some degree it's still there, when the focus was on efficiency, better practices, etcetera, which kind of leads us to technological solutions. But in fact, I think in the Ford School and in many places around campus, but I'll just say specifically the Ford School, because I know it a little bit better...

0:31:50.5 SP: So I know at the Ford School, we're training people to go and understand, for example, criminal justice, housing: what are the real problems that are happening to people that get lost when these decisions become algorithmic, right? So I think implicitly, it sounds like what you're saying is we need to reintroduce the experts, the specialty experts, right? The people who understand housing or criminal justice or foster care, as opposed to assuming that the technology is the expert or the technologist is the expert. In the process of bringing in algorithms, we're destabilizing, sorry to say, the jobs of a lot of you all too, right? I mean, that's part of what's going on. And I presume there's that kind of dynamic, and what you started to see, probably, given the timing in New York, was the early stages of that, perhaps.

0:32:55.6 BB: Yeah. Yeah, I mean, how about humanity makes a comeback? Like bring the human voice and the human perception back into these decisions? Because, no, the algorithms cannot pick up on an immense amount of human reality and nuance, and they have incredible baked-in biases by definition. And this is, again, the false idol of efficiency. I'm not saying there aren't examples that work. You can find some examples where you have efficiency that's actually just efficiency, without bias and without negativity. But what I don't like is how undiscerning it is. It's like, "Oh, we've got a tech solution. Oh, it must be good." If I said that to you about anything else in the world, you'd be like, "What the hell are you talking about?" Like, show me the thing before I tell you it's good, right? But I think this is a fair explanation of the American psyche going back a good century or more.

0:33:57.3 BB: We've all been told progress equals technology, and just let it flow. And it was never true. It was never a pure truth, and it's become even less of a pure truth. And the other thing to say, and I use this analogy: previous technology came with oversight. I kind of love repeating this, and I don't think I said it yet in this session; I said it earlier: nuclear power, nuclear weapons. If anyone says, "Oh, AI is so complicated. That's why we can't get in the way of big tech, or we can't regulate them, or we can't oversee them, because it's so complicated, only they understand," well, I assure you, for the people who created nuclear weapons, that was complicated. It wasn't like everyone could just figure it out. But they knew, and it was very public, they knew it needed to be very, very tightly controlled by the government. And then you say, "Oh, but those are weapons, so that doesn't count."

0:34:58.7 BB: Well, nuclear power, it's the same exact thing. Nuclear power from the very beginning had an extremely tight regulatory schema, and when nuclear power started to cause immense problems, there was a lot of public activism, there was a lot of legislative action, and then there were real decisions made to reduce the role of nuclear power in our country. And I saw some of that play out. No one said, "Oh, civilians, you don't belong in this discussion because you can't explain nuclear power to me." So I can't explain the nuances of AI, but I damn well belong in the discussion because it's going to affect me. That's democracy. So there's an attempt right now, and I put it at the feet of Silicon Valley, but I think there are many, many enablers in all sectors of our society, an attempt to say this should be treated unlike anything else in our history. It used to be "it's technology, therefore it's good," but in this case it's "it's technology, and it's so good, you should have nothing to say about it." And that's where we should all be freaking out, because no one's ever told us to butt out before in such an overt and aggressive manner. And I don't know about the rest of you, but if someone tells me, you don't belong in this discussion, you don't belong in this room, you don't belong at this table, that's the table I need to be at.

0:36:18.1 SP: So let's talk about disrupting the disruptors a little bit. Tell us about your vision for a global AI social movement. What would that need to look like? Who should it include? What should it do?

0:36:33.0 BB: I mean, I don't want to pretend to reinvent the wheel here. Just take the climate movement as a real easy parallel. It should be something felt in places like this for sure. By definition, college campuses and progressive local jurisdictions often light the match for a lot of this kind of thing. But the notion is having a people's movement that puts a simple set of demands on the table. I certainly think the notion that companies have to be held responsible for the products they create; we would say that about any other product. We'd say that about everything automotive, right? If something's wrong, someone's going to get sued, and everyone accepts that liability. Well, if AI leads to something that's harmful to humans, shouldn't that liability come right back on those same companies? That's something we can demand. We can demand that when workers are displaced, the companies involved, whether it be the companies in that particular industry, or the companies that created the AI that displaced a worker, or both, have a responsibility to that worker and their family.

0:37:40.9 BB: That's a financial, material equation. That's something we can put on the table. We can demand a certain level of transparency and democratic decision-making. We can demand that our Congress pass real legislation. We can demand that states and localities pass the kind of legislation I mentioned before that we did in New York. It's not perfect, but it's a very important start. There's plenty to demand. The reason I'm agitated, besides being a New Yorker... [laughter] This is how we normally talk. [laughter] The reason I'm agitated is that none of those demands are on the table right now in the public debate. Now, the good news is we could have that by next year. This is not so complex that we can't build some of this quickly if people get involved. But we've got to start having those focal points of opposition. I'll use an example which I think is a really good one: the fossil fuel divestment movement. I'm very proud to say that New York City, in my administration, divested $5 billion of our pension fund money from fossil fuel investments. And that got the attention of some oil and gas companies for sure.

0:39:00.0 BB: Divestment has been a classic example, and I'm sure there's a variation of that you could apply to AI and big tech. But the point is, people took it upon themselves to say: okay, what do we have in the venue we're in? Do we have the power to invest differently? Do we have the power to do contracts and procurement differently? Do we have the power to demand different limitations? I think that kind of movement can be started quickly, and then, as we saw with the climate movement, it can be knitted together around the world. Right now you would say the climate movement is absolutely international, making very coherent demands in common across many nations. That's a great model to apply to the AI situation.

0:39:42.0 SP: So we've seen in recent weeks and days, even today, right? We've got the Writers Guild, which fought about AI and incorporated provisions around it into the new contract. The auto workers are doing the same thing. Where do you see those in terms of this budding potential social movement? What role might they play? But also, what do you think about the ways in which they negotiated and the kinds of things that they achieved?

0:40:16.1 BB: I think it's fantastic. Obviously this is particularly a discussion about Hollywood when it comes to the writers and the actors, but the writers really got some significant guarantees about their work not being displaced by AI. Is it perfect? No, but it's a really important step towards making this the norm. Also, the actors are still not back, and I think that's really important; this country kind of runs on its streaming services at this point. The fact that the people we all love to watch are on strike right now, in part because of AI, and are taking a very, very tough stance, is very encouraging to me. So I think the labor movement... This has been an amazing year for the rebirth of the American labor movement, and I think it's going to grow. But I think a lot of it is the way AI is coming into the picture and actually galvanizing working people to feel that there's a threat they must confront.

0:41:15.9 SP: So, you've sort of talked about the role of universities generally, and I know you are not a university employee, to my knowledge, are you...

0:41:24.8 BB: No, I am a university employee.

0:41:27.1 SP: Oh, are you technically? 

0:41:28.7 BB: Damn it, damn it Shobita. Why would you think I wasn't? What are you saying about me? 

0:41:33.6 SP: Oh, no you're...

0:41:33.9 BB: I'm a visiting fellow...

0:41:34.0 SP: I see that as a...

0:41:34.7 BB: As a compliment.

0:41:35.8 SP: As a compliment.

0:41:36.7 BB: I'm a visiting fellow at New York University.

0:41:40.1 SP: Okay.

0:41:41.1 BB: But they haven't finished my background check. When they do, they won't keep me.

0:41:44.6 SP: So, then that means you're really on the spot. I was going to...

0:41:49.7 BB: Go ahead, do it anyway.

0:41:50.4 SP: I was going to compliment your ignorance. Relative ignorance. But universities...

0:41:55.1 BB: There's a reason, wait a minute. I want to say that wasn't rude. It may sound like it; you're like, what did she just say to him? I said earlier in the room, when you were present, that I have very strong opinions, and sometimes they're opinions on issues I don't know all the nuances of. But my quote is, "I'm unburdened by knowledge." It helps me get to very strong views. Go on.

0:42:18.1 SP: So what can universities do? What can, when students leave this room, what can they do? What are the steps? Where do you think universities can play a role in this? 

0:42:35.1 BB: I just think no one should underestimate their voice. And I don't mean that to be a beautiful romantic statement. I mean it because I've been involved in the political process my whole life, and there is no such thing as a voice that can't have an impact if you're consistent enough and persistent enough and you join with others. So just with social media alone, I mean, if you're worried about the issue, start talking about it. If you heard something here today you thought was meaningful, talk about it. If you think this is an important issue, form an organization. Or if one of the organizations you're in now can start to do something on this issue, pull them toward it. As I said, this university obviously is a huge university. What role can it play in creating more accountability? What role can it play in putting demands on the tech sector? If tech folks come to this campus, start asking them these tough questions; they feel that. I've seen the trajectory with some of the masters of the universe who were very arrogant a couple of years ago. And they're more and more getting their asses kicked, as they should, with people saying, what are you guys actually doing?

0:43:46.5 BB: Like, what are you doing to us? And that's helping, because they need to feel accountability too. So I think there are so many prisms for action. And you're also residents of this city, you're residents of the state. What can the city do? What can the state do? I think the important thing is to begin with the notion of just talking about it. Just say to people what you feel. Just say what solutions are starting to make sense to you. Talk about the impact on working people, because of the extremely rich history of the state of Michigan in terms of the rights of workers, and because, again, all things automotive, and particularly the trucking industry, are going to feel the brunt of AI-infused automation. This should be one of the places where there's the most agitation. And one of the things someone said to me a long time ago, which I really appreciate: if you don't see the social movement for the issue you care about, go start it.

0:44:51.3 SP: So one of the things that I sometimes hear, especially sort of in the policy school, is: but I don't understand the technology, right? I'm not a technologist. I don't have an engineering degree or a science degree. How can I have any confidence that I understand the issues? How do I engage? I'm not the one who belongs; other people belong. And of course, as you said yourself, that's something that the tech industry takes advantage of in various ways. And I'm wondering, of course you are in a privileged position, you were in a privileged position certainly when you were mayor and still now, but how did you handle that? I presume people pushed back and said, well, you don't understand, and I'm sure you see that now: well, you don't really understand what's going on. So not only how do you manage it, but what guidance might you have for others who are in that situation?

0:45:50.0 BB: Well, I just think it's always ratification of truth when people tell you to stop talking, when they tell you you don't belong, when they tell you your idea is impossible or whatever. I always assume that's like the guarantee of the opposite. So do I need to understand how to code to be able to see what the impact is going to be on a worker? No. Or do I need to understand the nuances of their technology to know that I'm not in the discussion, or that none of us are in the discussion? No. That's very, very plain to see, and I just don't accept the idea that expertise is what gives you citizenship. Because think about any topic you want: foreign affairs or the military or transportation or you name it. If I took a poll of this room, how many of you are experts in foreign affairs? And yet every voter, every time there's a national election, is in effect voting on questions of foreign affairs and national security, and none of us are pretending otherwise. I didn't go to West Point; I still have a right to an opinion. That's actually baked into our founding documents: it is not a nation of technocrats, or a decision-making class of technocrats and experts. It's actually a respect for the everyday person, that their lived experience is what matters most.

0:47:16.8 BB: And here we have lived experience on steroids, because it's going to hit us from all these different angles. And already, even before the more intensive application of AI, just on privacy alone, just on data alone, people are pissed off. They're perfectly smart about this point. They're like, you're using my private information to make money, or you're taking information from my kids without our consent, or you're feeding information to my kids that's dangerous to them so you can make money, or any permutation. This is well understood by average Americans. So actually, I trust the average American to get what's really going on much more than the people telling me that we don't belong at the table. I want to hear from these people too.

0:48:11.5 SP: I agree. So why don't we start with some questions.

0:48:19.1 BB: Come on, let's go.

0:48:21.8 Audrey Mello: Hi. So thank you so much again for being here. We really appreciate it. The first question, submitted before the presentation started, was: during your time as mayor of New York City, you launched the LinkNYC program to provide free WiFi and Internet access to residents. Building on this experience, how do you believe the United States can expand similar initiatives nationwide while addressing concerns related to data privacy and security?

0:48:46.6 BB: Well, I think LinkNYC was a mixed bag, first of all, to be honest. It's kind of one of those things, a noble experiment with some good and bad elements. But if I understood the question right, it goes right back to the notion that localities have an important role, even one that helps shape national and international policy. And the thing I would say, and this is really my lived experience now, speaking as a former mayor: to say there's no international governance is an understatement. There's an occasional agreement or treaty that has some impact, but there ain't no world government. And then our national government kind of flickers on and off in terms of its ability to deal with the issues that affect us all. There are days when our national government seems to be a full government doing full things. And there are many days when it's shut down, or on the verge of shutdown, or there's no Speaker of the House, or whatever it is. And this is not just this year; we've seen variations on this now for the last decade or two. Why do I say that? Because in the end, the only places that actually have to govern all day long are states and localities. We don't get to shut down, especially localities. And that's the irony. You would think that the higher up you go the food chain, the more exalted and responsible people are.

0:50:05.6 BB: The least responsible form of government is global, because it doesn't really have any shape or impact. Then you go to national governments, here and in many other countries, that are essentially paralyzed a lot of the time. You go to state governments that at least, by and large, take some responsibility. But the place where, all day, every day, responsibility is being taken and some kind of solutions are being attempted is local government. And so I think that's how to recognize, even on an issue of this vastness: go to whoever's got the hot hand, right? Local government's the place where you can get something done, at least.

0:50:47.3 Enzo Majano: This question was also submitted before the talk today. It's often said that technology can be a great enabler but also a divider. How can we guarantee inclusivity in the advancement of smart-city technology, ensuring that no one is excluded? Or is it inevitable to leave a minority behind in the pursuit of progress? 

0:51:05.3 BB: Oh wow. No, it's not inevitable. It's not progress if you're leaving people behind. And that, I think, beautifully summarizes everything we're talking about: right now we have a form of "progress" that is inherently built on leaving a lot of people behind. I think the way to see it is to turn that equation around and say it's not progress if there isn't a democratic process. It's not progress if there isn't transparency. It's not progress if there isn't a debate. And to recognize that we do not have to accept things this way. I mean, I think that's the thing I really want to drive home. We are being told this is the only status quo, that there cannot be another. History tells us that's a falsehood. There is no such thing as only one status quo. There's no such thing as an unmovable object in terms of a democratic society. So we have to say no. If your discussion leaves out a lot of people, it's not a real discussion.

0:52:01.1 AM: Now we're moving on to questions from people who are present or watching the event live. Our first question is from Angelie, and it kind of dovetails with our prior conversation about having a seat at the table and making sure your voice is heard. They want to know: oftentimes, even when tech companies are more transparent relative to their peers about what they're developing, the takeaway is still so jargon-filled or difficult for a layman to understand. How do you think cities can start translating across the language barrier to create change?

0:52:28.6 BB: I love that, I love that. Yeah. There are people who use language you can't understand on purpose. Many people come from families with different backgrounds, and the classic example is: if the kid doesn't speak the language from the home country and the parents want to say something about the kid, they say it in their native language that the kid doesn't understand. Well, that's like what we're dealing with here. The tech industry is using a language they know we don't understand, on purpose. If they wanted to make this conversation more accessible, if they wanted to bring us in, there are a lot of ways to do it. I do agree it's up to government to do the translation and to empower people by saying, we're going to take these concepts and make them much plainer, much more simple for people. That does not mean dumbing down, but trying to portray the ramifications. Because I don't want to hear about the glorious theory or the elegance of the formulas. I want to hear what it's going to do to human beings. And that we can put into plain language.

0:53:35.2 EM: Here's another question submitted. Many of the perils of AI come from its place within a highly capitalistic system. The loss of a job is important, not because of the inherent value of the job itself, but rather because of what the worker can access with their income. Would stronger safety nets, i.e., universal healthcare, education, or even the idea of a minimum monthly income, create the conditions in which we could take advantage of these technologies to capture more efficiency in the business world without the threat to worker survival?

0:54:07.0 BB: I would say a forceful yes and no. There is an absolute grain of truth in that theory: so much of what I'm saying is about the insecurity people live with all the time, and if you addressed that core insecurity, you'd be having a different discussion about the impact of technology. Fair statement. Yeah. Someone call me when we have universal health care and those kinds of income supports and proper education for our children. The day we get to that, if you said we're suspending all development of AI until we get to that, that would be a very interesting conversation. Obviously, what's happening is AI is racing forward and social progress is not. In fact, we're going backwards, sadly, on education because of the pandemic. Housing costs are going up for so many people, not down. So we're totally out of balance. The other thing I would say, the "no" part, is that a lot of people try and use that kind of formulation to say this is just about the economics, when in fact I'm saying there's a democracy problem here, too. If you took worker displacement out of this equation, if all workers did great, but still these decisions were being made without us, and our privacy was being invaded, and the technology was being developed in ways that could be dangerous to our human future, I'd still have the feelings I had. And I'd still say, unless this thing is subject to real transparency and democratic debate, I ain't buying it.

0:55:41.9 AM: Alright, so you also spoke a lot about how there isn't a coherent or cohesive social movement around AI right now, and perhaps there should be. So the next question asks, if the answer is a social movement around AI, how do you get those negatively impacted? So the truck driver in Michigan, for instance, to care about and act on these issues? 

0:56:00.9 BB: Well, we just saw some truck drivers take the country to school with the UPS strike. That didn't even have to become a strike. As a progressive and someone who believes in the labor movement, I'm watching the labor movement right now. The labor movement's got game. It didn't used to, and now it does. There was a Wall Street Journal headline yesterday that said the UAW keeps the Big Three off balance again. Because they're so on the move and they're so agile, and obviously the Teamsters with UPS, and in a different way, the writers with the Hollywood studios. So no, I think there's so much to be said for the notion that working people can assert themselves effectively in this situation. I've lost track of the original question because I got so agitated.

0:56:57.5 AM: So the original question was, how do we get those negatively impacted to care about and act on these issues? 

0:57:02.2 BB: So, my point is, thank you. My point is, one of the things I learned in public life is when you don't know what the hell's going on, just say it. And so...

0:57:11.2 AM: Leadership tips.

0:57:12.0 BB: Yes, leadership tips. Where am I? What exactly is this gathering? My point is, I think working people are already asserting themselves. If you're talking about here in the state of Michigan, they're already asserting themselves, with UPS being a great example, and the UAW being a great example. And I think the notion that they can assert themselves a lot more is live now, in a way I might not have said five or 10 years ago, for totally different reasons than AI, for a lot of other disruptions. There's a moment of movement in terms of workers demanding their rights. Now, can they coalesce with other movements?

0:57:55.7 BB: I always believe that like-minded people can find their way to each other around a common enemy, and I hate to use that phrase, but it just needs to be used in this case. The person who cares about privacy can find common ground with the worker who cares about displacement, who can find common ground with the person who worries about democracy being undermined. And again, one of the things I encourage people to do is, if you feel some of these things that we're talking about here, go talk to people who share those views, including people who share those views but have another motivation. I used that example with the Patriot Act. You'd be amazed, when you start talking to someone that you think is different but you know you have something in common with, how much that one commonality can drive action. And the only way to find out is to try.

0:58:52.3 EM: So you mentioned in your talk that this AI discussion is very similar to the climate discussion. If AI is bigger than climate, the risk, of course, is that AI becomes politicized in the same way as climate and eventually action on the topic is deadlocked. How can we avoid this?

0:59:09.6 BB: It's a great question, but I would say: not only am I not a climate denier, I am a climate change action believer. I think the trajectory, and look, it was a tough stretch when the previous president took us out of the Paris Agreement, and time was lost, and all that. But there is a not unfair or overly optimistic view that says, look, a lot of countries have started to move more than they were. A lot of the private sector has started to move. The Inflation Reduction Act was huge and surprising in its scope. And that all comes from the climate movement, and from the climate movement's ability to work at the local level and bubble up the solutions and create demand. So I would say I'd love to see the AI movement mirror the climate movement. Will there be politics around it? Of course. Will there be divisions? Of course. But I actually think the climate movement is broadly bridging right now. There are fewer and fewer deniers, thanks, unfortunately, to hurricanes, mudslides, wildfires. All the extreme weather is kind of wiping out the denialism. But the climate movement is showing the ability to make a lot of impact, even if it has a high number of detractors out there. So I think that's a good model for an AI movement.

1:00:36.7 AM: So speaking a little more generally, Eli wants to know, if you could wave a magic wand, what are some smart technologies or climate solutions that you'd like to see implemented today in every major city? 

1:00:48.7 BB: Well, look, I think there are elements of both AI and renewable energy that could go together and really be helpful. I mean, if you had to simplify how we overcome the climate crisis, it would be a really, really aggressive transition to renewables. And there are issues around renewable energy, like the transmission of the energy, the storage of the energy, or how to help people do their own part with renewables, with solar and all, and link it to larger networks, that I do think something like AI plays a role in addressing. So again, I have these benevolent moments where I'm like, oh, look at this puppy, AI. And it's going to do nice things for us, right? And then it turns into a Rottweiler. [laughter] So the good part can be applied in a way that I think could help us move forward, so long as all those other things, the democracy, the checks and balances, et cetera, are in place.

1:01:57.8 EM: In relation to AI, would the revival of the Office of Technology Assessment in the US Congress, which was shuttered during the Contract with America era in the mid-1990s, be effective in regulating AI? Is there a way to translate this to work on the local level?

1:02:17.6 BB: Yes, there's a way to do it on the local level. But we had this conversation in one of the earlier meetings: it should not be an Office of Technology Assessment off to the side somewhere, issuing an occasional report that someone might or might not read. You need very central figures, at cabinet-official level, or in the case of a city, a deputy mayor. You have to be really serious now about the kind of weight that would be given to an official and their office that's assessing what the hell's really going on with these technologies and what they're going to do to us. And I think it goes beyond the traditional sense of assessment. It has to have a sort of action focus, because of the real-time dynamics now. Back when there was an Office of Technology Assessment, the assumptions were kind of quaint about how the conveyor belt from new ideas to action to impact on humans would work, and how much time we had to sort these things out.

1:03:10.4 BB: We do not have time anymore. Stuff's moving real fast, and it has to be real-time decision making. And folks have said, for example, rather than attempting legislation alone, should there be like an FDA for AI or something like that, a bunch of experts that could make decisions? And I think it's actually a very productive idea, except the FDA takes for-fricking-ever, and we don't have that kind of time in the case of AI. So it has to be more centrally located in the government and more supercharged in terms of timing.

1:03:45.9 AM: NYC supported tech hubs that were focused on bringing in underrepresented minorities into the tech sector and expanding computer science education for every student. Do you feel that these were successful in meeting these goals? 

1:03:55.4 BB: Who are all these smart question askers? 

[laughter]

1:04:00.6 SP: Welcome to the University of Michigan. [laughter]

1:04:04.0 BB: I'm like, wow. Shobita, what is this place I'm in? 

[laughter]

1:04:09.5 BB: It's magical. Yes, Computer Science for All has had a huge impact. And with all my tech angst, I 100% believe this is the way of the future: to retrain a lot of teachers in how to teach computer science skills and make it a much more broadly experienced subject in public schools, and also to ensure that the tech sector is more representative, which only happens if you reach across the spectrum in the public schools. The Tech Talent Pipeline and the other attempts at employment: good. I don't think great, but good. I think we could go a lot farther, because, again, now on the happy side of tech, a very high percentage of tech jobs can be reached with an associate's degree. Obviously, some require higher. But there's a lot of opportunity that still is not being offered to a broader constituency. So I think that was directionally right. I don't think the results were stellar, and obviously the industries are going through evolution now. But I think it's directionally right to say: how do you look at the way that industries are staffed, and not just leave it the way it is, but try and create some kind of governmental presence to ensure both broader opportunity and broader diversity in these industries.

1:05:43.6 EM: Joe submitted this question. You mentioned that over time, different party interests can converge to make real change, such as the renewal of the Patriot Act. How can we ensure that such a convergence of interest happens with respect to AI before it's too late and millions lose their job without a safety net? 

1:06:01.2 BB: Well, look, this is what I worry about and think about all the time. And the answer is all the things I've said: raising our voices and acting locally and acting in our institutions and building a social movement. But I think this point about before people lose their jobs is a really important one. We are not on a trajectory to stop the first wave of massive job loss. And I hate saying that, but I can count. I will say that it is going to change American politics profoundly. And my kind of evocative way of saying it is, regardless of what you think of President Trump: if you think Trumpism was a response to economic and social dislocation and cultural dislocation, if you make that analysis and say Trumpism was the result, the residue, whatever way you want to say it, of rapid social change, then that was positively glacial social change compared to what we're about to see. So I'm saying millions of people will lose their jobs in the course of just years. Not decades, years. And that is going to reshuffle the deck of American politics very rapidly. Now, I don't like that; it's not what I wish for. I just think it's what's going to happen. And when it happens, I do think that will open the door for a different kind of coalition and a different kind of activism and action.

1:07:18.8 BB: Because you cannot have, if millions of people get dislocated, that means there are families beyond them, which means now you're starting to talk about, is it 10 million people? Is it 15 million people? What is it? It's a hell of a lot of people. It's going to cause people to question. And it either is going to be regressive in ways we might have said was true in the last few years, or it's going to be an opportunity for a new kind of coalition for change or some rich mixture. But it's going to be a moment for action in any scenario.

1:07:56.2 AM: Okay. So the idea of just portraying the harms of AI to constituents feels like it could potentially create a tech dystopia that, while more warranted than the current AI hype, still falls on the pendulum of tech extremes. Do you think it is possible, or even worth it, to try and communicate both the pros and cons of the technology to everyday people? 

1:08:15.0 BB: Who are these highly intelligent people? Nuanced. They're so nuanced. They're so nuanced. It's a great question. And I will always tip my cap to some balance. I am never going to say it's just one thing, AI. But I think we are so out of alignment in the public debate that I would rather take a more militant view because it's fact-based. There is no democratic process around AI. It literally doesn't exist. There's no accountability. There's no transparency. Decisions are being made about us, without us, every hour of every day. And look, I am not a technologist. When I say there's existential threat up to the level of the machines becoming self-aware and taking actions against us, that's not because I saw Terminator 2 with Arnold Schwarzenegger. [laughter] It's because the technologists who developed this technology acknowledge their own fears about that. Again, the parallel to nuclear weapons, and go see the movie Oppenheimer, which is extremely powerful and moving, and the fact that the people making the nuclear weapons acknowledged in real time their fears, but then they went and acted on their fears and immediately tried to limit the production and the proliferation of nuclear weapons very energetically. The people who created them put their backs into trying to stop this from becoming the bigger problem it could be. And by the way, as of this hour, they succeeded.

1:09:57.0 BB: So where is the parallel in the tech industry of people saying, you know what? There are things that are good that are coming out of this but there's a hell of a lot of problems here. And we're not just going to go and say, oh, government please regulate. We're actually going to energetically fight for change and put some limits on ourselves and have this discussion in a way that's honest about the dangers. That's not happening. So I think it is incumbent upon us citizens to say, this is broken. And if you want to accuse me of some dystopia you can accuse me of that. But it's a hell of a lot better than a techno-optimism that leads us on the road to hell. Let me be a little extreme to help be one of the people who balances the equation because the other team right now has all the points on the board. And we better even up the score a bit in favor of the citizen, right? Ask me how I really feel.

[laughter]

1:10:56.0 EM: How do you feel about the current...

1:10:56.2 BB: He's like, how do you really feel? I like that.

1:10:58.4 EM: How do you feel about the current NYC administration's changes to your past policies to regulate technology or make technology more accessible to residents? 

1:11:06.7 BB: Nice try.

[laughter]

1:11:11.3 BB: I don't get into that whole discussion. Look, I think every administration is going to have a different approach, and every administration is working with current knowledge. And I'm not inside the building hearing all the latest. I do think if any jurisdiction attempts to be sort of tech-friendly, they need to also ask the tough questions about the dangers and be ready, if they don't like what they see, to move quickly. So again, I don't mind any sort of the traditionalism of we would like tech to work out for all of us. That's kind of, I get it. I don't have a Gerald Ford quote for you, but as Ronald Reagan used to say: trust, but verify.

1:11:56.6 AM: So we've spoken extensively about the potential future harms of AI, but someone in the audience wants to know what we're doing to address the more immediate harms of AI that are already here. So you've talked about autonomous vehicles. And California just revoked Cruise's autonomous taxi license because they kept hitting people. So how should we be addressing these kinds of AI downsides right now? 

1:12:13.3 BB: Just shoot me. [laughter] Because they kept hitting people. [laughter] Like, this was a movie. You knew what was going to happen before it was over right? [laughter] It was like, come on. Let's create a car without a human in it and assume it won't hit people. I'm like, come on. This is not mysterious. So I'm sorry, what was the question? [laughter]

1:12:41.5 AM: What we should be doing to address those immediate harms right now.

1:12:45.8 BB: I think California did the right thing after a long time not doing the right thing. I had a team of very earnest technocrats come to me years ago in City Hall and say hey, we've got this great idea for autonomous vehicles in part of Brooklyn. I was like, no, you don't. [laughter] I was like, no. I'm like, no, no, no. Of all places in Brooklyn, the cars would get beaten up. [laughter] I'm worried for the cars. They're like, are you worried for the pedestrians? No, I'm worried for the cars. So on right-now limitations, the worker displacement piece is a right-now thing. As I mentioned, we passed worker displacement legislation in New York City in certain industries that delineated a company's responsibility when it changed a job or put someone out of one, what it owed that worker. That's something that could be done at the state and local level all over the country right now vis-a-vis the impacts of AI.

1:13:49.1 BB: The transparency dynamics, again, there are so many institutions and localities that have big contracts that are meaningful to the tech companies. They can start demanding more transparency in the process in exchange. Or they can threaten to pull their contracts. This is kind of the oldest story in terms of the political process. There's that great phrase, also a very New York phrase: it's not about the money, it's about the money. And [laughter] just follow any political discussion and it will often lead you there. And so purchasing power and economic power can be brought to bear starting right now because I think these companies need to understand that there'll be a consequence. And the consequence can come from the consumers. It can come from the institutions. So there are many, many things you could do. But the point is, I'm just riffing a few. There's a whole host of right-now actions. I'd like to see that demand. I'd like to see people start to issue that demand. And I don't care who those people are in the sense of, I don't care if you're a college professor or if you own a business or you work in a business or you're a student or you're a city council member, I'd like to see people start to issue the demand.

1:15:12.5 EM: So in addition to AI, another thing that makes a lot of smart cities "smart" is the use of cameras. And you mentioned in your talk that people's personal information is at risk with this new AI technology. The New York Police Department spends a tremendous amount of money on surveillance technology like ShotSpotter. What changes would you make? And is there technology that you feel is beneficial? 

1:15:35.9 BB: Yeah, I think ShotSpotter is beneficial. [laughter] So ShotSpotter to me, and look, from my vantage point, I'm happy to be educated otherwise, but I authorized it and I did not see a civil liberties or privacy problem, because it became an incredible tool: it basically tells you when a gun has gone off and speeds the response to it. Also, what I did not know in my beloved city was that a lot of times a gun went off and we didn't know anything about it because no one got hit. Turns out a lot of people who carry guns are not particularly good shots. And I'm really happy about that fact. So no, there was all sorts of gunfire that never got reported to the police because there wasn't a person with a bullet in them.

1:16:17.8 BB: But actually knowing where the gunfire was and being able to recover the evidence of it often related to how you stopped other crimes from happening or caught criminals for other reasons. So ShotSpotter to me was a virtue. The bigger surveillance question, which I think is a very sensitive one, very real one, clearly New York has a particular concern about terrorism and always has since 9/11. That's not changing. And it shouldn't change in the sense of we are, in many ways, the number one global target. We have the UN, the capital of American finance and everything else. But one of the things that happened with the POST Act, for example, coming out of city council was to say the police have to have really stringent rules about how they monitor the use of technology. They have to be able to publish their approaches.

1:17:08.7 BB: They have to be able to answer questions about anything that might be used inappropriately. It created a notion of you start with even if you have to use this technology in certain circumstances, it's not a blank check. And it should be altered over time if you find excesses or mistakes. And I think at least we have a framework for that. So this is a classic example. We have a framework in New York City by law that says these things are not already written. There's no 10 commandments of how to use AI that you can't violate. In fact, it's the other way around. Right now, surveillance technology is being used. We want to constantly see the updated ground rules. We want the ability to question them. We want to know if, for some reason there's actions that are not consistent with our values. Or consistent with the stated objectives. And we want the ability to change when we see something that needs to be changed. That's the law. I'd like to see that be the law everywhere.

1:18:09.1 AM: And those were the last of our questions. Thank you so much.

1:18:10.6 BB: Thank you. Well done.

1:18:12.4 SP: Wonderful.

1:18:14.7 BB: Wonderful.

[applause]

1:18:16.8 SP: Thank you so much.

1:18:16.9 BB: Thank you. Well done. God! 

1:18:18.8 SP: Perfect timing.

1:18:20.3 BB: What an incredibly smart erudite, incredibly good looking audience. [laughter]

1:18:27.6 SP: Wow. We already invited you to campus.

1:18:30.5 BB: That's right. That's right. It happened already. Thank you, everyone. I hope you found it helpful and appreciate very much. Excellent conversation. Thank you.

1:18:38.4 SP: Thank you very much.

[applause]