New York Times technology reporter Kashmir Hill and Shobita Parthasarathy explore the intersection of technology and privacy, addressing some of today's most salient issues. October 2024.
Transcript:
0:00:01.2 Lynette Clemetson: So good afternoon, I'm Lynette Clemetson, and I'm Director of the Wallace House Center for Journalists, home of the Knight-Wallace Fellowships and the Livingston Awards here at the university. Wallace House hosts many events. With our partners at the Ford School, we spend so much time in this room that I've come to think of myself as an honorary Ford School person, and we're always happy to work with our colleagues here, so thank you. This turnout is encouraging, which is maybe an indication of how terrified we all are by the subject. I think the conversation today is gonna be enlightening and bracing. I wanna start by acknowledging that today is October 7th, and it's a difficult day for many, many of us, all of us, in so many ways.
0:00:49.4 LC: I admire the wisdom of your Ford School dean, my friend Celeste Watkins-Hayes, who encourages us all in difficult times like this to recognize the thicket of the forest while also attending to the struggle of the tree. Those of you who read Celeste's weekly messages will recognize that language, and I hope we can take some time today and in coming days and weeks to pull away from our devices and listen to each other as individuals, not just as the thicket but as the tree, to acknowledge each other individually, the deep pain that some people are in, and to acknowledge each other with listening and care. So much of what pulls us into our devices is designed to incite and divide us.
0:01:42.6 LC: I don't have my phone in my hand, but you know it's probably within reach of you right now, and so much of what it's designed to do is to make us angry or divide us. And that brings us to the topic of our discussion today: a dialogue at the intersection of technology, surveillance, and privacy. Today's conversation will be led by Ford School Professor and Director of the Science, Technology, and Public Policy Program here at the Ford School, Shobita Parthasarathy. And STPP, that's the acronym, is an interdisciplinary, university-wide program dedicated to training students, conducting cutting-edge research, and informing policymakers and the public on issues related to technology, science, equity, society, and public policy.
0:02:37.7 LC: For students interested in the graduate certificate program of STPP, there will be a virtual information session this Thursday, October 10th at noon. You can sign up on the STPP website. The next deadline to apply for the certificate program is November 1st. So I hope that some of you are interested and you will tell your friends, if I was in a place to go back to school, this is one of the programs I would be most interested in. We're delighted to have joining us today our special guest, Kashmir Hill, from the New York Times, she is the author of... With the pink cover, Your Face Belongs To Us, a book that is appropriately terrifying as we approach Halloween.
0:03:29.5 LC: I stayed up late reading it and thought I might actually have nightmares from this book [chuckle] It made me... And then I read her stories too, and I told her before we started that I own a Chevy GM car and I'm scared of my car and how it's listening to me and tracking me and helping to set my insurance rates. None of the things I thought my car would ever do. Kashmir joined The New York Times in 2019 after having worked at Gizmodo, Fusion, Forbes and Above The Law. Her writing has also appeared in The New Yorker and the Washington Post. She writes about the unexpected and sometimes ominous ways technology is changing our lives, particularly when it comes to surveillance and privacy in our own communities. In the stories that she did recently that really caught my attention, and hopefully if you haven't read them, some of you will read them after this talk, she brought national attention to wrongful arrests in Detroit from facial recognition false matches, arrests of people who have since sued and won, and she revealed partnerships between automakers and data brokers to collect detailed driving data, resulting in surprising hikes in consumer insurance rates.
0:04:56.9 LC: And we're fortunate to have two students with us today, undergraduate students in public policy, Audrey Melillo and Terry Wynn, to facilitate the question and answer portion after today's program, with support from STPP student Farah Pitcher. Where's Farah, hello. If you have not had the opportunity to submit a question, we encourage you to do so using the QR codes on the cards that are found throughout the room. Do you all see the cards placed around you? Following the talk, Kashmir will be staying for a book signing of Your Face Belongs To Us. Our local bookseller Booksweet will be outside selling books, and at Wallace House, we support writing, and I hope that you will all think about purchasing a book, it's a great one. With that, I will turn it over to our conversants for the day.
[applause]
0:06:10.7 Speaker 2: So, thank you Lynette, for that wonderful introduction. Kashmir, it is wonderful to have you here at Michigan. Obviously, many people agree with me on that. And thank you very much for writing this book. It is detailed, but such an important set of stories, not just one story, one story I suppose, about the tech industry, and in particular, Clearview AI, an important facial recognition company, and the role of the tech industry in developing the technology of facial recognition, but also stories about the impacts of facial recognition on you, but also on many of our neighbors in Detroit in the Project Greenlight program that some of you may know about, and hopefully we'll talk a little bit more about today.
0:07:06.2 S2: As someone who thinks about technologies all the time and technology policies all the time, it's often really difficult to get people to become interested, to think about the social dimensions of these technologies. People often think about technologies and see them as highly technical: it's not our problem, it's the computer scientist's problem or the engineer's problem. But I think your book does an extraordinary job of demonstrating why we should all be caring about technology, and in particular facial recognition.
0:07:46.9 S2: So thank you, and I hope all of you will go out after this and purchase the book, but maybe I would love to start out by just asking you what led you to write this book? How did you become interested in facial recognition technology, and what made you think that it was important enough to write a book about?
0:08:08.3 Kashmir Hill: Yeah, am I on... Just to make sure you guys can hear me. So the book started for me in a couple of different ways. So when I first got to the New York Times, I got to the New York Times in the summer of 2019, and it was kind of like, what was I gonna write about? And one thing that I saw happening that I thought was really interesting is there was this kind of artist/technologist who was tracking academic papers about facial recognition, and he was looking at what are they studying? Where are they getting the data? How are these people making facial recognition better? And he was tracking where the databases were, and he found out that there were all these photos that were being shared. There was one from Duke University, where actually I was a student, where a professor had just set up a camera on campus and had kept track of students going by and created this database of their faces, and made it a public database. Most of the students didn't know about it, and it was being shared with academic researchers all over the place, who were using it. There was another database, one of the biggest ones, that had come from Flickr. I don't know, do any of you use Flickr, or is this crowd too young?
[laughter]
0:09:20.4 KH: It was one of the first photo sharing sites, in that kind of early days of the Internet, back when people were really excited about their digital cameras. They would take a bunch of photos and then they would just upload them to Flickr to share with their friends, and it was public by default. And so lots of people had uploaded photos and then kind of forgotten that they existed on the Internet, and some researchers had decided, "Oh, let's get a whole bunch of faces from Flickr and create this big face database." And it was getting used by lots of researchers around the world, it was getting used by the Turkish Secret Service, it was getting used by Northrop Grumman. There were all these people that were using these faces, and so I started going through the database and reaching out particularly to parents of children whose faces were in the database, and I said, "Hey, did you realize that your old Flickr photos are now this vast database that's used by all these people?" And they were very shocked.
0:10:17.8 KH: And so that's how I first started thinking about facial recognition. And then a couple of months later, I got a tip about a face recognition company that had appeared in a public records request and claimed that it had scraped billions of photos from the internet, they now say they have 40 billion faces in their database, to create a facial recognition tool where you could take a photo of anyone and it would show you all their photos on the Internet. And they had started selling this to police, and according to this memo that had shown up in a public records request, hundreds of police departments around the country were using it.
0:10:56.4 KH: And there was this legal memo from a lawyer that they had hired to basically write a memo for police, reassuring them that it was legal to use the tool. And he said, this works incredibly well. It has something like 99% accuracy. And I'm reading this and I'm just like, how does this exist? The company was called Clearview AI. I had never heard of it. No one I talked to in the startup industry had heard of them, and as far as we understood at that time, facial recognition technology didn't work that well. And so when I first heard about it, I thought it was like a Theranos, that it was a startup that said, yeah, we can do this magical thing, and they're trying to sell it to police. But as I started digging into it, it turned out to be real. They were so secretive.
0:11:45.8 KH: They didn't want to talk to me, they did not want a New York Times story about them. Which is pretty unusual in Silicon Valley. Startups usually love a big, splashy New York Times story, and I just got so fascinated with how did this happen, how did this small technology company get this superpower, why did another big tech company not do this first? And I kept pitching story after story to my editors, and eventually it was, you should probably just go write a book about this.
0:12:14.6 S2: And here we are...
0:12:15.3 KH: Yes.
[chuckle]
0:12:17.8 S2: So it might be worth stepping back for a second and just kind of covering maybe not necessarily the technical ground, although I think probably some of that is warranted, but just help us understand, help the audience understand what is facial recognition technology?
0:12:34.4 KH: Yeah, so facial recognition technology is when you search... I mean, there's two different kinds. So one, there's one-to-one facial recognition, which is what most of you probably have on your phones, where you look at your iPhone or your Android phone, and it's checking to see whether your face matches the face that it has in its system. It's called a one-to-one search, and it's less controversial. Most people like that use of facial recognition technology. And then there's something else that's called a one-to-N facial recognition search, and this is when you take a photo of somebody and look for them in a big database of other people's photos.
0:13:18.7 KH: So the earliest use of this by police was they would try to identify people among a database of mugshots of people who have been arrested before. And what Clearview is doing is they're searching for your face in this database of 40 billion faces and then returning everything that they think is a match. And it's based on... essentially, it's kind of like your fingerprint. They do something that's equivalent to finding what your face print is, and they're looking for other faces that match.
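To make that distinction concrete, here is a minimal sketch in Python of the two search modes described above: one-to-one verification (like a phone unlock) and one-to-N search over a gallery of face prints. The embed() function is a hypothetical stand-in for a real face-embedding model; nothing here reflects Clearview's or any other vendor's actual pipeline.

```python
# Minimal sketch of one-to-one verification vs. one-to-N face search.
# embed() is a placeholder for a trained face-embedding model; a real system
# would produce the "face print" vector from a neural network instead.
import numpy as np

def embed(face_image: np.ndarray) -> np.ndarray:
    # Placeholder embedding: flatten/pad the pixels to 128 values and normalize.
    vec = np.resize(face_image.astype(float).ravel(), 128)
    return vec / (np.linalg.norm(vec) + 1e-9)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity between two unit-length face prints.
    return float(np.dot(a, b))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    # One-to-one: "is this face the device's enrolled owner?"
    return similarity(embed(probe), embed(enrolled)) >= threshold

def search(probe: np.ndarray, gallery: dict, top_k: int = 5) -> list:
    # One-to-N: rank every face print in a database by similarity to the probe
    # and return the closest candidates for a human examiner to review.
    probe_vec = embed(probe)
    scored = [(name, similarity(probe_vec, vec)) for name, vec in gallery.items()]
    return sorted(scored, key=lambda item: item[1], reverse=True)[:top_k]
```

In real deployments the gallery can hold millions of precomputed embeddings, which is why the top-ranked candidates are treated as investigative leads for a human examiner rather than as positive identifications.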
0:13:52.3 S2: And it's a type of artificial intelligence, essentially.
0:13:57.6 KH: Yes. So this is part of why facial recognition got so powerful in the last few years: it's the same kind of research in machine learning and pattern recognition that led to generative AI. It's this process of giving these kinds of computer systems lots of data, and then they figure out how to find patterns in it. And so with face recognition, the way they did this is... one of the places it came out of was Facebook. So we all went on to Facebook and we uploaded lots of photos of ourselves, and we wanted our friends to see them, so we tagged ourselves in photos, we tagged our friends in photos, and we did this hundreds of times, thousands of times. And this was incredible fodder for people that were trying to make good facial recognition, because for computer scientists, something that's really important is labeled data. And so we essentially said, here's thousands of photos of us in dark bars, at parties, turned away from the camera, on a beach, with a light in our face, and over time, computers were able to recognize, okay, this is a face in this situation. And they just became very powerful at recognizing faces, in the same way that ChatGPT can figure out what word should come after every other word by parsing all of this text from the internet.
0:15:18.9 S2: So in some way, Facebook was one of the earliest adopters of facial recognition technology, right? But you said what's interesting in part is that a lot of these tech companies didn't really move very far. So why didn't they move very far, and then why is it that you think this company, Clearview AI, decided, all right, we're gonna move forward with this, this is gonna be our primary product?
0:15:47.8 KH: Yeah. I mean, it was really interesting when I found Clearview AI claiming to do this, and people at the time kind of didn't think that this was possible, didn't think it would be a startup that released this. And so I went back in my reporting and essentially found out that Google, as early as 2011, said that they could take a photo of somebody and find other photos of them. And Eric Schmidt, who was the chairman of Google, said, this is the one technology that we have developed that we have decided not to release, because we're too concerned about the possible nefarious use cases: what if a dictator gets this power and then they can identify protesters? And so, yeah, they sat on it.
0:16:40.8 KH: Facebook, a few years later, developed something similar. For the book, I ended up finding this video that had been recorded internally at Facebook, and the engineers were wearing this little baseball cap with a smartphone on the brim, and when you looked at somebody, it would call out their name. And so they had this technology, they had developed it, and then Facebook too decided to sit on it. And this was a little surprising to me 'cause I am a privacy reporter, I have reported on so many privacy intrusions that have happened as a result of Facebook and Google technology. Google is the company that sent cars all over the world to take photos of people's homes and put them on the internet as part of Google Street View. But this was the one thing where they kind of said, there's a line that we don't wanna cross. And what set Clearview apart was not that they were better at the technology.
0:17:38.8 KH: It was that they were willing to cross that ethical line, they were willing to break that taboo, because they only had something to gain by it, because they're the small startup and they were willing to sell this ability. And so it was interesting to me, 'cause it meant that Facebook and Google had essentially held this technology back for about a decade and had bought up the startups that could do this. And so they kind of held it back for so long, but at the same time, because technologists were sharing the research they were doing, it eventually got to the point where Clearview AI could come along and build this and release it. And this is kind of the world that we're increasingly in now. There was... maybe some people in this room saw a very viral video out of Harvard.
0:18:25.1 KH: Did anybody see this video with the Meta Ray-Ban smart glasses? Some of us are still on Twitter and some of us aren't. So there are a couple of undergrads at Harvard who took Meta Ray-Ban smart glasses, which are these glasses that have a little camera in them, so you can take photos or take a video, and they created a version where they were live streaming everything they were looking at to Instagram, and then did a little coding such that they were doing face detection and then running people's faces through a public face search engine and identifying them. And in the video, they... I talked to the students and they said that they had done it at Harvard, but they knew most of the people there, so they went onto the subway where there would be strangers, and they started approaching people on the subway and saying, "Oh, hi Betsy, right, we met before at that foundation," and the person thinks that they know them.
0:19:29.0 KH: And I talked to the Harvard students, and they said that coding up this project was the easy part. It was actually editing the video that took them more time. So we really have gotten to a point where it's almost trivial to access these powers. So it's not up to the technology companies anymore to hold it back, which is part of why I wrote the book, 'cause I want policymakers to know this, 'cause we really have some questions to address now about, should this happen? Should we be recognizable? How much control should we have over our faces, our anonymity? These are just such big questions.
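For readers trying to picture the smart-glasses demonstration described here, this is a rough, hedged sketch of that kind of pipeline: frames come off a camera stream, faces are detected, each face is sent to a public face search engine, and high-confidence matches are surfaced to the wearer. The helper functions detect_faces() and search_face_engine() are hypothetical placeholders, not the students' actual code or any real service's API.

```python
# Hedged sketch of a glasses-to-identification pipeline of the kind described
# above. Both helpers are hypothetical stand-ins and return nothing useful here;
# a real system would plug in a detection model and a face search service.
from dataclasses import dataclass

@dataclass
class Match:
    name: str
    source_url: str
    confidence: float

def detect_faces(frame: bytes) -> list:
    # Placeholder: a real system would run a face-detection model on the frame
    # and return cropped face images.
    return []

def search_face_engine(face_crop: bytes) -> list:
    # Placeholder: a real system would query a public face search engine and
    # get back candidate matches with links to where the photos were found.
    return []

def identify_people_in_frame(frame: bytes, min_confidence: float = 0.9) -> list:
    # Tie the steps together: detect every face in the frame, search each one,
    # and keep only high-confidence matches to surface to the wearer.
    matches = []
    for face in detect_faces(frame):
        for candidate in search_face_engine(face):
            if candidate.confidence >= min_confidence:
                matches.append(candidate)
    return matches
```

The point of the sketch is how short the glue code is: the hard parts (detection, search, the scraped photo database behind the search engine) are all off-the-shelf, which is what makes the capability so easy to assemble.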
0:20:06.7 S2: I wanna get to that for sure, and I have many questions about this Harvard incident. But one of them is, did they disclose to the people that they were talking to that they were part of this experiment?
0:20:19.3 KH: So they did it at the train stop at Cambridge, and these are people that are waiting for their train to come, and so they quickly talk to them, and then they get on the train.
0:20:28.5 S2: So no?
0:20:29.5 KH: So my understanding is that these people found out about it when the video went viral.
0:20:36.2 S2: Okay, okay. Well, that's also, I think, an implicitly interesting part of this story, right? So as you were recounting the story of Facebook and Google kind of holding this technology back, and Clearview AI going gangbusters, I remembered that in 2020, after George Floyd's murder, that was one of those moments when tech issues break out into the mainstream, and of course, like you presumably, my spidey sense goes up at moments like that. And I remember that that was a moment when these big tech companies said, "Oh, don't worry. We're not using facial recognition. We're gonna hold back on facial recognition." I think Facebook, Google, and Amazon. So, of course, many people were wondering.
0:21:23.5 S2: They didn't even know that these companies were even working on that issue. And I guess two questions about that. One, do you have any sense of what the folks at Clearview AI were thinking at the time, when on the other hand these big tech companies were kind of disavowing the technology? And what are the big companies doing now when it comes to facial recognition technology?
0:21:56.1 KH: Yeah, so this is a big question. So police have been using facial recognition technology. They kind of started using it in 2000, so it goes back a couple of decades now. It really did not work that well for a very long time, and so it was a tool they had in their toolbox, but it was pretty unreliable. And it's gotten more powerful, really, in the last, I would say, five or so years. And so, yes, Amazon, for example, had been starting to work with police to help develop facial recognition tools, and after George Floyd, partly in reaction to concerns about police brutality and police abuse, they said, "Okay, we won't sell facial recognition anymore to police." And it was a big deal.
0:22:44.3 KH: It got a lot of news articles written. But Amazon is not the big provider of facial recognition technology to police. There are big facial recognition technology companies that you have never heard of before. Or maybe you have: NEC, for example, is a big one in Japan, and they were the provider of facial recognition algorithms that most police were using. But there's actually hundreds of vendors. And so, from that perspective, it wasn't a big deal for the big tech companies to pull back. At the same time, in terms of what Clearview was thinking, I mean, Clearview is just this little tiny startup. It's based in New York. It was very small. In the early days, it was really just a handful of people that were working on this technology.
0:23:31.5 KH: And when they were thinking of the big tech companies, their major concern was, we don't want them to know about us because we're scraping lots of photos from their site in a way that violates their terms of service. So they're scraping Facebook and Instagram and getting photos from there. They're scraping YouTube. They're scraping LinkedIn. They're specifically targeting sites that will help identify a person. So they were really trying to keep what they were doing secret. And no, they were not deterred by the fact that these big companies weren't working with police. That just was an opportunity for them. Though in the early days when they were building the technology, they weren't building a tool for police to use. They had other ideas. It just turned out that police ended up being kind of the perfect users for what they had built.
0:24:21.6 S2: So that actually brings us to Detroit and Project Greenlight. I'm wondering if you could tell us a little bit about how Detroit is using facial recognition and maybe what are some of the problems with how they're using it.
0:24:36.4 KH: Yeah, so the Detroit Police Department has been using face recognition since, I wanna say, 2017. I forget, it's in one of my stories, I just don't have it right here. But they've been using it for a while as a way of identifying criminal suspects after a crime. This is why police want to use it. I mean, we have surveillance cameras everywhere. And so if something happens, there's an assault or a murder or shoplifting, and you've got this image of the crime, it's very attractive, very appealing to just run that image through a facial recognition system and find out who the person is, go question them, see if you can solve the crime that way. And so Detroit started using it. And they were not working with Clearview. They were using two different vendors. NEC was one, I believe the other was Rank One.
0:25:36.5 KH: And how it works is you get a photo of a suspect, you run it through the system. It doesn't just tell you this is Kashmir Hill. These systems will return photos of faces that it thinks could be a match. And then there is a human, a facial recognition examiner, who will go through and pick the right person. And so Detroit kind of came on my radar for the first time, when I heard about a man named Robert Williams, who was arrested in January 2020 on his front lawn, in front of his wife and two young daughters, and told that he was being charged with larceny. And he said, "I've never stolen anything, I don't understand what's going on." They take him into the station, and they show him a photo of some guy who looks a little like him. He didn't think he looked anything like him.
0:26:31.3 KH: I mean, the thing that they shared is that they were both black and they were both large men. And they said, "Is that you?" And he's like, "No, that is not me. You think that's me? Look at that guy and look at me. I'm not that guy." And come to find out, he had been identified by the facial recognition system. They'd run this image. And they had done very little additional police work. Namely, they had found out that Robert Williams really likes watches. He wears, like, very expensive Breitling watches. And once he got one he didn't like and sold it at a pawn shop. And the shoplifting crime he'd been arrested for was somebody who went into the Shinola in downtown Detroit and had stolen watches.
0:27:12.9 KH: And so they said, "Okay, this proves it, he must be involved, he sold a watch once. And he's a face recognition match." And that's how he ended up being arrested. A couple days before his birthday, he was held overnight, he was questioned, he was charged, he had to hire a lawyer. And he had an alibi. He was driving home from work at the time of the shoplifting crime.
0:27:31.8 S2: Ironically, he knew about the alibi 'cause he went through his own social media, right? To find where he had been.
0:27:40.3 KH: He went through an Instagram post; he had, like, recorded when this song came on that he really loved. And he was particularly offended by this. Robert Williams is a very funny guy, and he somehow finds humor in this horrible experience he had. But he said, "Shinola watches, they do not compare to a Breitling. That's a cheap watch." He said, "If I was gonna steal a watch, it would not be a Shinola watch." And so this was actually the first mistaken arrest that we know about in the country because of facial recognition technology. And then after Robert Williams's story came out, there was another case, of Michael Oliver, arrested for snatching a smartphone. Again, a false arrest. And then the Detroit Police Department said, "Okay, we'll make some changes. We're only gonna use this for serious crimes."
0:28:32.9 KH: So the hope is that police would put more kind of investigatory work into a murder or assault or home invasion than they would into shoplifting. But then, as I'm finishing my book, I hear about Porcha Woodruff, this woman who was eight months pregnant and arrested for a carjacking that had been committed a month before by a woman who was not visibly pregnant. So Detroit just kept having these issues. And really what it came down to was not recognizing the risk, when you're using a technology like this, of automation bias, relying too much on what a computer says, and combining two really dangerous things, which is facial recognition technology and eyewitnesses.
0:29:27.4 KH: So they would run a search, get a match, and then ask an eyewitness, "Hey, here's this person and five other people," in a six-pack photo lineup, "which of these looks like the person who is responsible for the crime against you?" And they agree with the computer. The computer just went through millions of photos, it goes through mugshots and oftentimes driver's license photos, and picked out the person that looks most like the suspect. And then you're asking a person to pick who looks most like the suspect, and they would choose the same person. So Detroit has actually made changes to how they do this. They have some new requirements, which they're hoping prevent more false arrests. And they actually worked with the ACLU on that, because the ACLU was representing Robert Williams in his case against the city of Detroit for his false arrest.
0:30:19.9 KH: And so the ACLU is now saying, Detroit Police Department has the best rules in the country for how to use facial recognition tech responsibly. It includes disclosing to somebody that facial recognition was used as part of their arrest, not combining a search with an eyewitness lineup, and mandating that you have more evidence against somebody than just that they look like the criminal suspect. So I hope that it goes better in the future. And I think everyone's now watching Detroit and hoping that we don't hear any more stories of people being falsely arrested.
0:30:50.3 S2: The case with Williams and Woodruff and the other Detroit cases, that's partially because all Michigan residents' driver's license photos are in the database that is used for facial recognition, right?
0:31:06.2 KH: And this is interesting to me. For the book, I spent a lot of time going back in history. I went all the way back to Aristotle and what the beliefs were about what we can learn about people from their faces alone. But one of the things I looked at is basically when the FBI and states first started using facial recognition technology, and there was concern about it. People were very worried about the idea of just being tracked all the time by the government and how that could be used against you. And that has been the case for a long time.
0:31:37.3 KH: And so when the FBI first said we wanna develop a database, when police said we wanna start searching for people by face, policymakers were really worried. And so there were a lot of hearings. Congress had a big hearing on the Hill, and this is back in, I wanna say, 2010 or so. And the people testifying said, "You know what? We're just using mugshots. We're not interested in normal people going about their days. We're only interested in the criminal element of society. We're just gonna be searching mugshots." And then you fast forward a few years, and then they're like, "Well, actually, we're also gonna start putting people's driver's license photos into these databases."
0:32:16.6 KH: And so you get that kind of slow creep of, we're just gonna do it for this, but then it would be more helpful if we expanded it. And so yes, in Michigan, if you have a driver's license here, or a state ID, you are in this big facial recognition database now that gets searched by the Michigan State Police anytime they're kind of looking for somebody.
0:32:39.6 S2: I mean, even the logic around the mugshots is interesting, because you talked about the ways in which bias gets introduced, and we know there's lots of evidence of systemic bias embedded in eyewitness identification, but also, I can imagine, in the humans that are in the loop in these technologies, but also in the mugshots, right? So there's disproportionately going to be black and brown faces in the mugshots, and that also means that those are the people who are more likely to get picked up by the facial recognition technology, right?
0:33:18.6 KH: Yeah, I mean, there's a lot of issues with bias when it comes to facial recognition technology. For one thing, facial recognition technology for a long time was very biased. It was essentially trained with data of mostly the people that were working on the technology, which was white faces, and so a lot of these facial recognition algorithms just did not work as well on anyone who wasn't essentially a white male. It didn't work as well on women, it didn't work as well on people who are non-white, it doesn't work as well on children or on older people. So there are all kinds of biases. The tech companies have kind of addressed some of those biases by training with more diverse data, and so the facial recognition algorithms now, the good ones, work quite well and don't display the same kind of bias, as long as you have a good image that you're searching with, what's called a probe image. But yeah, you still have the bias of who is it searching through, and so the expansion of the databases was in part to say, "Let's fight that bias, let's not only search criminal databases."
0:34:28.3 S2: Right.
0:34:30.2 KH: "Let's look through everybody." And that's what Clearview says. They say, "Hey, we're putting everybody into the search." And that's better than only searching through a subset of people, which might not be a fair subset if it's just criminal mugshots.
0:34:45.6 S2: So, I wanna ask you something that, students and others often ask me, which is, if I'm not doing anything wrong, why should I be worried about facial recognition technology, especially as it's becoming more accurate?
0:35:00.1 KH: Yeah, I mean, I'm curious here: are people in favor of facial recognition technology for police purposes? And you don't have to put up your hands if it makes you uncomfortable. You can hum if you're in favor, if you're in favor of police using facial recognition to solve crimes. Wow, okay, so I'm in the minority.
0:35:25.2 S2: I'm just saying, this might be a biased audience, but.
0:35:28.8 KH: Yeah, I mean, most audiences I talk to, it's funny, I was on a podcast once, and this person was like, "I hate this, it's so intrusive, why are we doing this as a society?" And then at the end, she was like, "The other day, somebody stole my car out of my driveway, and the police won't use facial recognition to find him." I think a lot of people like the idea of facial recognition being used to solve crimes, if it is a reliable method and if it's being used in a responsible way. The thing is, it just never stops there. It keeps creeping out. And so, maybe you're just opposed to police use entirely; some people are comfortable with that. But then I ask, well, are you comfortable with, like, the demonstration that the Harvard students did, of just somebody on the street walking up to you and knowing who you are?
0:36:19.1 KH: Are you comfortable with the idea of you're having a private dinner, you're gossiping, or you're sharing some private, intimate information, and somebody next to you gets interested, and then snaps a photo of you, and now knows who you are, understands the context of your conversation. I mean, right now, that is possible, and we just don't have any laws or regulations against something like that. Something that happened as I was finishing my book, and I thought we were five or 10 years out from it, was that I found out that Madison Square Garden, this events venue in New York, had decided to start using facial recognition technology against its enemies, namely lawyers, who worked at firms that had lawsuits against the venue. Madison Square Garden is a huge company. They own MSG in New York. They own Radio City Music Hall.
0:37:09.5 KH: They own The Sphere in Las Vegas. And they had started using facial recognition around 2018 for security reasons, to keep out kind of rowdy fans. The Taylor Swift model, which is, she was reportedly using facial recognition to keep tabs on stalkers that were trying to get into her concerts. And so they had it in place, and James Dolan, the billionaire at the head of MSG, said, "I'm really annoyed at all these lawsuits, like slip and fall, and personal injury, and shareholder suits." And so his security people went to the websites of law firms that had cases against them, and they collected all the photos of the lawyers from the bio pages on their websites, and put them on a ban list, and told them that they're not welcome to come to the venue anymore, and so blocked them from being able to buy tickets, and they'd be turned away even if a friend bought them a ticket.
0:38:06.1 KH: And actually, I did this, not for a friend, but I wanted to see this for myself, how well does it work, this ban that they have. And so there was a personal injury law firm that was on the ban list, and I bought a ticket to a Rangers game for an attorney at the firm. There's thousands of people that are streaming into MSG. We walked through the door, we put our bags on the conveyor belt, and by the time that we picked them up, a security guard came over to us, and he walked up to her, her name was Tia, and he said, "I need to see some ID." She showed him, and he said, "You're gonna have to wait here for the security manager." And then he came over and gave her a little note, and said she's not welcome to come into the facility. I mean, this technology is very powerful.
0:38:49.0 KH: And so, yeah. So, lawyers are banned [laughter] from going there if their firm has a case against MSG, until they drop their suit. And it just shows you the kind of risk of a technology like this: it really opens up this new possible era of discrimination, where you can discriminate against people based on things that you can glean about them from the internet. So, maybe on their political views, maybe on the fact that they're a journalist or a lawyer, like, yeah, they do something that you don't like. Maybe you write a bad review of a restaurant and then you're never allowed to go in there again.
0:39:25.8 KH: And so I think that kind of possibility of new ways of discriminating is something that, yeah, even if you are not otherwise worried about your privacy, think about these individual ways in which it can get used and how scary that might be. That said, MSG owns a theater in Chicago and they cannot use facial recognition there, because Illinois has a law that says you're not allowed to use people's biometric information without consent. And I tell the history of the law in the book. But it's this good demonstration that laws can work against technology. We don't have to just give up, but we do need to intervene. We have to make choices if we wanna determine what the world looks like, as opposed to just having technology determine what the world looks like.
0:40:10.5 S2: Well, that's where I was going actually, so that's great. So, you talked a little bit about the Illinois law. What, in your mind, would a good privacy law look like? And does it exist in the world? Is Illinois a good model for us?
0:40:29.3 KH: So, Illinois has one of those privacy laws that's very specific. It's about biometric information. I do think it is a good law. There's an exemption for state use, so it doesn't outlaw kind of police use of people's biometrics. But it says that a company can't use a person's biometric information without consent. There's a private right of action, so if you violate the law, you have to pay $5,000 per person. This has been very expensive for companies. Facebook paid $650 million to settle a lawsuit there. Yeah. It's an effective way to make sure that companies aren't using your face print or your voice print without consent, which is a real concern now with generative AI; it's, like, so easy to clone somebody's voice. And so I think something like that should really be a nationwide law, or every state should pass something like that to protect people.
0:41:24.1 KH: But then there's this bigger problem of just people using our private information without consent. I've been reporting, this came up a little bit in the intro, but I've been reporting a lot on cars this year. I had this big story about how car makers now are starting to collect info. You buy your car, you pay $40,000 for it, 40,000, $40 would be great, $40,000 for it [laughter] and you turn on connected services, so that you can use a smartphone app to turn your car on from afar, so that you can navigate when you're out on the roads, but that connects you to the manufacturer. And so some manufacturers are starting to collect information about how you drive, when you're speeding, how hard do you hit the brakes, when do you rapidly accelerate, when are you driving?
0:42:11.1 KH: Are you driving at night, which is a more dangerous time to drive? Are you driving during the day? How many miles are you kind of driving? And they were starting to sell that information to data brokers, who were then selling it to the insurance industry, and it was affecting people's insurance rates. And people had no idea this was going on. It might have been arguably addressed in some privacy policies, but sometimes it wasn't. And this is kind of the world that we live in, in the United States, where there really is not protection of your private information.
0:42:42.4 KH: And it's different from the regime in Europe, for example, where information you collect for one purpose, you can't just use for whatever you want. And I think that's a major difference between the US and Europe. And we just can't seem to pass a privacy law. And this is, I feel like, every story I do just keeps coming back to this, that we just don't have a lot of protection for where our data goes. We kind of sign it away when we click okay on a privacy policy that we never read.
0:43:12.7 S2: And we have sort of a whack-a-mole strategy, right? I mean, there's the kind of state-by-state whack-a-mole strategy, where Illinois has a policy but Michigan doesn't. But earlier we were talking about how Michigan is now trying to pass something on deepfakes, I don't know if it's passed yet, and we were talking about a senator trying to focus specifically on the car data question. It seems like, compared to Europe, we're not good at looking at the problem expansively and really taking it seriously enough. Why do you think that is?
0:43:52.6 KH: Yeah. I mean, I think there's a lot of explanations. When I was doing the car reporting, I talked to Senator Edward Markey, 'cause he's like a big privacy hawk. And he was really concerned particularly about data collection in cars. And I just asked him, I mean, you've been in DC for a long time, how has Congress not managed to pass a privacy bill yet? And he said, it's the tech industry. They lobby against it. And he said that's the number one reason. I think there's also a kind of privileging of information in the United States, like we believe strongly in free speech, and we have kind of historically privileged that over privacy. Actually, with Clearview, after I wrote about what they had done, this kind of collection of everybody's data, putting it in a database, and making it searchable...
0:44:46.6 KH: And none of us had said yes to it, none of us had consented to be in that database. There were a lot of lawsuits that were filed against Clearview, particularly in Illinois, which had that law that protects people who live there. And one of Clearview's arguments was, we have a free speech, First Amendment right to do this. This is public information on the internet. We're not scraping your private Facebook account. We are just like Google; instead of making the internet searchable by name, we're making it searchable by face. This is all public information. And that was one of their defenses against what they had done. And that didn't necessarily work for them in Illinois. But everywhere else, what Clearview has done is, I mean, considered legal. And they're still operating and still working with lots of different police departments.
0:45:42.0 S2: So, well, not everywhere though, right? I mean, I think it's interesting that you briefly mentioned Europe, and I think often we think about Europe as being in many respects similar to the United States, in terms of the way we think about our economy, democracy, etcetera. But Europe has the General Data Protection Regulation, and it has had real impacts. I mean, you talk in the book about how Europeans have managed to kind of claw back their privacy a little bit. So, I'm wondering if you could talk a little bit more about that and maybe why you think in Europe they've managed to make more progress on this.
0:46:27.4 KH: Yeah. I mean, and not just Europe. So, when I reported on Clearview AI, it led to privacy regulators around the world launching investigations into Clearview, because the rest of the world has privacy regulators; we don't really have one in the US. The Federal Trade Commission is kind of our de facto privacy regulator, and the way that they regulate privacy is through a prohibition on unfair business practices. But elsewhere, so in Canada, in Australia, in many European countries, they said, what is this Clearview AI? Are our citizens in this database? And they were. And so they launched investigations, and they all declared that what Clearview had done was illegal, that you can't be processing people's biometric information without their consent.
0:47:17.3 KH: So, they issued lots of fines to Clearview, and they essentially kicked it out of their countries, like Clearview was doing trials with police departments around the world, but they had to pull out of Europe, they had to pull out of Canada, they had to pull out of Australia because what they had done was not legal in those places. So, it's possible to regulate this kind of collection and use of our faces and other countries have done it.
0:47:44.9 S2: So, you mentioned the FTC. Are they doing anything with regards to facial recognition technology?
0:47:50.7 KH: So, the major thing that the Federal Trade Commission has done with facial recognition technology is they launched a big investigation into Rite Aid, the pharmacy stores, because Rite Aid had started using facial recognition to deter shoplifting. And so if somebody came in and stole something or got into a fight with somebody, they would put their face into a database and essentially approach them or kick them out if they tried to come in again. But this was a few years ago, and they were using algorithms that didn't work that well. They were using probe images, photos of people off of, like, grainy surveillance stills that weren't that great. So, they would flag somebody at a store in Los Angeles, and then the next day kick that person out of a Philadelphia Rite Aid. So clearly not the same person.
0:48:42.3 KH: And so what they got in trouble for was basically having a terrible system that was unfairly harassing consumers. And other than that, the only other kind of facial recognition investigation I'm familiar with, that I know the FTC has done, is, kind of what I was talking about at the beginning, there was a service for storing your photos, and you could tag people in the photos, tag your friends, tag your family. And what people didn't realize is that this company that they were using for photo storage also ran, like, a facial recognition surveillance business. And they were using all of your memory photos to train the algorithm for facial recognition. And the FTC said, hey, this isn't okay, you didn't disclose this, this is an unfair and deceptive business practice.
0:49:34.9 KH: And they ordered the company to actually erase, get rid of, the algorithm they had built on this training data, since they had gotten it in this deceptive way. So, other than that, I don't know, maybe there are other investigations happening at the FTC, but I haven't heard about them, if there are.
0:49:50.8 S2: You do talk in the book, I mean, I'm interested in the fact that in the book you talk about, for example, and you were also talking earlier today about the ACLU and Fight for the Future. To what extent has civil society kind of taken up the mantle of pushing back or challenging facial recognition or trying to force attention to some of the problems? Are they sort of occupying, in some ways, some of the roles that the government should?
[laughter]
0:50:21.8 KH: Yeah. Civil society has mostly, I think, kind of coalesced around police and government use of facial recognition technology. And so the ACLU was very involved in the Robert Williams false arrest case, for example. The ACLU actually sued Clearview in Illinois for violating the Biometric Information Privacy Act, that law that they have, and that suit ended up settling, with Clearview agreeing to only sell their database to the government, and not to everybody or to private businesses. So that was kind of a win. But yeah, I mean, there have been so many campaigns. Civil society for a while was trying to get facial recognition completely banned, trying to get police not to use it at all. And they did get a few bans passed, in, like, San Francisco.
0:51:23.2 S2: Right. Local.
0:51:25.3 KH: Yeah. Oregon actually passed something saying, like, let's not use facial recognition until we're more certain that it works and that we've analyzed the kind of privacy impacts of this technology. So they were kind of successful in getting these citywide bans. But now I think there's more of, I would say, an acceptance from civil society that police are gonna use this tool, and it's very hard to avoid that outcome. And so now it's more, how do we get rules in place like they've done with the Detroit Police Department. But yeah, I mean, civil society has done a lot. The really fascinating thing to me when I started looking into the Illinois privacy law was that it had come out of work that the ACLU had been doing. Yeah.
0:52:12.1 S2: I'm gonna turn it over to the students in a second, but one of the things that is also really of interest to me, and you sort of started by talking about it, is how facial recognition is also metastasizing in a variety of places, I mean, far beyond law enforcement, right? So, there's now use of facial recognition in what they call Emotion AI, right? There's use of it at the border...
0:52:46.5 KH: To board your plane now? Yeah.
0:52:47.8 S2: Yeah. To board the plane, but also even in ways to sort of tell, I know that there have been people trying to use it to tell whether somebody is lying, and those sorts of things as well. And that's also a place where we haven't really seen much, certainly in the US. I mean, again, I think in other countries there are a little bit more efforts to try to regulate, but we've not even begun to address those sorts of things in the US, to my knowledge.
0:53:18.8 KH: Yeah. Maybe like a few piecemeal things at the state level, but no, we haven't really started yet. And I think part of the question is how viable some of those technologies are. Like, how well do they actually work?
0:53:28.7 S2: Right. And I don't know, sometimes I worry nobody's actually asking that question when they adopt it, right?
0:53:35.8 KH: Yeah. I mean, Europe is. So, Europe has already passed regulations around the use of Emotion AI, emotion recognition. So yeah, Europe seems to be a little bit of a step ahead. We're a step ahead in developing the technology, and they're a step ahead in developing the laws to regulate it.
[laughter]
0:53:55.7 S2: I wish we would adopt the laws as much as they adopt the technical dimensions. So, let's maybe hear from the students and the rest of you.
0:54:09.7 Speaker 4: All right. Thank you so much for being here and for taking the time to speak with us today. So, in both your book and the talk today, you've been able to really compellingly document many legitimate and illegitimate, legal and illegal, uses of facial recognition technology. So we're curious if you see legitimate uses of this technology in journalism, and if you've been tempted to use it in your line of work beyond the scope of this specific story.
0:54:32.0 KH: Yeah. So, I definitely get emails from other journalists saying, "Hey, I need to identify somebody, like, how do I do this?" Because it can be a really useful tool. Let's say that there's some kind of event that happened, and there are people in the photo who saw the event happen, and you're trying to interview those people, interview those witnesses. One of the more controversial journalistic uses I've seen is the BBC, a year ago, after October 7th, used it on some of the footage to try to identify the people involved in the attacks and who they were and what positions they held, which, I hadn't seen anything quite like that before. And I found it very surprising, because at this point, the BBC and the rest of us are saying, hey, this is a really controversial technology, like, I'm not sure if anybody should be using it.
0:55:31.9 KH: Within the Times, within the New York Times, if you wanna use facial recognition technology, you have to ask your editor, you have to run it by standards. So it's not something that just anyone uses. For me, I have mostly used facial recognition technology in my journalism to do demos of the technology with people's consent. Because we talked a lot about Clearview AI, but we talked about how what they built is something that is increasingly easy to do, like these tools are available to people. And so there is this public face search engine that anyone can use. The database is not as big as Clearview's, so it has something like two and a half billion photos as opposed to 40 billion photos, and they're mostly from the public internet, not from social media sites. And so I'll show that to people. So I'll say, hey, do you want me to show you your PimEyes results?
0:56:21.5 KH: The site claims that it exists so that you can search your own face and see what your online footprint is, so you can protect yourself. But if you have a subscription, which I have, it's $30 per month, you can do 25 searches a day, which is probably more times than I need to search my own face [laughter] And they have no technical measures in place to make sure that I am searching my own face, or that I'm even searching one face per day. So, yeah, for me, that's mostly what I've used it for, but I certainly see the appeal for using it in journalism. And I've heard other uses, like people doing online dating like to do a PimEyes search to make sure the person they're talking to is who they say they are. Yeah, I hear a lot of kind of compelling use cases for it, but also, as I document in the book, a lot of very disturbing use cases.
0:57:17.2 Speaker 5: Kashmir, this next question comes from a UM student. What incentives do you think there are for governments or government agents to regulate facial recognition technology when they're oftentimes the ones to take advantage of it?
0:57:33.0 KH: Well, one concern for police about facial recognition technology being generally available is that it makes it very hard to be an undercover officer. It makes it very hard to be a spy. So, while I was working on the book, the CIA put out a memo to all of its outposts around the world and said, we are seeing our informants being compromised and killed because of the use of AI technologies, including facial recognition. So with something like facial recognition, it means you can't just leave your phone at home and go meet somebody in a bar, a dive bar somewhere, because if your face is being tracked and they know who you are, they can track where you've gone. So I think there's some concern about, yeah, the threats that facial recognition poses.
0:58:22.6 KH: I think I talk about it in the book, but at one point in Oregon, there was an activist who wanted to create a facial recognition database of police officers, because while the protests were happening, police were starting to cover their badges, and he wanted to be able, if somebody was subject to abuse by police, to identify those officers so that they could be reported. So, this really is something that can go both ways. And so there may be some incentive for police to regulate it, given that threat as well. And I think just wanting to have, like, a good society might require having privacy and anonymity; that's something I think we kind of value as a society.
0:59:10.0 S2: I mean, in some ways it goes back to what Lynette was saying at the beginning. I think the way that, at least in the US, we manage facial recognition is that it's almost up to the individual to decide whether and how to use it. And so it can enhance divisiveness and sort of totally disparate kinds of choices, as opposed to building a society in a different way through policy. Here, the technology itself is kind of facilitating a different kind of social formation almost, right?
0:59:46.3 KH: One of the interesting things is, so, China is kind of farther ahead of us in terms of the deployment of facial recognition technology, and in some places, it is basically running on the cameras in real time. And so some policymakers and people in power, in order to not be seen, have created red lists, and you'll put yourself on the red list so that the cameras don't see you, don't track that you have been in a place. And so it's this awareness of how pervasive the surveillance is and not wanting it to apply to themselves. And so the privilege now is kind of, like, not to be seen by the cameras, which is, I hope, not the solution that we come up with here.
1:00:34.9 S2: So, our next question is also from a UM student. So, as we talked about, you mentioned the recent video from the students at Harvard. So, do you think the solution to this data privacy problem that's now being posed is just rooted in a change in our consumer perceptions, or do you think that there should be a substantive policy change? And if you think there should be a policy change, what would you potentially envision? And what do you think are the biggest threats of this accessible technology? So kind of multi-part question, but yes.
1:01:03.9 KH: Yeah. So it's really interesting. So, OpenAI, when they first released visual ChatGPT, where you could take a photo and it kind of tells you what's in the image, when they were first rolling it out, when they were piloting it, they partnered with Be My Eyes, which is an app for visually impaired people. Traditionally the app worked where, if you needed help seeing something, you could, like, click Be My Eyes, and it would connect you with a volunteer, a seeing volunteer, who you would then show video to, and they would tell you, oh, the red shirt you want is on the left, or the shampoo is on the left. And so OpenAI and Be My Eyes did this partnership where they had ChatGPT doing that instead.
1:01:51.6 KH: And they would take photos and it would tell you what was in the image. And so I talked to this one user, and he said, when it first came out, it would tell me information about a person's face, and sometimes it would identify the person. OpenAI hadn't intended to, but they had kind of accidentally built a facial recognition machine, because they had scraped all these photos on the internet and had learned the names, particularly of, like, famous people or high-profile people. And OpenAI decided to address this by blurring faces and just keeping ChatGPT from seeing the face, so that it can't say anything revealing about the face. And it was interesting, because the visually impaired person I was talking to said, like, this isn't fair. I want access to this. This is information that seeing people just get, and I would like to have this described to me.
1:02:44.6 KH: So, just thinking about accessibility issues around the technology. In terms of, like, what consumers want or what we as a society want, I do think there's a possible world in which we decide we want this. We want to know who people are. We don't wanna live in a world with strangers. If you meet somebody at a bar, you wanna know what their name is, and if they're really who they say they are, whether the things they're telling you about themselves are real. On the other side, that means somebody you see at a bar that you never wanna see again could know who you were and, like, find out where you live and know more about you than you want them to know. But I could imagine a world in which it shifts, and where instead of privileging anonymity, we privilege identity and knowledge and knowing who people are.
1:03:34.8 KH: So for me, it's just hard to predict which way we go. I see the appeal of that. Meta, the company that used to be Facebook, their chief technology officer said, we'd love to put this ability into glasses, so that you never go to a cocktail party and run into somebody whose name you should know and just can't remember. Like, we could supply that name to you. But they're worried about the legal and regulatory risk of putting that into the glasses. So, yeah, I just don't know which way it'll go. Like, our conception of privacy is changing all the time. There was a world, 20, 30, I don't know how many years ago now, 40 years ago...
1:04:17.2 KH: It would've been hard to imagine we would create the internet, where we all create these dossiers on ourselves with so much information that we, like, make available to everybody. But that's the way society evolves. So, maybe we evolve in that way. If we don't wanna evolve in that way, I think that we have to give people more power over their faces, so that you can't just go and scrape all these photos on the internet and make a tool that identifies people and returns all the photos of them. And I think it's very possible that we could pass a law like that. It just would require a lot of work from policymakers.
[laughter]
1:04:58.5 S2: So, I know we're a few minutes over time, but thank you so much for answering all our questions and for taking the time to speak with us today. We really appreciate it.
[applause]