Chris Calabrese: The Pros and Cons of Facial Recognition Technology for Our Civil Liberties | Gerald R. Ford School of Public Policy

Chris Calabrese: The Pros and Cons of Facial Recognition Technology for Our Civil Liberties

November 20, 2019 1:20:56
Kaltura Video

Christopher Calabrese discusses the pros and cons of facial recognition technology, how it is changing many aspects of our lives, and how policymakers should address it. November 2019.

Transcript:

have a bunch of seats down here

too.

Hi everyone, before we get

started I want to invite folks

to move in a little bit.

We had to switch rooms to accommodate the livestream, but to take advantage of the fact that we have a smaller crowd, it would be great if you want to come in a little bit.

There are a bunch of seats down

here too.

Good Afternoon everyone.

My name is Shobita; I am a professor and the director of the Science, Technology, and Public Policy program here at the Ford School of Public Policy.

STPP, as it's known, is an interdisciplinary, university-wide program dedicated to training students, conducting cutting-edge research, and informing the public and policymakers on issues at the intersection of technology, science, ethics, society, and public policy.

We have a very vibrant graduate

certificate program, and an

exciting lecture series.

Before I introduce today's

speaker I want to let you know

that next term our speakers will

explore the themes of health

activism, prescription drug patents and drug pricing, and

graduate STEM education.

Our first talk on January 22nd at

4pm is by Layne Scherer, a Ford

School alum, who is now at the 

National Academies of Science,

Engineering, and Medicine. She'll

be talking about graduate STEM 

education in the 21st century.

If you're interested in learning 

more about our events, I

encourage you to sign up for our

listserv - the sign-up sheet is

outside the auditorium. If you are

already on our listserv, please do

sign up there as well because it

gives us a sense of who's been 

able to come today.


Today's talk, "Show your Face:

the pros and cons of facial

recognition technology for our

civil liberties" is

co-sponsored for the society of

ethics society and computing,

and the science and technology

policy student group "inspire."

As part of their themed semester

on just algorithms.

Inspire is an interdisciplinary

working group run by STPP

students, but is open to all

students around the university

who are interested in science

and technology policy.

And now to today's speaker.

Mr. Chris Calabrese is the vice president for policy at the Center for Democracy and Technology.

Before joining CDT he served as legislative counsel at the American Civil Liberties Union's Washington Legislative Office.

Don't try to say that ten times

fast.

In that role, he led the office's advocacy efforts related to privacy, new technology, and identification systems. His key areas of focus included limiting location tracking by police, safeguarding electronic communications and individual users' Internet surfing habits, and regulating new surveillance technologies such as unmanned drones.

Mr. Calabrese has been a longtime advocate for privacy protections, limiting government surveillance, and regulating new technologies such as facial recognition.

This afternoon he'll speak for

about 15 minutes, giving us the

lay of the land, and then he and I will chat for about 15 minutes,

and then we'll open the floor

for questions.

Please submit your questions on

the index cards that are being

distributed now and will be

distributed throughout the talk.

Our student assistant at STPP

will circulate through the room

to collect them.

If you're watching on our

livestream you can ask questions

via the hashtag #STPPtalks.

Claire, our wonderful undergraduate assistant, and Dr. Molly Kleinman, STPP's program manager, will ask the questions.

 

I want to take the opportunity

to thank all of them, and especially Molly and Sugen, for their hard work in putting this

event together.

Please join me in welcoming

Mr. Calabrese.

[APPLAUSE]

 

CHRIS: Thanks.

Thanks to all of you for coming.

This is obviously a topic that I

care a great deal about so it's

exciting to me to see so many

people who are equally

interested.

Thank you for having me, and thank you to the Ford School for hosting.

I think these are important topics: as we incorporate more and more technology into our lives, we need to spend more time

thinking about the impact of

that technology and what we want

to do with it.

And face recognition is a really

great example.

It's powerful, it's useful, and

it's often dangerous.

Like many technologies, this is

a technology that can do so many

things.

It can find a wanted fugitive

from surveillance footage.

It can identify everybody at a

protest rally.

It can find a missing child from

social media posts.

It can allow a potential stalker

to identify an unknown woman on

the street.

This is really a technology that has the potential to impact, and is already impacting, a wide swath of our society.

That's why it's gotten so much

attention.

We saw a ban on face recognition

technology in San Francisco.

We saw a number of lawmakers

really engaged, and we -- we as

a society really need to grapple

with what we want to do with it.

So, before I get too deep into this, just a word about definitions.

I'm going to talk about

something fairly specific.

I'm going to talk about face

recognition.

Taking a measurement of someone's face -- how far apart are their eyes, how high or low are their ears, the shape of their mouth -- and using that to create an individual template, which is essentially a number that can be used to go back to another photo of that same person, do that same type of measurement, and see if there is a match.

So it's literally a tool for

identifying someone.

It can be a tool for identifying

the same person, so if I bring

my passport to the passport

authority they can say is the

person on the passport photo the

person standing in front of me,

or it can be used as a tool for

identifying someone from a

crowd.

So I can pick one of you and see -- and you know do a face

recognition match and see if I

can identify particular people

in this room based off of a

database of photos that the face

recognition system is going to

run against.

That's face recognition and

that's what we're going to talk

about.
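To make that mechanics concrete, here is a minimal, purely illustrative sketch -- not any vendor's actual algorithm -- in which a template is just a vector of facial measurements and a match means two templates are close enough. The measurements, names, and threshold are all made up:

```python
import numpy as np

def face_template(measurements):
    # Toy stand-in for a real template: a normalized vector of facial
    # measurements (eye distance, ear height, mouth shape, and so on).
    v = np.asarray(measurements, dtype=float)
    return v / np.linalg.norm(v)

def is_match(a, b, threshold=0.99):
    # One-to-one verification: are these two templates close enough?
    # The threshold here is arbitrary; real systems tune it carefully.
    return float(np.dot(a, b)) >= threshold

# One-to-one check, like a passport photo against the person at the counter.
passport = face_template([62.0, 41.5, 118.0, 54.2])
live_scan = face_template([62.3, 41.2, 117.6, 54.5])
print(is_match(passport, live_scan))

# One-to-many identification: compare one probe against a whole database.
database = {"person_a": passport,
            "person_b": face_template([70.1, 38.0, 125.3, 49.9])}
scores = {name: float(np.dot(live_scan, t)) for name, t in database.items()}
print(max(scores, key=scores.get))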

There are a few other things I

won't talk about.

One of them is something called

face identification.

And that's literally: is there a person standing in front of me?

We might use that to count the

number of people in a crowd.

We might use that to decide if

we're going to show digital signage on a billboard. That's usually less problematic.

There is another type of technology I won't talk about: face analysis, which is looking at someone's face and trying to make a determination about them. Are they lying? Are they going to be a good employee?

This technology doesn't work.

It's basically snake oil which

is part of the reason I won't

talk about it.

But you will see people trying

to sell this concept that we can

essentially take pictures of

people and learn a lot about

them.

But I can tell you that face

recognition does work.

And it's something we're seeing

increasingly deployed in a wide

variety of contexts.

So I already talked a little bit

about what exactly face

recognition is.

This sort of measurement of people's faces, turning that measurement into a discrete number I can store in a database and compare against other photos, to see if I get that same measurement and see if I've identified the person.

There are a couple of things you

need to understand if you want

to think about this technology

and how it actually works and

whether it's going to work.

The first is a concept we call binning. Binning is literally putting people in bins, putting them in groups.

And it turns out, and this is

pretty intuitive.

If I want to identify someone,

it's much easier if I know

they're one of 100 people in a

group versus one in a million.

It's a much simpler exercise.

So that's one thing to keep in mind as you hear about face recognition: think not just about the technology that's taking that measurement of your face, but about the technology that's being used to pull that database in from outside. And the size of that database is hugely important for the types of errors we see and how accurate the system is.
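A rough back-of-the-envelope sketch shows why that matters. The false match rate below is a made-up number, purely for illustration, but the scaling is the point: the same per-comparison error rate produces vastly more false hits against a million-person database than against a hundred-person bin.

```python
# Hypothetical per-comparison false match rate, for illustration only.
FALSE_MATCH_RATE = 1e-4

for database_size in (100, 10_000, 1_000_000):
    expected_false_hits = FALSE_MATCH_RATE * database_size
    print(f"searching against {database_size:>9,} people: "
          f"about {expected_false_hits:g} false matches expected per search")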

So, a little bit of history for

you.

So, face recognition has been used for a long time, even though it has really only started to be effective in the last couple of years.

If you go all the way back to 2001, they rolled out face recognition at the Super Bowl in Tampa, and they did a face recognition survey of all the people that entered the Super Bowl.

The technology wasn't ready for

prime-time.

It couldn't identify people.

It was swamped by the number of

different faces, and the

different angles the faces were

taken at.

It didn't work.

For a long time that was the

beginning and the end of the

conversation as far as I was

concerned.

Because if a technology doesn't work, why should we use it?

I have a friend who works in the

industry and we had lunch a

couple of years ago, and he said

to me, it works now.

This technology will actually

match and identify people.

And that was kind of a Rubicon

and we've seen that in the last

couple of years.

NIST has confirmed that -- the National Institute of Standards and Technology has confirmed that. Earlier this year they said massive gains in accuracy have been achieved in the last five years, and these far exceed improvements made in the prior period, which is the prior five years.

We're seeing this technology

being used more and more, and

it's more and more accurate.

And we can really understand why

that is.

We have more powerful computers,

we have better AI that does this

type of comparison, we also have

better photo databases.

If you look at the LinkedIn photo database, the Facebook photo

database, high resolution

photos, many different kinds of

photos and templates, all linked

to someone's real identity.

That's a perfect tool for

creating a face recognition

database.

So why do we care?

What's the big deal about face

recognition?

And there are a couple of things that I think advocates care about, and that I hope I can convince you to care about a little bit too.

The first thing is we have sort

of all kinds of assumptions we

make about our privacy that are

grounded in technical realities.

So we assume that while we might

go out in public, and somebody

might see us, if they know us

they might identify us.

That's where you get this idea that, well, you don't have privacy in public; you put yourself out there.

But the reality is that when

you're out in public you don't

necessarily expect to be

identified, especially by a

stranger, you don't expect to be

potentially tracked across a

series of cameras, and you don't

expect that record to be kept

indefinitely.

That is a different type of use

of the technology, and it really

sort of changes our assumptions

about what privacy looks

like, and what privacy looks

like in public.

And of course you can imagine

the impact of that if you're doing face recognition at, for example, a protest rally. You can see how suddenly I have knowledge of who may be worried

about the border, and that

allows me to take other kinds of

punitive action.

And of course it also allows me

to figure out who your friends

are, who you are walking with, those associational pieces of

information we worry about.

It also changes the rules in

other ways we don't always think

about, but I would encourage you

to.

So, we jaywalk every day.

We cross the street when we're

not supposed to.

You are breaking the law when

you jaywalk.

Everybody does it.

But what if we could enforce

jaywalking laws a hundred percent of

the time?

What if I could do a face

search, identify you and send

you a ticket every time you

jaywalked?

That would fundamentally change

how the law was enforced.

It would change how you

interacted with society.

We could do it; whether we would do it or should do it is another question, but these are laws on

the books that could be enforced

using this technology.

That's a concern. And the second, related concern is: if we don't enforce it everywhere, and we start to enforce it in a selective way, what bias does that introduce into the system? You can think about that for a minute.

In the private sector we also

see a lot of changing relationships, and that's -- I

already raised the stalker

example.

There is off-the-shelf technology -- Amazon Rekognition is one of the most well-known -- that you can purchase and use to run your own set of databases, and we've already noted there are a lot of public databases of photos and identities. You can take those, run them against your own off-the-shelf face recognition software, and identify people.

And so there is a -- suddenly

that stalker can identify you.

Suddenly that marketer can

identify you.

Suddenly that photo, that embarrassing photo of you that nobody knows is you -- suddenly you can be identified.

And if you're in a compromising

position, or if you were

drunk -- there are a lot of photos out there of all of us.

Potentially that reveals

information that can embarrass

you.

The next sort of -- the other

reason we might worry about this

is that mistakes happen.

This is a technology that's --

it's far from perfect.

And in fact has a great deal of

racial bias in it.

Because when you create a face recognition system -- we can get into this maybe in the Q and A -- you're essentially training the system to recognize faces.

So if you only put the faces in

the system that you get from

Silicon Valley, you may end up

with a lot of white faces.

A lot of faces not

representative of the broader

population.

And as a result, your facial

recognition algorithm isn't

going to do as good of a job of

recognizing non-white faces.

And literally the error rate

will be higher.

So this is sort of a bias

problem, but it's also -- just a

broader mistake problem.

As the technology gets used more

broadly people will rely on it,

and they will be less likely to

believe that in fact the machine

made a mistake.

People tend to trust the

technology.

And that can be problematic.

Ultimately, I would just sort of

give you this construct just to

sort of sit with, this idea of

social control.

The more that someone knows

about you, the more they can

affect your decisions.

If they know where -- if they

know you went to an abortion

clinic.

If they know you went to a gun

show.

If they know you went to church,

none of those things is illegal in and of itself, but someone, especially if it's the government taking this action, may make decisions about you.

I'll give you an example that's

not facial recognition related

but I think is instructive.

So when I was at the ACLU we had

a series of clients who

protested at the border in San

Diego.

The border wall runs right

through San Diego.

And so they all parked their

cars at the border, and they

went and had this protest.

And then, you know as they came

out of the protest they found

people they didn't recognize

writing down their license

plate.

And those -- they didn't know

who that was.

Many of those people found

themselves being harassed

when they were crossing the

border.

These were unsurprisingly people

who went back and forth a lot

and found themselves being more

likely to be pulled into

secondary screening and facing more

intrusive questions.

And they believed -- this was something we were never able to prove, but I feel confident it was because of this type of data collection -- that they were identified as people who deserved further scrutiny.

That's what happens as you

deploy these technologies.

You create information that can potentially be used to affect your rights in a variety of ways.

And face recognition is a really

powerful way to do that.

So what should we do?

What should we do about this?

There are some people who say we

should ban this technology.

Face recognition has no place in

our society.

Well, that's a fair argument, but I think it does discount the potential benefits of facial recognition.

I was at Heathrow airport -- maybe it was Gatwick; I was in London -- and I was jet-lagged off a red-eye; it was like 6:00 a.m.

I kind of walked up, went through the checkpoint, looked up at the camera, literally just went like this, and kept walking, and I realized 30 seconds later I had just cleared customs.

That was face recognition and it

had sort of completely

eliminated the need for them to

do a customs check.

Now, maybe it's not worth it,

but that's a real benefit.

If you've ever stood in one of

those lines you're saying gosh

that sounds great.

And that's a relatively trivial

example compared to someone who

say has lost a child but thinks

maybe that child has been

abducted by someone they know,

which is unfortunately

frequently the case.

You can imagine going back to

that binning, maybe there is a

photo that might help somewhere

in your social network.

If you could do facial

recognition on the people in

your social network you might

find that child.

These are real benefits.

So we have to think about what

we want to do whenever we talk

about banning a technology.

So the close cousin of the ban -- and this is one I think is maybe more effective or useful in this context -- is the moratorium.

And that's this idea that we

should flip the presumption.

You should not be able to use

facial recognition unless there are rules around it, rules that govern it.

That's a really effective idea

because it forces the people who

want to use it to explain what

they're going to use it for,

what controls will be in place,

and why they should be authorized to use this

powerful technology.

If we did have a moratorium, and

want to regulate the technology,

what would this regulation look

like?

And by the way, this

regulation could happen at the

federal level, and it could

happen at the state level.

There is already at least one

state, the state of Illinois

that has very powerful controls

on biometrics for commercial

use.

You cannot collect a biometric

record in Illinois without

consent.

These are laws that are

possible.

There is no federal equivalent

to that.

So how would we think about this?

I think the first thing,

especially in the commercial

context, is to think about consent.

If you can say it's illegal to

create a face print of my face

for this service without my

consent, that gives me the power back over that technology.

Right?

I'm the one who decides whether

I'm part of a face recognition

system and what it looks like.

That can be a hard line to draw

because it's so easy to create

this kind of face template from

a photo without your permission.

But it's a start, and it means that responsible people who deploy face recognition technology will require consent.

And then after consent is

obtained you probably want

transparency, you want people to

know when facial recognition is

being used.

So that's the broad idea.

We can talk more about this in the Q and A; that's the consent side, the private side.

The government side is a little bit

more tricky.

I think from a government point

of view, government is going to

do things sometimes without your

consent.

That's a fundamental reality for

law enforcement, for example.

So what do we do?

And I think in the government

context, we fall back on some

time-honored traditions that we

find in the U.S. constitution,

and that's the concept of

probable cause.

So, probable cause is this idea -- and this is embedded in the Fourth Amendment of the Constitution -- that the government should be able to search for something if it is more likely than not that they will find evidence of a crime.

And, in order to establish that probable cause, they frequently have to go to a judge and say, hey, I have reason to believe that going into this person's house will uncover drugs -- and here's the evidence that they're a drug dealer -- and then I can search their house.

We can deploy the same idea with face recognition. We could say that you could only search for somebody -- remember that wanted fugitive I said I could go look for in surveillance footage and maybe find.

You would maybe need to go to a judge and say, your honor, we have probable cause to say this person has committed this crime; they're likely to be somewhere in this series of footage, and we believe we can arrest him if we find him.

The judge can sign off on that and vet that evidence.

And then the technology can be

deployed.

Similarly, there are exigent

circumstances, and we have this

in the law right now.

So if I think there is an

emergency.

Say I have a situation where someone has been abducted, I believe they're still on, say, the London metro, which is blanketed with surveillance cameras, and I believe that child's life is in danger. There is a concept in the law called exigency, which is the idea that there is an emergency, I can prove there is an emergency, and I need to deploy the technology.

And we can build those kinds of

concepts into the law.

So I'm going into a lot of

detail on this.

Mostly because I think it's

worth understanding that these

are not binary choices.

It is not: flip on face recognition and we're all identified all the time. I'm sure many of you are old enough to remember Minority Report, the movie, which used a lot of biometric scanning throughout -- everybody was scanned, there was face recognition happening all the time, and advertisements were being shown to them constantly.

We don't have to live in that

world.

But we also don't have to say

we're never going to get any of

the benefit of the technology,

and we're not going to see it

used for all kinds of purposes

that may in fact make our lives more convenient or safer.

So with that sort of brief

overview I will stop and so we

can chat and take questions, and

go from there.

[APPLAUSE]

 

I'm very -- I've been

thinking about this issue a lot

and I'm very interested in it,

and I think I tend to agree with

you in lots of ways but I'm

going to try my best to

occasionally play devil's

advocate, as my students know I

try to do that.

Sometimes I'm more successful

than others.

First, I would be interested in

your talking a little bit more

about the accuracy issue.

So, you said it's evolved over

time, it's more accurate than it

used to be.

Now NIST says it's accurate.

First of all, what does that

mean, and how is NIST determining that? And yeah, why don't we start

there?

CHRIS: Sure, it's a wonderful

place to start.

Accuracy varies widely depending

on how you're deploying the

technology.

It depends -- so just to give

you an example.

If I am walking up to a well-lit customs office, even if it's not a one-to-one match -- if it's well lit and I'm looking at the camera -- you're much more likely to get a good faceprint, and one that's accurate.

Especially if you have a database backing up that search that may have three, four, or more photos at different angles. That's a very optimal environment to do a faceprint.

And you're much more likely to

get an accurate identification.

Especially if, as I mentioned before, you have a relatively

narrow pool of people you're

doing the search against.

The reverse is true, obviously

if you have a side photo of

somebody that you only have a

couple of photos of, and the

photo quality may not be

particularly good, you can see how the accuracy is going to go up and down depending

on what the environment is.

And so part of the trick here, part of the thing we have to expect from policymakers, is to vet these kinds of deployments. How are you using it? What is your expectation once you find a match? How much accuracy are you going to attribute to it?

What's going to be your

procedure for independently

verifying that this person you

just essentially identified as a

perpetrator of a crime actually

committed that crime.

It can't just be that the facial recognition match is the beginning and the end of it. As for how NIST determines accuracy, they do exactly what you would expect them to do.

They have their own photo sets,

they will take the variety of

algorithms that exist and they

will run those algorithms

against their own data sets and

see how good a job they do.

See how accurate they are in a

variety of different contexts.
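The shape of that kind of benchmarking is simple to sketch. This is not NIST's actual protocol, just an illustration of the two error rates such evaluations report -- misses on pairs that are the same person, and false hits on pairs that are not:

```python
def evaluate(matcher, labeled_pairs):
    # labeled_pairs: (photo_a, photo_b, same_person) tuples with known truth.
    # Returns (false non-match rate, false match rate) for the matcher.
    misses = wrong_hits = same = different = 0
    for photo_a, photo_b, same_person in labeled_pairs:
        predicted = matcher(photo_a, photo_b)
        if same_person:
            same += 1
            misses += (not predicted)
        else:
            different += 1
            wrong_hits += predicted
    return misses / max(same, 1), wrong_hits / max(different, 1)

# Usage with any matcher that returns True/False for a pair of photos:
# fnmr, fmr = evaluate(my_matcher, test_pairs)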

And this, I think it bears

putting a fine point on.

The accuracy doesn't just differ

depending whether you're

straight on or on the side,

right?

One of the big issues with

accuracy is that it's different

for -- it's most accurate among

white men, and then it degrades

in accuracy, right?

CHRIS: Thank you, and I

should have said that first.

That's the most important thing.

We are seeing a lot of racial

disparity, mostly because of the

training set data but I don't

know if we know enough to know

if it's 100% the training set

data or not.

Or there may be other areas of

machine learning that are also

impacting it.

But we are seeing a tremendous variation. It's problematic not just because of the identification issues but because, as you and I were talking about earlier today:

If you are not identified as a

person at all, right, because

the system does not recognize

you, that has all kinds of other

potential negative consequences

for automated systems, so it's a

very big deal.

It's also worth saying -- you know, I worry a little bit that people are going to say, well, once we fix that accuracy problem then it's okay.

And I hope I've tried to

convince you a little bit that

the problem doesn't end even if

the system isn't racially

biased.

That's the minimum level we need

to get over before we can begin

to talk about how we deploy it.

So linking to that -- and maybe you mentioned a few of these -- cases of potentially, I'll put it in my language, sort of new forms of social control, or reinforcing existing forms of social control. I think some of you in the audience may have heard about this, but I think it bears mentioning in this

context: about a month ago, news broke that a contractor working for Google -- you probably know who it was -- was caught trying to improve the accuracy of their facial recognition algorithm for the Pixel 4 phone by going to Atlanta, where there is of course a large African American population, and asking African American men, many of them homeless, to play with a phone and to play a selfie game. So they did not consent, but their faces were scanned.

So that keeps ringing in my head

whenever I'm thinking about this

stuff.

And I think what's interesting

to me about it and I wanted to

get your sense of this.

What is interesting to me about this, and it ties to what you were talking about in terms of social control, is that the act of supposedly increasing the accuracy -- supposedly to serve, at least arguably, African American populations -- actually ultimately serves to reinforce existing power dynamics and the discrimination that African Americans have historically experienced.

And so I'm wondering in this --

in pursuit of the goal of

accuracy, and the pursuit of

this wonderful technology that's

going to save our lives, you

know these kinds of things are

happening too.

CHRIS: Well that is the funny

thing about rights.

It's everybody that needs to

have their rights respected.

Everybody deserves equal rights, but the reality is those are the kinds of

communities that really need

their rights respected.

And they need a consent

framework.

They are the most likely to have

their images taken, because they have less

power and ability to say I am

not going to consent to this.

Maybe less knowledge -- so

really when we're creating these

rights, part of what we're doing is addressing power imbalances,

where I have more power and you

may have less, and hence it's

even more important I have this

ability to actually exercise my

rights and know what they are.

And another piece of this which

I didn't mention in my talk,

is -- there is a number of

already unfair systems that

facial recognition might be

built on top of.

The most -- I think one of the

most illustrative examples is the

terrorist watch list.

So there is a list in the United States -- an ever-changing list maintained by part of the FBI -- where you can be identified as a potential terrorist.

There is a master list that feeds

into a wide variety of different

parts of the federal government

and it affects things like

whether you get secondary

screening at the airport.

And in rare cases even whether

you're allowed to fly at all.

And this is a secret list: you don't know when you're on it, and it's hard to know how to get off it.

And the incentives are very bad.

Because if I'm an FBI agent and I'm on the fence about whether to put you in a database: if I put you in the database, no harm no foul. If I don't put you in the database and you do something bad, my career is over. So there are a lot of incentives to put people on lists.

Well you can imagine putting

somebody on a list and combining

that with the power of face

recognition creates an even

greater imbalance because now I

have a secret list and I've got

a way to track you across

society.

So that's an existing unfairness

that has nothing to do with face

recognition but face recognition

can exacerbate.

So how would a consent framework work in that context, given that we're already in a society where our faces are being captured all the time?

So how would you envision --

CHRIS: So, what you would consent to, in a very technical way, is turning your face into a faceprint.

You would consent to creating

that piece of personal

information about you.

Literally the way your Social

Security number is a number

about you.

This would be a number that

encapsulates what your face looks

like.

That would be the point at which

you would have to consent.

And I think we might have to do

stuff around a lot of existing

facial recognition databases,

maybe saying those databases need to be re-upped. But the reality is, if you can catch it there, then at least you're taking the good actors and

saying it's not okay to take

somebody's face print without

their permission.

And then, again, as we said the

government is a little bit

different, and of course this is not a magic wand: fixing the problems

with face recognition doesn't

fix the other problems with

society and how we use these

technologies.

So you mentioned going

through customs or going through

European immigration, and the

ease of facial recognition

there, and the excitement of

convenience, and you said maybe

that's an acceptable use of it.

And I guess when you said that, I was thinking, well, I'm not sure if it's an acceptable use of it, because I worry a little bit about the fact that it normalizes the technology.

That then people start wondering why it's a problem in other domains. Look, it worked when you went through immigration; why would there be a problem for us to use it for, you know, crime-fighting, or education -- in schools -- or hiring, or, you know, sort of...

CHRIS: You know it's always a

balance.

When I'm considering some of

these new technologies I tend to

think about people's real-world

expectations and I think in the

context of a border stop you

expect to be identified.

You expect that a photo is going

to be looked at, and that

somebody is going to make sure

Chris Calabrese is Chris

Calabrese.

So that to me feels like a

comfortable use of the

technology because it's not

really violating anybody's

idea of what task is going to be

performed.

So, for a while, and they don't

do it this way anymore, but a

less intuitive example of this, one that I thought was okay, and this was a little bit controversial, was that Facebook

would do a face template, and

that's how they recommend

friends to you.

You know when you get a tagged

photo, and they say is this

Chris Calabrese, your friend?

That's facial recognition.

For a long time they would only

recommend people if you were

already friends with them.

So the assumption was you would be able to recognize your friend

in real life, so it was okay to

recommend them and tag them.

That's a little bit

controversial.

You're not getting explicit

consent to do that but maybe it

feels okay because it doesn't feel like it violates a norm.

They now have a consent-based

framework where you have to opt

in.

For a while they had that hybrid

approach.

I think it's helpful to map in

the real world.

I do think you have issues where you're potentially normalizing it, and another area I didn't bring up, one that is going to be controversial, is facial identification in employment.

Obviously, we know that consent in the employment context is a fraught concept; often you consent because you want to have a job. But you really do have the potential there to have that technology -- well, we're not going to do the punch cards anymore.

We're going to do a face

recognition scan to check you

in.

But then of course that same

facial recognition technology

could be used to make sure that

you are cleaning hotel rooms

fast enough.

Or to track your movements across your day and see how much time you spend in the bathroom.

These tools can quickly escalate, especially in the employment context, which can be pretty coercive.

So yes, there is a lot to this

idea we want to set norms for

how we use the technology.

Because the creep can happen

pretty fast, and be pretty

violative of your privacy and

rights.

So I've been asking questions

that are pretty critical, but I

feel like I should ask the

question that my mother would

probably ask.

So my mother would say: I live a very pure, good life. I live on the straight and narrow. If I'm not guilty of anything, if I'm not doing anything strange, if I'm not protesting at the border, why should I be worried about this technology? Or why should I care?

It's fine, and it actually

protects me from kidnapping and

other things, and I'm getting

older, and you know this is a

great public safety technology.

CHRIS: Yes, the old "if I have done nothing wrong, what do I have to hide."

So, I mean I think the obvious

first answer is the mistake

answer.

Just because it isn't you doesn't mean that somebody may not think it's you when the technology is deployed -- especially if you're part of a population that the system may not actually work as well on.

So that's one piece of it.

I also think you have to ask: who are you hiding from?

Maybe you're comfortable with

the government but are you

really comfortable with the

creepy guy down the street who

can now figure out who you are,

and maybe from there like where

you live?

That's legal in the United

States right now.

And it seems like the kind of

technology use that we really

worry about.

You know -- activists, and I

think this isn't something I --

this isn't something CDT did but

there were activists from Fight for the Future.

They put on big white

decontamination suits and they

taped a camera to their forehead

and they just stood in the

halls of Congress, and took

face recognition scans all day.

And they identified a member of

Congress.

They were looking for lobbyists

for Amazon because they were

using Amazon face recognition

technology.

It was an interesting

illustration of this idea of you

are giving a lot of power to

strangers to know who you are,

and potentially use that for all

kinds of things you don't have

control over.

So we take for granted I think a

lot of our functional anonymity

in this country, and the reality

is that facial recognition

unchecked will do a good job of

stripping away that functional

anonymity, and some people are

always going to say it's fine.

But I think at least what I

would say to them is you don't

have to lose the benefit of this

technology in order to still

have some rights to control how

it's used.

There are ways that we have done

this in the past, and gotten the

benefit of these technologies

without all of these harms.

So why are you so quick to just

give up and let somebody use

these technologies in harmful

ways when you don't have to?

So how would you -- I think

in our earlier conversation this

morning you may have mentioned

this briefly, but I'm wondering, when you think about governance frameworks, how you think about what the criteria might be

to decide what is a problematic

technology, and what is not.

Is that the way to think about

it, or is it -- are there other

criteria?

What kind of expertise?

Who should be making these kinds

of decisions.

Is there a role, for example,

for academic work, or research,

more generally in terms of

assessing the ethical, social

dimensions, and on what parameters, I guess.

CHRIS: So I guess we would

say we would want to start with

having a process for getting

public input into how we're

deploying these technologies.

The ACLU -- and CDT has helped a little bit -- has been running

a pretty effective campaign

trying to essentially get cities

and towns to pass laws that say

any time you're going to deploy

a new surveillance technology,

you have to bring it before the

city council.

It has to get vetted.

We have to understand how it's

going to be used so we can make

decisions about whether this is

the right tool.

So just creating a trigger

mechanism where we're going to

have a conversation first.

Because it may sound strange to

say this but that doesn't

actually happen.

Oftentimes what happens is a

local Police Department gets a

grant from the department of

justice or DHS, and they use

that grant to buy a drone, and

then they get that drone, and

then they might get trained by

DHS, and they fly that drone.

They haven't appropriated any

money from the city.

They haven't put that in front

of the city council, and they

start to use it.

It comes out and city council is

upset, or sometimes the police

draw it back, and sometimes they

don't, but just having that

public conversation is a really

useful sort of mechanism for

controlling some of that

technology.

So I would say that's a

beginning.

Obviously, you know state

lawmakers can play a really

important role.

Federal lawmakers should be

playing a role but we're not

passing as many laws in DC --

we're not doing as much

governing in DC as maybe people

would like.

Without being too pejorative, we are a little bit at loggerheads in terms of partisanship, and that makes it hard to pass things federally.

That's the wonder of the federalist system: there are a lot of other places you can go.

Academic researchers are important; my answer to many of these technologies -- this one specifically -- was that it doesn't work.

So if it doesn't work, and if an

academic can say this technology

doesn't work or these are the

limits, that's a tremendously powerful piece of

information, but it's really

hard for your ordinary citizen

to separate out the snake oil

from truly powerful and

innovative technologies.

And I think technologists and academics play an important role as a vetting mechanism, saying yes or no to a policymaker who wants to know: is what they're saying true? That kind of independent third party is really important.

So I don't know how much you

know about this, but facial

recognition has been

particularly controversial in

Michigan.

So for two years, over two

years, Detroit was using facial

recognition, something called

Project Green Light, without any

of the kinds of transparency

that you're recommending and

talking about.

It came to light with the help

of activists, and so now the

city -- they've sort of said

okay, fine, and it was sort of

being used indiscriminately as

far as we can tell, and more

recently the Mayor came out and

said okay, we promise we'll only

use it in -- for very narrow

criminal justice uses, but of

course, again, in a majority African American city, one in which there is not great trust between the citizens and the government, in Detroit, that kind of falls on deaf ears.

And even though they're now using it, my sense is that one of the things that's missing is transparency in understanding how the technology works -- where is the data coming from?

How is the technology used?

What kinds of algorithms?

There is no independent

assessment of any of this.

So I'm wondering if you know

anything about this or if you

have recommendations on how --

in those kinds of settings, how

you might try to influence that

kind of decision-making.

Because often these are

proprietary algorithms that

these Police Departments are

buying and they're not even

asking the right questions

necessarily, right?

CHRIS: They're not.

And so I think it's a really

compelling case study, because

you're right, the reality is, gosh, it's hard to trust a system that hasn't bothered to be transparent or truthful with us for years, gets caught, and says oh, sorry, and then, kind of, we'll put protections in place.

So that's not an environment for

building trust in a technology.

It doesn't say you know citizens

and government are partners in

trying to do this right.

It says what can we get away

with.

So, yes, in no particular order: clearly there should be transparency about who the vendor is and what the accuracy ratings for those products are;

without revealing anything

proprietary you should be able

to answer the question of how

accurate your algorithm is in a

test.

NIST tests these products; go Google NIST face recognition tests and you can read the results for all the algorithms.

This isn't secret stuff.

You should know when it's being

deployed.

You should be able to understand

how often a search is run, what

was the factual predicate that

led to that search.

What was the result of that

search, did it identify someone?

Was that identification

accurate?

These are kind of fundamental

questions that don't reveal

secret information, they are

necessary transparency, and we

see them in lots of other

contexts. If you're a law enforcement officer or the Department of Justice and you want to get access and read somebody's email in an emergency context, you say it's an emergency, I can't wait to get that warrant, I have to get this. You have to file a report afterward.

I won't bore you with the code

section.

It's a legal requirement.

I have to report why this is happening and what the basis for it is.

So these kinds of basic transparency mechanisms are things that we have for other technologies, and we kind of have to reinvent them every time we

have a new technology.

Like the problems do not change.

Many of the same concerns exist,

it's just that the law is often written

for a particular technology and

so when we have a new technology

we have to go back and reinvent

some of these protections and

make sure they're broad enough

to cover these new technologies.

 

It's also -- so, in my field we would call this a sociotechnical system.

I would think about one of the things you didn't say but that I would want; I was thinking about previous technologies.

There was a recent lengthy investigative article in the "New York Times" about breathalyzers, and in that

article they talked about how

there is both the calibration of

the device, and ensuring the

device remains appropriately calibrated, but also there is interpretation, and it's a human-material system.

And in this case, there may be a

match, it's a percentage match.

It's not -- you have humans in

the system who are doing a lot

of the interpretive work who

also need to be trained, and we

also don't have transparency

about that either, do we?

CHRIS: No, we don't.

And that's an incredibly

important part of the training

of any system is understanding

what you're going to do with a

potential match when you find

it.

So, I'll give you this example,

we talked about it earlier.

So probably -- I don't know if

they still do it this way.

But this wasn't that long ago.

Maybe ten years ago I went to

the big facility in West

Virginia that handles all of the

FBI's computer systems.

Right?

The NCTC -- excuse me -- the system that, when you get stopped for a traffic violation, they check your driver's license against before they get out of the car to make sure you're not a wanted fugitive -- it's all there.

And one of the things they do in

that facility is they do all the

fingerprint matches.

So if I have a criminal -- if I

get a print at a crime scene and

I want to go see if it matches against the FBI database, this is where I send it.

So you know what happens when they do a fingerprint match -- at least ten years ago, but this is still the technology that's been deployed for 150 years?

There is a big room, it's ten

times the size of this room.

It's filled with people sitting

at desks with two monitors.

And on this monitor is a fingerprint, and on that monitor are the five or six potential matches, and a human being goes through to see if the whorls of your fingerprint actually match the right print.

So if you think about it, that's

a technology that is 100 years

old, and we are still having

people make sure it's right.

So, that is the kind of -- just

to give you the air gap between

what automation can do, and what

the system can do, imagine now

how are we going to handle this

protocol when I have a photo of

my suspect and then I've got six

photos of people who look an

awful lot like this person.

How am I going to decide which

is the right one.

And maybe the answer is you can't, definitively; you have to investigate those six people. And the reality is, with face recognition it's kicking out not one match but a list of possible matches. There are real limitations to the tool.

It is getting better so I don't

want to overstate those

limitations, especially if there

are other things you're doing

like narrowing the photos you're

running against.

There are systems that will have

to be built on top of the

technology itself to make sure

that we're optimizing

both the result and the

protections.
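In practice, a search like that returns a ranked shortlist rather than a single answer. A minimal sketch of that step, with made-up names, scores, and cutoff, might look like this; everything below the floor is dropped, and a human examiner still has to adjudicate what is left:

```python
def shortlist(gallery_scores, k=6, floor=0.80):
    # Keep up to k gallery entries whose similarity to the probe photo
    # clears an (illustrative) floor, best first.
    ranked = sorted(gallery_scores.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, score) for name, score in ranked if score >= floor][:k]

print(shortlist({"candidate_17": 0.91, "candidate_42": 0.88,
                 "candidate_03": 0.86, "candidate_88": 0.79,
                 "candidate_61": 0.85}))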

 

So we've been doing a

research project around our new

technology assessment clinic,

and one of the things we've been -- what we've noticed in

our initial analysis of the sort

of political economy of this is

it is a global industry.

And I'm wondering: what are the

legal frameworks that are

evolving, what are the global

dimensions of its use, and how

are those interfacing with the

legal frameworks, and does that

have any implication for the way

we think about that in the U.S.?

CHRIS: It has huge

implications.

There are a couple of things to

think about globally.

Most developed countries have a baseline privacy law.

There is a comprehensive

baseline privacy law that

regulates the collection and

sharing of personal information.

So if you were in the UK, for

example, there would be rules

for who could collect your

personal information and what

they could do with it and

getting permission for it.

And those rules -- by and large

I believe do apply to face

recognition.

I think there may be some nuance

there, but I think the

expectation for the people in

those countries is that facial

recognition will be covered and

what the impact of that will be.

So that's important because it

goes back to that idea I

mentioned before about do we

start with justification why

you're going to use the

technology or do we start with

go ahead and use the technology

unless you can prove there is a

reason not to.

And I think we want to be more

in the don't use the technology

unless you have a good reason.

But what -- equally interesting

at least to me is that this

technology is becoming as it

becomes -- diffuses and becomes

more global and there is a

number of countries that are

leaders in facial recognition

technology, Israel is one.

You may have a harder time

controlling it.

If I can go online, go to a --

an Israeli company, download

face recognition software,

scrape the LinkedIn database without your permission, and create a database of a million

people that I can then use for

identification purposes, that is

really hard to regulate.

It may be illegal, eventually in

the United States, but from a

regulatory point of view it's a

real enforcement nightmare to

figure out when that -- how that

system was created and how it

might be used.

So this globalization issue is a

real problem, because a US-based

company may not do that, but

then certainly there are going

to be places off-shore where you

might be able to use that.

And it may be less of a problem.

You see there are lot of places

you can illegally torrent

content.

There are lots of people who do

that, there are lot of people

who don't because they don't

want to do something that's

illegal they don't want to get a

computer virus.

Not to overstate the problem, but it is a concern with the Internet, and with the diffusion of technology across the world.

It often can be hard to regulate

it.

And it's also being used -- in Israel, but I know in China, right -- for a variety of different kinds of crowd control and disciplining contexts.

CHRIS: I'm always a little

bit careful with China because

China is the bogeyman who allows us to feel better about ourselves. Like, we're not China, so don't make China the example of what

you know you're not.

China is a good example of how

you can use this technology.

They're using it to identify racial minorities, and in many cases to put those racial minorities in concentration camps, or at least separate them from the general population.

These are incredibly coercive

uses of the technology, China is

becoming famous for its

social-credit scoring system

which, you know, I think is not yet as

pervasive as it may be someday.

But it's being used to

essentially identify you and

make decisions about whether you

should -- whether you're a good

person and should be allowed to

take a long-distance train.

Whether you should be able to

qualify for particular financial

tools.

And so, again, tools for social

control.

If I can identify you, I know

where you are and can make a

decision about where you should

be allowed to go.

And this again is part of what you called a sociotechnical sort of

system that allows you to sort

of use technology to achieve

other ends.

And it's at least perhaps a warning for us, right?

CHRIS: It is a cautionary

tale but we have our own ways

that we use this technology.

Don't think that just because

we're not quite as bad as China, we

cannot be better in how we

deploy these technologies.

Maybe we'll start by asking

some questions from the

audience?

Do citizens have any recourse

when facial recognition

technology is used without their

permission?

CHRIS: If you're in Illinois

you do.

[LAUGHTER]

Illinois has a very strong law; it has a private right of action, so you can actually sue someone for

taking your faceprint without

your permission.

And it's the basis for a number

of lawsuits against big tech

companies for doing exactly this

kind of thing.

I believe it's also illegal in Texas.

There is not a private right of

action so you hear less about

it.

I'm trying to think if there is

any other -- I mean the honest

answer is probably no.

In most of the country, but --

you know, if we were feeling kind of crazy, there are federal agencies that arguably could reach this: the Federal Trade Commission has unfair and deceptive trade practices authority, so they could decide taking a faceprint is unfair.

They could potentially reach

into that.

It's not something they've

pursued before and it would be a

stretch from their current

jurisprudence.

Another audience member asked

what led to the Illinois rule of

consent and what is the roadmap

for getting new rules in?

CHRIS: Well it's interesting

in many ways Illinois happened

really early in this debate.

The Illinois law is not a new

one.

It's at least 7 or 8 years old.

So, in a lot of cases, I think

what happened was the Illinois

legislature was sort of

prescient in getting ahead of

this technology before there

were tech companies lobbying

against it.

Before it became embedded, and

they just -- they said you can't

do this.

And for a long time the only

people really upset were, like, gyms, because you couldn't take people's fingerprints at the

gym without getting -- going

through more of a process.

And so that in some ways is how we've had some success with regulating new technologies: to sort of get at them before they become really entrenched.

We're kind of past that now, but

as we see a broader push on commercial privacy, we're seeing a real focus on facial recognition.

People are particularly

concerned about facial

recognition.

We're seeing it in the debate

over privacy in Washington

state.

It's come up a number of times

in California both at the

municipal level and state level.

I think some of the other sort

of state privacy laws that have

been proposed include facial

recognition bans.

So I guess I would say it's something that is ripe to be regulated, certainly at the state

level.

And we have seen a federal bill

that was fairly limited but did

have some limits on how you

could use facial recognition

that was bipartisan, and it was

introduced by Senators Coons and Lee. But I would say right now the

state level is the most fertile

place.

Beyond policy advocacy, what actions can individuals take to slow the growth of this by companies or the government?

CHRIS: So it's interesting.

There are things you can do.

You could put extensive make-up

on your face to distort the faceprint image.

There are privacy self-help

things you could do.

By and large, as a society we tend to look askance at somebody who covers their face. That's a thing that maybe we aren't comfortable with.

But, maybe we could be

comfortable with it.

I mean, this is certainly an environment -- you're in an academic setting, a place where you could be a little different without, you know, suffering for it. If I put checks on my face and go to work tomorrow -- well, I'm the boss, actually, so I can just do that. But if I wasn't the boss, people might look askance at me for doing that.

But here you could probably do

it and if people said why does

your face look like that maybe

you could explain.

We have face recognition

deployed in our cities and

that's wrong, and this is my

response.

And maybe that is sort of a

little bit of citizen activism

that can help us kind of push

the issue forward.

But you know you can try to stay

out of the broader databases

that fuel face recognition, so,

if you don't feel comfortable, don't have a Facebook profile or a LinkedIn profile -- anything that links a good high-quality photo of you to your real identity is going to make face recognition much easier.

Obviously it's harder to do if

you can't stay out of the DMV

database, and that's one that

police are pulling from.

So that's harder to escape.

What are the ethical and

technical implications of the

increased use of facial

recognition for intelligence and

military targeting purposes?

CHRIS: Oh, that was a hard one. I mean they're similar to the ones we've laid out, except the stakes are just higher.

We're identifying people for the

purposes of potentially

targeting them for you know --

for an attack.

And we've seen drone strikes for at least the last 7 or 8 years. You can imagine a face recognition enabled drone strike being particularly problematic, not just because drone strikes are really problematic, but because it goes back to the whole argument about unfair systems and layering face recognition on top of them. You have a greater potential for error.

But to be fair, and I'm loath to be fair here because I think drone strikes are just unjust for so many reasons, you could argue that it actually makes it more likely that I'm not going to target the wrong person. That it is in fact another safeguard that you could put in place. That is as charitable as I can be to drone strikes.

Now this audience member

wants to know what can we do

when biometrics fail.

So your facial measurements

change as you age.

So what are the implications for facial recognition's validity and reliability over time?

CHRIS: So there is a big

impact certainly for children.

As you grow up your face print changes substantially. The prints become more stable as you grow older into adulthood. There is an impact, but if you have enough images and a robust template, the aging process has been shown to have less of an impact on accuracy. That has to do with how many images you're using to create the original template.
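As a rough sketch of that point -- assuming a hypothetical embed() function that turns a photo into a fixed-length feature vector, which in real systems is a trained neural network -- a more stable template can be built by averaging several enrollment photos and comparing new images against that average:

    import numpy as np

    def build_template(images, embed):
        """Average the embeddings of several enrollment photos into one template.

        More enrollment images tend to smooth out pose, lighting, and gradual
        aging effects, which is why a richer template ages better.
        """
        vectors = np.stack([embed(img) for img in images])
        template = vectors.mean(axis=0)
        return template / np.linalg.norm(template)  # unit-normalize

    def matches(template, probe_embedding, threshold=0.6):
        """Cosine-similarity check of a new (probe) image against the template."""
        probe = probe_embedding / np.linalg.norm(probe_embedding)
        return float(template @ probe) >= threshold

The threshold here is purely illustrative; deployed systems tune it to trade off false matches against false non-matches.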

There is also an issue with

transgender people right?

CHRIS: I'm sure, yeah, right.

There were many DMVs that forced a transgender person to wipe off their make-up and, you know, appear as the gender given to them at birth, and that's what's used for facial recognition. And I think one of the things that's interesting about what you've said is that, yes, it has very difficult implications in terms of criminal justice. But these kinds of quieter harms, perhaps at the outset, in the process of data collection -- the kind of social disciplining that's happening -- is super interesting, and distressing. I mean disturbing.


CHRIS: We're interested in technology, that's why you get into this sort of thing.

Technology is often a multiplier

in a lot of ways.

It can multiply benefits in

society, and it can multiply

harms.

That's true in many technologies

and technology is a tool.

Yes, there is no question, as you go through these systems and see them deployed more broadly, you're going to see these impacts in all kinds of unexpected ways.

What kind of regulation

should be put into place to

protect data collected by big

companies such as apple?

CHRIS: So we haven't talked about data protection, but it is worth understanding that this is personal information, the same way your Social Security number is personal information. You should expect good cybersecurity protections for that information. You should have the ability to access it and find out how it is being held, and delete it if you want. That would be a right you would have if you were in the EU, for example. We don't have those rights in the United States, except in California once the new California privacy protection act goes into effect in January.

But you should also -- Apple does interesting things that are illustrative of this. Apple doesn't actually take the biometric off the device. What they do is store it in a separate place on the device that's actually physically separated from the rest of the systems on the phone, to make it even harder to get access to.

So when you take a face print through Face ID, or previously through the fingerprint reader, it resides in a physically separate place on your phone. And that's a really good privacy protection.

It makes it much harder to get at that biometric -- if a hacker wants to get access to your information, it makes it much harder to do -- which is illustrative of a broader concept we should all embrace, which is the idea of privacy by design.

We can build some of these

systems at the outset so they

are more privacy protected.

We don't have to wait until

after we see the harms and then

try to back-fill the protections

in place.

Why don't we try to anticipate

some of these problems at the

outset and build systems that

mitigate those problems at the

beginning?
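A minimal sketch of that on-device pattern, with illustrative names (this is not Apple's actual API): the template is written only to isolated local storage and is never uploaded, and the rest of the system only ever sees a yes/no unlock decision.

    class LocalSecureStore:
        """Stand-in for hardware-isolated storage on the device."""

        def __init__(self):
            self._template = None

        def save(self, template):
            # The template is kept only in this local store; nothing is uploaded.
            self._template = template

        def load(self):
            return self._template

    def unlock(probe_embedding, store, match_score, threshold=0.6):
        """Return only a boolean decision; the stored template never leaves here."""
        template = store.load()
        if template is None:
            return False
        return match_score(template, probe_embedding) >= threshold

Structuring the data flow this way at the outset, rather than bolting protection on later, is the privacy-by-design idea in miniature.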

How can the government even

subject the technology like

facial recognition to a

moratorium when private

companies are already using it?

CHRIS: That's a very good question, and that varies a lot depending on who is doing the regulating. For example, the City of San Francisco cannot tell Amazon what to do -- they cannot regulate Amazon's use of face recognition in San Francisco. They can regulate how the City of San Francisco chooses to deploy the technology, but they just don't have the authority beyond that.

But a state could impose a moratorium. They could ban facial recognition outright, they could say facial recognition requires consent, or they could say we're going to have a moratorium while we think about rules.

They have that authority, and

because there is no overriding

federal law that power devolves

to the state, the state could

actually do that and the federal

government could do the same

thing.

Would the increased accuracy of face recognition lead to better surveillance of a group that's already disproportionately targeted by the criminal justice system?

CHRIS: Yes, it could.

I think that's what we worry

about.

Arguably, and this is not --

this is not a face recognition

example, but we are using -- we

are starting to see artificial

intelligence deployed to do

things like pre-trial bail

determinations.

So when I go before a judge to decide whether I get released on bail or stay in custody -- there are off-the-shelf technologies, like COMPAS, that will say red, yellow, or green.

Nominally they're not making a

determination, but they're

making a judgment.

They're saying red is definitely not, yellow is maybe, and green is you should, and judges by and large follow those determinations very closely.
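As a rough illustration of how these off-the-shelf risk tools present their output -- assuming a hypothetical numeric risk score between 0 and 1; the real tools use their own scales and inputs -- the "judgment rather than determination" point looks like this:

    def bucket_risk(score):
        """Map a numeric pre-trial risk score to the red/yellow/green labels
        described above. Thresholds here are purely illustrative."""
        if score >= 0.7:
            return "red"     # recommends against release
        if score >= 0.4:
            return "yellow"  # maybe
        return "green"       # recommends release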

I won't get into the details, but there are real concerns about racial bias and how those assessments are made -- the training data used, and the way they're weighted.

But, the current system for

doing bail determinations is

really bad too.

Judges don't turn out to be real good at this either, and they tend to rely on their own set of biases. So it's not that automating this process is automatically bad. The trick is that you have to automate it in a way that's fair.

And that's a harder -- and that

requires more understanding from

policymakers about how the

technology works, and it

requires more deliberation about

how these systems are built.

How often are facial

recognition databases wiped?

So if I'm in one, am I in it for

life?

CHRIS: That would depend on

who created the database.

In a lot of countries, like in western democracies, there may be data retention limits, so that for any kind of personal information the expectation is that you're going to delete it after a set period of time, or after the person hasn't used the service for a set period of time. But that's going to vary widely depending on the jurisdiction and who holds the data.
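A minimal sketch of what such a retention limit can look like in practice, with an illustrative one-year window and an assumed record layout (collected_at and last_used_at timestamps):

    from datetime import datetime, timedelta

    RETENTION = timedelta(days=365)  # illustrative; real limits vary by jurisdiction

    def purge_expired(records, now=None):
        """Keep only records collected or used within the retention window."""
        now = now or datetime.utcnow()
        return [
            r for r in records
            if now - max(r["collected_at"], r["last_used_at"]) <= RETENTION
        ]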

Is there a way to get tech companies to innovate and develop thinking about consent from the start, rather than just retroactively putting it in place after they've been caught?

CHRIS: Well there are a lot

of ways.

Some of them are more effective

than others.

Tech companies are, I think, becoming more sensitive to these questions; the tech backlash we've seen over the last couple of years is real. People are worried about these technologies, and companies are really worried about people being worried about these technologies. They want people to use them.

So I think it's a -- we're

seeing a lot of different ways

to put the pressure on.

We're seeing it in state and federal laws, and we're also seeing individual employees of those companies putting pressure on their companies to behave in a more responsible way. Silicon Valley's most precious resource is its engineering talent. And if engineers aren't happy, that can make change at the companies. Saying we, the employees of a big tech company, want to deploy these technologies in a more responsible way really is a way to make meaningful change.

And there are a whole bunch -- a lot of ways. I think we're in a little bit of a

moment where people are paying

attention to this technology and

that gives us a lot of

opportunity to try to push

changes across the board.

Is consent the right way to think about it? I think in the U.S., an individualistic society like ours, individual consent seems like the straightforward way to think about it, but this is a technology that implicates families and communities, just like --

CHRIS: 100 percent yes.

I'm thinking of forensic databases, and in those conversations around databases and biobanks there's been a lot of discussion about how consent is an inadequate way of thinking about this. So I'm wondering, are there alternative ways of thinking about this?

CHRIS: I do think consent -- I am not a big fan of consent as a solution to privacy problems. I think we all understand checking that little box and saying I agree to your terms of service -- congratulations, you've consented.

I don't think anybody feels like

their rights have been protected

by that process.

That's not working for us.

And so one of the things we've

really been pushing is this idea

that we need to put some of the

responsibility back on the data

holder, as opposed to the person

who is consenting.

But I do think we can do that in a way that is, colloquially, analogous to what we think of as true consent.

So to give you an example. When I use my iPhone -- actually I use an Android phone because I'm a fuddy-duddy, and my kids are like, why? -- I don't have Face ID, but if I did, I'd understand what's happening there.

I understand that I am giving

you my face template in exchange

for the ability to open my

phone.

That's pretty close to the pure idea that we have of consent.

Right?

I get it.

I know what the trade-off is

here.

So the trick, I believe, is to stop there and say congratulations, you got consent to collect that face template for the purpose of letting somebody open their phone. You don't get to do anything else with it. That's it. We're going to stop and make that a hard use limitation. And if we do that I feel like we've gotten somewhere.

You are responsible as the data holder to hold that line. You understand what the benefit was. You don't get to use it for anything else, but we really do honor the individual's desire to have more or less use of this kind of technology.

So I do think there is a role for consent, it's just that it can't be like a get-out-of-jail-free card that says once I've gotten consent, that's it, I'm good, I can do whatever I want.
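A minimal sketch of that hard use limitation, with illustrative names: the data is stored together with the single purpose it was collected for, and any other use is refused rather than left to the data holder's discretion.

    class PurposeBoundData:
        """Wrap a value together with the one purpose it was collected for."""

        def __init__(self, value, allowed_purpose):
            self._value = value
            self._allowed_purpose = allowed_purpose

        def use(self, purpose):
            if purpose != self._allowed_purpose:
                raise PermissionError(
                    f"Collected only for '{self._allowed_purpose}'; "
                    f"use for '{purpose}' is not permitted.")
            return self._value

    # Example: a face template consented to only for unlocking the phone.
    face_template = PurposeBoundData(value=b"\x01\x02\x03", allowed_purpose="device_unlock")
    face_template.use("device_unlock")   # allowed
    # face_template.use("ad_targeting")  # would raise PermissionError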

Is transparency the right way to be thinking about this issue?

Considering that transparency

could mean opening up all data

for everybody?

Or do we need new definitions

and values as we frame this

issue?

CHRIS: Transparency is

interesting in this area because

transparency doesn't work super

well for what we're talking

about.

The fact of the matter is, I may put up a sign that says face recognition is in use in this facility, but if I need to go use that facility or I want to shop in that store, that transparency is worthless to me. That's not useful -- it's not a useful sort of benefit to me.

I do think that transparency can be useful in the way we described it before: understanding, as part of a broader system, how the system is being used. How many searches are being run, who might be running searches against a face recognition database, how accurate the system is. That kind of transparency.

I do think there are ways we can

use transparency to really try

to drive fairness in the system.

But transparency itself is probably not an optimal tool in this case, for a lot of reasons: it's hard to escape the technology, and it's also hard to know as a user how this technology is being deployed, so being transparent about the fact that you're deploying it doesn't help me understand what's actually happening.

We've heard a lot about policies about the use of facial recognition technologies. Are policies about the technology itself relevant? For example, it's been reported that cameras being built in China come with minority detection.

CHRIS: I think regulating the

technology itself is really

important.

We are seeing more and more networks of cameras that are Internet-connected and can have a variety of add-ons. And so regulating when we're actually using the technology is really important.

Here's a great example. We as activists have for many years been excited about police body cameras -- this idea that we can use a body camera and really see what happened at a crime scene, or when a confrontation happened with the police.

As they became more widely deployed we sort of started to grapple with the real limitations of this technology. Police turn them off. Oftentimes they're not pointed in the right direction. Or police will be allowed to look at the camera footage before they write their report, and write a report that matches whatever happened in the camera footage, which allows them to curate that.

Well, now imagine we just said, I'm Taser, a company that makes many of these body cameras, and I'm going to put automatic facial recognition on all of the body cameras. Great new technology, going to help everyone.

Now what you've done is taken a tool that was supposed to be a tool for social justice, that was supposed to protect people in their interactions with police, and you've turned it into a surveillance tool.

You say now I get to identify

everybody as I walk down the

street, and I'm a police

officer, I get to identify all

the people on my patrol.

I potentially get to put them in

a database and track where they

are.

I get to -- know who everybody

is, and rely on that

identification in ways that may

be problematic.

So now we've flipped the presumption. It's gone from something supposed to benefit the communities to something that may actually harm them.

We have to think about, when we're deploying the technology, what context it is going to be used in and who is going to benefit from it.

I want to wrap up with one

last question.

So we're in a public policy

school, and a lot of the folks

who are getting master's degrees

or undergraduate degrees here

will go off into policy or law.

They'll be in a position --

CHRIS: Somebody that might

work for you someday.

Well, yeah, and I'm wondering -- in some ways our conversation hasn't been too technical, but it is a technical issue, and people often might say, oh, that's really technical, I don't understand it -- black box it and say I can't deal with it. And yet it's incredibly consequential, as we've been discussing. For students who are interested in this, or generally pursuing policy careers, given how likely this issue is to intersect with their lives, what kind of training and expertise is useful in being able to navigate these technical issues?

In your own career you have come

from law, and had to navigate

pretty technical questions, so

I'm wondering how you think

about this.

CHRIS: I guess I would say

I've been purposefully not

making this too technical of a

conversation because I don't

think it needs to be.

You're all going to understand the concepts I'm talking about. We don't have to get too deep into the weeds to understand the implications.

You do have to be willing to ask hard questions and explore under the hood, and be really skeptical about claims about the efficacy of technology.

Technology is often treated by policymakers like it's some sort

of magic fairy dust you can

sprinkle on problems and make

them all be fixed because

technology solves it, and it

very rarely does.

And so any time someone comes in and says to you, I've got this silver bullet that is going to solve it all -- right there your antennae should go up and say, I might be being sold a bill of goods here.

And then you have to ask hard questions and go to your own set of validators. I'm not a technical person, but certainly your local university has a technical person who can tell you whether the claims being made are real.

A lot of people in Congress have been pushing in recent years to add more technology policy fellows.

So there are more people with a

background in technology policy.

So you don't have to be a

technology expert.

You just have to be willing to not accept any claim that's offered as unvarnished truth without looking for outside experts to help you sort through the fact and the fiction.

And if you do that, literally if

you just kind of get to the

point where you separate out the

stuff that doesn't work, from

the stuff that works, you will

be miles ahead of many policy

discussions because you'll at

least be having a factual

discussion about what technology

can do as opposed to sort of a

wishful discussion about what we

would love it to do in an

imaginary society.

Great, well I certainly

endorse that.

Well thank you very much.

[APPLAUSE]

Thank you.

Thanks so much.