Published: Feb. 18, 2020

Paul Beique:
Welcome to Brainwaves, a podcast about big ideas produced at the University of Colorado Boulder. I'm your host this week, Paul Beique. There's no shortage of news coverage of artificial intelligence. From Amazon, to China, to robotic relationships.

Announcer:
I have an appointment with Harmony, the world's first sex robot. "I am already taking over the world one bedroom at a time."

Paul:
But if we've learned anything from the movies, artificial intelligence might have a few downsides.

From "2001: A Space Odyssey":
"Open the pod bay doors, HAL."

HAL:
"I'm sorry, Dave, I'm afraid I can't do that."

Dave:
"What's the problem?"

HAL:
"I think you know what the problem is just as well as I do."

Dave:
"What are you talking about, HAL?"

Paul:
That's HAL, the robotic antagonist from "2001: A Space Odyssey." What HAL's talking about are the unforeseen limitations of AI. Let's start there this week. Executive Producer Andrew Sorensen talked with research scientist Janelle Shane about AI's shortcomings.

Andrew Sorensen:
Janelle Shane, author of the book "You Look Like a Thing and I Love You." This book is about artificial intelligence, and specifically about its gaps, the things artificial intelligence can't do. What can artificial intelligence not do? Where are we limited right now?

Janelle Shane:
We are limited to really simple, well-defined problems, because the algorithms we're dealing with, if you look at raw mental power, are more along the lines of what an earthworm can do. So trying to understand the broader world, the context like that, is a really hard thing for today's algorithms to do.

Andrew:
Tell us about the title of that book, "You Look Like a Thing and I Love You." Where did that title come from?

Janelle Shane:
That was an experiment when I was trying to get a text-generating neural network to generate pickup lines, and this was the best it did.

Andrew:
That was the best pickup line it came up with. So, in your mind, what does that show us about where we're at with artificial intelligence, and what do we need to keep in mind? There are a lot of people out there who are very worried about what artificial intelligence can do, and on the other side we're seeing a lot of companies sell artificial intelligence solutions to a lot of the problems that we face.

Janelle Shane:
Yeah, so, I think a lot of people tend to think of AI as a kind of science-fiction-level AI, Skynet and so forth. And what we have today is a lot less complicated than that. It's unlikely to get that complicated anytime soon. In fact, what we have to worry about a little bit more are algorithms that don't really understand what we're trying to get them to do and accidentally solve the wrong problem, copy human bias when they shouldn't, or, you know, goof up and not recognize a pedestrian when they should.

Andrew:
How far off do you think we are from artificial intelligence that can be a little more reliable?

Janelle Shane:
It depends. I mean, you can build something that's fairly reliable now at certain tasks. We've got them tagging photos in our cell phones, we've got them delivering search results, doing autocomplete. So they work well for a lot of different tasks. But what we're running into is that there are some things we don't realize are very difficult, like, you know, flexibly answering a customer's questions, until we try to build a machine to do it and realize, oh, there's a lot of complex stuff that we humans are doing without even thinking about it.

Andrew:
So to that end, what is your advice to people as they think about AI and as they're being hawked a lot of products with artificial intelligence solutions behind them?

Janelle Shane:
Yeah, I think it's to remember that these algorithms can't make moral decisions by themselves and that they copy human behavior. So if the human behavior is flawed, these algorithms will copy it unknowingly.

Andrew:
OK, thank you so much.

Janelle Shane:
Thank you so much.

Paul:
Janelle Shane is a research scientist at Boulder Nonlinear Systems, and she's the author of a book on artificial intelligence called "You Look Like a Thing and I Love You."
It's true that machines can only do what people want them to do, and they can't decide what is ethical and what is not. When that machine is designed to kill, the prospects become very frightening very quickly. The New York Times recently published a story and a documentary on AI's increasing role in the military. Our next guest is Jonah Kessel, the Times' director of cinematography. He produced the documentary. It starts with a shot of Kessel sitting in an orange leather chair against a stark black background. He's looking at his phone.

Jonah Kessel narration:
I love that I can unlock my phone with my face. And that Google can predict what I'm thinking. And that Amazon knows exactly what I need. It's great that I don't have to hail a cab or go to the grocery store. Actually, I hope I never have to drive again. Or navigate, or use cash, or clean, or cook, or work or learn.

But what if all this technology was ... trying to kill me?

Paul:
Jonah, welcome to Brainwaves.

Jonah Kessel:
Thanks for having me.

Paul:
And, full disclosure, Jonah was a student of mine years ago at Saint Michael's College outside of Burlington, Vermont. In the documentary, you travel to a Russian arms expo where some AI-equipped weapons are on display. Did anything surprise you about what you found there?

Jonah Kessel:
I think the thing that surprised me most was that they were showing them at all. Some of these weapons are considered to be in a morally gray area, and specifically at the Kalashnikov booth. Kalashnikov, as you know, is a world-famous icon of killing. You know, the AK-47 is probably one of the most infamous guns in the world. And when we saw this gun that they had there, it was a turret hooked up to facial recognition software. And it took me a minute to understand it. I was like, huh. It was on display and I saw the turret, I saw the machine gun on it. And I saw what it was hooked up to, and it was pointing at me. And after it kind of registered what was going on, I was like, wow, this is amazing. And so I immediately went, you know, to the salespeople and to their PR people. I was like, hey, can we ask you about this weapon? We're really curious how it works.

And, you know, they chatted for a second and eventually they were like, "Absolutely not. Go away." And we're like, well, you know, we have press passes. We're here, you know, as legitimate members of the press. You know, we'd really like to speak to somebody about this. And they said, well, you know, come back in an hour. And we came back in an hour, and they said come back tomorrow. And we came back tomorrow, and then it was gone.

So with this weapon, which clearly drew our interest, as soon as we started asking about it they didn't even feel comfortable talking about it; they felt the need to put it away. And I think that speaks volumes to its perceived threat and its perceived use.

Paul:
You make the point in the documentary that we've been here before with military technology. The Gatling gun was originally designed to save lives. Nuclear and chemical and biological weapons were supposed to be a deterrent, but they've all been used to kill people. Do you see this piece as a kind of warning not to go down this same path again with artificial intelligence?

Jonah Kessel:
Yeah, absolutely. I think, you know, one of my passions in journalism is to raise red flags. You know, this piece is more on the analysis side than anything else. It's not up to us as journalists to say this is good or bad. But it certainly is up to us to say, "Hey, this needs more attention." And if we look at lessons from the past, you know, such as the Gatling gun, clearly our inventions don't always have the outcomes we intend. And in the case of autonomous weapons, you know, the AI scientists are screaming don't do this, and yet we are not listening to them.

Paul:
The United Nations does not come off particularly well in this documentary. They're portrayed as talking in circles about definitions and rules while tech companies and nations are rapidly developing autonomous weapons. What struck you about that dichotomy?

Jonah Kessel:
Yeah, so I went to, I think it was five days of meetings at the United Nations in Geneva. And when I was in there, it was the pleasantries that first started to really get to me. You know, whenever someone started talking, the first 30 seconds were left for pleasantries: Thank you for having us, thank you for letting us speak, Your Excellency, you know. And the same thing would happen in return when the chair would talk. And the amount of time that was being wasted, kind of, you know, shaking each other's hands, if you will, really started to stand in juxtaposition to what I had seen the previous couple of days at that weapons fair. I'd been talking to technologists and developers about all this stuff and all the things they're working toward, and all of a sudden you show up at, you know, the highest level of international governance, and people are just thanking each other. And the scene started to build in my head while I was there in the meetings. I could see what I wanted to do with it and how to juxtapose these things to show we're not acting fast enough, certainly not at a bureaucratic level.

Paul:
Children actually play a pretty significant role in this documentary. You show several scenes of children examining, almost playing with, these weapons systems. For me, those were some of the most poignant scenes. What was the thought behind including children in a story about AI in the military?

Jonah Kessel:
Certainly, when thinking about the future, there's probably no more potent sign than children. I also intended them to act as a symbol of cultural differences. This is in Russia, and I think this is an important part of the story, which is a little bit subtle: we just don't all have the same values. And that can be pretty tricky if, let's say, in the United States we're having these conversations about ethics as it relates to weapons, but those same conversations aren't necessarily happening in other places. And if our value systems are so different, perhaps one country will make these weapons, whereas another won't because of, you know, its own values. And that creates a kind of unevenness to warfare, which could potentially be dangerous, and it's actually one reason why people don't want to stop making these things: they're afraid that if they stop making them, their competitor or their adversary might continue to make them, giving them an advantage should they go to war.

Paul:
One of your subjects makes the point that we really don't have to wait for this technology; it's already being created by commercial tech companies. He also says that we can teach military machines to be legally right, but getting it morally right is a lot more difficult. Can you tell us about the example he used?

Jonah Kessel:
Yeah, so, Paul Scharre is a former Army Ranger who became a policy guy in the end. He's at a think tank in D.C. And in the story, Paul describes a young girl, she could have been 4 or 5 years old, who was spying on him and his teammates. And, you know, by the rules of law, by the rules of war, she was a valid enemy combatant. The rules of war don't have an age limit on who's a combatant, so she was a valid target. And the point he makes with this young girl he saw in Afghanistan is that, had he been a machine programmed by algorithms to follow the rules of war, that machine would have shot this little girl. Now, he knew that was wrong, and he didn't shoot her. But could you program a machine to know the difference between right and wrong, even if that means breaking the law? And I think there are a couple of really interesting points here. One is certainly that what's right and wrong is not always clear. Another is that sometimes doing the right thing means breaking the law. And a third is the uncertainty that is required for judgment. Paul once told me: The entire time I was in Afghanistan, when someone came up to talk to me, I could never be totally sure if this person was just a civilian who wanted to say hi, or maybe they didn't understand me, or maybe it was actually someone who wanted to kill me. And I was never quite certain.

And this is the reality of modern warfare today. You know, we're not living in World War II or World War I times, when you could identify your enemy by their helmet or their uniform. War is much different now, and the battleground is not as clear. So these are real challenges for AI, if we think about making machines that are going to carry out warfare and follow rules, because they're all going to be governed by rules which we give them.

Paul:
Jonah, thank you very much for your work, and thanks a lot for joining us today on Brainwaves.

Jonah Kessel:
Great, thanks for having me.

Paul:
Jonah Kessel is the director of cinematography at The New York Times. You can find links to the documentary in the podcast description.

Paul:
Facial recognition as part of a weapons system might sound frightening, but even the facial recognition in phones and on Facebook can have a hard time figuring out who we are. Executive Producer Andrew Sorensen discussed the weaknesses of facial recognition, particularly around gender identity, with Morgan Klaus Scheurman, a PhD student in information science at CU Boulder.

Andrew:
Morgan Klaus Scheurman, thank you and welcome to the show.

Morgan Klaus Scheurman:
Yeah, thanks for having me.

Andrew:
So we're talking about artificial intelligence. You've done some research into artificial intelligence and facial recognition. How commonly is artificial intelligence used in facial recognition software?

Morgan Klaus Scheurman:
Well, I would say that facial recognition, and facial analysis more broadly, is just an instance of AI. So all facial analysis is AI, I guess I would say.

Andrew:
Where is this technology currently? In your research you found some pretty serious shortcomings.

Morgan Klaus Scheurman:
Well, for listeners who are maybe not as familiar with this technology, facial recognition is probably the most familiar use case: how you unlock your phone, how you tag your friends on Facebook. We're all kind of familiar with that instance of facial recognition. But in my research I looked at facial classification. That's when a system will analyze aspects of an image, aspects of a face, and then try to classify certain characteristics of that face, including things like gender, ethnicity, age. Those sorts of features.

Andrew:
And previous research showed that there are a lot of issues around minority groups, particularly women with darker skin. Is that right?

Morgan Klaus Scheurman:
Yes, previous research has shown that women with darker skin tend to be misclassified as male more often than people with lighter skin types in general.

Andrew:
And then what you found in your research, can you explain a little bit of that?

Morgan Klaus Scheurman:
So, in my research, I looked at gender across different gender identities. I looked at cisgender men, cisgender women, transgender men, transgender women, and nonbinary genders such as genderqueer and agender. And we found that facial classification broadly misclassifies trans people far more often than it misclassifies cisgender people. And these systems aren't built to recognize anything beyond male or female, so it's actually not possible for them to accurately classify anything outside of the gender binary.
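To make that structural limit concrete: a typical gender classifier ends in an output layer with exactly two classes, so "male" or "female" are the only answers it can ever give. The following is a minimal, hypothetical sketch in PyTorch, not any vendor's actual model, that illustrates the idea.

# Hypothetical sketch of a binary gender-classification head (PyTorch).
# Not any real commercial system; it only shows why such a model cannot
# output anything beyond "male"/"female": the head has two classes by construction.
import torch
import torch.nn as nn

class FaceGenderClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in feature extractor; real systems use a much larger CNN.
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Exactly two output units: every face is forced into one of two labels.
        self.head = nn.Linear(16, 2)

    def forward(self, x):
        return self.head(self.features(x))

model = FaceGenderClassifier()
face = torch.randn(1, 3, 224, 224)             # a placeholder "photo"
probs = torch.softmax(model(face), dim=1)      # probabilities over exactly 2 classes
label = ["male", "female"][probs.argmax().item()]
print(label)  # the model has no way to answer "nonbinary" or "unsure"

However the network is trained, its answer space is fixed by that final layer, which is the point Scheurman makes about classifying anyone outside the binary.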

Andrew:
So the population that identifies as trans is still, I think, pretty small, somewhere in the single digits as a percentage of the whole population. Why should the average person really take stock of that and be concerned?

Morgan Klaus Scheurman:
Yeah, so, I guess if we're talking about why any person on the street should be concerned: on one hand, these systems are really encoding what a normal woman and a normal man should look like. So it's very limited in the way it views gender for every person it comes into contact with. So if you fall outside of that, maybe you like to wear your hair short, or you just have what a computer would see as a more masculine appearance as a woman, you may be misclassified. And I find it very interesting. This is something that I also tested on myself. When I interact with people in real life, when I talk, generally people will say "he" or "sir," but these different systems actually classified me differently. So, like, Amazon classified me as female, and Microsoft classified me as male. So you can see that depending on the system you're interacting with, it might see you as a totally different gender. So it could affect any person, really.

Andrew:
And, I know our audience can't see Morgan right now. He does have long hair, but he's wearing a flannel and a NASA shirt.

Morgan Klaus Scheurman:
It really depends on the day. I'm definitely one of those people, too, who's maybe more genderfluid, as you would say. Like, I wouldn't consider myself as falling into the norm of, like, short hair, wearing jean jackets every day or something.

Andrew:
But once someone interacts with you, it's apparent that you're a man.

Morgan Klaus Scheurman:
Well, I think it's very interesting, too. These systems are really trying to be as good as humans are at this, but they don't have as many context clues. And there are many notions around gender now, that your appearance doesn't necessarily map to your gender identity. So you and I could have a conversation about that, right, and I could say, well, the way you perceive me is not the way that I feel, right? But you can't really do that with a computer, and there are no opportunities right now for you to even intervene in most of these systems. A lot of us don't even know it's happening.

Andrew:
And what are some of the problematic use cases where facial recognition is being used, where the limitations you found become an issue?

Morgan Klaus Scheurman:
Yeah, so, I would say that the majority of how it's being used is in either media or marketing. In that case it's more about who you are misrepresenting, and who you are erasing from the reality of who's interacting with products, or who's on screen, or whatever. In other cases, facial recognition or facial classification is being used in, like, security scenarios or policing. And usually that's more the recognition, one-to-one individual-matching use case. But it is concerning if your documentation, your ID, or what's been recorded in a database by police doesn't match your current gender identity. So I could see that being very problematic and very dangerous for people who already face higher levels of violence; trans people face higher levels of violence than the general population.

Andrew:
So we were talking a little bit before this interview, and you've had some pretty big companies that are involved in this space reach out to you to learn a little bit more about your research. Who has been reaching out, and what have they been asking?

Morgan Klaus Scheurman:
So, I don't know if I want to say which companies have been reaching out to me on a podcast, but basically some bigger tech companies that are using gender classification in their facial analysis software have reached out to understand what directions they should be moving in, in terms of gender classification, and in many ways to talk with us about the use cases their clients are currently using it for. But I think the companies, and the people in the companies, actually are thinking a lot about this problem. As, you know, trans rights and different views of gender are becoming more visible in society, I don't think these companies have been unaware of this issue.

Andrew:
So to that end, given that they are reaching out and looking at the problem, does that give you some hope for the future of facial recognition, that it might be a little more accurate and not create some of these problematic scenarios where, you know, maybe you're being marketed women's products when you identify as a man?

Morgan Klaus Scheurman:
Yeah, I think that's actually an interesting question, because I'm not necessarily a huge proponent of facial recognition anyway. In terms of making it more accurate, there is a lot of concern from different marginalized groups, not just trans people but also people of color, that the more accurate it is, the more dangerous it may be to those groups, especially in terms of policing and surveillance and things like that. I do think that there are some use cases that are promising. If we're looking at representation or bias mitigation using these kinds of tools, and seeing, like, oh, how many people of a certain gender are shown in television shows, or how can we mitigate bias against certain people of color, I think that is useful. I personally think that the best step forward is actually in policy and less about diversifying data. So I would like to see more discussion around how these systems should be used and what use cases should be regulated.

Andrew:
OK, Morgan Klaus Scheurman, thank you so much.

Morgan Klaus Scheurman:
Thank you so much for having me.

Paul:
Thanks for joining us this week on Brainwaves. I'm Paul Beique. If you liked what you heard or have an idea for a topic we should cover, we want to know. You can now email us at brainwaves@colorado.edu. Executive Producer Andrew Sorensen and I produced this episode. Join us next week when the topic is music, from the Beatles to Gen Z.

Dave from "2001: A Space Odyssey":
Hello, HAL, do you read me?

Hello, HAL, do you read me?

Do you read me, HAL?

Do you read me, HAL?

Hello, HAL, do you read me?