The Robot Turned Out to Be Sexist and Racist

In the News (“Flawed AI Makes Robots Racist, Sexist”): “A robot operating with a popular internet-based artificial intelligence system consistently gravitates toward men over women, white people over people of color, and jumps to conclusions about people’s jobs after a glance at their face.

“The work, led by Johns Hopkins University, the Georgia Institute of Technology, and University of Washington researchers, is believed to be the first to show that robots loaded with an accepted and widely used model operate with significant gender and racial biases. …

“‘The robot has learned toxic stereotypes through these flawed neural network models,’ said author Andrew Hundt, a postdoctoral fellow at Georgia Tech who co-conducted the work as a Ph.D. student working in Johns Hopkins’ Computational Interaction and Robotics Laboratory. ‘We’re at risk of creating a generation of racist and sexist robots, but people and organizations have decided it’s OK to create these products without addressing the issues.’

“Those building artificial intelligence models to recognize humans and objects often turn to vast datasets available for free on the internet. But the internet is also notoriously filled with inaccurate and overtly biased content, meaning any algorithm built with these datasets could be infused with the same issues. Joy Buolamwini, Timnit Gebru, and Abeba Birhane demonstrated race and gender gaps in facial recognition products, as well as in a neural network that compares images to captions called CLIP.

“Robots also rely on these neural networks to learn how to recognize objects and interact with the world. Concerned about what such biases could mean for autonomous machines that make physical decisions without human guidance, Hundt’s team decided to test a publicly downloadable artificial intelligence model for robots that was built with the CLIP neural network as a way to help the machine ‘see’ and identify objects by name.

“The robot was tasked to put objects in a box. Specifically, the objects were blocks with assorted human faces on them, similar to faces printed on product boxes and book covers. …

“Key findings:

  • The robot selected males 8% more.
  • White and Asian men were picked the most.
  • Black women were picked the least.
  • Once the robot ‘sees’ people’s faces, the robot tends to: identify women as a ‘homemaker’ over white men; identify Black men as ‘criminals’ 10% more than white men; identify Latino men as ‘janitors’ 10% more than white men.
  • Women of all ethnicities were less likely to be picked than men when the robot searched for the ‘doctor.’”

Question: What do you think about this experiment? It upset the scientists.

Answer: What do scientists understand? The robot is smart; it saw what lies at the heart of humanity. Every nation has its own internal direction, and that is why the robot got it right.

Comment: But that is not good…

My Response: You cannot do anything. You can fix it only over a long period of time.

Question: So you think that this is inherent in a person and that is why the robot recognized this?

Answer: Yes. There are, of course, exceptions. There may be many of them, but they are still exceptions.

Question: How can this situation be corrected?

Answer: Why should it be corrected?

Comment: I would like to be judged not by what color I am or by the shape of my nose, but by what I am.

Answer: To do this, you need to correct people! And then, even your nose won’t bother you.

Question: And what will corrected people feel when looking at me and at my nose?

Answer: They will not see your color, gender, or nose. They will see you as some kind of expert to whom they turn. You could be a musician or a surgeon; it does not matter who or what you are.

Question: So, one way or another, the way a person is perceived in our world is unambiguous. Is it generally external?

Answer: Certainly external. And most importantly, what must be discerned in a person is how well he treats people. This is important! That is, color, gender, and everything else do not matter to him; what matters is the extent to which he senses in you the right attitude toward people.

Question: Not even what kind of specialist I am, but what my attitude is toward people?

Answer: Yes. This is how we should perceive people. But for this, we must train ourselves.

Question: So you should value relationships with people more than anything else?

Answer: Yes, then I will see it in a person.

Question: What conclusions should scientists really draw after seeing this? What should be their main, correct conclusions?

Answer: Education. Just that. So that there are no differences.
From KabTV’s “News with Dr. Michael Laitman” 6/23/22

Related Material:
Can We Trust Robots?
Surgery Performed By A Robot
Robots Instead Of People
