-Great Cultural Revolution
Is AI Racist ?
2024-01-29
[The Conversation] Problems of racial and gender bias in artificial intelligence algorithms and the data used to train large language models like ChatGPT have drawn the attention of researchers and generated headlines. But these problems also arise in social robots, which have physical bodies modeled on nonthreatening versions of humans or animals and are designed to interact with people.

The aim of the subfield of social robotics called socially assistive robotics is to interact with ever more diverse groups of people. Its practitioners’ noble intention is "to create machines that will best help people help themselves," writes one of its pioneers, Maja Matarić. The robots are already being used to help people on the autism spectrum, children with special needs and stroke patients who need physical rehabilitation.

But these robots do not look like people or interact with people in ways that reflect even basic aspects of society’s diversity. As a sociologist who studies human-robot interaction, I believe that this problem is only going to get worse. Rates of diagnoses for autism in children of color are now higher than for white kids in the U.S. Many of these children could end up interacting with white robots.

WHY ROBOTS TEND TO BE WHITE
Given the diversity of people they will be exposed to, why does Kaspar, designed to interact with children with autism, have rubber skin that resembles a white person’s? Why are Nao, Pepper and iCub, robots used in schools and museums, clad with shiny, white plastic? In The Whiteness of AI, technology ethicist Stephen Cave and science communication researcher Kanta Dihal discuss racial bias in AI and robotics and note the preponderance of stock images online of robots with reflective white surfaces.

What is going on here?

One issue is what robots are already out there. Most robots are not developed from scratch but purchased by engineering labs for projects, adapted with custom software, and sometimes integrated with other technologies such as robot hands or skin. Robotics teams are therefore constrained by design choices that the original developers made (Aldebaran for Pepper, Italian Institute of Technology for iCub). These design choices tend to follow the clinical, clean look with shiny white plastic, similar to other technology products like the original iPod.
Posted by: Besoeker

#9  AI is more than a buzzword. It is an actual New Thing. Early AI attempts in areas like automated English/Russian translation, a big deal during the Cold War, were dismal failures. Newer approaches based on Bayesian probability and machine learning have been rather successful. The Netflix movie recommender is powered by machine learning and lots and lots of viewer data. Google Translate gives you the services of a United Nations' worth of translators, without all the baggage of an actual UN. Thanks to machine learning, some poor woman at the Post Office is freed from routing letters by reading hand-scrawled envelopes and typing in zip codes.
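
To put a little flesh on "Bayesian probability and machine learning," here is a toy naive Bayes classifier in Python. It is a minimal sketch with invented data, nothing like a production system, but it shows the core idea of turning counted data into probabilities:

from collections import Counter
import math

# Invented toy data: a few labeled "messages".
train = [
    ("win money now", "spam"),
    ("meeting at noon", "ham"),
    ("free money offer", "spam"),
    ("lunch at noon", "ham"),
]

# Count how often each word appears under each label.
word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    # Bayes' rule in log form: log P(label) + sum of log P(word | label),
    # with add-one smoothing so unseen words don't zero a label out.
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("free money"))    # spam
print(classify("noon meeting"))  # ham

The Post Office digit readers and the Netflix recommender are far fancier, but the "learn from counted data" spirit is the same.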

That depends on the programmers who programmed it and the designers who created the specifications the programmers used.

Once upon a time, that was strictly true, but the invention of the Neural Network (NN) changed things a bit. Someone still has to write the code, but NNs differ from traditional programs in two big ways:
1) An NN must be trained before it is useful. Training takes time and data, lots and lots of data. Training an NN is an art, not a science.
2) A trained NN is a black box. With a traditional program, you can step through the code and at least pretend to understand what is going on. With an NN, you get a big box of numbers and no explanation of where a particular output came from, as the toy sketch below illustrates.
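
Here is a minimal sketch of both points, using numpy and the classic XOR toy problem (everything below is invented for illustration; real networks are vastly larger): training is just thousands of tiny nudges to some numbers, and the end product is nothing but arrays of those numbers.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR truth table

# One hidden layer of 8 units; weights start out random.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

# Point 1: "training" is many small adjustments driven by data.
for step in range(20000):
    hidden = sigmoid(X @ W1 + b1)        # forward pass
    out = sigmoid(hidden @ W2 + b2)
    # Backpropagate the squared error and nudge every weight.
    d_out = (out - y) * out * (1 - out)
    d_hidden = d_out @ W2.T * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_hidden
    b1 -= 0.5 * d_hidden.sum(axis=0)

print(out.round(3))  # should be close to [0, 1, 1, 0]: it learned XOR
# Point 2: the trained "program" is just these grids of numbers.
print(W1)  # no logic to step through, no explanation attached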

The newest New Thing is the Large Language Model (LLM), like ChatGPT and friends. By 'large', they mean friggin' huge. For a simple model of projectile motion, you only need two parameters: position and velocity. LLMs have millions or even billions of parameters. Their size makes them really, really good at their job, which is predicting the next word in a sentence. But generating nice sentences is not the same as understanding them. ChatGPT will readily hallucinate 'facts' like legal citations or scientific references.
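
The "predict the next word" job can be faked in miniature with a bigram table, a toy sketch that only counts which word follows which. An LLM replaces the lookup table with billions of learned parameters and context far longer than one word, but the job description is the same:

from collections import Counter, defaultdict

# A made-up, comically small "training corpus".
corpus = "the cat sat on the mat and the cat ran".split()

# Count which word follows which.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def next_word(word):
    # Predict the most common follower; a real LLM samples from a
    # probability distribution over its whole vocabulary instead.
    if word not in followers:
        return None
    return followers[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat': seen twice after 'the', vs 'mat' once

The table can only parrot patterns it has counted; it has no idea what a cat is. Scale that up and you get fluent sentences with the same obliviousness, which is where the hallucinated citations come from.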

tl;dr: Modern AI is a toolkit of algorithms and techniques for solving hard problems. Large Language Models have become so good at making sentences that humans are failing the Turing Test right and left.
Posted by: SteveS   2024-01-29 21:59  

#8  "Synergistic AI. That's the bullshit buzzword of the future. Think Synthetic Plasticsâ„¢"
Posted by: Frank G   2024-01-29 18:51  

#7  The speed with which AI generated imagery has occupied the net is astounding.
Posted by: Ululating Platypus   2024-01-29 15:04  

#6  AI is nothing but a buzzword. All it really means is computer software. The buzzword is meant to imply that somehow the software is far more advanced than previous generations. Maybe it is, maybe it isn't. That depends on the programmers who programmed it and the designers who created the specifications the programmers used. If it's racist, that's because of the people who created it. Now, if some of the people who are so quick to use the extremely derogatory term "racist" were to learn to code, maybe they could create computer software of their own.
Posted by: Abu Uluque   2024-01-29 12:58  

#5  Artificial intelligence. Ask if non-artificial intelligence can be racist and you have your answer.
Posted by: irish rage boy   2024-01-29 11:13  

#4  Harvard dropout builds wearable AI companion that hangs around neck
Posted by: Skidmark   2024-01-29 09:09  

#3  Maybe not just white.

China's tightly controlled internet flooded with antisemitism following Hamas massacre
Posted by: Skidmark   2024-01-29 09:04  

#2  GIGO
Posted by: M. Murcek   2024-01-29 08:23  

#1  nonthreatening versions of humans
Are apparently white, not rainbow.

the data used to train large language models

Generative language models are derived from grammar rules and trained on Twitter conversations [a personal favorite] or news media text, which is seemingly biased.

"The proverbial saying 'You are what you eat' is the notion that to be fit and healthy you need to eat good food."

I wonder how an AI trained on rap lyrics would turn out.
Posted by: Skidmark   2024-01-29 08:13  
