Combating Bias with a Human-Centered Approach to Artificial Intelligence

Stevens Institute of Technology

The director of the Stevens Institute of Artificial Intelligence says the key to addressing algorithmic bias is prioritizing awareness, education and integrity, both in data and in humans.

What does it mean for an artificial intelligence (AI) to be biased?

The documentary Coded Bias, recently released on Netflix and PBS, highlights the experiences of MIT Media Lab researcher Joy Buolamwini, a dark-skinned Black woman who discovered that commercially available facial recognition algorithms could not detect her face unless she wore a white mask.

Last fall, a Twitter user complained on the platform that Zoom's virtual-background algorithm kept erasing his Black colleague's head, only to discover that Twitter's own photo-preview algorithm repeatedly selected white faces in a photo as "salient" (Twitter's word) over Black faces, regardless of their position in the image, and even when the faces were rendered as cartoons.

How could three independently designed algorithms consistently fail to "see" Black people, a failure whose social implications are tantamount to technological racial discrimination?

And how do such failures enter the system in the first place?

According to Jason Corso, a computer science professor at Stevens Institute of Technology and director of the Stevens Institute of Artificial Intelligence (SIAI), the main culprit is most often data. . . .
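To make that point concrete, here is a minimal, hypothetical sketch, not drawn from the article or from Corso's work: a toy "face detector" reduced to a single threshold, fit to a synthetic training set in which one demographic group is badly underrepresented. Every name and number below is invented for illustration; the point is only that an ordinary accuracy-maximizing objective, fed skewed data, produces sharply different error rates across groups.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a face-detection dataset: one scalar
# "appearance" feature per image. Group A faces and Group B faces
# have different feature distributions; non-faces sit below both.
# (All distributions here are invented for illustration.)
def sample(n_a_faces, n_b_faces, n_nonfaces):
    x = np.concatenate([
        rng.normal(2.0, 1.0, n_a_faces),    # Group A faces
        rng.normal(0.0, 1.0, n_b_faces),    # Group B faces
        rng.normal(-2.0, 1.0, n_nonfaces),  # non-faces
    ])
    y = np.concatenate([
        np.ones(n_a_faces + n_b_faces),     # label 1 = face
        np.zeros(n_nonfaces),               # label 0 = non-face
    ])
    groups = np.concatenate([
        np.full(n_a_faces, "A"),
        np.full(n_b_faces, "B"),
        np.full(n_nonfaces, "-"),
    ])
    return x, y, groups

# Imbalanced training data: Group B is badly underrepresented.
x_tr, y_tr, _ = sample(n_a_faces=950, n_b_faces=50, n_nonfaces=1000)

# "Training": pick the threshold that maximizes overall training accuracy.
candidates = np.linspace(x_tr.min(), x_tr.max(), 1000)
accuracies = [np.mean((x_tr > t) == y_tr) for t in candidates]
threshold = candidates[int(np.argmax(accuracies))]

# A balanced test set reveals the disparity the training objective hid.
x_te, y_te, g_te = sample(n_a_faces=500, n_b_faces=500, n_nonfaces=1000)
pred = x_te > threshold
for g in ("A", "B"):
    mask = g_te == g
    detection_rate = np.mean(pred[mask])  # fraction of that group's faces detected
    print(f"Group {g} face-detection rate: {detection_rate:.1%}")
```

Run as written, the sketch detects the majority group's faces far more reliably than the minority group's, even though nothing in the "model" refers to group membership at all; the skew lives entirely in the data the threshold was fit to.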

Continue reading at Stevens Institute of Technology.