AI should be utilized as a tool to identify and eliminate bias inherent in human decision-making. Yet AI systems are only as good as the data they are provided. Biased data carry implicit racial, gender, or ideological prejudices, and many AI systems will continue to be trained on such data, making this an ongoing problem. Bias in AI does not come from the algorithm being used but from people.
Back in 2015, Jacky Alciné, a software engineer, pointed out that the image-recognition algorithm in Google Photos was classifying his Black friends as “gorillas.” Google said it was “appalled” by the mistake, apologized to Alciné, and promised to fix the problem. AI should not be designed or used in ways that violate the fundamental principle of human dignity for all people.
James Vincent, “Google ‘fixed’ its racist algorithm by removing gorillas from its image-labeling tech,” The Verge, January 12, 2018, accessed October 28, 2019, https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai.