Tina Gallico

AI design challenges: bias

Updated: Mar 4, 2022


The problem of bias in AI is not a problem of AI design per se, but of historical societal conditions of inequality and prejudice that characterise, or are missing from, the datasets an AI system learns from.




Potential for bias in artificial intelligence


Humans make decisions and take actions underpinned by unconscious biases and emotional reflexes. Artificial intelligence algorithms make predictions based on the models we create and the data we input. Any dataset involving human decision-making will, to some extent, contain inherent elements of human prejudice, misunderstanding and bias.


Mathematician Cathy O’Neil argues that the mathematical models within software applications (such as AI systems) that process data and power the ever-increasing ‘data economy’ amplify weaknesses in human decision-making and create cumulative loops of injustice and inequality (O’Neil, 2016).


Zou and Schiebinger’s recent commentary in the journal Nature identified numerous examples of sexist and racist outcomes produced from skewed training data. They note (Zou & Schiebinger, 2018):


“Biases in the data often reflect deep and hidden imbalances in institutional infrastructures and social power relations.”

For example, an AI system designed to identify candidates for an executive position might favour white males in its recommendations because this group appears more frequently in the historical data than any other. The (narrow) AI agent lacks the reflexivity to consider why this might be the case, or any means of identifying and correcting its own biases.
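The hiring example above can be reduced to a minimal sketch. The group labels and counts below are hypothetical, invented for illustration: a naive frequency-based scorer, trained on nothing more than who was hired in the past, will mechanically reproduce the historical skew.

```python
from collections import Counter

# Hypothetical historical hiring data: past executive appointments,
# dominated by one demographic group (illustrative counts only).
past_hires = ["white_male"] * 8 + ["other"] * 2

# A naive "model": score a candidate's group by how often that
# group appears among past hires. No reflexivity, no correction --
# the historical imbalance becomes the prediction.
freq = Counter(past_hires)

def score(group):
    return freq[group] / len(past_hires)

print(score("white_male"))  # 0.8
print(score("other"))       # 0.2
```

The point of the sketch is that nothing in the model is explicitly prejudiced; the bias lives entirely in the base rates of the training data.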


Bias arises not only from (unjust) patterns in the training data, but also from a lack of training data. Last week the Georgia Institute of Technology released a study finding that object detection models, such as those used in self-driving cars, may be less accurate for people classified as dark-skinned than for light-skinned people. Whilst the study did not test the actual models used by autonomous vehicle manufacturers (the required data is not publicly available), it does highlight the need for companies to be diligent in ensuring sufficient object detection training data, so that safety does not vary with a person’s physical characteristics.
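The kind of disparity the Georgia Tech study measured can be checked with a simple group-wise metric. The records below are hypothetical, not the study’s data: the sketch just shows how to split detection rates by skin-tone group so that an accuracy gap becomes visible.

```python
# Hypothetical detection results: each record notes a pedestrian's
# skin-tone group and whether the detector found them (toy data,
# invented for illustration).
detections = [
    {"group": "light", "detected": True},
    {"group": "light", "detected": True},
    {"group": "light", "detected": True},
    {"group": "light", "detected": False},
    {"group": "dark", "detected": True},
    {"group": "dark", "detected": False},
    {"group": "dark", "detected": False},
    {"group": "dark", "detected": True},
]

def recall_by_group(records):
    """Fraction of pedestrians detected, split by group."""
    totals, hits = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        hits[g] = hits.get(g, 0) + (1 if r["detected"] else 0)
    return {g: hits[g] / totals[g] for g in totals}

rates = recall_by_group(detections)
print(rates)  # {'light': 0.75, 'dark': 0.5}
```

Reporting a single aggregate accuracy would hide exactly this gap, which is why disaggregated metrics are a standard first step in auditing such systems.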


There is also potential for AI systems to enable human bias in new ways. Facial recognition technology, which is increasingly employed by law enforcement and government authorities, has been shown to disproportionately misidentify people with dark skin tones. There are also AI-based facial recognition systems claimed to identify homosexuals, which could be used dangerously in countries with oppressive regimes, or by prejudiced actors in more tolerant societies.


Nobel Prize-winning psychologist Daniel Kahneman’s work on human decision-making suggests that human beings cannot overcome all forms of bias. Since some degree of bias will be present in any data created from human decisions, it is essential that we take a more considered or ‘slow’ approach to designing AI systems and their training data, correcting the parameters that lead to biased outcomes.


Approaches developed to address potential bias in AI systems include Bolukbasi et al.’s (2016) methodology for removing gender stereotypes from word embeddings, such as the association between the words ‘receptionist’ and ‘female’, while maintaining desired associations such as between ‘queen’ and ‘female’. Hashimoto et al. (2018) propose an approach to achieve greater fairness for minority groups in the application of machine learning models, such as broadening speech recognition capabilities. Kim et al. (2018) propose a notion of fairness called metric multifairness, and a method for implementing it, to ensure that similar subpopulations are treated similarly.
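The core geometric step in Bolukbasi et al.’s method can be sketched briefly: estimate a gender direction from a definitional word pair (he/she), then project that direction out of a stereotyped word’s vector. The 3-dimensional vectors below are toy values invented for illustration, not real word embeddings.

```python
import math

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    n = math.sqrt(dot(a, a))
    return [x / n for x in a]

# Toy 3-d embeddings (illustrative values only).
vectors = {
    "he":           [1.0, 0.0, 0.3],
    "she":          [-1.0, 0.0, 0.3],
    "receptionist": [-0.6, 0.8, 0.2],
}

# Gender direction: difference of a definitional pair, normalised.
g = norm(sub(vectors["he"], vectors["she"]))

def neutralize(w, direction):
    """Remove the component of w along the bias direction
    (the 'neutralize' step of Bolukbasi et al., 2016)."""
    proj = dot(w, direction)
    return [x - proj * d for x, d in zip(w, direction)]

debiased = neutralize(vectors["receptionist"], g)
# After neutralizing, 'receptionist' has zero component along
# the he-she axis:
print(dot(debiased, g))  # 0.0
```

The full method also includes an ‘equalize’ step so that legitimately gendered pairs (e.g. queen/king) remain equidistant from gender-neutral words; the sketch above shows only the neutralize step.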


Once bias is identified in AI training data or outputs, changes to the algorithms could potentially produce an AI system with far less bias or irrationality than humans exhibit. This could become a societal benefit of AI in future applications.


