We Need a Feminist Approach to AI Development

By Dr. Katelyn Jones, Women, Peace and Security Fellow, Chicago Council on Global Affairs & Nicole Mattea, Women, Peace and Security Intern, Chicago Council on Global Affairs

COVID-19 has sent most of the world into isolation, causing unemployment rates to skyrocket and the global economy to reach unprecedented lows. With over 9 million cases worldwide and no vaccine in sight, healthcare and scientific communities are working to develop Artificial Intelligence (AI) technologies as a means of combating the coronavirus and preparing for future pandemics. China, where the outbreak began, has been using AI and other burgeoning technologies—including thermal scanners and facial recognition software—to track and monitor the virus, search for and apply treatments, and distribute resources. The rest of the world, too, is rushing to develop and adopt new AI technologies as a pandemic response. Russia and Poland, like China, are using facial recognition to enforce quarantines, and international companies like Amazon are using AI to track employees’ social distancing.

While AI is undeniably a promising avenue to monitor disease spread, we know very little about the social and political consequences of using AI to address global public health issues. What we do know from past AI applications, however, is that AI often reinforces racial and gender biases, further excluding and disempowering already marginalized persons. We argue that it is important to investigate AI as a tool to respond to and prevent pandemics, but it is essential to do so with careful attention to how AI can harm less powerful individuals, especially women and people of color. To do so, researchers need to take a feminist approach when developing, analyzing, and applying AI.

Although AI may seem objective, it is subject to human biases. AI has a history of gender bias and racial exclusion, originating in the unrepresentative datasets and research used to train AI systems. A study of facial recognition software reported that such programs have significantly higher error rates for women with dark skin than for men with light skin. AI systems have also been shown to reflect gender stereotyping, with over 67 percent of digital assistants—meant to serve and assist in tasks—presented as female. The individuals designing AI, as well as the data used to create AI algorithms, carry biases that reinforce discriminatory tendencies when left unchecked.
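In practice, surfacing this kind of disparity comes down to disaggregating a model’s errors by demographic group rather than reporting a single overall accuracy. The sketch below is illustrative only, not the method of the study cited above; the DataFrame and its column names (y_true, y_pred, gender, skin_tone) are hypothetical stand-ins.

```python
# A minimal sketch of a disaggregated error audit. All column names
# ("y_true", "y_pred", "gender", "skin_tone") are hypothetical.
import pandas as pd

def error_rates_by_group(df: pd.DataFrame, group_cols: list[str]) -> pd.Series:
    """Misclassification rate for each demographic subgroup."""
    errors = df["y_true"] != df["y_pred"]  # boolean: True where the model erred
    return errors.groupby([df[c] for c in group_cols]).mean()

# Toy data: six predictions, labeled by gender and skin tone.
preds = pd.DataFrame({
    "y_true":    [1, 0, 1, 1, 0, 1],
    "y_pred":    [1, 0, 0, 0, 0, 1],
    "gender":    ["m", "m", "f", "f", "f", "m"],
    "skin_tone": ["light", "light", "dark", "dark", "light", "dark"],
})
print(error_rates_by_group(preds, ["gender", "skin_tone"]))
# A wide gap between subgroups (e.g. dark-skinned women vs. light-skinned
# men) is exactly the kind of disparity the facial recognition study found.
```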

AI’s biases are particularly stark in medical research. In one case, health services company Optum sold an algorithm that excluded more than half of the Black patients who would otherwise have been flagged as needing additional medical care. In 2016, a computer model was created to identify melanoma from images. Researchers soon found, however, that over 95 percent of the 100,000 images used to train the model depicted white skin, problematically underrepresenting people of color. Additionally, medical AI algorithms often omit gender: many are based on U.S. military data, in which women in some cases account for only six percent of those studied. The overwhelming evidence of AI’s gender and racial biases in medicine should be an immediate and pressing concern for researchers and computer scientists using AI, not just for COVID-19 but for future pandemics as well.

Unfortunately, biases are not a frontline concern in some of today’s biggest pushes for AI development. Consider C3.ai DTI, a consortium of six leading research universities, AI software provider C3.ai, and Microsoft. Launched in March 2020, C3.ai DTI aims to develop new AI technology as a means of addressing present and future pandemics. With over $300 million in contributions, the consortium is calling for proposals on how AI can be used as a tool for the prevention, understanding, and treatment of COVID-19. C3.ai DTI’s call for proposals, though, makes no mention of how researchers are expected to address AI’s potential exclusion of individuals’ identities or its likelihood of reinforcing existing inequities. Despite the overwhelming evidence that AI is often discriminatory, current AI efforts to address pandemics do not actively work to correct biases or mitigate negative consequences.

This failure to address identity is alarming given the ways that COVID-19 has already disproportionately and negatively affected marginalized persons, especially women and people of color. To ensure that AI mitigates pandemics’ health consequences, and to prevent AI from exacerbating extant gender and racial divides, new AI developments must ensure that the data used are representative of the entire population. Moreover, researchers must carefully examine the potential negative sociopolitical consequences of applying their technologies.

To avoid both biased technologies and worsening social inequities, AI researchers would do well to take a feminist approach. This means going beyond hiring more women, which is undoubtedly helpful, to changing how data are collected and how AI is tested and applied. Data must be gender-disaggregated so that decision-makers can act to address the needs of those at heightened risk of infection and mortality. Further, researchers need to consider how multiple axes of identity intersect and shape an individual’s interactions with systems of oppression. For example, a woman of color’s experience with AI will differ from a white woman’s, and analyses must take an intersectional perspective to capture these variations. Researchers must ask questions like: Where are the women in this dataset? Where are the people of color? How will individuals already affected by longstanding inequities be impacted by these technologies? How, for example, might facial recognition software affect individuals’ daily sense of (in)security?
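One concrete way to start answering these questions is a representation audit: counting every intersectional subgroup in a dataset before any model is trained. The sketch below is a hedged example rather than a prescribed method; the column names (gender, race) are hypothetical placeholders for whatever demographic fields a real dataset actually records.

```python
# A minimal sketch of an intersectional representation audit.
# Column names ("gender", "race") are hypothetical placeholders.
import pandas as pd

def representation_report(df: pd.DataFrame, axes: list[str]) -> pd.DataFrame:
    """Count and share of each intersectional subgroup (e.g. gender x race)."""
    counts = df.groupby(axes).size().rename("n").reset_index()
    counts["share"] = counts["n"] / counts["n"].sum()
    return counts.sort_values("share")  # smallest (most excluded) groups first

# Toy data standing in for, say, a clinical training set.
patients = pd.DataFrame({
    "gender": ["f", "m", "m", "m", "f", "m"],
    "race":   ["black", "white", "white", "black", "white", "white"],
})
# Subgroups whose share is near zero answer "Where are the women?
# Where are the people of color?" before a model ever reaches them.
print(representation_report(patients, ["gender", "race"]))
```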

If AI is created and applied without critical attention to its biases, existing inequities will be exacerbated, and women and people of color will suffer. We need a feminist approach to develop effective AI.
