
Don't Dream It, Do It: Mitigating Bias in Generative AI Technology

Lindsay Pierce

Updated: Jun 9, 2023

The advent of generative artificial intelligence (AI) applications has fundamentally transformed the landscape of what is possible, creating new destinations and goalposts in innovation. We can see the power of these possibilities in ChatGPT and text-to-image platforms like Midjourney.

[Image: a woman dancing through an abstract landscape of colors]
Midjourney (V3) collaborative artwork - Lindsay Pierce 2022

But accompanying the surge of enthusiasm for AI solutions are deep-rooted public fears and questions about the ethical nuances of using these applications. Just last week, Sam Altman, daddy-o of OpenAI, testified before Congress (transcript) about the potential and the pitfalls of the rapid growth and evolution of AI technology. In that same meeting, Congress members and Altman discussed what safeguards "responsible" development may ultimately require to mitigate the harms inherent in scalable neural networks. They discussed the mounting (and valid) concerns about the transmission and proliferation of bias, which arise because these systems autonomously repackage and present "new" data from existing data—data rife with the prejudices, biases, and stereotypes present throughout the world. These concerns are exacerbated by the reality that the sheer complexity of the algorithms used in generative AI applications means most of us will never truly understand the specifics of machine-learning processes. As Roman Yampolskiy states in the 2019 literature review "Unexplainability and Incomprehensibility of Artificial Intelligence,"


“The more accurate is the explanation the less comprehensible it is, and vice versa, the more comprehensible the explanation the less accurate it is. A non-trivial explanation can’t be both accurate and understandable, but it can be inaccurate and comprehensible.”

Chilling, but chillier still? Emergent, spontaneous behaviors have already surfaced in human-AI collaboration, as seen in user testing with Bing's amorous (or downright abusive) chatbot (Roose 2023). Generative AI is being used in a variety of domains—including natural language processing, image and video generation, and autonomous decision-making—but developers and engineers can't always anticipate all the ways these programs will react or behave. Think: entertainment and media, but also commerce and healthcare.


Going back to the issue of bias, however: these programs can develop systemic (and scalable) "opinions" formed from what they have learned from us. We can also anticipate that AI will continue to be integrated into more and more of our vital systems, so the exponential potential for the transmission of social bias can have a significant impact on the results generated. And these impacts aren't just leading to unfair assumptions or inaccurate decision-making. Because what do we mean when we say "results"? Often the answer is "people." Human beings.


These impacts are especially salient for marginalized and disenfranchised communities, who are so often left out of the conversations happening in the silos of the tech industry. Much can be said about exactly how this bias gets baked in and how we can identify it—but that's a subject for another post. At FIERCE, we're interested in generating solutions and strategies to mitigate the pervasive bias we already know exists.


One such example was captured in my own work at @aigaydar, a comparative analysis project on Instagram exploring how gender identity and sexual orientation are portrayed in Midjourney (v3) and NightCafe (two generative "text-to-image" programs). In this instance, the word "intersex" is banned outright on Midjourney, with potential impact for anywhere from 0.018% to 1.7% (Sax 2002) of the world's population, who may discover their medical and lived experience has been deemed somehow inappropriate.


This list of strategies and actions is far from exhaustive, but could be considered the lowest bar developers and engineers should seek to clear:

  • Bake anti-oppressive and anti-racist strategies into AI design from the beginning. Proactively discuss the reality of bias and prejudice proliferation, and establish a code of ethics from the get-go.

  • Don’t just talk about it. Commit to developing comprehensive strategies for reducing harm, which should include developing direct relationships with and hiring from diverse and intersectional communities.

  • Proactively seek out feedback and impact statements from marginalized entities, communities, and individuals to better understand the ecosystem created through the interpretation of data and the implementation of those interpretations.

  • Take responsibility for impact by investing (see: “money-meet-mouth”) in comprehensive consultation, training, and advisement from entities versed in anti-oppression principles and tenets of responsible AI.

These strategies also point to the need for ongoing research on how to reduce social bias in generative AI applications. Beyond the development of the algorithms themselves (for which I am not an expert), the interdisciplinary nature of AI technology calls for more qualitative research, especially regarding the real-world outcomes of generative AI use and implementation. Hearing directly from individuals and communities who are not incentivized by the competitive goalposts of the tech industry will be vital in accurately capturing the real-world benefits of responsible AI, and the detriments of AI created without humanity at its core.



Citations

Roose, K. (2023, February 27). A Conversation With Bing’s Chatbot Left Me Deeply Unsettled. The New York Times. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html


Pierce, Lindsay. [@aigaydar]. (2022, September 13). “NightCafe prompt: Intersex student..." [Image]. Instagram. https://www.instagram.com/p/CidKyVIJW0W/


Sax, L. (2002). How common is intersex? A response to Anne Fausto-Sterling. Journal of Sex Research, 39(3), 174–178. https://doi.org/10.1080/00224490209552139

