From the course: Generative AI Skills for Creative Content: Opportunities, Issues, and Ethics

Bias issues

- Let's explore the potential for bias in the world of generative AI. When I talk about bias in this way, I'm talking about the generation of content that contains prejudice in favor of or against one thing, person, or group compared with another. This can result in content that is offensive, harmful, inaccurate, or misleading. For a taste of this, take a look at this Midjourney image generated from the prompt "a photo of three lawyers." The result: all white men. Here's another prompt: "a photo of three laborers." The result: mostly people of color.

How does this happen? Well, what do we know about how generative AI models are trained? They're fed a plethora of training data. That training data was created by humans, and unfortunately, human-created work contains bias. It could include all sorts of prejudicial text. It could include imagery that is racist, sexist, or classist. It might include inaccurate information, or opinions disguised as truths culled from internet forums. It could also simply consist of data that skews toward a particular point of view or even a particular culture. For example, the most popular AI tools have been trained by Western companies using datasets that skew toward Western culture. This article points out that people in different cultures smile differently. A broad, toothy smile might seem warm and friendly in the United States, but come across as awkward, deceptive, or just wrong in other countries or contexts. And as the old saying goes: garbage in, garbage out. Simply put, if the training data is biased, the AI's output may be biased too.

And remember, AI isn't human. It has no understanding that the content it creates might contain stereotypes or harmful depictions. It doesn't realize the answers it provides might skew toward a particular perspective or simply be completely misinformed. And by the time you're entering a prompt, there are no humans overseeing the result. This leaves people with assets that should not be taken at face value. As you can see, many of these AI models provide warnings about exactly this. ChatGPT says it may produce inaccurate information about people, places, or facts. Bard says it may display inaccurate or offensive information.

There are a few things that could address these bias issues. The first lies at the phase where the training data is introduced to the AI model. Human overseers could try to make the data as diverse and unbiased as possible. But then again, are those human overseers completely objective and unbiased? Maybe not. Another approach is to implement bias detection and mitigation models, which attempt to identify and remove biased content at the output stage, as sketched below. Unfortunately, you don't have much control over either of these options, since they depend on measures taken by those responsible for designing the AI model.

What you do have control over is your own awareness of the potential for bias. You have control not to take AI-generated material at face value. If a piece of AI-generated content doesn't meet your standards for diversity, for example, you can redo it or modify it until it does. And you have control over your dedication to well-rounded research. All of these actions are ways that you can commit to responsible use of generative AI.
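
To make the output-stage approach concrete, here is a minimal sketch in Python. This is not any vendor's actual pipeline: generate() is a hypothetical stand-in for a call to a text-generation model, and looks_biased() is a toy keyword check standing in for the trained classifier a real mitigation model would use.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g., an API request).
    return f"sample output for: {prompt}"

def looks_biased(text: str) -> bool:
    # Toy detector: a real mitigation model would score outputs with a
    # trained classifier, not a keyword list like this one.
    flagged_terms = ("only men", "only women", "all one group")
    lowered = text.lower()
    return any(term in lowered for term in flagged_terms)

def generate_with_mitigation(prompt: str, max_retries: int = 3) -> str:
    # Regenerate until the output passes the bias check or retries run out.
    for _ in range(max_retries):
        candidate = generate(prompt)
        if not looks_biased(candidate):
            return candidate
    # Withhold the output rather than return text that kept failing the check.
    return "[output withheld: bias check failed after retries]"

print(generate_with_mitigation("a photo of three lawyers"))

The design point to notice is where the gate sits: after generation, before the asset ever reaches the user. That placement is exactly why, as an end user, you have little control over this kind of mitigation.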
