From the course: Generative AI Skills for Creative Content: Opportunities, Issues, and Ethics

Ethics of generative AI

- So far, we've spent most of the course talking about the many benefits and uses of generative AI for creative professionals. Now we'll shift to explore some of the issues and ethics of this powerful technology. I'd like to start by talking broadly about ethics. It's a huge part of the discussion, because when you're creating brand new content, there are serious ethical considerations in designing and implementing the AI model. As you can see, if you visit the websites for many of the technologies that we've discussed in this course, most of them have a dedicated ethics page or content policy statement. Responsible AI developers must commit to a strong ethical stance. Now, we obviously won't be able to go through all of them, but I'll show you a representative statement in each category: text, image, video, and audio, to give you a sense of some of these commitments. First, text. Here's a look at ChatGPT's ethical guidelines. As you can see, it strongly stresses the importance of user privacy, fairness and impartiality, transparency, safety, and responsibility. If I ask it to write a phishing email, for example, you can see that it refuses the request and explains why it can't ethically do so. Next, images. Here are DALL-E's content policies on preventing harmful generations and curbing misuse. It won't create violent, hateful, adult, or political content. It ensures this by manually removing the most explicit content from the training data, by disallowing certain words and names in prompts, and by implementing human monitoring systems. If I ask it to create images of a gunfight, for example, you'll notice that there isn't a gun in sight, nor any obvious violent connotations within the imagery at all, no matter how many times I rerun the prompt. Now, video. Rephrase's ethics policy stresses the importance of consent for any digital avatar they create, as well as strict access standards. They forbid the use of their tech for offensive content and stress the importance of transparency and education. And audio. Respeecher won't allow deceptive uses of the tech. It requires written consent from the people whose voices are being used for training. It also stresses the importance of education, synthetic speech detection, and collaboration with social media platforms to limit or ban unethical content. Responsible AI is serious stuff. After all, with great power comes great responsibility. These types of ethical commitments are commendable, but they aren't perfect. Neither is the technology itself, the training data that informs it, or the people who use it. There are many points during the process where issues can arise. We'll spend the rest of this chapter talking about what some of these are.