From the course: Tech On the Go: Ethics in AI

Fairness and inclusivity

- [Instructor] When we don't build AI responsibly, we can run into a whole host of issues, including systems that treat people unfairly. When systems unfairly extend or withhold opportunities, resources, or information, we call these allocation harms. An AI system should also work as well for one person as it does for another, even when no opportunities are withheld; when it doesn't, we call these quality-of-service harms. Harms of representation can arise when AI systems over-represent, under-represent, or even erase entire groups of people.

To create fairer models, we have to understand that fairness is not a zero-sum game. Prioritizing fairness is an ongoing effort that requires us to make imperfect trade-offs and to specify who we're working toward fairness for. An AI system cannot be fair for all groups at once, so we typically frame fairness in terms of fairness-related harms as a mechanism to prioritize the most harmed groups and reduce unfairness for them.

I want to introduce you to a checkbox framework to better understand the perspectives of the people most likely to be harmed by these algorithms. When you're filling out a form, whether at the doctor's office or when applying for a job, how often do you have to check "other"? From a data perspective, it's rare that we have enough resolution about groups in the "other" category to properly investigate whether they're represented enough in a dataset. With this in mind, recognize that each sensitive attribute where you're represented, where you're able to check an available box rather than "other", is an axis of prioritization. The form creators, whether employers or web designers, prioritized your attribute enough to represent you on the form. For those who are not prioritized this way, or who are subject to unique harms because of their intersectionality, we must work from a counterfactual mindset: those who have received the least priority should be prioritized when mitigating the harms of technology.

For example, in facial recognition, the Gender Shades project showed that Black women experienced far lower quality of service than white men. In many fairness contexts, we look at factors like gender or race, but rarely both. Without this investigation, we never recognize that overlapping groups may deal with specific harms that are unidentifiable without looking at groups in combination, as the sketch at the end of this section shows.

So, what do we do to build inclusively? We ought to create technology that includes as many people as possible. When we consider products like facial recognition gender classifiers, over 90% of them predict on binary gender alone, often excluding people who are non-binary or gender non-conforming. We can't build ML systems that are robust when we exclude various identities from being represented in our data. For systems like gender classifiers, we should also ask why we need a tool that tries to predict gender in the first place. To be clear, AI systems cannot identify unobservable concepts such as emotions, gender, or race; they only use observable features, such as facial muscle movement, hair length, or skin tone, to make predictions.

Cameras were optimized using Shirley Cards to enhance the appearance of people with light skin specifically.
And while camera makers did make some advancements, this wasn't because of complaints from Black camera owners, but because of complaints from chocolatiers and furniture makers. When we consider inclusivity, it shouldn't just be about who's represented, but also about who is listened to and how we advance our software using their feedback.
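
The intersectional investigation mentioned above is often done with disaggregated evaluation: computing a model's metrics separately for each combination of sensitive attributes rather than one attribute at a time. Below is a minimal sketch in Python; the column names (gender, skin_type, y_true, y_pred) and the toy data are hypothetical placeholders, not anything from the course or the Gender Shades study.

```python
# Minimal sketch of disaggregated (intersectional) evaluation.
# Assumes a DataFrame with hypothetical columns: "gender", "skin_type",
# "y_true" (labels), and "y_pred" (model predictions).
import pandas as pd

def subgroup_accuracy(results: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Accuracy and sample count for every combination of the given group columns."""
    correct = results["y_true"] == results["y_pred"]
    return (
        results.assign(correct=correct)
        .groupby(group_cols, dropna=False)["correct"]
        .agg(accuracy="mean", n="size")
        .reset_index()
    )

# Toy predictions, only to show the shape of the output.
results = pd.DataFrame({
    "gender":    ["woman", "woman", "man", "man", "woman", "man"],
    "skin_type": ["darker", "darker", "lighter", "lighter", "lighter", "darker"],
    "y_true":    [1, 1, 0, 1, 0, 1],
    "y_pred":    [0, 1, 0, 1, 0, 1],
})

# Single-axis views can hide gaps that only appear at the intersections.
print(subgroup_accuracy(results, ["gender"]))
print(subgroup_accuracy(results, ["gender", "skin_type"]))
```

The point of the second call is that a metric averaged over one attribute (gender alone) can look acceptable while a specific intersection (for example, one gender and skin type together) performs much worse, which is exactly the kind of quality-of-service gap a single-axis analysis would miss.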
