From the course: Introduction to AI Governance

Overview of AI governance

- [Presenter] Generative AI has taken everyone by surprise and shown immense potential in comprehending language like never before. It has become a high-stakes topic in boardroom discussions and is set to disrupt many of the ways businesses operate. In fact, most conferences these days center their agendas around a diverse range of generative AI topics to match the excitement around this trending technology. I am as excited as you are. I look forward to leveraging it across industries, especially in providing personalized healthcare and generating insights from complex domains like finance and legal. But while we see those benefits materialize, it also poses certain inadvertent risks that we must not overlook. You must be thinking, "Are these risks new because of generative AI?" Not really. Some of the risks persist from the pre-generative AI era, such as inheriting, or at times amplifying, bias from the training data; violating privacy by exposing sensitive user information without consent; a lack of transparency in AI's decision-making that impedes trust in its findings and subsequently creates a barrier to its adoption; and the notion of AI bringing efficiencies, resulting in a loss of jobs for roles requiring basic cognitive and manual skills, like retail salespersons and customer service. With generative AI that can create new data, additional risks emerge that require urgent attention. Let us first focus on how convincingly generative AI can create fake content, be it in the form of text, audio, images, or videos. While this capability proves effective in certain sectors like education, retail, fashion, and entertainment, these deepfakes make it difficult to identify what is real and are often used for nefarious purposes such as fooling people, cyber attacks, manipulating opinions, and more. Such false information can lead to severe consequences for society, fueling misinformation and disinformation. Further, the content-generation ability of generative AI models is powered by data from almost all of the internet, leading to the possibility that their output could closely resemble existing work. That makes me wonder: if all of us are fractional contributors to a generative model's training data, then who owns the intellectual property rights to such content? As we have just seen, generative AI applications present unique risks and challenges, which necessitate a more comprehensive approach to regulate and govern advanced AI systems. That's exactly what AI governance seeks to do. However, most people think of governance as restrictive. When I'm consulting with clients, their first impression of governance is that it hinders business growth. I feel that sentiment comes from the need to ship products faster, driven by a focus on immediate priorities. At a broader level, governance in fact allows us to foster innovation by harnessing the upside of AI while effectively managing its potential risks. Speaking of risks, let us learn from historical examples where AI went wrong and how we could have solved it.
