From the course: Prompt Engineering with ChatGPT

Hallucinations

- [Instructor] Large language models such as ChatGPT are very impressive. They can, however, be inaccurate and sometimes even make things up. It's often said that models such as ChatGPT are 90 or 97% accurate but 100% confident. This notion stems from the fact that, on rare occasions, ChatGPT can state something false very, very confidently. Now, to mitigate the impact of hallucinations, there are a few things we can do. Always verify ChatGPT's output. Do not use any code output that you do not understand or have not reviewed. Keep humans in the loop: when integrating ChatGPT into a system, always make sure there's a human in the loop checking outputs, as in the sketch below. And whenever there's a severe hallucination or inaccuracy, it's a good idea to document it, perhaps in a shared document. In this way, we can share knowledge about these shortcomings of ChatGPT.
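As a minimal sketch of what a human-in-the-loop gate might look like in practice, here is one possible Python shape (this is not from the course; `ask_model` is a hypothetical stub standing in for a real ChatGPT API call, and the log file name is an assumption). The idea is simply that no model output reaches downstream use without explicit human sign-off, and rejected outputs are documented for the team:

```python
def ask_model(prompt: str) -> str:
    """Hypothetical stub standing in for a real ChatGPT API call."""
    return f"Model answer to: {prompt!r}"

def human_approves(output: str) -> bool:
    """Show the model's output to a human reviewer and ask for sign-off."""
    print("--- Model output ---")
    print(output)
    answer = input("Approve this output? [y/N] ")
    return answer.strip().lower() == "y"

def log_hallucination(prompt: str, output: str,
                      path: str = "hallucinations.log") -> None:
    """Document rejected outputs so the team can share known failure modes."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(f"PROMPT: {prompt}\nOUTPUT: {output}\n---\n")

def run_with_review(prompt: str) -> str | None:
    """Release model output only after a human has explicitly approved it."""
    output = ask_model(prompt)
    if human_approves(output):
        return output
    log_hallucination(prompt, output)
    return None
```

In a real system the review step might be a ticket queue or a UI rather than a terminal prompt, but the gating logic stays the same: approved outputs flow onward, rejected ones are logged instead of used.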
