From the course: Prompt Engineering with ChatGPT

How ChatGPT works

- [Instructor] ChatGPT is powered by a large language model, and at their core, these text-to-text models are good at predicting the next token. You can think of a token as a small unit of language, such as a word, and a longer word may be split into a few tokens. Let's take an example where we give a language model a prompt such as, "I try to learn something new." A large language model has trained on lots and lots of text, so it can estimate the likelihood of different continuations. With the prompt "I try to learn something new," the next token could be "every," continuing into "every day." It could be a brand-new line, it could be "each," or it could be "other." The model runs this raffle, so to speak, and in this case it lands on "every day." Notice that "every day" was not the most likely continuation, but at 17%, it still had a good chance of being picked.

Now, ChatGPT is particularly impressive when it comes to following instructions and holding conversations, and that is due to a fine-tuning process called reinforcement learning from human feedback (RLHF). This fine-tuning technique involves human feedback and helps produce more desirable results when you give instructions with few or no examples and when you build conversational chatbots.
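To make that "raffle" concrete, here is a minimal sketch in Python. The prompt, the candidate tokens, and their probabilities are illustrative stand-ins (only the 17% figure for "every day" comes from the example above); a real model computes a distribution over its entire vocabulary from its learned weights.

import random

# Toy next-token distribution for the prompt below.
# These probabilities are illustrative; a real language model
# computes them over its whole vocabulary from its weights.
next_token_probs = {
    "every": 0.26,
    "\n": 0.21,         # a brand-new line
    "each": 0.19,
    "every day": 0.17,  # the continuation picked in the example
    "other": 0.17,
}

prompt = "I try to learn something new"

# Run the "raffle": sample the next token in proportion to its
# probability instead of always taking the single most likely one.
candidates = list(next_token_probs.keys())
weights = list(next_token_probs.values())
next_token = random.choices(candidates, weights=weights, k=1)[0]

print(prompt, next_token)

Running this script repeatedly gives different continuations, which is one reason the same prompt can produce different responses each time.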
