add chain of thought prompting example (#164)
* add chain of thought prompting example

* update chain of thought prompting example

* Update notebook and reformat
shilpakancharla committed Jun 11, 2024
1 parent 94cbc27 commit 069296b
Showing 1 changed file with 226 additions and 0 deletions.
226 changes: 226 additions & 0 deletions examples/prompting/Chain_of_thought_prompting.ipynb
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "fzRFGWxAsTm2"
},
"source": [
"##### Copyright 2024 Google LLC."
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"cellView": "form",
"id": "Y0nQsAf2sSfs"
},
"outputs": [],
"source": [
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "sP8PQnz1QrcF"
},
"source": [
"# Gemini API: Chain of thought prompting"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "bxGr_x3MRA0z"
},
"source": [
"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
" <td>\n",
" <a target=\"_blank\" href=\"https://colab.research.google.com/github/google-gemini/cookbook/blob/main/examples/prompting/Chain_of_thought_prompting.ipynb\"><img src = \"https://www.tensorflow.org/images/colab_logo_32px.png\"/>Run in Google Colab</a>\n",
" </td>\n",
"</table>"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ysy--KfNRrCq"
},
"source": [
"Using chain of thought helps the LLM take a logical and arithmetic approach. Instead of outputting the answer immediately, the LLM uses smaller and easier steps to get to the answer."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"id": "Ne-3gnXqR0hI"
},
"outputs": [],
"source": [
"!pip install -U -q google-generativeai"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"id": "EconMHePQHGw"
},
"outputs": [],
"source": [
"import google.generativeai as genai\n",
"\n",
"from IPython.display import Markdown"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eomJzCa6lb90"
},
"source": [
"## Configure your API key\n",
"\n",
"To run the following cell, your API key must be stored it in a Colab Secret named `GOOGLE_API_KEY`. If you don't already have an API key, or you're not sure how to create a Colab Secret, see [Authentication](https://github.com/google-gemini/cookbook/blob/main/quickstarts/Authentication.ipynb) for an example."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"id": "v-JZzORUpVR2"
},
"outputs": [],
"source": [
"from google.colab import userdata\n",
"GOOGLE_API_KEY=userdata.get('GOOGLE_API_KEY')\n",
"\n",
"genai.configure(api_key=GOOGLE_API_KEY)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "LlRYq9JSeRzR"
},
"source": [
"## Example\n",
"\n",
"Sometimes LLMs can return non-satisfactory answers. To simulate that behavior, you can implement a phrase like \"Return the answer immediately\" in your prompt.\n",
"\n",
"Without this, the model sometimes uses chain of thought by itself, but it is inconsistent and does not always result in the correct answer."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"id": "8oS9LnnXXedG"
},
"outputs": [],
"source": [
"model = genai.GenerativeModel(model_name='gemini-1.5-flash-latest')"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"id": "0u1IjBOmeQgG"
},
"outputs": [
{
"data": {
"text/markdown": "5 minutes \n",
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"execution_count": 13,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = \"\"\"\n",
"5 people can create 5 donuts every 5 minutes. How much time would it take 25 people to make 100 donuts?\n",
"Return the answer immediately.\n",
"\"\"\"\n",
"Markdown(model.generate_content(prompt).text)"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "kCFJ2dvBubSm"
},
"source": [
"To influence this you can implement chain of thought into your prompt and look at the difference in the response. Note the multiple steps within the prompt."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"id": "HdMNHASyeWoK"
},
"outputs": [
{
"data": {
"text/markdown": "Here's how to solve the donut problem:\n\n**1. Find the individual production rate:**\n\n* If 5 people make 5 donuts in 5 minutes, that means one person makes one donut in 5 minutes.\n\n**2. Calculate the total production rate:**\n\n* With 25 people, and each person making one donut every 5 minutes, they would make 25 donuts every 5 minutes.\n\n**3. Determine the time to make 100 donuts:**\n\n* Since they make 25 donuts every 5 minutes, it will take 4 sets of 5 minutes to make 100 donuts (100 donuts / 25 donuts per 5 minutes = 4 sets).\n\n**Answer:** It would take 25 people **20 minutes** (4 sets x 5 minutes per set) to make 100 donuts. \n",
"text/plain": [
"<IPython.core.display.Markdown object>"
]
},
"execution_count": 14,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"prompt = \"\"\"\n",
"Question: 11 factories can make 22 cars per hour. How much time would it take 22 factories to make 88 cars?\n",
"Answer: A factory can make 22/11=2 cars per hour. 22 factories can make 22*2=44 cars per hour. Making 88 cars would take 88/44=2 hours. The answer is 2 hours.\n",
"Question: 5 people can create 5 donuts every 5 minutes. How much time would it take 25 people to make 100 donuts?\n",
"Answer:\"\"\"\n",
"Markdown(model.generate_content(prompt).text)"
]
},
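{
"cell_type": "markdown",
"metadata": {},
"source": [
"The prompt above uses a worked answer as a single demonstration. A common variation is zero-shot chain of thought: instead of showing a worked example, you ask the model to reason step by step before answering. The cell below is a minimal sketch of that variation using the same model; the exact wording of the instruction is just one possible choice."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Zero-shot chain of thought (illustrative sketch): ask the model to reason\n",
"# step by step instead of providing a worked example. The instruction wording\n",
"# below is one possible choice.\n",
"prompt = \"\"\"\n",
"5 people can create 5 donuts every 5 minutes. How much time would it take 25 people to make 100 donuts?\n",
"Think through the problem step by step, then state the final answer.\n",
"\"\"\"\n",
"Markdown(model.generate_content(prompt).text)"
]
},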
{
"cell_type": "markdown",
"metadata": {
"id": "w_K9Sn3Yu9eL"
},
"source": [
"## Next steps\n",
"\n",
"Be sure to explore other examples of prompting in the repository. Try writing prompts about classifying your own data, or try some of the other prompting techniques such as few-shot prompting."
]
}
],
"metadata": {
"colab": {
"name": "Chain_of_thought_prompting.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 0
}
