This guide explains how to migrate your Python code from the PaLM API to the Gemini API. You can generate both text and multi-turn (chat) conversations with Gemini, but be sure to verify the responses, since they may differ from PaLM's output.
Summary of API differences
Method names have changed. Instead of separate methods for generating text and chat, a single generate_content method handles both.
Chat has a start_chat helper method that simplifies chatting.
Instead of standalone functions, the new APIs are methods on the GenerativeModel class.
The output response structure has changed.
The safety setting categories have changed. See the safety settings guide for details.
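To illustrate the changed response structure, here is a sketch using a plain-dict mock of the nested candidates → content → parts shape. The mock and helper are illustrative only (not part of the SDK); the real SDK exposes the same data as attributes and offers the `response.text` shortcut used throughout this guide.

```python
# Illustrative only: a plain-dict mock of the new nested response shape
# (candidates -> content -> parts). Not part of the SDK.
mock_response = {
    "candidates": [
        {"content": {"role": "model",
                     "parts": [{"text": "The opposite of hot is "},
                               {"text": "cold."}]}}
    ]
}

def first_candidate_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate."""
    parts = response["candidates"][0]["content"]["parts"]
    return "".join(part["text"] for part in parts)

print(first_candidate_text(mock_response))
# The opposite of hot is cold.
```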
Text generation: basic
PaLM
Gemini
pip install google-generativeai
import google.generativeai as palm
import os
palm.configure(
    api_key=os.environ['API_KEY'])
response = palm.generate_text(
    prompt="The opposite of hot is")
print(response.result)  # 'cold.'
pip install google-generativeai
import google.generativeai as genai
import os
genai.configure(
    api_key=os.environ['API_KEY'])
model = genai.GenerativeModel(
    model_name='gemini-pro')
response = model.generate_content(
    'The opposite of hot is')
print(response.text)
# The opposite of hot is cold.
Text generation: optional parameters
PaLM
Gemini
pip install google-generativeai
import google.generativeai as palm
import os
palm.configure(
    api_key=os.environ['API_KEY'])
prompt = """
You are an expert at solving word
problems.
Solve the following problem:
I have three houses, each with three
cats. Each cat owns 4 mittens, and a hat.
Each mitten was knit from 7m of yarn,
each hat from 4m. How much yarn was
needed to make all the items?
Think about it step by step, and show
your work.
"""
model = 'models/text-bison-001'  # PaLM text model
completion = palm.generate_text(
    model=model,
    prompt=prompt,
    temperature=0,
    # The maximum length of the response
    max_output_tokens=800,
)
print(completion.result)
pip install google-generativeai
import google.generativeai as genai
import os
genai.configure(
    api_key=os.environ['API_KEY'])
model = genai.GenerativeModel(
    model_name='gemini-pro')
prompt = """
You are an expert at solving word
problems.
Solve the following problem:
I have three houses, each with three
cats. Each cat owns 4 mittens, and a hat.
Each mitten was knit from 7m of yarn,
each hat from 4m. How much yarn was
needed to make all the items?
Think about it step by step, and show
your work.
"""
completion = model.generate_content(
    prompt,
    generation_config={
        'temperature': 0,
        'max_output_tokens': 800,
    }
)
print(completion.text)
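Since generation_config is passed as a plain dict, typos or out-of-range values are easy to introduce during migration. Here is an illustrative helper (not part of the SDK) that sanity-checks a config dict before sending it; the key names mirror the ones used above, and the value ranges are assumptions for illustration.

```python
# Illustrative helper (not part of the SDK): sanity-check a generation_config
# dict before passing it to generate_content. Value bounds are assumptions.
def check_generation_config(config: dict) -> dict:
    allowed = {"temperature", "max_output_tokens", "top_p", "top_k",
               "candidate_count", "stop_sequences"}
    unknown = set(config) - allowed
    if unknown:
        raise ValueError(f"unknown keys: {sorted(unknown)}")
    temp = config.get("temperature")
    if temp is not None and not (0.0 <= temp <= 1.0):
        raise ValueError("temperature must be between 0.0 and 1.0")
    tokens = config.get("max_output_tokens")
    if tokens is not None and tokens < 1:
        raise ValueError("max_output_tokens must be positive")
    return config

check_generation_config({'temperature': 0, 'max_output_tokens': 800})  # ok
```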
Chat: basic
PaLM
Gemini
pip install google-generativeai
import google.generativeai as palm
import os
palm.configure(
    api_key=os.environ['API_KEY'])
chat = palm.chat(
    messages=["Hello."])
print(chat.last)
# 'Hello! What can I help you with?'
chat = chat.reply(
    "Just chillin'")
print(chat.last)
# 'That's great! ...'
pip install google-generativeai
import google.generativeai as genai
import os
genai.configure(
    api_key=os.environ['API_KEY'])
model = genai.GenerativeModel(
    model_name='gemini-pro')
chat = model.start_chat()
response = chat.send_message(
    "Hello.")
print(response.text)
response = chat.send_message(
    "Just chillin'")
print(response.text)
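Unlike palm.chat, which returns a new chat object on each reply, the start_chat helper keeps one object that accumulates the conversation. Here is a toy, SDK-free sketch of that bookkeeping: each send_message appends the user turn and the model reply to a shared history (the `respond` callable stands in for the real network call).

```python
# Illustrative only: a toy sketch of the bookkeeping a chat helper does --
# appending each user message and model reply to a shared history.
class ToyChat:
    def __init__(self, respond):
        self.history = []       # list of {'role': ..., 'text': ...} turns
        self._respond = respond  # stand-in for the real model call

    def send_message(self, text):
        self.history.append({"role": "user", "text": text})
        reply = self._respond(text)
        self.history.append({"role": "model", "text": reply})
        return reply

chat = ToyChat(respond=lambda msg: f"You said: {msg}")
chat.send_message("Hello.")
print(len(chat.history))  # 2 turns: one user, one model
```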
Chat: conversation history
PaLM
Gemini
chat.messages
[{'author': '0', 'content': 'Hello'},
 {'author': '1', 'content': 'Hello! How can I help you today?'},
 {'author': '0', 'content': "Just chillin'"},
 {'author': '1',
  'content': "That's great! I'm glad you're able to relax and take some time for yourself. What are you up to today?"}]
chat.history
[parts {
   text: "Hello."
 }
 role: "user",
 parts {
   text: "Greetings! How may I assist you today?"
 }
 role: "model",
 parts {
   text: "Just chillin'"
 }
 role: "user",
 parts {
   text: "That's great! I'm glad to hear you're having a relaxing time.
   May I offer you any virtual entertainment or assistance? I can provide
   you with music recommendations, play games with you, or engage in a
   friendly conversation.\n\nAdditionally, I'm capable of generating
   creative content, such as poems, stories, or even song lyrics.
   If you'd like, I can surprise you with something unique.\n\nJust
   let me know what you're in the mood for, and I'll be happy to oblige."
 }
 role: "model"]
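If you need to carry an existing PaLM conversation over, the two history shapes can be converted mechanically. The helper below is an illustrative migration sketch (not part of either SDK); the author-to-role mapping ('0' = user, '1' = model) is an assumption based on the outputs shown above.

```python
# Illustrative migration helper (not part of either SDK): convert PaLM-style
# chat messages into the role/parts shape used by Gemini chat history.
# The author-to-role mapping is an assumption for this sketch.
def palm_messages_to_gemini_history(messages):
    role_for_author = {"0": "user", "1": "model"}
    return [{"role": role_for_author[m["author"]],
             "parts": [{"text": m["content"]}]}
            for m in messages]

history = palm_messages_to_gemini_history(
    [{"author": "0", "content": "Hello"},
     {"author": "1", "content": "Hello! How can I help you today?"}])
print(history[0]["role"])  # user
```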
Chat: temperature
PaLM
Gemini
# Setting temperature=1 usually produces more zany responses!
chat = palm.chat(messages="What should I eat for dinner tonight? List a few options", temperature=1)
chat.last
'Here are a few ideas ...'
model = genai.GenerativeModel(model_name='gemini-pro')
chat = model.start_chat()
# Setting temperature=1 usually produces more zany responses!
response = chat.send_message(
    "What should I eat for dinner tonight? List a few options",
    generation_config={
        'temperature': 1.0,
    })
print(response.text)
'1. Grilled Salmon with Roasted Vegetables: ...'