OpenAI

caution

You are currently on a page documenting the use of OpenAI text completion models. The latest and most popular OpenAI models are chat completion models.

Unless you are specifically using gpt-3.5-turbo-instruct, you are probably looking for the ChatOpenAI page instead.

OpenAI offers a spectrum of models with different levels of power suitable for different tasks.

This example goes over how to use LangChain to interact with OpenAI models.

Overview

Integration details

Class  | Package          | Local | Serializable | JS support | Package downloads | Package latest
OpenAI | langchain-openai |       | beta         |            | PyPI - Downloads  | PyPI - Version

Setup

To access OpenAI models you'll need to create an OpenAI account, get an API key, and install the langchain-openai integration package.

Credentials

Head to https://platform.openai.com to sign up for an OpenAI account and generate an API key. Once you've done this, set the OPENAI_API_KEY environment variable:

import getpass
import os

if "OPENAI_API_KEY" not in os.environ:
    os.environ["OPENAI_API_KEY"] = getpass.getpass("Enter your OpenAI API key: ")

To get automated, best-in-class tracing of your model calls, you can also set your LangSmith API key by uncommenting the lines below:

# os.environ["LANGSMITH_API_KEY"] = getpass.getpass("Enter your LangSmith API key: ")
# os.environ["LANGSMITH_TRACING"] = "true"

Installation

The LangChain OpenAI integration lives in the langchain-openai package:

%pip install -qU langchain-openai

Should you need to specify your organization ID, you can use the following cell. However, it is not required if you are only part of a single organization or intend to use your default organization. You can check your default organization in your OpenAI account settings.

To specify your organization, you can use this:

OPENAI_ORGANIZATION = getpass.getpass("Enter your OpenAI organization ID: ")

os.environ["OPENAI_ORGANIZATION"] = OPENAI_ORGANIZATION

Instantiation

Now we can instantiate our model object and generate completions:

from langchain_openai import OpenAI

llm = OpenAI()
API Reference: OpenAI
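
llm = OpenAI() uses the package defaults. As a sketch of commonly tuned constructor parameters (the values below are illustrative, not recommendations):

from langchain_openai import OpenAI

llm = OpenAI(
    model="gpt-3.5-turbo-instruct",  # completion (non-chat) model
    temperature=0.7,                 # sampling temperature
    max_tokens=256,                  # cap on generated tokens per call
    max_retries=2,                   # retry transient API errors
)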

Invocation

llm.invoke("Hello how are you?")
'\n\nI am an AI and do not have emotions like humans do, so I am always functioning at my optimal level. Thank you for asking! How can I assist you today?'
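
Like other LangChain runnables, the model also supports streaming and batching. A minimal sketch (the prompts are illustrative):

# Stream the completion token-by-token
for chunk in llm.stream("Write a haiku about the ocean:"):
    print(chunk, end="", flush=True)

# Run several prompts in a single batch call
results = llm.batch(["Say hello in French:", "Say hello in Spanish:"])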

Chaining

from langchain_core.prompts import PromptTemplate

prompt = PromptTemplate.from_template("How to say {input} in {output_language}:\n")

chain = prompt | llm
chain.invoke(
    {
        "output_language": "German",
        "input": "I love programming.",
    }
)
API Reference: PromptTemplate
'\nIch liebe Programmieren.'
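
Chains built this way also expose the asynchronous interface. A minimal sketch, reusing the chain defined above:

import asyncio

async def main():
    # ainvoke is the async counterpart of invoke
    result = await chain.ainvoke(
        {"output_language": "German", "input": "I love programming."}
    )
    print(result)

asyncio.run(main())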

Using a proxy

If you are behind an explicit proxy, you can specify an http_client to pass requests through it:

%pip install httpx

import httpx

openai = OpenAI(
    model_name="gpt-3.5-turbo-instruct",
    http_client=httpx.Client(proxies="http://proxy.yourcompany.com:8080"),
)
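
As an alternative sketch, assuming default httpx behavior (which honors the standard proxy environment variables), you can configure the proxy via the environment instead of constructing a client yourself; the proxy URL below is a placeholder:

import os

# httpx reads HTTP_PROXY / HTTPS_PROXY by default (trust_env=True)
os.environ["HTTPS_PROXY"] = "http://proxy.yourcompany.com:8080"

openai = OpenAI(model_name="gpt-3.5-turbo-instruct")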

API reference

For detailed documentation of all OpenAI llm features and configurations head to the API reference: https://python.langchain.com/api_reference/openai/llms/langchain_openai.llms.base.OpenAI.html

