
# ChatOpenAI


This will help you get started with ChatOpenAI chat models. For detailed documentation of all ChatOpenAI features and configurations, head to the [API reference](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html).

## Overview

### Integration details

| Class | Package | Local | Serializable | [PY support](https://python.langchain.com/docs/integrations/chat/openai/) | Package downloads | Package latest |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: |
| [ChatOpenAI](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html) | [`@langchain/openai`](https://www.npmjs.com/package/@langchain/openai) | ❌ | ✅ | ✅ | ![NPM - Downloads](https://img.shields.io/npm/dm/@langchain/openai) | ![NPM - Version](https://img.shields.io/npm/v/@langchain/openai) |

### Model features

| Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |

## Setup


To access ChatOpenAI models you'll need to create an OpenAI account, get an API key, and install the `@langchain/openai` integration package.

### Credentials


Head to [OpenAI's platform](https://platform.openai.com/) to sign up for an account and generate an API key. Once you've done this, set the `OPENAI_API_KEY` environment variable:

```bash
export OPENAI_API_KEY="your-api-key"
```


If you want automated tracing of your model calls, you can also set your [LangSmith](https://docs.smith.langchain.com/) API key by uncommenting the lines below:

```bash
# export LANGCHAIN_TRACING_V2="true"
# export LANGCHAIN_API_KEY="your-api-key"
```

### Installation

The LangChain ChatOpenAI integration lives in the `@langchain/openai` package:

```bash npm2yarn
npm i @langchain/openai
```

## Instantiation

Now we can instantiate our model object and generate chat completions:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-4o",
  temperature: 0,
  maxTokens: undefined,
  timeout: undefined,
  maxRetries: 2,
  // other params...
});
```

## Invocation

```typescript
const aiMsg = await llm.invoke([
  [
    "system",
    "You are a helpful assistant that translates English to French. Translate the user sentence.",
  ],
  ["human", "I love programming."],
]);
aiMsg;
```

```text
AIMessage {
  "id": "chatcmpl-9qlrhSDIt1X2EaRf7juBxTo6zit5u",
  "content": "J'adore la programmation.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 5,
      "promptTokens": 31,
      "totalTokens": 36
    },
    "finish_reason": "stop",
    "system_fingerprint": "fp_4e2b2da518"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 31,
    "output_tokens": 5,
    "total_tokens": 36
  }
}
```

```typescript
console.log(aiMsg.content);
```
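As the features table notes, the model also supports token-level streaming. A minimal sketch using the `.stream()` method on the `llm` instance from above:

```typescript
// .stream() yields AIMessageChunk objects, each carrying a slice
// of the generated text in its content field.
const stream = await llm.stream("Write a haiku about programming.");

for await (const chunk of stream) {
  console.log(chunk.content);
}
```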

## Chaining

We can chain our model with a prompt template like so:

```typescript
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  [
    "system",
    "You are a helpful assistant that translates {input_language} to {output_language}.",
  ],
  ["human", "{input}"],
]);

const chain = prompt.pipe(llm);
await chain.invoke({
  input_language: "English",
  output_language: "German",
  input: "I love programming.",
});
```

```text
AIMessage {
  "id": "chatcmpl-9qlr4a1l5wf1jCPjmUtTR6Tfd38SK",
  "content": "Ich liebe Programmieren.",
  "additional_kwargs": {},
  "response_metadata": {
    "tokenUsage": {
      "completionTokens": 5,
      "promptTokens": 26,
      "totalTokens": 31
    },
    "finish_reason": "stop",
    "system_fingerprint": "fp_4e2b2da518"
  },
  "tool_calls": [],
  "invalid_tool_calls": [],
  "usage_metadata": {
    "input_tokens": 26,
    "output_tokens": 5,
    "total_tokens": 31
  }
}
```
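The model also supports structured output, as noted in the features table. A minimal sketch using `.withStructuredOutput()` with a `zod` schema; the schema and its field names here are made up for illustration:

```typescript
import { z } from "zod";

// Hypothetical schema describing the shape we want back.
const translationSchema = z.object({
  translation: z.string().describe("The translated sentence"),
  language: z.string().describe("The target language"),
});

// Returns a runnable whose output is parsed into the schema's shape.
const structuredLlm = llm.withStructuredOutput(translationSchema);

const result = await structuredLlm.invoke(
  "Translate 'I love programming.' into German."
);
// result is a plain object matching translationSchema.
```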

## Fine-tuning

OpenAI also lets you use models you have fine-tuned on your own data through this provider.

## API reference

For detailed documentation of all ChatOpenAI features and configurations, head to the [API reference](https://api.js.langchain.com/classes/langchain_openai.ChatOpenAI.html).

