Chat
POST /chat/completions
Body params

`model` · string · Required
ID of the model to use. Currently, only `gpt-3.5-turbo` and `gpt-3.5-turbo-0301` are supported.

`messages` · array[object] · Required
The messages to generate chat completions for, in the [chat format](/docs/guides/chat/introduction). Each message object has the fields below; see the example after them.

`messages[].role` · string · Required
The role of the author of this message.

`messages[].content` · string · Required
The contents of the message.

`messages[].name` · string · Optional
The name of the user in a multi-user chat.
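
For illustration, a minimal `messages` array in the chat format might look like the following; the role names and content shown are examples, not an exhaustive list:

```python
# Illustrative messages array in the chat format. The "system" / "user" /
# "assistant" roles and the optional per-user "name" are shown as examples.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "name": "alice", "content": "What is the capital of France?"},
]
```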

`temperature` · number · Optional
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p`, but not both.

`top_p` · number · Optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature`, but not both.

`n` · integer · Optional
How many chat completion choices to generate for each input message.

`stream` · boolean · Optional
If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. See the consumption sketch below.
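
A minimal sketch of consuming the stream, assuming the standard base URL `https://api.openai.com/v1`, a `requests`-based client, and an API key in the `OPENAI_API_KEY` environment variable; the shape of each delta chunk is an assumption of this sketch, and error handling is omitted:

```python
import json
import os

import requests

# Minimal consumption sketch for stream=true. Partial message deltas arrive
# as data-only server-sent events, terminated by "data: [DONE]".
resp = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say hello."}],
        "stream": True,
    },
    stream=True,
)
for line in resp.iter_lines():
    if not line:
        continue  # skip blank SSE separator lines
    payload = line.decode("utf-8").removeprefix("data: ")
    if payload == "[DONE]":  # stream terminator documented above
        break
    chunk = json.loads(payload)
    # Each chunk carries a partial delta; this field layout is an assumption.
    print(chunk["choices"][0].get("delta", {}).get("content", ""), end="", flush=True)
```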

`stop` · string or array · Optional
Up to 4 sequences where the API will stop generating further tokens.

`max_tokens` · integer · Optional
The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).

`presence_penalty` · number · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)

`frequency_penalty` · number · Optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. [See more information about frequency and presence penalties.](/docs/api-reference/parameter-details)

`logit_bias` · object · Optional
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token.
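
As a sketch, a bias map keys on tokenizer token IDs; the IDs below are hypothetical placeholders, not real token IDs for any particular word:

```python
# logit_bias maps token IDs (JSON object keys, so strings) to a bias value
# in [-100, 100]. The IDs below are hypothetical placeholders; look up real
# IDs with the tokenizer for your chosen model.
logit_bias = {
    "1234": -100,  # effectively ban this token
    "5678": 5,     # make this token somewhat more likely
}
```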

`user` · string · Optional
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. [Learn more](/docs/guides/safety-best-practices/end-user-ids).
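
Putting the parameters together, a minimal non-streaming request sketch, assuming the standard base URL, a `requests`-based client, and an API key in `OPENAI_API_KEY`; the parameter values are illustrative:

```python
import os

import requests

# Minimal non-streaming request sketch; values are illustrative.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Summarize nucleus sampling in one sentence."},
        ],
        "temperature": 0.2,  # alter this or top_p, not both
        "n": 1,
        "max_tokens": 256,
        "user": "user-1234",  # hypothetical end-user identifier
    },
)
response.raise_for_status()
completion = response.json()
```
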
Response

```json
{
  "id": "string",
  "object": "string",
  "created": 0,
  "model": "string",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "string",
        "content": "string"
      },
      "finish_reason": "string"
    }
  ],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}
```
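
Reading a completion out of the response shape above, continuing the request sketch; field access follows the schema, and the example `finish_reason` values are assumptions of this sketch:

```python
# Continuing the request sketch: pull the first choice and the token
# accounting out of the parsed response object.
choice = completion["choices"][0]
print(choice["message"]["role"])     # e.g. "assistant"
print(choice["message"]["content"])  # the generated reply
print(choice["finish_reason"])       # e.g. "stop" or "length"

usage = completion["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```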