temperature (float, optional): Sampling temperature to use, between
    :obj:`0` and :obj:`2`. Higher values make the output more random,
    while lower values make it more focused and deterministic.
    (default: :obj:`None`)
top_p (float, optional): An alternative to sampling with temperature,
    called nucleus sampling, where the model considers the results of
    the tokens with top_p probability mass. So :obj:`0.1` means only
    the tokens comprising the top 10% probability mass are considered;
    see the usage sketch after this list. (default: :obj:`None`)
n (int, optional): How many chat completion choices to generate for
    each input message. (default: :obj:`None`)
){"type": "json_object"}
enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=“length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.None
stop (str or list, optional): Up to :obj:`4` sequences where the API
    will stop generating further tokens. (default: :obj:`None`)
max_tokens (int, optional): The maximum number of tokens to generate
    in the chat completion. The total length of input tokens and
    generated tokens is limited by the model's context length.
    (default: :obj:`None`)
presence_penalty (float, optional): Number between :obj:`-2.0` and
    :obj:`2.0`. Positive values penalize new tokens based on whether
    they appear in the text so far, increasing the model's likelihood
    to talk about new topics. See more information about frequency and
    presence penalties, and the penalty sketch after this list.
    (default: :obj:`None`)
frequency_penalty (float, optional): Number between :obj:`-2.0` and
    :obj:`2.0`. Positive values penalize new tokens based on their
    existing frequency in the text so far, decreasing the model's
    likelihood to repeat the same line verbatim. See more information
    about frequency and presence penalties. (default: :obj:`None`)
logit_bias (dict, optional): Modify the likelihood of specified tokens
    appearing in the completion. Accepts a JSON object that maps
    tokens (specified by their token ID in the tokenizer) to an
    associated bias value from :obj:`-100` to :obj:`100`.
    Mathematically, the bias is added to the logits generated by the
    model prior to sampling. The exact effect will vary per model, but
    values between :obj:`-1` and :obj:`1` should decrease or increase
    the likelihood of selection, while values like :obj:`-100` or
    :obj:`100` should result in a ban or exclusive selection of the
    relevant token; see the token-bias sketch after this list.
    (default: :obj:`None`)
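A minimal usage sketch combining the sampling and truncation
parameters above. It assumes the official ``openai`` Python SDK (v1+)
and an ``OPENAI_API_KEY`` in the environment; the model name and
parameter values are illustrative, not recommendations.

.. code-block:: python

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute any chat model
        temperature=0.2,      # low value: focused, near-deterministic output
        top_p=1.0,            # nucleus mass; tune this *or* temperature
        n=1,                  # one completion choice
        max_tokens=128,       # cap on generated tokens
        stop=["\n\n"],        # up to 4 stop sequences
        messages=[{"role": "user", "content": "Name three primary colors."}],
    )
    print(response.choices[0].message.content)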
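A JSON-mode sketch showing the two cautions from the response_format
entry: a message must explicitly request JSON, and a finish_reason of
"length" means the JSON may be truncated. Same assumptions as above
(``openai`` SDK, illustrative model name).

.. code-block:: python

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any JSON-mode-capable model
        response_format={"type": "json_object"},
        messages=[
            # Required alongside JSON mode: some message must ask for
            # JSON, or the model may emit whitespace until the token limit.
            {"role": "system", "content": "Reply only with a JSON object."},
            {"role": "user", "content": "List three primary colors."},
        ],
        max_tokens=200,
    )

    choice = response.choices[0]
    if choice.finish_reason == "length":
        # Generation hit max_tokens; the JSON is likely cut off.
        raise RuntimeError("Truncated output; do not parse as JSON.")
    print(choice.message.content)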
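A penalty sketch, purely illustrative: the two penalty parameters
describe a server-side adjustment, shown here following OpenAI's
published formula ``mu[j] -> mu[j] - c[j] * alpha_frequency -
float(c[j] > 0) * alpha_presence``; nothing in this snippet runs in
the client.

.. code-block:: python

    def apply_penalties(
        logits: dict[int, float],   # token id -> raw next-token logit
        counts: dict[int, int],     # token id -> occurrences so far
        presence_penalty: float,    # flat penalty once a token appears
        frequency_penalty: float,   # penalty scaling with repetitions
    ) -> dict[int, float]:
        adjusted = {}
        for token_id, logit in logits.items():
            c = counts.get(token_id, 0)
            # Subtraction lowers the sampling probability of tokens
            # that already occurred, before the next token is drawn.
            adjusted[token_id] = (
                logit
                - c * frequency_penalty
                - (1.0 if c > 0 else 0.0) * presence_penalty
            )
        return adjusted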
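A token-bias sketch for logit_bias. It assumes the ``tiktoken``
package to derive token IDs; IDs are encoding-specific, so always use
the target model's encoding, and note that a leading space usually
changes the tokenization.

.. code-block:: python

    import tiktoken
    from openai import OpenAI

    enc = tiktoken.encoding_for_model("gpt-4o-mini")  # assumption: your model
    banned = enc.encode(" hello")  # leading space matches mid-sentence use

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        # -100 effectively bans each token; values in [-1, 1] only nudge.
        logit_bias={str(token_id): -100 for token_id in banned},
        messages=[{"role": "user", "content": "Greet me."}],
    )
    print(response.choices[0].message.content)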