Configs
camel.configs.gemini_config
GeminiConfig
Defines the parameters for generating chat completions using the Gemini API.
Parameters:
- temperature (float, optional): Sampling temperature to use, between :obj:`0` and :obj:`2`. Higher values make the output more random, while lower values make it more focused and deterministic. (default: :obj:`None`)
- top_p (float, optional): An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So :obj:`0.1` means only the tokens comprising the top 10% probability mass are considered. (default: :obj:`None`)
- n (int, optional): How many chat completion choices to generate for each input message. (default: :obj:`None`)
- response_format (object, optional): An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to :obj:`{"type": "json_object"}` enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. (See the configuration sketch after this list.)
- stream (bool, optional): If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: :obj:`None`)
- stop (str or list, optional): Up to :obj:`4` sequences where the API will stop generating further tokens. (default: :obj:`None`)
- max_tokens (int, optional): The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model's context length. (default: :obj:`None`)
- tools (list[FunctionTool], optional): A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
- tool_choice (Union[dict[str, str], str], optional): Controls which (if any) tool is called by the model. :obj:`"none"` means the model will not call any tool and instead generates a message. :obj:`"auto"` means the model can pick between generating a message or calling one or more tools. :obj:`"required"` means the model must call one or more tools. Specifying a particular tool via :obj:`{"type": "function", "function": {"name": "my_function"}}` forces the model to call that tool. :obj:`"none"` is the default when no tools are present; :obj:`"auto"` is the default if tools are present. (See the tool-calling sketch after this list.)
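
The sketch below illustrates how these fields might be set when constructing a `GeminiConfig`, including JSON mode via `response_format`. It is a minimal, hedged example: the `from camel.configs import GeminiConfig` import path, the parameter values, and the commented-out factory usage are assumptions about a typical CAMEL setup, not part of this reference.

```python
# Illustrative sketch (not from the original reference): building a GeminiConfig
# with common sampling options and JSON mode. Import path and values are
# assumptions about a typical CAMEL setup.
from camel.configs import GeminiConfig

config = GeminiConfig(
    temperature=0.7,   # higher values -> more random output
    top_p=0.9,         # nucleus sampling over the top 90% probability mass
    max_tokens=1024,   # cap on generated tokens per completion
    stop=["\n\n"],     # up to 4 stop sequences
    # JSON mode: constrains the model to emit valid JSON. Remember to also
    # instruct the model to produce JSON via a system or user message.
    response_format={"type": "json_object"},
)

# Configs are plain parameter containers; they are typically converted to a
# dict and handed to a model factory. The exact call below is illustrative
# and may differ between CAMEL versions.
# model_config_dict = config.as_dict()
```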
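
The second sketch shows the `tools` and `tool_choice` fields together, forcing the model to call one specific function. The `camel.toolkits.FunctionTool` import path and the `get_weather` helper are illustrative assumptions introduced for this example only.

```python
# Illustrative sketch: supplying tools and forcing a particular tool call.
# FunctionTool's import path and the get_weather helper are assumptions for
# demonstration purposes.
from camel.configs import GeminiConfig
from camel.toolkits import FunctionTool


def get_weather(city: str) -> str:
    """Return a placeholder weather report for the given city."""
    return f"Sunny in {city}"


config = GeminiConfig(
    tools=[FunctionTool(get_weather)],
    # Force the model to call get_weather instead of replying with plain text;
    # use "auto" (the default when tools are present) to let the model decide.
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
```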