Models
camel.models.gemini_model
GeminiModel
Wraps the Gemini API in a unified OpenAICompatibleModel interface.
Parameters:
- model_type (Union[ModelType, str]): Model for which a backend is created, one of the Gemini series.
- model_config_dict (Optional[Dict[str, Any]], optional): A dictionary that will be fed into openai.ChatCompletion.create(). If None, GeminiConfig().as_dict() will be used. (default: None)
- api_key (Optional[str], optional): The API key for authenticating with the Gemini service. (default: None)
- url (Optional[str], optional): The URL to the Gemini service. (default: https://generativelanguage.googleapis.com/v1beta/openai/)
- token_counter (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- timeout (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the MODEL_TIMEOUT environment variable or default to 180 seconds. (default: None)
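The timeout fallback order documented above (explicit argument, then the MODEL_TIMEOUT environment variable, then 180 seconds) can be sketched as follows. `resolve_timeout` is a hypothetical helper written for illustration, not part of the camel API:

```python
import os

def resolve_timeout(timeout=None, default=180.0):
    # Hypothetical helper mirroring the documented fallback order:
    # explicit argument -> MODEL_TIMEOUT env var -> 180-second default.
    if timeout is not None:
        return float(timeout)
    env_value = os.environ.get("MODEL_TIMEOUT")
    if env_value is not None:
        return float(env_value)
    return default
```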
__init__
_process_messages
Process the messages for Gemini API to ensure no empty content, which is not accepted by Gemini.
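A minimal sketch of this kind of sanitization, assuming OpenAI-style message dicts; the replacement value (a single space) is an illustrative choice and not necessarily what camel substitutes internally:

```python
from typing import Any, Dict, List

def process_messages(messages: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    # Gemini rejects messages whose content is empty, so substitute a
    # placeholder (here a single space) for any empty string content.
    processed = []
    for message in messages:
        if message.get("content") == "":
            message = {**message, "content": " "}
        processed.append(message)
    return processed
```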
_run
Runs inference of Gemini chat completion.
Parameters:
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
- response_format (Optional[Type[BaseModel]]): The format of the response.
- tools (Optional[List[Dict[str, Any]]]): The schema of the tools to use for the request.
Returns:
Union[ChatCompletion, Stream[ChatCompletionChunk]]: ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
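The two return shapes can be illustrated with a self-contained stand-in (`FakeCompletion` and `run_fake` are hypothetical stubs, not camel classes): non-stream mode returns one object carrying the full reply, while stream mode returns an iterator of chunks that the caller concatenates.

```python
from typing import Iterator, Union

class FakeCompletion:
    """Minimal stand-in for an OpenAI-style ChatCompletion object."""
    def __init__(self, content: str) -> None:
        self.content = content

def run_fake(prompt: str, stream: bool = False) -> Union[FakeCompletion, Iterator[FakeCompletion]]:
    # Non-stream mode: a single object with the whole reply.
    # Stream mode: a generator yielding one chunk per character.
    reply = f"echo: {prompt}"
    if stream:
        return (FakeCompletion(ch) for ch in reply)
    return FakeCompletion(reply)
```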