Models
camel.models.watsonx_model
WatsonXModel
A class that wraps the WatsonX API in a unified `BaseModelBackend` interface.
Parameters:
- model_type (Union[ModelType, str]): Model type for which a backend is created, one of the WatsonX series.
- model_config_dict (Optional[Dict[str, Any]], optional): A dictionary that will be fed into `ModelInference.chat()`. If `None`, `WatsonXConfig().as_dict()` will be used. (default: `None`)
- api_key (Optional[str], optional): The API key for authenticating with the WatsonX service. (default: `None`)
- url (Optional[str], optional): The URL of the WatsonX service. (default: `None`)
- project_id (Optional[str], optional): The project ID for authenticating with the WatsonX service. (default: `None`)
- token_counter (Optional[BaseTokenCounter], optional): Token counter to use for the model. If not provided, `OpenAITokenCounter(ModelType.GPT_4O_MINI)` will be used. (default: `None`)
- timeout (Optional[float], optional): The timeout value in seconds for API calls. If not provided, will fall back to the `MODEL_TIMEOUT` environment variable or default to 180 seconds. (default: `None`)
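The constructor's default-resolution behavior described above can be sketched in plain Python. This is an illustrative stand-in, not the actual implementation: `resolve_defaults` is a hypothetical helper, and the empty dict stands in for `WatsonXConfig().as_dict()`.

```python
import os
from typing import Any, Dict, Optional, Tuple


def resolve_defaults(
    model_config_dict: Optional[Dict[str, Any]] = None,
    timeout: Optional[float] = None,
) -> Tuple[Dict[str, Any], float]:
    # Stand-in for WatsonXConfig().as_dict(); the real config class
    # lives in camel.configs and carries WatsonX-specific defaults.
    default_config: Dict[str, Any] = {}
    config = model_config_dict if model_config_dict is not None else default_config

    # Fall back to the MODEL_TIMEOUT env var, then to 180 seconds.
    resolved_timeout = (
        timeout
        if timeout is not None
        else float(os.environ.get("MODEL_TIMEOUT", 180))
    )
    return config, resolved_timeout
```

With no arguments and no `MODEL_TIMEOUT` set, this yields the default config and a 180-second timeout; explicit arguments always win.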
__init__
_to_openai_response
Convert WatsonX response to OpenAI format.
token_counter
Returns:
BaseTokenCounter: The token counter following the model’s tokenization style.
_prepare_request
_run
Runs inference of WatsonX chat completion.
Parameters:
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
- response_format (Optional[Type[BaseModel]], optional): The response format. (default: `None`)
- tools (Optional[List[Dict[str, Any]]], optional): Tools to use. (default: `None`)
Returns:
ChatCompletion: The chat completion response in OpenAI API format.
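The expected shape of the `messages` argument is the OpenAI chat format, sketched below. The commented-out call is hypothetical: in CAMEL, `_run` is a private method invoked internally when the backend is used through an agent, not called directly by user code.

```python
from typing import Any, Dict, List

# Chat history in OpenAI API format: a list of dicts, each with a
# "role" ("system", "user", or "assistant") and a "content" string.
messages: List[Dict[str, Any]] = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize WatsonX in one sentence."},
]

# Hypothetical invocation (requires a configured WatsonXModel instance
# and valid credentials):
# response = model._run(messages, response_format=None, tools=None)
# response would be a ChatCompletion object in OpenAI format.
```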
check_model_config
stream
Returns:
bool: Whether the model is in stream mode.