camel.models package#
Submodules#
camel.models.anthropic_model module#
- class camel.models.anthropic_model.AnthropicModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of CLAUDE_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Anthropic.messages.create(). If None, AnthropicConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Anthropic service. (default: None)
url (Optional[str], optional) – The url to the Anthropic service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, AnthropicTokenCounter will be used. (default: None)
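A minimal usage sketch follows; the ModelType member and API key are placeholders, and model_config_dict falls back to AnthropicConfig().as_dict() when omitted:

from camel.models.anthropic_model import AnthropicModel
from camel.types import ModelType

model = AnthropicModel(
    model_type=ModelType.CLAUDE_3_5_SONNET,  # placeholder CLAUDE_* member
    api_key="sk-ant-...",  # or set ANTHROPIC_API_KEY in the environment
)
messages = [{"role": "user", "content": "Say hello in one word."}]
response = model.run(messages)  # response follows the OpenAI API format
print(response.choices[0].message.content)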
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Anthropic API, or if it does not contain model_path or server_url.
- count_tokens_from_prompt(prompt: str) int [source]#
Count the number of tokens from a prompt.
- Parameters:
prompt (str) – The prompt string.
- Returns:
The number of tokens in the prompt.
- Return type:
int
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- Return type:
ChatCompletion
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.azure_openai_model module#
- class camel.models.azure_openai_model.AzureOpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases: BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
api_version (Optional[str], optional) – The api version for the model. (default: None)
azure_deployment_name (Optional[str], optional) – The deployment name you chose when you deployed an Azure model. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
References
https://learn.microsoft.com/en-us/azure/ai-services/openai/
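A construction sketch; the endpoint, API version, and deployment name below are placeholders for your own Azure resource:

from camel.models.azure_openai_model import AzureOpenAIModel
from camel.types import ModelType

model = AzureOpenAIModel(
    model_type=ModelType.GPT_4O_MINI,
    api_key="...",  # or set AZURE_OPENAI_API_KEY in the environment
    url="https://my-resource.openai.azure.com/",  # placeholder endpoint
    api_version="2024-02-01",  # placeholder API version
    azure_deployment_name="my-gpt4o-mini-deployment",  # placeholder
)
response = model.run([{"role": "user", "content": "Hello"}])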
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.base_model module#
- class camel.models.base_model.BaseModelBackend(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: ABC
Base class for different model backends. It may be OpenAI API, a local LLM, a stub for unit tests, etc.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A config dictionary. (default: {})
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
This method retrieves the maximum token limit either from the model_config_dict or from the model’s default token limit.
- Returns:
The maximum token limit for the given model.
- Return type:
int
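To illustrate the contract, here is a hypothetical subclass sketch. EchoModel is not part of CAMEL, and the _token_counter attribute is an assumption about the base class internals:

from camel.models.base_model import BaseModelBackend
from camel.types import ChatCompletion, ModelType
from camel.utils import BaseTokenCounter, OpenAITokenCounter

class EchoModel(BaseModelBackend):
    """Toy backend that echoes the last user message."""

    def check_model_config(self):
        # This toy backend accepts no config keys at all.
        if self.model_config_dict:
            raise ValueError("EchoModel takes no config arguments")

    @property
    def token_counter(self) -> BaseTokenCounter:
        # Borrow OpenAI's tokenizer purely for counting.
        if not self._token_counter:
            self._token_counter = OpenAITokenCounter(ModelType.GPT_4O_MINI)
        return self._token_counter

    def run(self, messages):
        # A real backend would call its service here; instead we build a
        # loose ChatCompletion-shaped reply (construct() skips validation).
        return ChatCompletion.construct(
            id="echo-0", created=0, model="echo", object="chat.completion",
            choices=[{"index": 0, "finish_reason": "stop",
                      "message": {"role": "assistant",
                                  "content": messages[-1]["content"]}}],
        )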
camel.models.gemini_model module#
- class camel.models.gemini_model.GeminiModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into genai.GenerativeModel.generate_content(). If None, GeminiConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Gemini service. (default: None)
url (Optional[str], optional) – The url to the Gemini service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, GeminiTokenCounter will be used. (default: None)
Notes
Currently "stream": True is not supported with Gemini due to the limitation of the current camel design.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Gemini API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Gemini model. This method can handle multimodal input.
- Parameters:
messages – Message list or Message with the chat history in OpenAI format.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
response
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- to_gemini_req(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ContentsType [source]#
Converts the request from the OpenAI API format to the Gemini API request format.
- Parameters:
messages – The request object from the OpenAI API.
- Returns:
A list of messages formatted for Gemini API.
- Return type:
converted_messages
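A conversion sketch; the ModelType member is illustrative:

from camel.models.gemini_model import GeminiModel
from camel.types import ModelType

model = GeminiModel(model_type=ModelType.GEMINI_1_5_FLASH)  # placeholder member
openai_msgs = [
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "Name one planet."},
]
gemini_msgs = model.to_gemini_req(openai_msgs)  # Gemini ContentsType
response = model.run(openai_msgs)  # run() performs this conversion internally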
- to_openai_response(response: GenerateContentResponse) ChatCompletion [source]#
Converts the response from the Gemini API to the OpenAI API response format.
- Parameters:
response – The response object returned by the Gemini API.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
openai_response
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.groq_model module#
- class camel.models.groq_model.GroqModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, GroqConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Groq service. (default: None)
url (Optional[str], optional) – The url to the Groq service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to the Groq API. Currently the Groq API has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming. Currently the Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.litellm_model module#
- class camel.models.litellm_model.LiteLLMModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for LiteLLM backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, such as GPT-3.5-turbo, Claude-2, etc.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, LiteLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, LiteLLMTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.mistral_model module#
- class camel.models.mistral_model.MistralModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of MISTRAL_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Mistral.chat.complete(). If None, MistralConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Mistral service. (default: None)
url (Optional[str], optional) – The url to the Mistral service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently it’s not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.model_factory module#
- class camel.models.model_factory.ModelFactory[source]#
Bases: object
Factory of backend models.
- Raises:
ValueError – in case the provided model type is unknown.
- static create(model_platform: ModelPlatformType, model_type: ModelType | str, model_config_dict: Dict | None = None, token_counter: BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created. Can be a str for open source platforms.
model_config_dict (Optional[Dict]) – A dictionary that will be fed into the backend constructor. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used if the model platform didn’t provide an official token counter. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
- Raises:
ValueError – If there is no backend for the model.
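A typical factory call looks like the following sketch; the enum members shown are illustrative and may vary across CAMEL versions:

from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    # model_config_dict omitted: the backend's default config is used
)
response = model.run([{"role": "user", "content": "Hi"}])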
camel.models.nemotron_model module#
- class camel.models.nemotron_model.NemotronModel(model_type: ModelType | str, api_key: str | None = None, url: str | None = None)[source]#
Bases: BaseModelBackend
Nemotron model API backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
api_key (Optional[str], optional) – The API key for authenticating with the Nvidia service. (default: None)
url (Optional[str], optional) – The url to the Nvidia service. (default: https://integrate.api.nvidia.com/v1)
Notes
The Nemotron model doesn’t support additional model config in the way OpenAI does.
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.ollama_model module#
- class camel.models.ollama_model.OllamaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Ollama service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, OllamaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. Ollama doesn’t need an API key; it will be ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
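A local-serving sketch; it assumes an Ollama server is already running on the default port with the named model pulled:

from camel.models.ollama_model import OllamaModel

model = OllamaModel(
    model_type="llama3",  # any model name your Ollama instance serves
    url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)
response = model.run([{"role": "user", "content": "Why is the sky blue?"}])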
References
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Ollama API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.openai_audio_models module#
- class camel.models.openai_audio_models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases: object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert text to speech using OpenAI’s TTS model. This method converts the given input text to speech using the specified model and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file if provided, defaults to None.
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
Union[List[_legacy_response.HttpxBinaryResponseContent], _legacy_response.HttpxBinaryResponseContent] – a list of response content objects from OpenAI if the input is more than 4096 characters, or a single response content object otherwise.
- Raises:
Exception – If there’s an error during the TTS API call.
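A minimal sketch of both directions; the file path is a placeholder, and OPENAI_API_KEY is read from the environment when api_key is omitted:

from camel.models.openai_audio_models import OpenAIAudioModels

audio = OpenAIAudioModels()

# Text -> speech, written to a local file (placeholder path).
audio.text_to_speech("Hello from CAMEL!", storage_path="hello.mp3")

# Speech -> text from any supported audio format.
print(audio.speech_to_text("hello.mp3"))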
camel.models.openai_compatible_model module#
- class camel.models.openai_compatible_model.OpenAICompatibleModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for model backend supporting OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, {} will be used. (default: None)
api_key (str) – The API key for authenticating with the model service.
url (str) – The url to the model service.
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
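A sketch for pointing this backend at any OpenAI-compatible server; the URL and model name are placeholders:

from camel.models.openai_compatible_model import OpenAICompatibleModel

model = OpenAICompatibleModel(
    model_type="my-served-model",  # placeholder model name
    api_key="not-needed",  # many self-hosted servers ignore the key
    url="http://localhost:8080/v1",  # placeholder OpenAI-compatible endpoint
)
response = model.run([{"role": "user", "content": "ping"}])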
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.openai_model module#
- class camel.models.openai_model.OpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
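A streaming sketch; ChatGPTConfig and its stream flag follow CAMEL’s config convention, but treat exact names as version-dependent:

from camel.configs import ChatGPTConfig
from camel.models.openai_model import OpenAIModel
from camel.types import ModelType

model = OpenAIModel(
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(stream=True).as_dict(),
)
stream = model.run([{"role": "user", "content": "Count to three."}])
for chunk in stream:  # Stream[ChatCompletionChunk] when stream=True
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")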
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.reka_model module#
- class camel.models.reka_model.RekaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of REKA_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Reka.chat.create(). If None, RekaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Reka service. (default: None)
url (Optional[str], optional) – The url to the Reka service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: temporarily using OpenAITokenCounter.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.samba_model module#
- class camel.models.samba_model.SambaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
SambaNova service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a SambaNova backend is created. Supported models via SambaNova Cloud: https://community.sambanova.ai/t/supported-models/193. Supported models via the SambaVerse API are listed at https://sambaverse.sambanova.ai/models.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, SambaCloudAPIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the SambaNova service. (default: None)
url (Optional[str], optional) – The url to the SambaNova service. Currently supported are the SambaVerse API: "https://sambaverse.sambanova.ai/api/predict" and SambaNova Cloud: "https://api.sambanova.ai/v1". (default: https://api.sambanova.ai/v1)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.stub_model module#
- class camel.models.stub_model.StubModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
A dummy model used for unit tests.
- model_type: UnifiedModelType = ModelType.STUB#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
Dict[str, Any]
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.stub_model.StubTokenCounter[source]#
Bases: BaseTokenCounter
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Token counting for STUB models, directly returning a constant.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
A constant to act as the number of tokens in the messages.
- Return type:
int
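In unit tests the stub pair can stand in for a real backend, for example:

from camel.models.stub_model import StubModel
from camel.types import ModelType

stub = StubModel(model_type=ModelType.STUB)
response = stub.run([{"role": "user", "content": "anything"}])  # canned reply
print(response.choices[0].message.content)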
camel.models.togetherai_model module#
- class camel.models.togetherai_model.TogetherAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for Together AI backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created; supported models can be found here: https://docs.together.ai/docs/chat-models
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, TogetherAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Together service. (default: None)
url (Optional[str], optional) – The url to the Together AI service. If not provided, "https://api.together.xyz/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.vllm_model module#
- class camel.models.vllm_model.VLLMModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
vLLM service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, VLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. vLLM doesn’t need an API key; it will be ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. If not provided, "http://localhost:8000/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
References
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
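A sketch against a locally served vLLM instance; it assumes the OpenAI-compatible server from the reference above is already running, and the model name is a placeholder:

from camel.models.vllm_model import VLLMModel

model = VLLMModel(
    model_type="meta-llama/Meta-Llama-3-8B-Instruct",  # placeholder served model
    url="http://localhost:8000/v1",  # vLLM's default endpoint
)
response = model.run([{"role": "user", "content": "Hello"}])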
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the vLLM API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.zhipuai_model module#
- class camel.models.zhipuai_model.ZhipuAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GLM_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ZhipuAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the ZhipuAI service. (default: None)
url (Optional[str], optional) – The url to the ZhipuAI service. (default: https://open.bigmodel.cn/api/paas/v4/)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to the ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
Module contents#
- class camel.models.AnthropicModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of CLAUDE_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Anthropic.messages.create(). If None, AnthropicConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Anthropic service. (default: None)
url (Optional[str], optional) – The url to the Anthropic service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, AnthropicTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Anthropic API, or if it does not contain model_path or server_url.
- count_tokens_from_prompt(prompt: str) int [source]#
Count the number of tokens from a prompt.
- Parameters:
prompt (str) – The prompt string.
- Returns:
The number of tokens in the prompt.
- Return type:
int
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- Return type:
ChatCompletion
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.AzureOpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases: BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
api_version (Optional[str], optional) – The api version for the model. (default: None)
azure_deployment_name (Optional[str], optional) – The deployment name you chose when you deployed an Azure model. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
References
https://learn.microsoft.com/en-us/azure/ai-services/openai/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.BaseModelBackend(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: ABC
Base class for different model backends. It may be OpenAI API, a local LLM, a stub for unit tests, etc.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A config dictionary. (default: {})
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
This method retrieves the maximum token limit either from the model_config_dict or from the model’s default token limit.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.CohereModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Cohere API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Cohere API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Cohere API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Cohere chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently it’s not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.GeminiModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into genai.GenerativeModel.generate_content(). If None, GeminiConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Gemini service. (default: None)
url (Optional[str], optional) – The url to the Gemini service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, GeminiTokenCounter will be used. (default: None)
Notes
Currently "stream": True is not supported with Gemini due to the limitation of the current camel design.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Gemini API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Gemini model. This method can handle multimodal input.
- Parameters:
messages – Message list or Message with the chat history in OpenAI format.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
response
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- to_gemini_req(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ContentsType [source]#
Converts the request from the OpenAI API format to the Gemini API request format.
- Parameters:
messages – The request object from the OpenAI API.
- Returns:
A list of messages formatted for Gemini API.
- Return type:
converted_messages
- to_openai_response(response: GenerateContentResponse) ChatCompletion [source]#
Converts the response from the Gemini API to the OpenAI API response format.
- Parameters:
response – The response object returned by the Gemini API.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
openai_response
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.GroqModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, GroqConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Groq service. (default: None)
url (Optional[str], optional) – The url to the Groq service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to the Groq API. Currently the Groq API has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming. Currently the Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.LiteLLMModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for LiteLLM backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, such as GPT-3.5-turbo, Claude-2, etc.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, LiteLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, LiteLLMTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
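A hedged sketch of direct construction; LiteLLM routes requests by model string, and "gpt-3.5-turbo" below is only an example:

from camel.models import LiteLLMModel

model = LiteLLMModel(model_type="gpt-3.5-turbo")  # example model string
response = model.run(
    [{"role": "user", "content": "Summarize LiteLLM in one line."}]
)
print(response.choices[0].message.content)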
- class camel.models.MistralModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of MISTRAL_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Mistral.chat.complete(). If None, MistralConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Mistral service. (default: None)
url (Optional[str], optional) – The url to the Mistral service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently this is not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.ModelFactory[source]#
Bases:
object
Factory of backend models.
- Raises:
ValueError – in case the provided model type is unknown.
- static create(model_platform: ~camel.types.enums.ModelPlatformType, model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created. Can be a str for open source platforms.
model_config_dict (Optional[Dict]) – A dictionary that will be fed into the backend constructor. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used if the model platform doesn’t provide an official token counter. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
- Raises:
ValueError – If there is no backend for the model.
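ModelFactory.create is the usual entry point for obtaining any of the backends documented on this page; a minimal sketch:

from camel.configs import ChatGPTConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(temperature=0.0).as_dict(),
)
response = model.run([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)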
- class camel.models.NemotronModel(model_type: ~<unknown>.ModelType | str, api_key: str | None = None, url: str | None = None)[source]#
Bases:
BaseModelBackend
Nemotron model API backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
api_key (Optional[str], optional) – The API key for authenticating with the Nvidia service. (default: None)
url (Optional[str], optional) – The url to the Nvidia service. (default: https://integrate.api.nvidia.com/v1)
Notes
The Nemotron model doesn’t support additional model config the way OpenAI does.
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
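Since this backend takes no model_config_dict, construction is just a model id and credentials; a sketch in which the model id and key are placeholders from Nvidia's catalog:

from camel.models import NemotronModel

model = NemotronModel(
    model_type="nvidia/nemotron-4-340b-reward",  # placeholder Nvidia model id
    api_key="your-nvidia-api-key",               # placeholder
)
# For a reward model, the messages to score are a user/assistant pair.
messages = [
    {"role": "user", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": "4."},
]
response = model.run(messages)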
- class camel.models.OllamaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Ollama service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, OllamaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. Ollama doesn’t need an API key; it will be ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
References
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Ollama API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
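A local-serving sketch, assuming an Ollama server is running with its OpenAI-compatible endpoint on the default port and the "llama3" tag pulled; both are assumptions about your setup:

from camel.models import OllamaModel

model = OllamaModel(
    model_type="llama3",  # assumes this tag has been pulled locally
    url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)
response = model.run([{"role": "user", "content": "Why is the sky blue?"}])
print(response.choices[0].message.content)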
- class camel.models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert text to speech using OpenAI’s TTS model, generating speech from the given input text with the specified model and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file, if provided. (default: None)
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
Union[List[_legacy_response.HttpxBinaryResponseContent], _legacy_response.HttpxBinaryResponseContent]: A list of response content objects from OpenAI if the input is more than 4096 characters, or a single response content object if it is less.
- Raises:
Exception – If there’s an error during the TTS API call.
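A round-trip sketch with both audio models; the file paths are placeholders:

from camel.models import OpenAIAudioModels

# With api_key=None, the client typically falls back to the
# OPENAI_API_KEY environment variable.
audio_models = OpenAIAudioModels()

# Text-to-speech; the audio is written to storage_path when provided.
audio_models.text_to_speech(
    input="CAMEL is a multi-agent framework.",
    storage_path="speech.mp3",  # placeholder path
)

# Speech-to-text on the file just written.
text = audio_models.speech_to_text(audio_file_path="speech.mp3")
print(text)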
- class camel.models.OpenAICompatibleModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Constructor for model backend supporting OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, {} will be used. (default: None)
api_key (str) – The API key for authenticating with the model service.
url (str) – The url to the model service.
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
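Because this backend assumes only an OpenAI-compatible endpoint, the url, key, and model name below are placeholders for whatever service you point it at:

from camel.models import OpenAICompatibleModel

model = OpenAICompatibleModel(
    model_type="my-model",         # placeholder model name
    api_key="your-api-key",        # placeholder key
    url="https://example.com/v1",  # placeholder OpenAI-compatible endpoint
)
response = model.run([{"role": "user", "content": "ping"}])
print(response.choices[0].message.content)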
- class camel.models.OpenAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
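When streaming is enabled in the config, run() returns a Stream[ChatCompletionChunk] rather than a single ChatCompletion; a sketch, assuming ChatGPTConfig accepts stream=True in the same way the OpenAI API does:

from camel.configs import ChatGPTConfig
from camel.models import OpenAIModel
from camel.types import ModelType

model = OpenAIModel(
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(stream=True).as_dict(),
)
stream = model.run([{"role": "user", "content": "Count to five."}])
for chunk in stream:  # Stream[ChatCompletionChunk]
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="")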
- class camel.models.QwenModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Qwen API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Qwen series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, QwenConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Qwen service. (default: None)
url (Optional[str], optional) – The url to the Qwen service. (default: https://dashscope.aliyuncs.com/compatible-mode/v1)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Qwen API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Qwen API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Qwen chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
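A sketch of direct construction against the DashScope compatible-mode endpoint shown above; the model id "qwen-plus" and the key are placeholders:

from camel.models import QwenModel

model = QwenModel(
    model_type="qwen-plus",            # example Qwen model id
    api_key="your-dashscope-api-key",  # placeholder
)
response = model.run([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)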
- class camel.models.RekaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of REKA_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Reka.chat.create(). If None, RekaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Reka service. (default: None)
url (Optional[str], optional) – The url to the Reka service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: temporarily using OpenAITokenCounter.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.SambaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
SambaNova service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a SambaNova backend is created. Supported models via SambaNova Cloud are listed at https://community.sambanova.ai/t/supported-models/193; supported models via the SambaVerse API are listed at https://sambaverse.sambanova.ai/models.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, SambaCloudAPIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the SambaNova service. (default: None)
url (Optional[str], optional) – The url to the SambaNova service. Currently supports the SambaVerse API ("https://sambaverse.sambanova.ai/api/predict") and SambaNova Cloud ("https://api.sambanova.ai/v1"). (default: https://api.sambanova.ai/v1)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.StubModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
A dummy model used for unit tests.
- model_type: UnifiedModelType = ModelType.STUB#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
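In unit tests the stub can stand in for a real backend with no network access; a minimal sketch:

from camel.models import StubModel
from camel.types import ModelType

model = StubModel(model_type=ModelType.STUB)
response = model.run([{"role": "user", "content": "anything"}])
# The stub answers with a fixed placeholder string, which is enough
# to assert that message plumbing works.
print(response.choices[0].message.content)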
- class camel.models.TogetherAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Constructor for Together AI backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, supported model can be found here: https://docs.together.ai/docs/chat-models
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, TogetherAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Together service. (default: None)
url (Optional[str], optional) – The url to the Together AI service. If not provided, "https://api.together.xyz/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.VLLMModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
vLLM service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, VLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. vLLM doesn’t need an API key; it will be ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. If not provided, "http://localhost:8000/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
References
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to vLLM API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
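A client-side sketch, assuming a vLLM server has been started locally (for example with vllm serve <model>) on the default address; the model id below is a placeholder and must match the served model:

from camel.models import VLLMModel

model = VLLMModel(
    model_type="meta-llama/Llama-3.1-8B-Instruct",  # must match the served model
    url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible address
)
response = model.run([{"role": "user", "content": "Hello from vLLM!"}])
print(response.choices[0].message.content)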
- class camel.models.YiModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Yi API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Yi series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, YiConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Yi service. (default: None)
url (Optional[str], optional) – The url to the Yi service. (default: https://api.lingyiwanwu.com/v1)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Yi API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Yi API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Yi chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.ZhipuAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GLM_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ZhipuAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the ZhipuAI service. (default: None)
url (Optional[str], optional) – The url to the ZhipuAI service. (default: https://open.bigmodel.cn/api/paas/v4/)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter