camel.models package#
Submodules#
camel.models.anthropic_model module#
- class camel.models.anthropic_model.AnthropicModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of CLAUDE_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Anthropic.messages.create(). If None, AnthropicConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Anthropic service. (default: None)
url (Optional[str], optional) – The url to the Anthropic service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, AnthropicTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Anthropic API, or it does not contain model_path or server_url.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
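Example (a minimal sketch; assumes the ANTHROPIC_API_KEY environment variable is set and that ModelType.CLAUDE_3_5_SONNET exists in your installed version):

```python
from camel.configs import AnthropicConfig
from camel.models import AnthropicModel
from camel.types import ModelType

# api_key falls back to the ANTHROPIC_API_KEY environment variable.
model = AnthropicModel(
    model_type=ModelType.CLAUDE_3_5_SONNET,
    model_config_dict=AnthropicConfig(temperature=0.2).as_dict(),
)

# Messages use the OpenAI chat format; the response also comes back
# in the OpenAI API format.
response = model.run([{"role": "user", "content": "Say hello in one word."}])
print(response.choices[0].message.content)
```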
camel.models.azure_openai_model module#
- class camel.models.azure_openai_model.AzureOpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases: BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
api_version (Optional[str], optional) – The api version for the model. (default: None)
azure_deployment_name (Optional[str], optional) – The deployment name you chose when you deployed an Azure model. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
References
https://learn.microsoft.com/en-us/azure/ai-services/openai/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
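Example (a minimal sketch; the key, endpoint, and deployment name placeholders are hypothetical — substitute your own; api_version values are Azure-defined strings such as "2024-02-01"):

```python
from camel.configs import ChatGPTConfig
from camel.models import AzureOpenAIModel
from camel.types import ModelType

model = AzureOpenAIModel(
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(temperature=0.0).as_dict(),
    api_key="<azure-api-key>",                  # placeholder
    url="https://<resource>.openai.azure.com",  # placeholder endpoint
    api_version="2024-02-01",
    azure_deployment_name="<deployment-name>",  # placeholder
)

response = model.run([{"role": "user", "content": "Ping?"}])
print(response.choices[0].message.content)
```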
camel.models.base_model module#
- class camel.models.base_model.BaseModelBackend(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: ABC
Base class for different model backends. It may be OpenAI API, a local LLM, a stub for unit tests, etc.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A config dictionary. (default: {})
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
This method retrieves the maximum token limit either from the model_config_dict or from the model’s default token limit.
- Returns:
The maximum token limit for the given model.
- Return type:
int
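A minimal subclass sketch (illustrative only, not a library class; it assumes camel.utils re-exports OpenAITokenCounter and that the openai package is installed, both of which hold for recent CAMEL versions):

```python
from openai.types.chat import ChatCompletion, ChatCompletionMessage
from openai.types.chat.chat_completion import Choice

from camel.models import BaseModelBackend
from camel.types import ModelType
from camel.utils import BaseTokenCounter, OpenAITokenCounter


class EchoModel(BaseModelBackend):
    """Toy backend that echoes the last user message back (illustrative)."""

    def check_model_config(self) -> None:
        pass  # accept any configuration for this toy backend

    @property
    def token_counter(self) -> BaseTokenCounter:
        # Cache a counter on first access; GPT-4o-mini tokenization is an
        # arbitrary choice for this sketch.
        if getattr(self, "_counter", None) is None:
            self._counter = OpenAITokenCounter(ModelType.GPT_4O_MINI)
        return self._counter

    def run(self, messages) -> ChatCompletion:
        reply = ChatCompletionMessage(
            role="assistant", content=messages[-1]["content"]
        )
        return ChatCompletion(
            id="echo-0", model="echo", object="chat.completion", created=0,
            choices=[Choice(index=0, finish_reason="stop", message=reply)],
        )


msgs = [{"role": "user", "content": "hello"}]
model = EchoModel(ModelType.STUB)
print(model.run(msgs).choices[0].message.content)  # -> "hello"
print(model.count_tokens_from_messages(msgs))      # tokenizer-based count
```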
camel.models.gemini_model module#
- class camel.models.gemini_model.GeminiModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Gemini series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, GeminiConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Gemini service. (default: None)
url (Optional[str], optional) – The url to the Gemini service. (default: https://generativelanguage.googleapis.com/v1beta/openai/)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Gemini API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Gemini chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
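Example (a minimal sketch; assumes GEMINI_API_KEY is set and that ModelType.GEMINI_1_5_FLASH exists in your installed version):

```python
from camel.configs import GeminiConfig
from camel.models import GeminiModel
from camel.types import ModelType

# Gemini is served through an OpenAI-compatible endpoint (the default url
# above), so the usual chat-completion flow applies unchanged.
model = GeminiModel(
    model_type=ModelType.GEMINI_1_5_FLASH,
    model_config_dict=GeminiConfig(temperature=0.3).as_dict(),
)

response = model.run([{"role": "user", "content": "Name three camelids."}])
print(response.choices[0].message.content)
```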
camel.models.groq_model module#
- class camel.models.groq_model.GroqModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, GroqConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Groq service. (default: None)
url (Optional[str], optional) – The url to the Groq service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to the Groq API. Currently the Groq API has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming. The Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.litellm_model module#
- class camel.models.litellm_model.LiteLLMModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for LiteLLM backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, such as GPT-3.5-turbo, Claude-2, etc.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, LiteLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, LiteLLMTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.mistral_model module#
- class camel.models.mistral_model.MistralModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of MISTRAL_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Mistral.chat.complete(). If None, MistralConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Mistral service. (default: None)
url (Optional[str], optional) – The url to the Mistral service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently it’s not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.model_factory module#
- class camel.models.model_factory.ModelFactory[source]#
Bases: object
Factory of backend models.
- Raises:
ValueError – in case the provided model type is unknown.
- static create(model_platform: ModelPlatformType, model_type: ModelType | str, model_config_dict: Dict | None = None, token_counter: BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created. Can be a str for open source platforms.
model_config_dict (Optional[Dict]) – A dictionary that will be fed into the backend constructor. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used if the model platform didn’t provide an official token counter. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
- Raises:
ValueError – If there is no backend for the model.
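Example (the canonical creation path; assumes OPENAI_API_KEY is set):

```python
from camel.configs import ChatGPTConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# The factory dispatches on the platform enum and returns the matching
# BaseModelBackend subclass (an OpenAIModel here).
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(temperature=0.7).as_dict(),
)

response = model.run([{"role": "user", "content": "One-line joke, please."}])
print(response.choices[0].message.content)
```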
camel.models.nemotron_model module#
- class camel.models.nemotron_model.NemotronModel(model_type: ModelType | str, api_key: str | None = None, url: str | None = None)[source]#
Bases: BaseModelBackend
Nemotron model API backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
api_key (Optional[str], optional) – The API key for authenticating with the Nvidia service. (default: None)
url (Optional[str], optional) – The url to the Nvidia service. (default: https://integrate.api.nvidia.com/v1)
Notes
The Nemotron model doesn’t support additional model config like OpenAI models do.
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.ollama_model module#
- class camel.models.ollama_model.OllamaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Ollama service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, OllamaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. Ollama doesn’t need an API key; this value is ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Ollama API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
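Example (a local-inference sketch; assumes an Ollama server is running with the model pulled, e.g. ollama pull llama3, and that ModelPlatformType.OLLAMA exists in your installed version):

```python
from camel.configs import OllamaConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType

model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="llama3",  # open-source model types are plain strings
    model_config_dict=OllamaConfig(temperature=0.4).as_dict(),
    url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)

response = model.run([{"role": "user", "content": "Why is the sky blue?"}])
print(response.choices[0].message.content)
```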
camel.models.openai_audio_models module#
- class camel.models.openai_audio_models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases: object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert text to speech using OpenAI’s TTS model. This method converts the given input text to speech using the specified model and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file if provided, defaults to None.
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
Union[List[_legacy_response.HttpxBinaryResponseContent], _legacy_response.HttpxBinaryResponseContent]: A list of response content objects from OpenAI if the input is more than 4096 characters, or a single response content object if it is less than 4096 characters.
- Raises:
Exception – If there’s an error during the TTS API call.
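Example (a round-trip sketch; assumes OPENAI_API_KEY is set and that OpenAIAudioModels is re-exported from camel.models, as in recent versions):

```python
from camel.models import OpenAIAudioModels

audio = OpenAIAudioModels()  # api_key defaults to the environment variable

# Text-to-speech with the default TTS_1 model and ALLOY voice; the MP3 is
# written to storage_path and the raw response is also returned.
audio.text_to_speech(input="Hello from CAMEL!", storage_path="hello.mp3")

# Speech-to-text on the file we just produced.
print(audio.speech_to_text(audio_file_path="hello.mp3"))
```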
camel.models.openai_compatible_model module#
- class camel.models.openai_compatible_model.OpenAICompatibleModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for model backend supporting OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, {} will be used. (default: None)
api_key (str) – The API key for authenticating with the model service.
url (str) – The url to the model service.
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.openai_model module#
- class camel.models.openai_model.OpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
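Example (a minimal streaming sketch; assumes OPENAI_API_KEY is set — setting stream=True in ChatGPTConfig is what flips the stream property):

```python
from camel.configs import ChatGPTConfig
from camel.models import OpenAIModel
from camel.types import ModelType

model = OpenAIModel(
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(stream=True).as_dict(),
)
assert model.stream  # stream mode is read from the config dict

# In stream mode, run() yields ChatCompletionChunk objects.
for chunk in model.run([{"role": "user", "content": "Count to five."}]):
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```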
camel.models.reka_model module#
- class camel.models.reka_model.RekaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of REKA_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Reka.chat.create(). If None, RekaConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Reka service. (default: None)
url (Optional[str], optional) – The url to the Reka service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.samba_model module#
- class camel.models.samba_model.SambaModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
SambaNova service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a SambaNova backend is created. Supported models via SambaNova Cloud: https://community.sambanova.ai/t/supported-models/193. Supported models via the SambaVerse API are listed at https://sambaverse.sambanova.ai/models.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, SambaCloudAPIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the SambaNova service. (default: None)
url (Optional[str], optional) – The url to the SambaNova service. Currently supported are the SambaVerse API: "https://sambaverse.sambanova.ai/api/predict" and SambaNova Cloud: "https://api.sambanova.ai/v1" (default: https://api.sambanova.ai/v1)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
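Example (a sketch of targeting SambaNova Cloud; the model id and config values are illustrative — substitute a model from the supported list above and your own API key):

```python
from camel.configs import SambaCloudAPIConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType

model = ModelFactory.create(
    model_platform=ModelPlatformType.SAMBA,
    model_type="Meta-Llama-3.1-8B-Instruct",  # example Cloud model id
    model_config_dict=SambaCloudAPIConfig(max_tokens=256).as_dict(),
    url="https://api.sambanova.ai/v1",  # swap for the SambaVerse URL above
)

response = model.run([{"role": "user", "content": "Hi!"}])
print(response.choices[0].message.content)
```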
camel.models.stub_model module#
- class camel.models.stub_model.StubModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
A dummy model used for unit tests.
- model_type: UnifiedModelType = ModelType.STUB#
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.stub_model.StubTokenCounter[source]#
Bases: BaseTokenCounter
- count_tokens_from_messages(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Token counting for STUB models, directly returning a constant.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
A constant to act as the number of the tokens in the messages.
- Return type:
int
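Example (how the stub is typically used in tests — no network access or API key required; assumes StubModel is re-exported from camel.models):

```python
from camel.models import StubModel
from camel.types import ModelType

model = StubModel(model_type=ModelType.STUB)

# run() ignores its input and returns a canned ChatCompletion.
response = model.run([{"role": "user", "content": "anything"}])
print(response.choices[0].message.content)

# StubTokenCounter likewise returns a constant token count.
print(model.token_counter.count_tokens_from_messages(
    [{"role": "user", "content": "anything"}]
))
```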
camel.models.togetherai_model module#
- class camel.models.togetherai_model.TogetherAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Constructor for Together AI backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, supported model can be found here: https://docs.together.ai/docs/chat-models
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, TogetherAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Together service. (default: None)
url (Optional[str], optional) – The url to the Together AI service. If not provided, "https://api.together.xyz/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.vllm_model module#
- class camel.models.vllm_model.VLLMModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
vLLM service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, VLLMConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the model service. vLLM doesn’t need an API key; this value is ignored if set. (default: None)
url (Optional[str], optional) – The url to the model service. If not provided, "http://localhost:8000/v1" will be used. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
References
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to vLLM API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
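Example (a sketch against a local server; assumes you have started one, e.g. with vllm serve microsoft/Phi-3-mini-4k-instruct, and that ModelPlatformType.VLLM exists in your installed version):

```python
from camel.configs import VLLMConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType

model = ModelFactory.create(
    model_platform=ModelPlatformType.VLLM,
    model_type="microsoft/Phi-3-mini-4k-instruct",
    model_config_dict=VLLMConfig(temperature=0.0).as_dict(),
    url="http://localhost:8000/v1",  # the documented default
)

response = model.run([{"role": "user", "content": "Hello!"}])
print(response.choices[0].message.content)
```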
camel.models.zhipuai_model module#
- class camel.models.zhipuai_model.ZhipuAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GLM_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ZhipuAIConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the ZhipuAI service. (default: None)
url (Optional[str], optional) – The url to the ZhipuAI service. (default: https://open.bigmodel.cn/api/paas/v4/)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
Module contents#
- class camel.models.AnthropicModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of CLAUDE_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Anthropic.messages.create(). If None, AnthropicConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Anthropic service. (default: None)
url (Optional[str], optional) – The url to the Anthropic service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, AnthropicTokenCounter will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to the Anthropic API, or it does not contain model_path or server_url.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.AzureOpenAIModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases: BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, ChatGPTConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default: None)
url (Optional[str], optional) – The url to the OpenAI service. (default: None)
api_version (Optional[str], optional) – The api version for the model. (default: None)
azure_deployment_name (Optional[str], optional) – The deployment name you chose when you deployed an Azure model. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
References
https://learn.microsoft.com/en-us/azure/ai-services/openai/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.BaseModelBackend(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: ABC
Base class for different model backends. It may be OpenAI API, a local LLM, a stub for unit tests, etc.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A config dictionary. (default: {})
api_key (Optional[str], optional) – The API key for authenticating with the model service. (default: None)
url (Optional[str], optional) – The url to the model service. (default: None)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
This method retrieves the maximum token limit either from the model_config_dict or from the model’s default token limit.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.CohereModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Cohere API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Cohere API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Cohere API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Cohere chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently it’s not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.DeepSeekModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
DeepSeek API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, DeepSeekConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the DeepSeek service. (default: None)
url (Optional[str], optional) – The url to the DeepSeek service. (default: https://api.deepseek.com)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter will be used. (default: None)
References
https://api-docs.deepseek.com/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to DeepSeek API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to DeepSeek API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of DeepSeek chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.FishAudioModel(api_key: str | None = None, url: str | None = None)[source]#
Bases: object
Provides access to FishAudio’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, language: str | None = None, ignore_timestamps: bool | None = None, **kwargs: Any) str [source]#
Convert speech to text from an audio file.
- Parameters:
audio_file_path (str) – The path to the audio file to transcribe.
language (Optional[str]) – The language of the audio. (default: None)
ignore_timestamps (Optional[bool]) – Whether to ignore timestamps. (default: None)
**kwargs (Any) – Additional parameters to pass to the STT request.
- Returns:
The transcribed text from the audio.
- Return type:
str
- Raises:
FileNotFoundError – If the audio file cannot be found.
- text_to_speech(input: str, storage_path: str, reference_id: str | None = None, reference_audio: str | None = None, reference_audio_text: str | None = None, **kwargs: Any) Any [source]#
Convert text to speech and save the output to a file.
- Parameters:
input (str) – The text to convert to speech.
storage_path (str) – The file path where the resulting speech will be saved.
reference_id (Optional[str]) – An optional reference ID to associate with the request. (default: None)
reference_audio (Optional[str]) – Path to an audio file for reference speech. (default: None)
reference_audio_text (Optional[str]) – Text for the reference audio. (default: None)
**kwargs (Any) – Additional parameters to pass to the TTS request.
- Raises:
FileNotFoundError – If the reference audio file cannot be found.
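Example (a sketch; the FISHAUDIO_API_KEY environment variable name is an assumption — pass api_key=... explicitly if it differs in your version):

```python
from camel.models import FishAudioModel

fish = FishAudioModel()

# Synthesize speech to a local file.
fish.text_to_speech(
    input="Welcome to CAMEL's audio models.",
    storage_path="welcome.mp3",
)

# Transcribe it back, hinting the language.
print(fish.speech_to_text(audio_file_path="welcome.mp3", language="en"))
```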
- class camel.models.GeminiModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Gemini series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None, GeminiConfig().as_dict() will be used. (default: None)
api_key (Optional[str], optional) – The API key for authenticating with the Gemini service. (default: None)
url (Optional[str], optional) – The url to the Gemini service. (default: https://generativelanguage.googleapis.com/v1beta/openai/)
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_4O_MINI) will be used. (default: None)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Gemini API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Gemini chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.GroqModel(model_type: ModelType | str, model_config_dict: Dict[str, Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases: BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None,
GroqConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Groq service. (default:
None
).url (Optional[str], optional) – The url to the Groq service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to the Groq API. The Groq API currently has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming; the Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.LiteLLMModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Constructor for LiteLLM backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, such as GPT-3.5-turbo, Claude-2, etc.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None,
LiteLLMConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the model service. (default:
None
)url (Optional[str], optional) – The url to the model service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
LiteLLMTokenCounter
will be used. (default:None
)
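A usage sketch, assuming a LiteLLM-style provider-prefixed model string (the name below is illustrative):

    from camel.models import LiteLLMModel

    model = LiteLLMModel(
        model_type="anthropic/claude-3-haiku-20240307",  # illustrative LiteLLM id
        api_key="your-provider-api-key",                 # placeholder
    )
    completion = model.run([{"role": "user", "content": "One-line greeting."}])
    print(completion.choices[0].message.content)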
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.MistralModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of MISTRAL_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Mistral.chat.complete(). If None,
MistralConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Mistral service. (default:
None
)url (Optional[str], optional) – The url to the Mistral service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently, this is not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.ModelFactory[source]#
Bases:
object
Factory of backend models.
- Raises:
ValueError – in case the provided model type is unknown.
- static create(model_platform: ~camel.types.enums.ModelPlatformType, model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created. Can be a str for open source platforms.
model_config_dict (Optional[Dict]) – A dictionary that will be fed into the backend constructor. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used if the model platform doesn’t provide an official token counter. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the model service. (default:
None
)url (Optional[str], optional) – The url to the model service. (default:
None
)
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
- Raises:
ValueError – If there is no backend for the model.
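The factory is the usual entry point for constructing any of the backends in this module. A minimal sketch (the platform and model enums are illustrative):

    from camel.models import ModelFactory
    from camel.types import ModelPlatformType, ModelType

    # Dispatches on the platform and returns the matching BaseModelBackend.
    model = ModelFactory.create(
        model_platform=ModelPlatformType.OPENAI,
        model_type=ModelType.GPT_4O_MINI,
        model_config_dict=None,  # falls back to the platform's default config
    )
    response = model.run([{"role": "user", "content": "Hello!"}])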
- class camel.models.ModelManager(models: BaseModelBackend | List[BaseModelBackend], scheduling_strategy: str = 'round_robin')[source]#
Bases:
object
ModelManager chooses a model from the provided list. Models are picked according to the defined strategy.
- Parameters:
models (Union[BaseModelBackend, List[BaseModelBackend]]) – model backend or list of model backends (e.g., model instances, APIs)
scheduling_strategy (str) – name of the function that defines how to select the next model. (default: round_robin)
- add_strategy(name: str, strategy_fn: Callable)[source]#
- Add a user-provided scheduling strategy in case none of the existing
strategies fits. When a custom strategy is provided, it is set as the “self.scheduling_strategy” attribute.
- Parameters:
name (str) – The name of the strategy.
strategy_fn (Callable) – The scheduling strategy function.
- always_first() BaseModelBackend [source]#
Always return the first model from self.models.
- Returns:
BaseModelBackend for processing incoming messages.
- property current_model_index: int#
Return the index of current model in self.models list.
- Returns:
index of current model in given list of models.
- Return type:
int
- property model_config_dict: Dict[str, Any]#
Return model_config_dict of the current model.
- Returns:
Config dictionary of the current model.
- Return type:
Dict[str, Any]
- property model_type: UnifiedModelType#
Return type of the current model.
- Returns:
Current model type.
- Return type:
Union[ModelType, str]
- random_model() BaseModelBackend [source]#
Return random model from self.models list.
- Returns:
BaseModelBackend for processing incoming messages.
- round_robin() BaseModelBackend [source]#
Return models one by one in simple round-robin fashion.
- Returns:
BaseModelBackend for processing incoming messages.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
- Process a list of messages by selecting a model based on
the scheduling strategy. Sends the entire list of messages to the selected model, and returns a single response.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property token_counter: BaseTokenCounter#
Return token_counter of the current model.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- property token_limit#
Returns the maximum token limit for the current model.
This method retrieves the maximum token limit either from the model_config_dict or from the model’s default token limit.
- Returns:
The maximum token limit for the given model.
- Return type:
int
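A usage sketch cycling between two backends (the model choices are illustrative):

    from camel.models import ModelFactory, ModelManager
    from camel.types import ModelPlatformType, ModelType

    backends = [
        ModelFactory.create(model_platform=ModelPlatformType.OPENAI,
                            model_type=ModelType.GPT_4O_MINI),
        ModelFactory.create(model_platform=ModelPlatformType.OPENAI,
                            model_type=ModelType.GPT_4O),
    ]
    # round_robin hands each run() call to the next backend in turn.
    manager = ModelManager(backends, scheduling_strategy="round_robin")
    response = manager.run([{"role": "user", "content": "Ping"}])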
- exception camel.models.ModelProcessingError[source]#
Bases:
Exception
Raised when an error occurs during model processing.
- class camel.models.NemotronModel(model_type: ~<unknown>.ModelType | str, api_key: str | None = None, url: str | None = None)[source]#
Bases:
BaseModelBackend
Nemotron model API backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
api_key (Optional[str], optional) – The API key for authenticating with the Nvidia service. (default:
None
)url (Optional[str], optional) – The url to the Nvidia service. (default:
https://integrate.api.nvidia.com/v1
)
Notes
Unlike OpenAI models, the Nemotron model does not support additional model configuration.
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.NvidiaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
NVIDIA API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of NVIDIA series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,NvidiaConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the NVIDIA service. (default:
None
)url (Optional[str], optional) – The url to the NVIDIA service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4)
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to NVIDIA API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to NVIDIA API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of NVIDIA chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.OllamaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Ollama service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If None,
OllamaConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the model service. Ollama doesn’t need an API key; it will be ignored if set. (default:
None
)url (Optional[str], optional) – The url to the model service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
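A local-serving sketch. The model name is a placeholder; the url below is Ollama's default OpenAI-compatible endpoint:

    from camel.models import OllamaModel

    # No API key is needed for a local Ollama server.
    model = OllamaModel(
        model_type="llama3.2",            # any model already pulled locally
        url="http://localhost:11434/v1",  # Ollama's default endpoint
    )
    reply = model.run([{"role": "user", "content": "Why is the sky blue?"}])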
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Ollama API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert the given input text to speech using OpenAI’s TTS model, with the specified model type and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file if provided, defaults to None.
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
- Union[List[_legacy_response.HttpxBinaryResponseContent],
_legacy_response.HttpxBinaryResponseContent]: A list of response content objects from OpenAI if the input is longer than 4096 characters, or a single response content object if it is shorter.
- Raises:
Exception – If there’s an error during the TTS API call.
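A round-trip sketch for both methods (file paths are placeholders; the API key is resolved from the environment when not passed explicitly):

    from camel.models import OpenAIAudioModels

    audio = OpenAIAudioModels()

    # Text to speech, saved at the given path.
    audio.text_to_speech(input="Hello from CAMEL.", storage_path="hello.mp3")

    # Speech back to text.
    text = audio.speech_to_text("hello.mp3")
    print(text)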
- class camel.models.OpenAICompatibleModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Constructor for model backend supporting OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,{}
will be used. (default:None
)api_key (str) – The API key for authenticating with the model service.
url (str) – The url to the model service.
token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
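A sketch for pointing at an arbitrary OpenAI-compatible endpoint (the url and model name are placeholders):

    from camel.models import OpenAICompatibleModel

    model = OpenAICompatibleModel(
        model_type="my-served-model",            # placeholder model name
        api_key="your-api-key",                  # placeholder
        url="https://your-endpoint.example/v1",  # any OpenAI-compatible server
    )
    out = model.run([{"role": "user", "content": "Hi"}])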
- check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.OpenAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GPT_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,ChatGPTConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the OpenAI service. (default:
None
)url (Optional[str], optional) – The url to the OpenAI service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter
will be used. (default:None
)
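A streaming sketch, assuming ChatGPTConfig accepts the OpenAI-style stream flag:

    from camel.configs import ChatGPTConfig
    from camel.models import OpenAIModel
    from camel.types import ModelType

    model = OpenAIModel(
        model_type=ModelType.GPT_4O_MINI,
        model_config_dict=ChatGPTConfig(stream=True).as_dict(),
    )
    # In stream mode, run() yields ChatCompletionChunk objects.
    for chunk in model.run([{"role": "user", "content": "Count to 3."}]):
        delta = chunk.choices[0].delta.content
        if delta:
            print(delta, end="")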
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.QwenModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Qwen API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Qwen series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,QwenConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Qwen service. (default:
None
)url (Optional[str], optional) – The url to the Qwen service. (default:
https://dashscope.aliyuncs.com/compatible-mode/v1
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Qwen API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Qwen API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Qwen chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.RekaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of REKA_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into Reka.chat.create(). If
None
,RekaConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Reka service. (default:
None
)url (Optional[str], optional) – The url to the Reka service. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.SGLangModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
SGLang service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,SGLangConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the model service. SGLang doesn’t need an API key; it will be ignored if set. (default:
None
)url (Optional[str], optional) – The url to the model service. If not provided,
"http://127.0.0.1:30000/v1"
will be used. (default:None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
Reference: https://sgl-project.github.io/backend/openai_api_completions.html
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SGLang API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SGLang API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.SambaModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
SambaNova service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a SambaNova backend is created. Supported models via SambaNova Cloud: https://community.sambanova.ai/t/supported-models/193. Supported models via the SambaVerse API are listed at https://sambaverse.sambanova.ai/models.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,SambaCloudAPIConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the SambaNova service. (default:
None
)url (Optional[str], optional) – The url to the SambaNova service. Currently supports the SambaVerse API:
"https://sambaverse.sambanova.ai/api/predict"
and SambaNova Cloud:"https://api.sambanova.ai/v1"
(default:https://api.sambanova.ai/v1
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.StubModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
A dummy model used for unit tests.
- model_type: UnifiedModelType = ModelType.STUB#
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
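A test-oriented sketch; the stub needs no network access and returns a fixed reply:

    from camel.models import StubModel
    from camel.types import ModelType

    # Handy in unit tests where calling a real backend would be wasteful.
    stub = StubModel(model_type=ModelType.STUB)
    response = stub.run([{"role": "user", "content": "anything"}])
    print(response.choices[0].message.content)  # fixed placeholder text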
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.TogetherAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Constructor for Together AI backend with OpenAI compatibility.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, supported model can be found here: https://docs.together.ai/docs/chat-models
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,TogetherAIConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Together service. (default:
None
)url (Optional[str], optional) – The url to the Together AI service. If not provided, “https://api.together.xyz/v1” will be used. (default:
None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.VLLMModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
vLLM service interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,VLLMConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the model service. vLLM doesn’t need an API key; it will be ignored if set. (default:
None
)url (Optional[str], optional) – The url to the model service. If not provided,
"http://localhost:8000/v1"
will be used. (default:None
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
References
https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
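A sketch against a locally launched vLLM server. The model id is illustrative; the url matches the documented default:

    from camel.models import VLLMModel

    # Assumes an OpenAI-compatible vLLM server is already running on port
    # 8000, e.g. started with: vllm serve <model-id>
    model = VLLMModel(
        model_type="Qwen/Qwen2.5-7B-Instruct",  # illustrative model id
        url="http://localhost:8000/v1",
    )
    answer = model.run([{"role": "user", "content": "Hello"}])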
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to vLLM API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.YiModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Yi API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of Yi series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,YiConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the Yi service. (default:
None
)url (Optional[str], optional) – The url to the Yi service. (default:
https://api.lingyiwanwu.com/v1
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Yi API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Yi API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Yi chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.ZhipuAIModel(model_type: ~<unknown>.ModelType | str, model_config_dict: ~typing.Dict[str, ~typing.Any] | None = None, api_key: str | None = None, url: str | None = None, token_counter: ~camel.utils.token_counting.BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- Parameters:
model_type (Union[ModelType, str]) – Model for which a backend is created, one of GLM_* series.
model_config_dict (Optional[Dict[str, Any]], optional) – A dictionary that will be fed into openai.ChatCompletion.create(). If
None
,ZhipuAIConfig().as_dict()
will be used. (default:None
)api_key (Optional[str], optional) – The API key for authenticating with the ZhipuAI service. (default:
None
)url (Optional[str], optional) – The url to the ZhipuAI service. (default:
https://open.bigmodel.cn/api/paas/v4/
)token_counter (Optional[BaseTokenCounter], optional) – Token counter to use for the model. If not provided,
OpenAITokenCounter(ModelType.GPT_4O_MINI)
will be used. (default:None
)
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
- The token counter following the model’s
tokenization style.
- Return type:
BaseTokenCounter