camel.models package#
Submodules#
camel.models.anthropic_model module#
- class camel.models.anthropic_model.AnthropicModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API, or it does not contain model_path or server_url.
- count_tokens_from_prompt(prompt: str) int [source]#
Count the number of tokens from a prompt.
- Parameters:
prompt (str) – The prompt string.
- Returns:
The number of tokens in the prompt.
- Return type:
int
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- Return type:
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
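A minimal usage sketch (assuming ANTHROPIC_API_KEY is set in the environment; the ModelType member and config keys shown are illustrative and may differ in your version):

```python
from camel.models import AnthropicModel
from camel.types import ModelType

# Illustrative model choice and config; substitute the Claude member your ModelType enum defines.
model = AnthropicModel(
    model_type=ModelType.CLAUDE_3_SONNET,
    model_config_dict={"max_tokens": 256},
)

print(model.count_tokens_from_prompt("Hello, Claude!"))  # token count for a raw prompt
print(model.stream)  # True only if the config enables streaming
```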
camel.models.azure_openai_model module#
- class camel.models.azure_openai_model.AzureOpenAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases:
BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface. Doc: https://learn.microsoft.com/en-us/azure/ai-services/openai/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.base_model module#
- class camel.models.base_model.BaseModelBackend(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
ABC
Base class for different model backends. May be OpenAI API, a local LLM, a stub for unit tests, etc.
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
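Since all concrete backends implement this interface, calling code can stay backend-agnostic. A sketch of the typical flow, written against any BaseModelBackend instance (for example one produced by ModelFactory.create, documented below):

```python
from camel.models import BaseModelBackend


def summarize(backend: BaseModelBackend, text: str) -> str:
    # Chat history in OpenAI API format (plain dicts satisfy the message TypedDicts).
    messages = [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": f"Summarize in one sentence: {text}"},
    ]
    # Guard against overflowing the model's context window before sending.
    if backend.count_tokens_from_messages(messages) >= backend.token_limit:
        raise ValueError("Prompt is too long for this model.")
    response = backend.run(messages)
    if backend.stream:
        # Stream mode yields ChatCompletionChunk objects with partial deltas.
        return "".join(chunk.choices[0].delta.content or "" for chunk in response)
    return response.choices[0].message.content
```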
camel.models.gemini_model module#
- class camel.models.gemini_model.GeminiModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of the Gemini model. This method can handle multimodal input.
- Parameters:
messages – Message list or a single message with the chat history in OpenAI API format.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
response
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- to_gemini_req(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ContentsType [source]#
Converts the request from the OpenAI API format to the Gemini API request format.
- Parameters:
messages – The request object from the OpenAI API.
- Returns:
A list of messages formatted for Gemini API.
- Return type:
converted_messages
- to_openai_response(response: GenerateContentResponse) ChatCompletion [source]#
Converts the response from the Gemini API to the OpenAI API response format.
- Parameters:
response – The response object returned by the Gemini API
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
openai_response
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.groq_model module#
- class camel.models.groq_model.GroqModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Groq API. Currently, the Groq API has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming. Note that the Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.litellm_model module#
- class camel.models.litellm_model.LiteLLMModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for LiteLLM backend with OpenAI compatibility.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- property client#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: LiteLLMTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
LiteLLMTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
camel.models.mistral_model module#
- class camel.models.mistral_model.MistralModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently, streaming is not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.model_factory module#
- class camel.models.model_factory.ModelFactory[source]#
Bases:
object
Factory of backend models.
- Raises:
ValueError – If the provided model type is unknown.
- static create(model_platform: ModelPlatformType, model_type: ModelType | str, model_config_dict: Dict, token_counter: BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created; can be a str for open-source platforms.
model_config_dict (Dict) – A dictionary that will be fed into the backend constructor.
token_counter (Optional[BaseTokenCounter]) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_3_5_TURBO) will be used when the model platform does not provide an official token counter.
api_key (Optional[str]) – The API key for authenticating with the model service.
url (Optional[str]) – The URL of the model service.
- Raises:
ValueError – If there is no backend for the model.
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
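For example, a GPT-3.5 backend could be created as follows (a sketch assuming OPENAI_API_KEY is set in the environment; the config keys are illustrative):

```python
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

backend = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_3_5_TURBO,
    model_config_dict={"temperature": 0.2},  # fed directly into the backend constructor
)

reply = backend.run([{"role": "user", "content": "Hello!"}])
print(reply.choices[0].message.content)
```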
camel.models.nemotron_model module#
- class camel.models.nemotron_model.NemotronModel(model_type: ModelType, api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Nemotron model API backend with OpenAI compatibility.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
camel.models.ollama_model module#
- class camel.models.ollama_model.OllamaModel(model_type: str, model_config_dict: Dict[str, Any], url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Ollama service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
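A sketch pointing the backend at a locally running Ollama server (the model name is illustrative and must already be pulled; the URL is Ollama's default OpenAI-compatible endpoint):

```python
from camel.models import OllamaModel

ollama = OllamaModel(
    model_type="llama3",                      # any model already pulled into Ollama
    model_config_dict={"temperature": 0.4},   # illustrative config keys
    url="http://localhost:11434/v1",          # default OpenAI-compatible endpoint
)

response = ollama.run([{"role": "user", "content": "Why is the sky blue?"}])
print(response.choices[0].message.content)
```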
camel.models.open_source_model module#
- class camel.models.open_source_model.OpenSourceModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Class for interfacing with OpenAI-API-compatible servers running open-source models.
- check_model_config()[source]#
Check whether the model configuration is valid for open-source model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API, or it does not contain model_path or server_url.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI-API-style chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.openai_audio_models module#
- class camel.models.openai_audio_models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert text to speech using OpenAI’s TTS model. This method converts the given input text to speech using the specified model and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file if provided, defaults to None.
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
Union[List[_legacy_response.HttpxBinaryResponseContent], _legacy_response.HttpxBinaryResponseContent]: A list of response content objects from OpenAI if the input text exceeds 4096 characters, or a single response content object otherwise.
- Raises:
Exception – If there’s an error during the TTS API call.
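A round-trip sketch for the audio models (assumes OPENAI_API_KEY is set; the file path is illustrative):

```python
from camel.models import OpenAIAudioModels

audio = OpenAIAudioModels()

# Text-to-speech: the generated audio is written to storage_path when it is provided.
audio.text_to_speech("Hello from CAMEL!", storage_path="./hello.mp3")

# Speech-to-text on the file generated above.
print(audio.speech_to_text("./hello.mp3"))
```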
camel.models.openai_compatibility_model module#
- class camel.models.openai_compatibility_model.OpenAICompatibilityModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str, url: str, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for model backend supporting OpenAI compatibility.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
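This backend targets any provider that speaks the OpenAI chat-completions protocol; note that api_key and url are required here. A sketch with placeholder credentials and an illustrative model identifier:

```python
from camel.models import OpenAICompatibilityModel

backend = OpenAICompatibilityModel(
    model_type="provider/some-chat-model",   # illustrative model identifier
    model_config_dict={"temperature": 0.0},
    api_key="YOUR_PROVIDER_API_KEY",          # placeholder
    url="https://api.example.com/v1",         # placeholder base URL
)

print(backend.run([{"role": "user", "content": "Ping"}]).choices[0].message.content)
```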
camel.models.openai_model module#
- class camel.models.openai_model.OpenAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.reka_model module#
- class camel.models.reka_model.RekaModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
camel.models.samba_model module#
- class camel.models.samba_model.SambaModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
SambaNova service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s FastAPI service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
camel.models.stub_model module#
- class camel.models.stub_model.StubModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
A dummy model used for unit tests.
- model_type = 'stub'#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
Dict[str, Any]
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.stub_model.StubTokenCounter[source]#
Bases:
BaseTokenCounter
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Token counting for STUB models, directly returning a constant.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
A constant to act as the number of tokens in the messages.
- Return type:
int
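In unit tests the stub backend avoids real API calls entirely; a sketch, assuming ModelType.STUB is the enum member that selects it:

```python
from camel.models import StubModel
from camel.types import ModelType

stub = StubModel(model_type=ModelType.STUB, model_config_dict={})
response = stub.run([{"role": "user", "content": "anything"}])
print(response.choices[0].message.content)  # fixed placeholder text
```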
camel.models.togetherai_model module#
- class camel.models.togetherai_model.TogetherAIModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for Together AI backend with OpenAI compatibility. TODO: Add function calling support
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
camel.models.vllm_model module#
- class camel.models.vllm_model.VLLMModel(model_type: str, model_config_dict: Dict[str, Any], url: str | None = None, api_key: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
vLLM service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
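A sketch assuming a vLLM OpenAI-compatible server is already running locally on its default port (the model identifier is illustrative):

```python
from camel.models import VLLMModel

vllm_backend = VLLMModel(
    model_type="meta-llama/Meta-Llama-3-8B-Instruct",  # illustrative HF model id
    model_config_dict={"temperature": 0.7},
    url="http://localhost:8000/v1",                    # default vLLM server endpoint
)

print(vllm_backend.run([{"role": "user", "content": "Hi"}]).choices[0].message.content)
```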
camel.models.zhipuai_model module#
- class camel.models.zhipuai_model.ZhipuAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
Module contents#
- class camel.models.AnthropicModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Anthropic API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration is valid for Anthropic model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API, or it does not contain model_path or server_url.
- count_tokens_from_prompt(prompt: str) int [source]#
Count the number of tokens from a prompt.
- Parameters:
prompt (str) – The prompt string.
- Returns:
The number of tokens in the prompt.
- Return type:
int
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam])[source]#
Run inference of Anthropic chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
Response in the OpenAI API format.
- Return type:
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.AzureOpenAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, api_version: str | None = None, azure_deployment_name: str | None = None)[source]#
Bases:
BaseModelBackend
Azure OpenAI API in a unified BaseModelBackend interface. Doc: https://learn.microsoft.com/en-us/azure/ai-services/openai/
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Azure OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Azure OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of Azure OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.BaseModelBackend(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
ABC
Base class for different model backends. May be OpenAI API, a local LLM, a stub for unit tests, etc.
- abstract check_model_config()[source]#
Check whether the input model configuration contains unexpected arguments.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected argument for this model class.
- count_tokens_from_messages(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) int [source]#
Count the number of tokens in the messages using the specific tokenizer.
- Parameters:
messages (List[Dict]) – message list with the chat history in OpenAI API format.
- Returns:
Number of tokens in the messages.
- Return type:
int
- abstract run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs the query to the backend model.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- abstract property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for a given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.GeminiModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Gemini API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Gemini API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of the Gemini model. This method can handle multimodal input.
- Parameters:
messages – Message list or a single message with the chat history in OpenAI API format.
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
response
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- to_gemini_req(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ContentsType [source]#
Converts the request from the OpenAI API format to the Gemini API request format.
- Parameters:
messages – The request object from the OpenAI API.
- Returns:
A list of messages formatted for Gemini API.
- Return type:
converted_messages
- to_openai_response(response: GenerateContentResponse) ChatCompletion [source]#
Converts the response from the Gemini API to the OpenAI API response format.
- Parameters:
response – The response object returned by the Gemini API
- Returns:
A ChatCompletion object formatted for the OpenAI API.
- Return type:
openai_response
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.GroqModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
LLM API served by Groq in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Groq API. Currently, the Groq API has no additional arguments to check.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Groq API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model supports streaming. Note that the Groq API does not support streaming.
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.LiteLLMModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for LiteLLM backend with OpenAI compatibility.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to LiteLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments.
- property client#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of LiteLLM chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI format.
- Returns:
ChatCompletion
- property token_counter: LiteLLMTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
LiteLLMTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.MistralModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Mistral API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Mistral API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Mistral API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Mistral chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time. Currently, streaming is not supported.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter due to a current issue with installing mistral-common alongside mistralai. Refer to: mistralai/mistral-common#37.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.ModelFactory[source]#
Bases:
object
Factory of backend models.
- Raises:
ValueError – If the provided model type is unknown.
- static create(model_platform: ModelPlatformType, model_type: ModelType | str, model_config_dict: Dict, token_counter: BaseTokenCounter | None = None, api_key: str | None = None, url: str | None = None) BaseModelBackend [source]#
Creates an instance of BaseModelBackend of the specified type.
- Parameters:
model_platform (ModelPlatformType) – Platform from which the model originates.
model_type (Union[ModelType, str]) – Model for which a backend is created; can be a str for open-source platforms.
model_config_dict (Dict) – A dictionary that will be fed into the backend constructor.
token_counter (Optional[BaseTokenCounter]) – Token counter to use for the model. If not provided, OpenAITokenCounter(ModelType.GPT_3_5_TURBO) will be used when the model platform does not provide an official token counter.
api_key (Optional[str]) – The API key for authenticating with the model service.
url (Optional[str]) – The URL of the model service.
- Raises:
ValueError – If there is no backend for the model.
- Returns:
The initialized backend.
- Return type:
BaseModelBackend
- class camel.models.NemotronModel(model_type: ModelType, api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Nemotron model API backend with OpenAI compatibility.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list.
- Returns:
ChatCompletion.
- class camel.models.OllamaModel(model_type: str, model_config_dict: Dict[str, Any], url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Ollama service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Ollama API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.OpenAIAudioModels(api_key: str | None = None, url: str | None = None)[source]#
Bases:
object
Provides access to OpenAI’s Text-to-Speech (TTS) and Speech-to-Text (STT) models.
- speech_to_text(audio_file_path: str, translate_into_english: bool = False, **kwargs: Any) str [source]#
Convert speech audio to text.
- Parameters:
audio_file_path (str) – The audio file path, supporting one of these formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
translate_into_english (bool, optional) – Whether to translate the speech into English. Defaults to False.
**kwargs (Any) – Extra keyword arguments passed to the Speech-to-Text (STT) API.
- Returns:
The output text.
- Return type:
str
- Raises:
ValueError – If the audio file format is not supported.
Exception – If there’s an error during the STT API call.
- text_to_speech(input: str, model_type: AudioModelType = AudioModelType.TTS_1, voice: VoiceType = VoiceType.ALLOY, storage_path: str | None = None, **kwargs: Any) List[HttpxBinaryResponseContent] | HttpxBinaryResponseContent [source]#
Convert text to speech using OpenAI’s TTS model. This method converts the given input text to speech using the specified model and voice.
- Parameters:
input (str) – The text to be converted to speech.
model_type (AudioModelType, optional) – The TTS model to use. Defaults to AudioModelType.TTS_1.
voice (VoiceType, optional) – The voice to be used for generating speech. Defaults to VoiceType.ALLOY.
storage_path (str, optional) – The local path to store the generated speech file if provided, defaults to None.
**kwargs (Any) – Extra kwargs passed to the TTS API.
- Returns:
Union[List[_legacy_response.HttpxBinaryResponseContent], _legacy_response.HttpxBinaryResponseContent]: A list of response content objects from OpenAI if the input text exceeds 4096 characters, or a single response content object otherwise.
- Raises:
Exception – If there’s an error during the TTS API call.
- class camel.models.OpenAICompatibilityModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str, url: str, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for model backend supporting OpenAI compatibility.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.OpenAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
OpenAI API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to OpenAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.OpenSourceModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Class for interfacing with OpenAI-API-compatible servers running open-source models.
- check_model_config()[source]#
Check whether the model configuration is valid for open-source model backends.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API, or it does not contain model_path or server_url.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI-API-style chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.RekaModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
Reka API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to Reka API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to Reka API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion [source]#
Runs inference of Reka chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion.
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
Note: Temporarily using OpenAITokenCounter.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.SambaModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
SambaNova service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to SambaNova API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to SambaNova API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs SambaNova’s FastAPI service.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.StubModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
A dummy model used for unit tests.
- model_type = 'stub'#
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Run fake inference by returning a fixed string. All arguments are unused for the dummy model.
- Returns:
Response in the OpenAI API format.
- Return type:
Dict[str, Any]
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- class camel.models.TogetherAIModel(model_type: str, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
Constructor for Together AI backend with OpenAI compatibility. TODO: Add function calling support
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to TogetherAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to TogetherAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.VLLMModel(model_type: str, model_config_dict: Dict[str, Any], url: str | None = None, api_key: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
object
vLLM service interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to vLLM API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to OpenAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter
- property token_limit: int#
Returns the maximum token limit for the given model.
- Returns:
The maximum token limit for the given model.
- Return type:
int
- class camel.models.ZhipuAIModel(model_type: ModelType, model_config_dict: Dict[str, Any], api_key: str | None = None, url: str | None = None, token_counter: BaseTokenCounter | None = None)[source]#
Bases:
BaseModelBackend
ZhipuAI API in a unified BaseModelBackend interface.
- check_model_config()[source]#
Check whether the model configuration contains any unexpected arguments to ZhipuAI API.
- Raises:
ValueError – If the model configuration dictionary contains any unexpected arguments to ZhipuAI API.
- run(messages: List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]) ChatCompletion | Stream[ChatCompletionChunk] [source]#
Runs inference of OpenAI chat completion.
- Parameters:
messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.
- Returns:
ChatCompletion in the non-stream mode, or Stream[ChatCompletionChunk] in the stream mode.
- Return type:
Union[ChatCompletion, Stream[ChatCompletionChunk]]
- property stream: bool#
Returns whether the model is in stream mode, which sends partial results each time.
- Returns:
Whether the model is in stream mode.
- Return type:
bool
- property token_counter: BaseTokenCounter#
Initialize the token counter for the model backend.
- Returns:
The token counter following the model’s tokenization style.
- Return type:
BaseTokenCounter