camel.configs package#

Submodules#

camel.configs.anthropic_config module#

class camel.configs.anthropic_config.AnthropicConfig(*, tools: List[Any] | None = None, max_tokens: int = 256, stop_sequences: List[str] | NotGiven = NOT_GIVEN, temperature: float = 1, top_p: float | NotGiven = NOT_GIVEN, top_k: int | NotGiven = NOT_GIVEN, metadata: NotGiven = NOT_GIVEN, stream: bool = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Anthropic API.

See: https://docs.anthropic.com/claude/reference/complete_post

Parameters:
  • max_tokens (int, optional) – The maximum number of tokens to generate before stopping. Note that Anthropic models may stop before reaching this maximum; this parameter only specifies the absolute maximum number of tokens to generate. (default: 256)

  • stop_sequences (List[str], optional) – Sequences that will cause the model to stop generating completion text. Anthropic models stop on “\n\nHuman:”, and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.

  • temperature (float, optional) – Amount of randomness injected into the response. Defaults to 1. Ranges from 0 to 1. Use temp closer to 0 for analytical / multiple choice, and closer to 1 for creative and generative tasks. (default: 1)

  • top_p (float, optional) – Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches the probability specified by top_p. You should alter either temperature or top_p, but not both. (default: NOT_GIVEN)

  • top_k (int, optional) – Only sample from the top K options for each subsequent token. Used to remove “long tail” low-probability responses. (default: NOT_GIVEN)

  • metadata – An object describing metadata about the request.

  • stream (bool, optional) – Whether to incrementally stream the response using server-sent events. (default: False)

max_tokens: int#
metadata: NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=int, required=False, default=256), 'metadata': FieldInfo(annotation=NotGiven, required=False, default=NOT_GIVEN), 'stop_sequences': FieldInfo(annotation=Union[List[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=1), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'top_p': FieldInfo(annotation=Union[float, NotGiven], required=False, default=NOT_GIVEN)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

stop_sequences: List[str] | NotGiven#
stream: bool#
temperature: float#
top_k: int | NotGiven#
top_p: float | NotGiven#
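
A minimal construction sketch (the field names follow the signature above; forwarding the resulting dictionary to an Anthropic client is assumed to happen in the model backend):

    from camel.configs.anthropic_config import AnthropicConfig

    # Fields left unset stay NOT_GIVEN, letting the Anthropic API apply
    # its own server-side defaults.
    config = AnthropicConfig(
        max_tokens=512,
        temperature=0.3,                 # near 0 for analytical tasks
        stop_sequences=["\n\nHuman:"],
    )
    print(config.as_dict())              # as_dict() is inherited from BaseConfig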

camel.configs.base_config module#

class camel.configs.base_config.BaseConfig(*, tools: List[Any] | None = None)[source]#

Bases: ABC, BaseModel

Base configuration class for all models.

This class provides a common interface for all models, ensuring that all models have a consistent set of attributes and methods.

as_dict() → dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:
  A dictionary representation of the current configuration.

Return type:
  dict[str, Any]

classmethod fields_type_checking(tools)[source]#

Validate the type of tools in the configuration.

This method ensures that the tools provided in the configuration are instances of FunctionTool. If any tool is not an instance of FunctionTool, it raises a ValueError.

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
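
Since BaseConfig combines ABC with pydantic's BaseModel, provider-specific configs are created by subclassing it. A sketch with a hypothetical subclass (the field names below are illustrative, not part of the package):

    from typing import Optional

    from camel.configs.base_config import BaseConfig

    class MyProviderConfig(BaseConfig):
        # Hypothetical sampling knobs for an imaginary provider.
        temperature: Optional[float] = None
        max_tokens: Optional[int] = None

    config = MyProviderConfig(temperature=0.5)
    # The dictionary includes the inherited `tools` field alongside the
    # fields declared above.
    print(config.as_dict())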

camel.configs.gemini_config module#

class camel.configs.gemini_config.GeminiConfig(*, tools: List[Any] | None = None, candidate_count: int | None = None, stop_sequences: Iterable[str] | None = None, max_output_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, top_k: int | None = None, response_mime_type: str | None = None, response_schema: Any | None = None, safety_settings: Any | None = None, tool_config: Any | None = None, request_options: Any | None = None)[source]#

Bases: BaseConfig

A simple dataclass used to configure the generation parameters of GenerativeModel.generate_content.

Parameters:
  • candidate_count (int, optional) – Number of responses to return.

  • stop_sequences (Iterable[str], optional) – The set of character sequences (up to 5) that will stop output generation. If specified the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.

  • max_output_tokens (int, optional) – The maximum number of tokens to include in a candidate. If unset, this will default to output_token_limit specified in the model’s specification.

  • temperature (float, optional) – Controls the randomness of the output. Note: the default value varies by model; see the Model.temperature attribute of the Model returned by the genai.get_model function. Values can range from [0.0, 1.0], inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.

  • top_p (float, optional) – The maximum cumulative probability of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Tokens are sorted by their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability. Note: the default value varies by model; see the Model.top_p attribute of the Model returned by the genai.get_model function.

  • top_k (int, optional) – The maximum number of tokens to consider when sampling. The model uses combined Top-k and nucleus sampling. Top-k sampling considers the set of the top_k most probable tokens. Defaults to 40. Note: the default value varies by model; see the Model.top_k attribute of the Model returned by the genai.get_model function.

  • response_mime_type (str, optional) – Output response MIME type of the generated candidate text. Supported values: text/plain (default) for text output, and application/json for a JSON response in the candidates.

  • response_schema (Schema, optional) – Specifies the format of the JSON requested if response_mime_type is application/json.

  • safety_settings (SafetySettingOptions, optional) – Overrides for the model’s safety settings.

  • tools (FunctionLibraryType, optional) – protos.Tools; more info coming soon.

  • tool_config (ToolConfigType, optional) – More info coming soon.

  • request_options (RequestOptionsType, optional) – Options for the request.

candidate_count: int | None#
max_output_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'candidate_count': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'max_output_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'request_options': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'response_mime_type': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'response_schema': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'safety_settings': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'stop_sequences': FieldInfo(annotation=Union[Iterable[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tool_config': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

classmethod model_type_checking(data: Any)[source]#

Validate the type of tools in the configuration.

This method ensures that the tools provided in the configuration are instances of FunctionTool. If any tool is not an instance of FunctionTool, it raises a ValueError.

request_options: Any | None#
response_mime_type: str | None#
response_schema: Any | None#
safety_settings: Any | None#
stop_sequences: Iterable[str] | None#
temperature: float | None#
tool_config: Any | None#
top_k: int | None#
top_p: float | None#
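
A sketch requesting structured JSON output via response_mime_type, per the supported values listed above:

    from camel.configs.gemini_config import GeminiConfig

    config = GeminiConfig(
        temperature=0.2,
        max_output_tokens=1024,
        stop_sequences=["\n\n"],                 # up to 5 stop sequences
        response_mime_type="application/json",   # JSON response in the candidates
    )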

camel.configs.groq_config module#

class camel.configs.groq_config.GroqConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, user: str = '', tool_choice: dict[str, str] | str | None = 'auto')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using Groq's OpenAI-compatible API.

Reference: https://console.groq.com/docs/openai

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.

frequency_penalty: float#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default='auto'), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
top_p: float#
user: str#
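
A sketch that forces tool use through tool_choice (whether the backend honors this end-to-end depends on the tools supplied elsewhere):

    from camel.configs.groq_config import GroqConfig

    config = GroqConfig(
        temperature=0.0,          # deterministic output for tool routing
        tool_choice="required",   # the model must call one or more tools
        max_tokens=512,
    )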

camel.configs.litellm_config module#

class camel.configs.litellm_config.LiteLLMConfig(*, tools: List[Any] | None = None, timeout: float | str | None = None, temperature: float | None = None, top_p: float | None = None, n: int | None = None, stream: bool | None = None, stream_options: dict | None = None, stop: str | List[str] | None = None, max_tokens: int | None = None, presence_penalty: float | None = None, frequency_penalty: float | None = None, logit_bias: dict | None = None, user: str | None = None, response_format: dict | None = None, seed: int | None = None, tool_choice: str | dict | None = None, logprobs: bool | None = None, top_logprobs: int | None = None, deployment_id: str | None = None, extra_headers: dict | None = None, api_version: str | None = None, mock_response: str | None = None, custom_llm_provider: str | None = None, max_retries: int | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the LiteLLM API.

Parameters:
  • timeout (Optional[Union[float, str]], optional) – Request timeout. (default: None)

  • temperature (Optional[float], optional) – Temperature parameter for controlling randomness. (default: None)

  • top_p (Optional[float], optional) – Top-p parameter for nucleus sampling. (default: None)

  • n (Optional[int], optional) – Number of completions to generate. (default: None)

  • stream (Optional[bool], optional) – Whether to return a streaming response. (default: None)

  • stream_options (Optional[dict], optional) – Options for the streaming response. (default: None)

  • stop (Optional[Union[str, List[str]]], optional) – Sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (Optional[int], optional) – Maximum number of tokens to generate. (default: None)

  • presence_penalty (Optional[float], optional) – Penalize new tokens based on their existence in the text so far. (default: None)

  • frequency_penalty (Optional[float], optional) – Penalize new tokens based on their frequency in the text so far. (default: None)

  • logit_bias (Optional[dict], optional) – Modify the probability of specific tokens appearing in the completion. (default: None)

  • user (Optional[str], optional) – A unique identifier representing the end-user. (default: None)

  • response_format (Optional[dict], optional) – Response format parameters. (default: None)

  • seed (Optional[int], optional) – Random seed. (default: None)

  • tools (Optional[List], optional) – List of tools. (default: None)

  • tool_choice (Optional[Union[str, dict]], optional) – Tool choice parameters. (default: None)

  • logprobs (Optional[bool], optional) – Whether to return log probabilities of the output tokens. (default: None)

  • top_logprobs (Optional[int], optional) – Number of most likely tokens to return at each token position. (default: None)

  • deployment_id (Optional[str], optional) – Deployment ID. (default: None)

  • extra_headers (Optional[dict], optional) – Additional headers for the request. (default: None)

  • api_version (Optional[str], optional) – API version. (default: None)

  • mock_response (Optional[str], optional) – Mock completion response for testing or debugging. (default: None)

  • custom_llm_provider (Optional[str], optional) – Non-OpenAI LLM provider. (default: None)

  • max_retries (Optional[int], optional) – Maximum number of retries. (default: None)

api_version: str | None#
custom_llm_provider: str | None#
deployment_id: str | None#
extra_headers: dict | None#
frequency_penalty: float | None#
logit_bias: dict | None#
logprobs: bool | None#
max_retries: int | None#
max_tokens: int | None#
mock_response: str | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'api_version': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'custom_llm_provider': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'deployment_id': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'extra_headers': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'frequency_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'logit_bias': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'logprobs': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'max_retries': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'mock_response': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'n': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'presence_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'response_format': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[str, List[str], NoneType], required=False, default=None), 'stream': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'stream_options': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'timeout': FieldInfo(annotation=Union[float, str, NoneType], required=False, default=None), 'tool_choice': FieldInfo(annotation=Union[str, dict, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_logprobs': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'user': FieldInfo(annotation=Union[str, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

n: int | None#
presence_penalty: float | None#
response_format: dict | None#
seed: int | None#
stop: str | List[str] | None#
stream: bool | None#
stream_options: dict | None#
temperature: float | None#
timeout: float | str | None#
tool_choice: str | dict | None#
top_logprobs: int | None#
top_p: float | None#
user: str | None#
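
Because every field is optional, only the parameters you set are forwarded; mock_response also makes this config handy for offline testing. A sketch:

    from camel.configs.litellm_config import LiteLLMConfig

    config = LiteLLMConfig(
        temperature=0.7,
        max_tokens=256,
        timeout=30.0,            # request timeout (accepts float or str)
        max_retries=3,
        mock_response="pong",    # canned completion for tests, no real call needed
    )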

camel.configs.mistral_config module#

class camel.configs.mistral_config.MistralConfig(*, tools: List[Any] | None = None, temperature: float | None = None, top_p: float | None = None, max_tokens: int | None = None, min_tokens: int | None = None, stop: str | list[str] | None = None, random_seed: int | None = None, safe_prompt: bool = False, response_format: Dict[str, str] | Any | None = None, tool_choice: str | None = 'auto')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Mistral API.

Reference: mistralai/client-python

#TODO: Support stream mode

Parameters:
  • temperature (Optional[float], optional) – the temperature to use for sampling, e.g. 0.5.

  • top_p (Optional[float], optional) – the cumulative probability of tokens to generate, e.g. 0.9. Defaults to None.

  • max_tokens (Optional[int], optional) – the maximum number of tokens to generate, e.g. 100. Defaults to None.

  • min_tokens (Optional[int], optional) – the minimum number of tokens to generate, e.g. 100. Defaults to None.

  • stop (Optional[Union[str, list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided.

  • random_seed (Optional[int], optional) – the random seed to use for sampling, e.g. 42. Defaults to None.

  • safe_prompt (bool, optional) – whether to use safe prompt, e.g. true. Defaults to False.

  • response_format (Union[Dict[str, str], ResponseFormat], optional) – Format of the response.

  • tool_choice (str, optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "any" means the model must call one or more tools. "auto" is the default value.

classmethod fields_type_checking(response_format)[source]#

Validate the type of response_format in the configuration.

This method ensures that the response_format provided in the configuration is of a supported type and raises a ValueError otherwise.

max_tokens: int | None#
min_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'min_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'random_seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'response_format': FieldInfo(annotation=Union[Dict[str, str], Any, NoneType], required=False, default=None), 'safe_prompt': FieldInfo(annotation=bool, required=False, default=False), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tool_choice': FieldInfo(annotation=Union[str, NoneType], required=False, default='auto'), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

random_seed: int | None#
response_format: Dict[str, str] | Any | None#
safe_prompt: bool#
stop: str | list[str] | None#
temperature: float | None#
tool_choice: str | None#
top_p: float | None#
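
A sketch combining a fixed seed with a plain-dict response_format, per the annotations above:

    from camel.configs.mistral_config import MistralConfig

    config = MistralConfig(
        temperature=0.5,
        random_seed=42,                            # reproducible sampling
        response_format={"type": "json_object"},   # request JSON output
        tool_choice="auto",                        # the default; shown for clarity
    )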

camel.configs.ollama_config module#

class camel.configs.ollama_config.OllamaConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using Ollama's OpenAI-compatible API.

Reference: ollama/ollama

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

frequency_penalty: float#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
top_p: float#
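
A sketch of a streaming configuration for a local Ollama server:

    from camel.configs.ollama_config import OllamaConfig

    config = OllamaConfig(
        temperature=0.8,
        stream=True,            # partial deltas as server-sent events
        stop=["</answer>"],     # illustrative stop sequence
    )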

camel.configs.openai_config module#

class camel.configs.openai_config.ChatGPTConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: Type[BaseModel] | dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '', tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the OpenAI API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.

as_dict() → dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:
  A dictionary representation of the current configuration.

Return type:
  dict[str, Any]

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[Type[BaseModel], dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: Type[BaseModel] | dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
top_p: float#
user: str#
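
A sketch of the JSON mode described under response_format above; note the caveat that the prompt itself must also instruct the model to produce JSON:

    from camel.configs.openai_config import ChatGPTConfig

    config = ChatGPTConfig(
        temperature=0.2,
        response_format={"type": "json_object"},  # enables JSON mode
        max_tokens=1024,    # guards against the "unending whitespace" failure mode
    )
    openai_kwargs = config.as_dict()   # suitable for the chat-completions call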

camel.configs.reka_config module#

class camel.configs.reka_config.RekaConfig(*, tools: List[Any] | None = None, temperature: float | None = None, top_p: float | None = None, top_k: int | None = None, max_tokens: int | None = None, stop: str | list[str] | None = None, seed: int | None = None, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, use_search_engine: bool | None = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Reka API.

Reference: https://docs.reka.ai/api-reference/chat/create

Parameters:
  • temperature (Optional[float], optional) – the temperature to use for sampling, e.g. 0.5.

  • top_p (Optional[float], optional) – the cumulative probability of tokens to generate, e.g. 0.9. Defaults to None.

  • top_k (Optional[int], optional) – Parameter which forces the model to only consider the tokens with the top_k highest probabilities at the next step. Defaults to 1024.

  • max_tokens (Optional[int], optional) – the maximum number of tokens to generate, e.g. 100. Defaults to None.

  • stop (Optional[Union[str, list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided.

  • seed (Optional[int], optional) – the random seed to use for sampling, e.g. 42. Defaults to None.

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • use_search_engine (Optional[bool]) – Whether to consider using a search engine to complete the request. Note that even if this is set to True, the model might decide not to use search. (default: False)

as_dict() → dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:
  A dictionary representation of the current configuration.

Return type:
  dict[str, Any]

frequency_penalty: float#
max_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'use_search_engine': FieldInfo(annotation=Union[bool, NoneType], required=False, default=False)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

presence_penalty: float#
seed: int | None#
stop: str | list[str] | None#
temperature: float | None#
top_k: int | None#
top_p: float | None#
use_search_engine: bool | None#
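
A sketch enabling the optional search-engine assist:

    from camel.configs.reka_config import RekaConfig

    config = RekaConfig(
        temperature=0.5,
        top_k=128,
        use_search_engine=True,   # the model may still decide not to search
    )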

camel.configs.samba_config module#

class camel.configs.samba_config.SambaCloudAPIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '', tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the SambaNova Cloud API, which is OpenAI-compatible.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
top_p: float#
user: str#
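
Because model_config sets frozen=True, instances are immutable once created; in pydantic v2 an assignment to a frozen model raises a ValidationError. A sketch:

    from pydantic import ValidationError

    from camel.configs.samba_config import SambaCloudAPIConfig

    config = SambaCloudAPIConfig(temperature=0.2, n=1)
    try:
        config.temperature = 0.9       # rejected: the model is frozen
    except ValidationError as err:
        print(err)
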
class camel.configs.samba_config.SambaVerseAPIConfig(*, tools: List[Any] | None = None, temperature: float | None = 0.7, top_p: float | None = 0.95, top_k: int | None = 50, max_tokens: int | None = 2048, repetition_penalty: float | None = 1.0, stop: str | list[str] | None = '', stream: bool | None = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the SambaVerse API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.7)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 0.95)

  • top_k (int, optional) – Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses. (default: 50)

  • max_tokens (Optional[int], optional) – The maximum number of tokens to generate, e.g. 100. (default: 2048)

  • repetition_penalty (Optional[float], optional) – The parameter for repetition penalty. 1.0 means no penalty. (default: 1.0)

  • stop (Optional[Union[str, list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided. (default: "")

  • stream (Optional[bool]) – If True, partial message deltas will be sent as data-only server-sent events as they become available. Note that the SambaVerse API does not currently support stream mode. (default: False)

as_dict() → dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:
  A dictionary representation of the current configuration.

Return type:
  dict[str, Any]

max_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=2048), 'repetition_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=1.0), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=''), 'stream': FieldInfo(annotation=Union[bool, NoneType], required=False, default=False), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.7), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=50), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.95)}#

Metadata about the fields defined on the model: a mapping of field names to pydantic.fields.FieldInfo objects.

This replaces Model.__fields__ from Pydantic V1.

repetition_penalty: float | None#
stop: str | list[str] | None#
stream: bool | None#
temperature: float | None#
top_k: int | None#
top_p: float | None#
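
A sketch with a mild repetition penalty; stream stays False since the SambaVerse API does not support streaming:

    from camel.configs.samba_config import SambaVerseAPIConfig

    config = SambaVerseAPIConfig(
        temperature=0.7,
        repetition_penalty=1.2,   # values above 1.0 discourage verbatim repeats
        stream=False,
    )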

camel.configs.togetherai_config module#

class camel.configs.togetherai_config.TogetherAIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the TogetherAI API, which is OpenAI-compatible.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

as_dict() → dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]
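
For illustration, a minimal sketch of building this config and serializing it with as_dict (the stop string and token cap are arbitrary example values):

    from camel.configs.togetherai_config import TogetherAIConfig

    config = TogetherAIConfig(
        temperature=0.2,   # focused, mostly deterministic output
        top_p=1.0,
        max_tokens=512,    # cap on generated tokens
        stop=["\n\n"],     # arbitrary example stop sequence
    )
    payload = config.as_dict()  # plain dict, ready to serialize or pass to a backend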

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
top_p: float#
user: str#

camel.configs.vllm_config module#

class camel.configs.vllm_config.VLLMConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions through vLLM's OpenAI-compatible server.

Reference: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")
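
Since model_config sets frozen=True and extra='forbid', instances are immutable and reject unknown fields. A minimal sketch of working with that constraint (pydantic v2's model_copy is assumed for deriving variants):

    from camel.configs.vllm_config import VLLMConfig

    config = VLLMConfig(temperature=0.0, max_tokens=128, n=2)

    # config.temperature = 0.5   # would raise: the model is frozen
    # VLLMConfig(foo=1)          # would raise: extra fields are forbidden

    # Derive a variant instead of mutating in place.
    warmer = config.model_copy(update={"temperature": 0.5})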

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
top_p: float#
user: str#

camel.configs.zhipuai_config module#

class camel.configs.zhipuai_config.ZhipuAIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 0.6, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the ZhipuAI API, which is OpenAI-compatible.

Reference: https://open.bigmodel.cn/dev/api#glm-4v

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 0.6)

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.
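
A minimal construction sketch; the values are illustrative only:

    from camel.configs.zhipuai_config import ZhipuAIConfig

    config = ZhipuAIConfig(
        temperature=0.2,
        top_p=0.6,
        max_tokens=1024,
        tool_choice="auto",  # let the model pick between a message and a tool call
    )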

max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=0.6)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
top_p: float#

Module contents#

class camel.configs.AnthropicConfig(*, tools: List[Any] | None = None, max_tokens: int = 256, stop_sequences: List[str] | NotGiven = NOT_GIVEN, temperature: float = 1, top_p: float | NotGiven = NOT_GIVEN, top_k: int | NotGiven = NOT_GIVEN, metadata: NotGiven = NOT_GIVEN, stream: bool = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Anthropic API.

See: https://docs.anthropic.com/claude/reference/complete_post

Parameters:
  • max_tokens (int, optional) – The maximum number of tokens to generate before stopping. Note that Anthropic models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate. (default: 256)

  • stop_sequences (List[str], optional) – Sequences that will cause the model to stop generating completion text. Anthropic models stop on "\n\nHuman:", and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.

  • temperature (float, optional) – Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. (default: 1)

  • top_p (float, optional) – Use nucleus sampling. In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both. (default: NOT_GIVEN)

  • top_k (int, optional) – Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses. (default: NOT_GIVEN)

  • metadata – An object describing metadata about the request.

  • stream (bool, optional) – Whether to incrementally stream the response using server-sent events. (default: False)
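
For illustration, a sketch of a config tuned for analytical tasks (the extra stop sequence is an arbitrary example):

    from camel.configs import AnthropicConfig

    config = AnthropicConfig(
        max_tokens=512,
        temperature=0.0,        # near 0 for analytical / multiple-choice tasks
        stop_sequences=["###"], # arbitrary example of an additional stop string
    )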

max_tokens: int#
metadata: NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=int, required=False, default=256), 'metadata': FieldInfo(annotation=NotGiven, required=False, default=NOT_GIVEN), 'stop_sequences': FieldInfo(annotation=Union[List[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=1), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'top_p': FieldInfo(annotation=Union[float, NotGiven], required=False, default=NOT_GIVEN)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

stop_sequences: List[str] | NotGiven#
stream: bool#
temperature: float#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_k: int | NotGiven#
top_p: float | NotGiven#
class camel.configs.BaseConfig(*, tools: List[Any] | None = None)[source]#

Bases: ABC, BaseModel

Base configuration class for all models.

This class provides a common interface for all models, ensuring that all models have a consistent set of attributes and methods.

as_dict() dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]

classmethod fields_type_checking(tools)[source]#

Validate the type of tools in the configuration.

This method ensures that the tools provided in the configuration are instances of FunctionTool. If any tool is not an instance of FunctionTool, it raises a ValueError.

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.
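
Because BaseConfig is a frozen pydantic model with extra='forbid', backend-specific configs subclass it and add their own fields. A hypothetical subclass sketch (MyBackendConfig is not part of the package):

    from typing import Optional

    from camel.configs import BaseConfig

    class MyBackendConfig(BaseConfig):  # hypothetical, for illustration only
        temperature: float = 0.2
        max_tokens: Optional[int] = None

    config = MyBackendConfig(temperature=0.7)
    print(config.as_dict())  # dictionary form of the fields, e.g. for serialization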

class camel.configs.ChatGPTConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: Type[BaseModel] | dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '', tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the OpenAI API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.
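
Since response_format here also accepts a pydantic model type (Type[BaseModel] in the signature), a structured-output sketch might look like the following; the CityFact schema is hypothetical:

    from pydantic import BaseModel

    from camel.configs import ChatGPTConfig

    class CityFact(BaseModel):  # hypothetical schema, for illustration
        city: str
        population: int

    config = ChatGPTConfig(
        temperature=0.0,           # deterministic-leaning output for extraction
        response_format=CityFact,  # request output matching the schema
    )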

as_dict() dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[Type[BaseModel], dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: Type[BaseModel] | dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
user: str#
class camel.configs.CohereConfig(*, tools: List[Any] | None = None, temperature: float | None = 0.2, documents: list | None = None, max_tokens: int | None = None, stop_sequences: List[str] | None = None, seed: int | None = None, frequency_penalty: float | None = 0.0, presence_penalty: float | None = 0.0, k: int | None = 0, p: float | None = 0.75)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Cohere API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • documents (list, optional) – A list of relevant documents that the model can cite to generate a more accurate reply. Each document is either a string or document object with content and metadata. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens the model will generate as part of the response. (default: None)

  • stop_sequences (List[str], optional) – A list of up to 5 strings that the model will use to stop generation. If the model generates a string that matches any of the strings in the list, it will stop generating tokens and return the generated text up to that point, not including the stop sequence. (default: None)

  • seed (int, optional) – If specified, the backend will make a best effort to sample tokens deterministically, such that repeated requests with the same seed and parameters should return the same result. However, determinism cannot be totally guaranteed. (default: None)

  • frequency_penalty (float, optional) – Min value of 0.0, max value of 1.0. Used to reduce repetitiveness of generated tokens. The higher the value, the stronger a penalty is applied to previously present tokens, proportional to how many times they have already appeared in the prompt or prior generation. (default: 0.0)

  • presence_penalty (float, optional) – Min value of 0.0, max value of 1.0. Used to reduce repetitiveness of generated tokens. Similar to frequency_penalty, except that this penalty is applied equally to all tokens that have already appeared, regardless of their exact frequencies. (default: 0.0)

  • k (int, optional) – Ensures only the top k most likely tokens are considered for generation at each step. Min value of 0, max value of 500. (default: 0)

  • p (float, optional) – Ensures that only the most likely tokens, with total probability mass of p, are considered for generation at each step. If both k and p are enabled, p acts after k. Min value of 0.01, max value of 0.99. (default: 0.75)
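
A minimal sketch combining document grounding with the k/p samplers (the document snippet and values are illustrative):

    from camel.configs import CohereConfig

    config = CohereConfig(
        temperature=0.2,
        documents=["CAMEL is a multi-agent framework."],  # snippets the model may cite
        k=40,     # consider only the top 40 tokens per step (0 disables)
        p=0.75,   # nucleus mass; applied after k when both are set
        seed=42,  # best-effort determinism across repeated requests
    )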

documents: list | None#
frequency_penalty: float | None#
k: int | None#
max_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'documents': FieldInfo(annotation=Union[list, NoneType], required=False, default=None), 'frequency_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.0), 'k': FieldInfo(annotation=Union[int, NoneType], required=False, default=0), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'p': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.75), 'presence_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.0), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop_sequences': FieldInfo(annotation=Union[List[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

p: float | None#
presence_penalty: float | None#
seed: int | None#
stop_sequences: List[str] | None#
temperature: float | None#
class camel.configs.GeminiConfig(*, tools: List[Any] | None = None, candidate_count: int | None = None, stop_sequences: Iterable[str] | None = None, max_output_tokens: int | None = None, temperature: float | None = None, top_p: float | None = None, top_k: int | None = None, response_mime_type: str | None = None, response_schema: Any | None = None, safety_settings: Any | None = None, tool_config: Any | None = None, request_options: Any | None = None)[source]#

Bases: BaseConfig

A configuration class for the generation parameters of GenerativeModel.generate_content.

Parameters:
  • candidate_count (int, optional) – Number of responses to return.

  • stop_sequences (Iterable[str], optional) – The set of character sequences (up to 5) that will stop output generation. If specified, the API will stop at the first appearance of a stop sequence. The stop sequence will not be included as part of the response.

  • max_output_tokens (int, optional) – The maximum number of tokens to include in a candidate. If unset, this will default to the output_token_limit specified in the model’s specification.

  • temperature (float, optional) – Controls the randomness of the output. Note: The default value varies by model; see the Model.temperature attribute of the Model returned by the genai.get_model function. Values can range from 0.0 to 1.0, inclusive. A value closer to 1.0 will produce responses that are more varied and creative, while a value closer to 0.0 will typically result in more straightforward responses from the model.

  • top_p (float, optional) – The maximum cumulative probability of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Tokens are sorted based on their assigned probabilities so that only the most likely tokens are considered. Top-k sampling directly limits the maximum number of tokens to consider, while nucleus sampling limits the number of tokens based on the cumulative probability. Note: The default value varies by model; see the Model.top_p attribute of the Model returned by the genai.get_model function.

  • top_k (int, optional) – The maximum number of tokens to consider when sampling. The model uses combined top-k and nucleus sampling. Top-k sampling considers the set of top_k most probable tokens. Defaults to 40. Note: The default value varies by model; see the Model.top_k attribute of the Model returned by the genai.get_model function.

  • response_mime_type (str, optional) – Output response mimetype of the generated candidate text. Supported mimetypes: text/plain (default) for text output, and application/json for a JSON response in the candidates.

  • response_schema (Schema, optional) – Specifies the format of the JSON requested if response_mime_type is application/json.

  • safety_settings (SafetySettingOptions, optional) – Overrides for the model’s safety settings.

  • tools (FunctionLibraryType, optional) – protos.Tools more info coming soon.

  • tool_config (ToolConfigType, optional) – more info coming soon.

  • request_options (RequestOptionsType, optional) – Options for the request.
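
For illustration, a sketch requesting JSON output with a token cap (values are arbitrary):

    from camel.configs import GeminiConfig

    config = GeminiConfig(
        temperature=0.4,
        max_output_tokens=512,
        response_mime_type="application/json",  # JSON instead of the text/plain default
        stop_sequences=["END"],                 # arbitrary example stop string
    )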

candidate_count: int | None#
max_output_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'candidate_count': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'max_output_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'request_options': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'response_mime_type': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'response_schema': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'safety_settings': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'stop_sequences': FieldInfo(annotation=Union[Iterable[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tool_config': FieldInfo(annotation=Union[Any, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

classmethod model_type_checking(data: Any)[source]#

Validate the type of tools in the configuration.

This method ensures that the tools provided in the configuration are instances of FunctionTool. If any tool is not an instance of FunctionTool, it raises a ValueError.

request_options: Any | None#
response_mime_type: str | None#
response_schema: Any | None#
safety_settings: Any | None#
stop_sequences: Iterable[str] | None#
temperature: float | None#
tool_config: Any | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_k: int | None#
top_p: float | None#
class camel.configs.GroqConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, user: str = '', tool_choice: dict[str, str] | str | None = 'auto')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Groq API, which is OpenAI-compatible.

Reference: https://console.groq.com/docs/openai

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.
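
A minimal sketch; note that tool_choice defaults to "auto" here, unlike most of the other configs:

    from camel.configs import GroqConfig

    config = GroqConfig(
        temperature=0.2,
        stream=True,             # server-sent event deltas
        tool_choice="required",  # force at least one tool call
    )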

frequency_penalty: float#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default='auto'), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
user: str#
class camel.configs.LiteLLMConfig(*, tools: List[Any] | None = None, timeout: float | str | None = None, temperature: float | None = None, top_p: float | None = None, n: int | None = None, stream: bool | None = None, stream_options: dict | None = None, stop: str | List[str] | None = None, max_tokens: int | None = None, presence_penalty: float | None = None, frequency_penalty: float | None = None, logit_bias: dict | None = None, user: str | None = None, response_format: dict | None = None, seed: int | None = None, tool_choice: str | dict | None = None, logprobs: bool | None = None, top_logprobs: int | None = None, deployment_id: str | None = None, extra_headers: dict | None = None, api_version: str | None = None, mock_response: str | None = None, custom_llm_provider: str | None = None, max_retries: int | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the LiteLLM API.

Parameters:
  • timeout (Optional[Union[float, str]], optional) – Request timeout. (default: None)

  • temperature (Optional[float], optional) – Temperature parameter for controlling randomness. (default: None)

  • top_p (Optional[float], optional) – Top-p parameter for nucleus sampling. (default: None)

  • n (Optional[int], optional) – Number of completions to generate. (default: None)

  • stream (Optional[bool], optional) – Whether to return a streaming response. (default: None)

  • stream_options (Optional[dict], optional) – Options for the streaming response. (default: None)

  • stop (Optional[Union[str, List[str]]], optional) – Sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (Optional[int], optional) – Maximum number of tokens to generate. (default: None)

  • presence_penalty (Optional[float], optional) – Penalize new tokens based on their existence in the text so far. (default: None)

  • frequency_penalty (Optional[float], optional) – Penalize new tokens based on their frequency in the text so far. (default: None)

  • logit_bias (Optional[dict], optional) – Modify the probability of specific tokens appearing in the completion. (default: None)

  • user (Optional[str], optional) – A unique identifier representing the end-user. (default: None)

  • response_format (Optional[dict], optional) – Response format parameters. (default: None)

  • seed (Optional[int], optional) – Random seed. (default: None)

  • tools (Optional[List], optional) – List of tools. (default: None)

  • tool_choice (Optional[Union[str, dict]], optional) – Tool choice parameters. (default: None)

  • logprobs (Optional[bool], optional) – Whether to return log probabilities of the output tokens. (default: None)

  • top_logprobs (Optional[int], optional) – Number of most likely tokens to return at each token position. (default: None)

  • deployment_id (Optional[str], optional) – Deployment ID. (default: None)

  • extra_headers (Optional[dict], optional) – Additional headers for the request. (default: None)

  • api_version (Optional[str], optional) – API version. (default: None)

  • mock_response (Optional[str], optional) – Mock completion response for testing or debugging. (default: None)

  • custom_llm_provider (Optional[str], optional) – Non-OpenAI LLM provider. (default: None)

  • max_retries (Optional[int], optional) – Maximum number of retries. (default: None)
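
Since every field defaults to None, only the options you set are sent; mock_response makes this config handy for offline tests. A sketch with illustrative values:

    from camel.configs import LiteLLMConfig

    config = LiteLLMConfig(
        temperature=0.2,
        timeout=30.0,                        # request timeout; seconds assumed here
        max_retries=3,
        mock_response="Hello from a mock!",  # canned reply for testing/debugging
    )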

api_version: str | None#
custom_llm_provider: str | None#
deployment_id: str | None#
extra_headers: dict | None#
frequency_penalty: float | None#
logit_bias: dict | None#
logprobs: bool | None#
max_retries: int | None#
max_tokens: int | None#
mock_response: str | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'api_version': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'custom_llm_provider': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'deployment_id': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'extra_headers': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'frequency_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'logit_bias': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'logprobs': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'max_retries': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'mock_response': FieldInfo(annotation=Union[str, NoneType], required=False, default=None), 'n': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'presence_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'response_format': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[str, List[str], NoneType], required=False, default=None), 'stream': FieldInfo(annotation=Union[bool, NoneType], required=False, default=None), 'stream_options': FieldInfo(annotation=Union[dict, NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'timeout': FieldInfo(annotation=Union[float, str, NoneType], required=False, default=None), 'tool_choice': FieldInfo(annotation=Union[str, dict, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_logprobs': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'user': FieldInfo(annotation=Union[str, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int | None#
presence_penalty: float | None#
response_format: dict | None#
seed: int | None#
stop: str | List[str] | None#
stream: bool | None#
stream_options: dict | None#
temperature: float | None#
timeout: float | str | None#
tool_choice: str | dict | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_logprobs: int | None#
top_p: float | None#
user: str | None#
class camel.configs.MistralConfig(*, tools: List[Any] | None = None, temperature: float | None = None, top_p: float | None = None, max_tokens: int | None = None, min_tokens: int | None = None, stop: str | list[str] | None = None, random_seed: int | None = None, safe_prompt: bool = False, response_format: Dict[str, str] | Any | None = None, tool_choice: str | None = 'auto')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Mistral API.

Reference: mistralai/client-python

#TODO: Support stream mode

Parameters:
  • temperature (Optional[float], optional) – the temperature to use for sampling, e.g. 0.5. Defaults to None.

  • top_p (Optional[float], optional) – the cumulative probability of tokens to generate, e.g. 0.9. Defaults to None.

  • max_tokens (Optional[int], optional) – the maximum number of tokens to generate, e.g. 100. Defaults to None.

  • min_tokens (Optional[int], optional) – the minimum number of tokens to generate, e.g. 100. Defaults to None.

  • stop (Optional[Union[str, list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided. Defaults to None.

  • random_seed (Optional[int], optional) – the random seed to use for sampling, e.g. 42. Defaults to None.

  • safe_prompt (bool, optional) – whether to use safe prompt, e.g. true. Defaults to False.

  • response_format (Union[Dict[str, str], ResponseFormat], optional) – format of the response. Defaults to None.

  • tool_choice (str, optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "any" means the model must call one or more tools. "auto" is the default value.
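
A minimal sketch emphasizing reproducibility and the safety prompt (values are illustrative):

    from camel.configs import MistralConfig

    config = MistralConfig(
        temperature=0.5,
        max_tokens=200,
        random_seed=42,    # repeatable sampling across calls
        safe_prompt=True,  # prepend Mistral's safety prompt
    )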

classmethod fields_type_checking(response_format)[source]#

Validate the type of tools in the configuration.

This method ensures that the tools provided in the configuration are instances of FunctionTool. If any tool is not an instance of FunctionTool, it raises a ValueError.

max_tokens: int | None#
min_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'min_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'random_seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'response_format': FieldInfo(annotation=Union[Dict[str, str], Any, NoneType], required=False, default=None), 'safe_prompt': FieldInfo(annotation=bool, required=False, default=False), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tool_choice': FieldInfo(annotation=Union[str, NoneType], required=False, default='auto'), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

random_seed: int | None#
response_format: Dict[str, str] | Any | None#
safe_prompt: bool#
stop: str | list[str] | None#
temperature: float | None#
tool_choice: str | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float | None#
class camel.configs.OllamaConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using Ollama's OpenAI-compatible API.

Reference: ollama/ollama

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)
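
For illustration, a sketch for a locally served model (the stop marker is an arbitrary example):

    from camel.configs import OllamaConfig

    config = OllamaConfig(
        temperature=0.2,
        max_tokens=256,
        stop=["</answer>"],  # arbitrary example stop sequence
    )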

frequency_penalty: float#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
class camel.configs.QwenConfig(include_usage: bool = True, *, tools: List[Any] | None = None, stream: bool = False, temperature: float = 0.3, top_p: float = 0.9, presence_penalty: float = 0.0, max_tokens: int | NotGiven = NOT_GIVEN, seed: int | None = None, stop: str | list | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Qwen API. You can refer to the following link for more details: https://help.aliyun.com/zh/model-studio/developer-reference/use-qwen-by-calling-api

Parameters:
  • stream (bool, optional) – Whether to stream the response. (default: False)

  • temperature (float, optional) – Controls the diversity and focus of the generated results. Lower values make the output more focused, while higher values make it more diverse. (default: 0.3)

  • top_p (float, optional) – Controls the diversity and focus of the generated results. Higher values make the output more diverse, while lower values make it more focused. (default: 0.9)

  • presence_penalty (float, optional) – Controls the repetition of content in the generated results. Positive values reduce the repetition of content, while negative values increase it. (default: 0.0)

  • response_format (object, optional) – Specifies the format of the returned content. The available values are {“type”: “text”} or {“type”: “json_object”}. Setting it to {“type”: “json_object”} will output a standard JSON string. (default: {"type": "text"})

  • max_tokens (Union[int, NotGiven], optional) – The maximum number of tokens the model is allowed to generate. (default: NOT_GIVEN)

  • seed (int, optional) – Sets the random seed to make text generation more deterministic; passing the same seed value in each call, with all other parameters unchanged, makes the model likely to return the same result across runs. (default: None)

  • stop (str or list, optional) – The model automatically stops generating text just before it would emit the specified string or token_id; this can be used, for example, to keep sensitive words out of the output. (default: None)

  • tools (list, optional) – Specifies an array of tools that the model can call. It can contain one or more tool objects. During a function call process, the model will select one tool from the array. (default: None)

  • extra_body (dict, optional) – Additional parameters to be sent to the Qwen API. If you want to enable internet search, you can set this parameter to {“enable_search”: True}. (default: {"enable_search": False})

  • include_usage (bool, optional) – When streaming, specifies whether to include usage information in stream_options. (default: True)
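
A minimal construction sketch; values are illustrative, and as_dict() is assumed to be inherited from BaseConfig (it is documented on other configs in this module):

    from camel.configs import QwenConfig

    config = QwenConfig(
        temperature=0.3,
        top_p=0.9,
        seed=42,                # same seed + same params -> likely same output
        stop=["Observation:"],  # illustrative stop string
    )
    print(config.as_dict())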

extra_body: ClassVar[dict] = {'enable_search': False}#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[str, list, NoneType], required=False, default=None), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.3), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=0.9)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

presence_penalty: float#
response_format: ClassVar[dict] = {'type': 'text'}#
seed: int | None#
stop: str | list | None#
stream: bool#
temperature: float#
top_p: float#
class camel.configs.RekaConfig(*, tools: List[Any] | None = None, temperature: float | None = None, top_p: float | None = None, top_k: int | None = None, max_tokens: int | None = None, stop: str | list[str] | None = None, seed: int | None = None, frequency_penalty: float = 0.0, presence_penalty: float = 0.0, use_search_engine: bool | None = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Reka API.

Reference: https://docs.reka.ai/api-reference/chat/create

Parameters:
  • temperature (Optional[float], optional) – the temperature to use for sampling, e.g. 0.5. Defaults to None.

  • top_p (Optional[float], optional) – the cumulative probability of tokens to generate, e.g. 0.9. Defaults to None.

  • top_k (Optional[int], optional) – Parameter which forces the model to only consider the tokens with the top_k highest probabilities at the next step. Defaults to None; if unset, the Reka API applies its own default (1024).

  • max_tokens (Optional[int], optional) – the maximum number of tokens to generate, e.g. 100. Defaults to None.

  • stop (Optional[Union[str,list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided. Defaults to None.

  • seed (Optional[int], optional) – the random seed to use for sampling, e.g. 42. Defaults to None.

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • use_search_engine (Optional[bool]) – Whether to consider using a search engine to complete the request. Note that even if this is set to True, the model may decide not to use search. Defaults to False.
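
A minimal sketch constructing the config and serializing it via the as_dict() method documented just below (values are illustrative):

    from camel.configs import RekaConfig

    config = RekaConfig(temperature=0.5, top_p=0.9, max_tokens=100, seed=42)
    payload = config.as_dict()  # plain dict, e.g. for building a request body
    print(payload)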

as_dict() dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]

frequency_penalty: float#
max_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'seed': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=None), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=None), 'use_search_engine': FieldInfo(annotation=Union[bool, NoneType], required=False, default=False)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

presence_penalty: float#
seed: int | None#
stop: str | list[str] | None#
temperature: float | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_k: int | None#
top_p: float | None#
use_search_engine: bool | None#
class camel.configs.SambaCloudAPIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '', tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the SambaNova Cloud API, which is OpenAI-compatible.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.
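
A construction sketch exercising the tool_choice semantics above; note that the config is frozen (per model_config below), so build a new instance rather than mutating one. The empty tools list is a placeholder for real FunctionTool schemas:

    from camel.configs import SambaCloudAPIConfig

    config = SambaCloudAPIConfig(
        n=2,                 # generate two completion choices
        tool_choice="auto",  # model may call a tool or just answer
        tools=[],            # placeholder; FunctionTool schemas would go here
    )
    print(config.tool_choice)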

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
user: str#
class camel.configs.SambaVerseAPIConfig(*, tools: List[Any] | None = None, temperature: float | None = 0.7, top_p: float | None = 0.95, top_k: int | None = 50, max_tokens: int | None = 2048, repetition_penalty: float | None = 1.0, stop: str | list[str] | None = '', stream: bool | None = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the SambaVerse API.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.7)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 0.95)

  • top_k (int, optional) – Only sample from the top K options for each subsequent token. Used to remove “long tail” low probability responses. (default: 50)

  • max_tokens (Optional[int], optional) – The maximum number of tokens to generate, e.g. 100. (default: 2048)

  • repetition_penalty (Optional[float], optional) – The parameter for repetition penalty. 1.0 means no penalty. (default: 1.0)

  • stop (Optional[Union[str,list[str]]]) – Stop generation if this token is detected, or if one of these tokens is detected when a list of strings is provided. (default: "")

  • stream (Optional[bool]) – If True, partial message deltas will be sent as data-only server-sent events as they become available. Note that the SambaVerse API does not currently support stream mode. (default: False)
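
A sketch pairing a mild repetition penalty with a token cap; stream is left at its default of False because the API does not support streaming (see the note above). Values are illustrative:

    from camel.configs import SambaVerseAPIConfig

    config = SambaVerseAPIConfig(
        temperature=0.7,
        repetition_penalty=1.2,  # values above 1.0 discourage repetition
        max_tokens=512,
    )
    print(config.as_dict())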

as_dict() dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]

max_tokens: int | None#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NoneType], required=False, default=2048), 'repetition_penalty': FieldInfo(annotation=Union[float, NoneType], required=False, default=1.0), 'stop': FieldInfo(annotation=Union[str, list[str], NoneType], required=False, default=''), 'stream': FieldInfo(annotation=Union[bool, NoneType], required=False, default=False), 'temperature': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.7), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_k': FieldInfo(annotation=Union[int, NoneType], required=False, default=50), 'top_p': FieldInfo(annotation=Union[float, NoneType], required=False, default=0.95)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

repetition_penalty: float | None#
stop: str | list[str] | None#
stream: bool | None#
temperature: float | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_k: int | None#
top_p: float | None#
class camel.configs.TogetherAIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Together AI API, which is OpenAI-compatible.

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")
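
A sketch of logit_bias usage; the token ID below is hypothetical, since real IDs depend on the served model's tokenizer, and the user string is an illustrative identifier:

    from camel.configs import TogetherAIConfig

    config = TogetherAIConfig(
        temperature=0.2,
        logit_bias={"50256": -100},  # hypothetical token ID, effectively banned
        user="user-1234",            # illustrative end-user identifier
    )
    print(config.as_dict())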

as_dict() dict[str, Any][source]#

Convert the current configuration to a dictionary.

This method converts the current configuration object to a dictionary representation, which can be used for serialization or other purposes.

Returns:

A dictionary representation of the current configuration.

Return type:

dict[str, Any]

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
user: str#
class camel.configs.VLLMConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 1.0, n: int = 1, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, presence_penalty: float = 0.0, response_format: dict | NotGiven = NOT_GIVEN, frequency_penalty: float = 0.0, logit_bias: dict = None, user: str = '')[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using vLLM's OpenAI-compatible server.

Reference: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 1.0)

  • n (int, optional) – How many chat completion choices to generate for each input message. (default: 1)

  • response_format (object, optional) – An object specifying the format that the model must output. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106. Setting to {“type”: “json_object”} enables JSON mode, which guarantees the message the model generates is valid JSON. Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if finish_reason=”length”, which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • presence_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model’s likelihood to talk about new topics. See more information about frequency and presence penalties. (default: 0.0)

  • frequency_penalty (float, optional) – Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model’s likelihood to repeat the same line verbatim. See more information about frequency and presence penalties. (default: 0.0)

  • logit_bias (dict, optional) – Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. (default: {})

  • user (str, optional) – A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. (default: "")
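
A sketch enabling JSON mode per the response_format notes above; per those notes, remember to also instruct the model to emit JSON in a system or user message. Values are illustrative:

    from camel.configs import VLLMConfig

    config = VLLMConfig(
        temperature=0.0,                          # near-deterministic decoding
        response_format={"type": "json_object"},  # JSON mode
        max_tokens=256,                           # guards against runaway output
    )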

frequency_penalty: float#
logit_bias: dict#
max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'frequency_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'logit_bias': FieldInfo(annotation=dict, required=False, default_factory=dict), 'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'n': FieldInfo(annotation=int, required=False, default=1), 'presence_penalty': FieldInfo(annotation=float, required=False, default=0.0), 'response_format': FieldInfo(annotation=Union[dict, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=1.0), 'user': FieldInfo(annotation=str, required=False, default='')}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

n: int#
presence_penalty: float#
response_format: dict | NotGiven#
stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#
user: str#
class camel.configs.YiConfig(*, tools: List[Any] | None = None, tool_choice: dict[str, str] | str | None = None, max_tokens: int | NotGiven = NOT_GIVEN, top_p: float = 0.9, temperature: float = 0.3, stream: bool = False)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the Yi API. You can refer to the following link for more details: https://platform.lingyiwanwu.com/docs/api-reference

Parameters:
  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" or specifying a particular tool via {“type”: “function”, “function”: {“name”: “some_function”}} can be used to guide the model to use tools more strongly. (default: None)

  • max_tokens (int, optional) – Specifies the maximum number of tokens the model can generate. This sets an upper limit, but does not guarantee that this number will always be reached. (default: NOT_GIVEN; the Yi API’s own documented default is 5000)

  • top_p (float, optional) – Controls the randomness of the generated results. Lower values lead to less randomness, while higher values increase randomness. (default: 0.9)

  • temperature (float, optional) – Controls the diversity and focus of the generated results. Lower values make the output more focused, while higher values make it more diverse. (default: 0.3)

  • stream (bool, optional) – If True, enables streaming output. (default: False)
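
A minimal streaming configuration sketch (values are illustrative):

    from camel.configs import YiConfig

    config = YiConfig(
        temperature=0.3,
        top_p=0.9,
        stream=True,  # deltas arrive incrementally as server-sent events
    )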

max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.3), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=0.9)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
top_p: float#
class camel.configs.ZhipuAIConfig(*, tools: List[Any] | None = None, temperature: float = 0.2, top_p: float = 0.6, stream: bool = False, stop: str | Sequence[str] | NotGiven = NOT_GIVEN, max_tokens: int | NotGiven = NOT_GIVEN, tool_choice: dict[str, str] | str | None = None)[source]#

Bases: BaseConfig

Defines the parameters for generating chat completions using the ZhipuAI API through its OpenAI-compatible interface.

Reference: https://open.bigmodel.cn/dev/api#glm-4v

Parameters:
  • temperature (float, optional) – Sampling temperature to use, between 0 and 2. Higher values make the output more random, while lower values make it more focused and deterministic. (default: 0.2)

  • top_p (float, optional) – An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. (default: 0.6)

  • stream (bool, optional) – If True, partial message deltas will be sent as data-only server-sent events as they become available. (default: False)

  • stop (str or list, optional) – Up to 4 sequences where the API will stop generating further tokens. (default: None)

  • max_tokens (int, optional) – The maximum number of tokens to generate in the chat completion. The total length of input tokens and generated tokens is limited by the model’s context length. (default: None)

  • tools (list[FunctionTool], optional) – A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

  • tool_choice (Union[dict[str, str], str], optional) – Controls which (if any) tool is called by the model. "none" means the model will not call any tool and instead generates a message. "auto" means the model can pick between generating a message or calling one or more tools. "required" means the model must call one or more tools. Specifying a particular tool via {“type”: “function”, “function”: {“name”: “my_function”}} forces the model to call that tool. "none" is the default when no tools are present. "auto" is the default if tools are present.
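
A construction sketch combining stop sequences with a token cap (values are illustrative):

    from camel.configs import ZhipuAIConfig

    config = ZhipuAIConfig(
        temperature=0.2,
        top_p=0.6,
        stop=["\n\n"],   # up to 4 stop sequences are allowed
        max_tokens=1024,
    )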

max_tokens: int | NotGiven#
model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'forbid', 'frozen': True, 'protected_namespaces': ()}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'max_tokens': FieldInfo(annotation=Union[int, NotGiven], required=False, default=NOT_GIVEN), 'stop': FieldInfo(annotation=Union[str, Sequence[str], NotGiven], required=False, default=NOT_GIVEN), 'stream': FieldInfo(annotation=bool, required=False, default=False), 'temperature': FieldInfo(annotation=float, required=False, default=0.2), 'tool_choice': FieldInfo(annotation=Union[dict[str, str], str, NoneType], required=False, default=None), 'tools': FieldInfo(annotation=Union[List[Any], NoneType], required=False, default=None), 'top_p': FieldInfo(annotation=float, required=False, default=0.6)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

stop: str | Sequence[str] | NotGiven#
stream: bool#
temperature: float#
tool_choice: dict[str, str] | str | None#
tools: List[Any] | None#

A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported.

top_p: float#