camel.utils package#

Submodules#

camel.utils.async_func module#

camel.utils.async_func.sync_funcs_to_async(funcs: list[FunctionTool]) list[FunctionTool][source]#

Convert a list of Python synchronous functions to Python asynchronous functions.

Parameters:

funcs (list[FunctionTool]) – List of Python synchronous functions in the FunctionTool format.

Returns:

List of Python asynchronous functions

in the FunctionTool format.

Return type:

list[FunctionTool]
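Example

A minimal usage sketch (the add function below is illustrative; it assumes FunctionTool wraps a plain Python function, as in camel.toolkits):

from camel.toolkits import FunctionTool
from camel.utils.async_func import sync_funcs_to_async

def add(a: int, b: int) -> int:
    r"""Add two integers."""
    return a + b

sync_tools = [FunctionTool(add)]
async_tools = sync_funcs_to_async(sync_tools)
# Each returned FunctionTool now wraps an async version of the function.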

camel.utils.commons module#

class camel.utils.commons.AgentOpsMeta(name, bases, dct)[source]#

Bases: type

Metaclass that automatically decorates all callable attributes with the agentops_decorator, except for the ‘get_tools’ method.

Methods:

__new__(cls, name, bases, dct): Creates a new class with decorated methods.

class camel.utils.commons.BatchProcessor(max_workers: int | None = None, initial_batch_size: int | None = None, monitoring_interval: float = 5.0, cpu_threshold: float = 80.0, memory_threshold: float = 85.0)[source]#

Bases: object

Handles batch processing with dynamic sizing and error handling based on system load.

adjust_batch_size(success: bool, processing_time: float | None = None) None[source]#

Adjust batch size based on success/failure and system resources.

Parameters:
  • success (bool) – Whether the last batch completed successfully

  • processing_time (Optional[float]) – Time taken to process the last batch. (default: None)

get_performance_metrics() Dict[str, Any][source]#

Get current performance metrics.

Returns:

Dict containing performance metrics, including:

  • total_processed: Total number of batches processed

  • error_rate: Percentage of failed batches

  • avg_processing_time: Average time per batch

  • current_batch_size: Current batch size

  • current_workers: Current number of workers

  • current_cpu: Current CPU usage percentage

  • current_memory: Current memory usage percentage

Return type:

Dict[str, Any]
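Example

A usage sketch built only from the methods documented above; the workload and the doubling step are illustrative stand-ins for real batch work:

import time

from camel.utils.commons import BatchProcessor

processor = BatchProcessor(initial_batch_size=10)
items = list(range(100))  # stand-in workload
while items:
    size = processor.get_performance_metrics()["current_batch_size"]
    batch, items = items[:size], items[size:]
    start = time.time()
    try:
        _ = [x * 2 for x in batch]  # real batch work goes here
        processor.adjust_batch_size(True, time.time() - start)
    except Exception:
        processor.adjust_batch_size(False, time.time() - start)
print(processor.get_performance_metrics())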

camel.utils.commons.agentops_decorator(func)[source]#

Decorator that records the execution of a function if ToolEvent is available.

Parameters:

func (callable) – The function to be decorated.

Returns:

The wrapped function which records its execution details.

Return type:

callable

camel.utils.commons.api_keys_required(param_env_list: List[Tuple[str | None, str]]) Callable[[F], F][source]#

A decorator to check if the required API keys are provided in the environment variables or as function arguments.

Parameters:

param_env_list (List[Tuple[Optional[str], str]]) – A list of tuples where each tuple contains a function argument name (as the first element, or None) and the corresponding environment variable name (as the second element) that holds the API key.

Returns:

The original function wrapped with the added check for the required API keys.

Return type:

Callable[[F], F]

Raises:

ValueError – If any of the required API keys are missing, either from the function arguments or environment variables.

Example

@api_keys_required([
    ('api_key_arg', 'API_KEY_1'),
    ('another_key_arg', 'API_KEY_2'),
    (None, 'API_KEY_3'),
])
def some_api_function(api_key_arg=None, another_key_arg=None):
    # Function implementation that requires API keys
    ...

camel.utils.commons.check_server_running(server_url: str) bool[source]#

Check whether the port referred to by the server URL is open.

Parameters:

server_url (str) – The URL of the server running the LLM inference service.

Returns:

Whether the port is open for packets (server is running).

Return type:

bool

camel.utils.commons.create_chunks(text: str, n: int) List[str][source]#

Returns successive n-sized chunks from the provided text, splitting it into smaller chunks of size at most n.

Parameters:
  • text (str) – The text to be split.

  • n (int) – The max length of a single chunk.

Returns:

A list of split texts.

Return type:

List[str]
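Example

A small sketch; the exact chunk boundaries depend on the implementation, so the output shown assumes plain length-based splitting:

from camel.utils.commons import create_chunks

chunks = create_chunks("abcdefgh", 3)
print(chunks)  # e.g. ['abc', 'def', 'gh'] under plain length-based splitting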

camel.utils.commons.dependencies_required(*required_modules: str) Callable[[F], F][source]#

A decorator to ensure that specified Python modules are available before a function executes.

Parameters:

required_modules (str) – The required modules to be checked for availability.

Returns:

The original function with the added check for required module dependencies.

Return type:

Callable[[F], F]

Raises:

ImportError – If any of the required modules are not available.

Example

@dependencies_required('numpy', 'pandas')
def data_processing_function():
    # Function implementation...
    ...

camel.utils.commons.download_github_subdirectory(repo: str, subdir: str, data_dir: Path, branch='main')[source]#

Download a subdirectory of the benchmark's GitHub repo.

This function downloads all files and subdirectories from a specified subdirectory of a GitHub repository and saves them to a local directory.

Parameters:
  • repo (str) – The name of the GitHub repository in the format “owner/repo”.

  • subdir (str) – The path to the subdirectory within the repository to download.

  • data_dir (Path) – The local directory where the files will be saved.

  • branch (str, optional) – The branch of the repository to use. Defaults to “main”.

camel.utils.commons.download_tasks(task: TaskType, folder_path: str) None[source]#

Downloads task-related files from a specified URL and extracts them.

This function downloads a zip file containing tasks based on the specified task type from a predefined URL, saves it to folder_path, and then extracts the contents of the zip file into the same folder. After extraction, the zip file is deleted.

Parameters:
  • task (TaskType) – An enum representing the type of task to download.

  • folder_path (str) – The path of the folder where the zip file will be downloaded and extracted.

camel.utils.commons.func_string_to_callable(code: str)[source]#

Convert a function code string to a callable function object.

Parameters:

code (str) – The function code as a string.

Returns:

The callable function object extracted from the code string.

Return type:

Callable[..., Any]
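Example

A minimal sketch:

from camel.utils.commons import func_string_to_callable

code = (
    "def greet(name: str) -> str:\n"
    "    return f'Hello, {name}!'"
)
greet = func_string_to_callable(code)
print(greet("CAMEL"))  # Hello, CAMEL!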

camel.utils.commons.generate_prompt_for_structured_output(response_format: Type[BaseModel] | None, user_message: str) str[source]#

This function generates a prompt based on the provided Pydantic model and user message.

Parameters:
  • response_format (Type[BaseModel]) – The Pydantic model class.

  • user_message (str) – The user message to be used in the prompt.

Returns:

A prompt string for the LLM.

Return type:

str
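Example

A sketch with a simple Pydantic model; the exact wording of the generated prompt is implementation-defined:

from pydantic import BaseModel

from camel.utils.commons import generate_prompt_for_structured_output

class Person(BaseModel):
    name: str
    age: int

prompt = generate_prompt_for_structured_output(
    Person, "Extract the person mentioned in the text."
)
print(prompt)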

camel.utils.commons.get_first_int(string: str) int | None[source]#

Returns the first integer number found in the given string.

If no integer number is found, returns None.

Parameters:

string (str) – The input string.

Returns:

The first integer number found in the string, or None if no integer number is found.

Return type:

int or None
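Example

>>> get_first_int("Task 42 of 100")
42
>>> get_first_int("no numbers here") is None
True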

camel.utils.commons.get_prompt_template_key_words(template: str) Set[str][source]#

Given a string template containing curly braces {}, return a set of the words inside the braces.

Parameters:

template (str) – A string containing curly braces.

Returns:

A set of the words inside the curly braces.

Return type:

Set[str]

Example

>>> get_prompt_template_key_words('Hi, {name}! How are you {status}?')
{'name', 'status'}

camel.utils.commons.get_pydantic_major_version() int[source]#

Get the major version of Pydantic.

Returns:

The major version number of Pydantic if installed, otherwise 0.

Return type:

int

camel.utils.commons.get_pydantic_object_schema(pydantic_params: Type[BaseModel]) Dict[source]#

Get the JSON schema of a Pydantic model.

Parameters:

pydantic_params (Type[BaseModel]) – The Pydantic model class to retrieve the schema for.

Returns:

The JSON schema of the Pydantic model.

Return type:

dict

camel.utils.commons.get_system_information()[source]#

Gathers information about the operating system.

Returns:

A dictionary containing various pieces of OS information.

Return type:

dict

camel.utils.commons.get_task_list(task_response: str) List[str][source]#

Parse the response of the Agent and return the task list.

Parameters:

task_response (str) – The string response of the Agent.

Returns:

A list of the string tasks.

Return type:

List[str]

camel.utils.commons.handle_http_error(response: Response) str[source]#

Handles the HTTP errors based on the status code of the response.

Parameters:

response (requests.Response) – The HTTP response from the API call.

Returns:

The error type, based on the status code.

Return type:

str

camel.utils.commons.is_docker_running() bool[source]#

Check if the Docker daemon is running.

Returns:

True if the Docker daemon is running, False otherwise.

Return type:

bool

camel.utils.commons.is_module_available(module_name: str) bool[source]#

Check if a module is available for import.

Parameters:

module_name (str) – The name of the module to check for availability.

Returns:

True if the module can be imported, False otherwise.

Return type:

bool

camel.utils.commons.json_to_function_code(json_obj: Dict) str[source]#

Generate a Python function code from a JSON schema.

Parameters:

json_obj (dict) – The JSON schema object containing properties and required fields; the JSON format follows the OpenAI tools schema.

Returns:

The generated Python function code as a string.

Return type:

str
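Example

A sketch with a minimal OpenAI-tools-style parameter schema (the field names below are illustrative):

from camel.utils.commons import json_to_function_code

schema = {
    "properties": {
        "city": {"type": "string", "description": "City name."},
        "days": {"type": "integer", "description": "Forecast horizon in days."},
    },
    "required": ["city"],
}
print(json_to_function_code(schema))  # prints the generated function source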

camel.utils.commons.print_text_animated(text, delay: float = 0.02, end: str = '')[source]#

Prints the given text with an animated effect.

Parameters:
  • text (str) – The text to print.

  • delay (float, optional) – The delay between each character printed. (default: 0.02)

  • end (str, optional) – The end character to print after each character of text. (default: "")

camel.utils.commons.retry_on_error(max_retries: int = 3, initial_delay: float = 1.0) Callable[source]#

Decorator to retry function calls on exception with exponential backoff.

Parameters:
  • max_retries (int) – Maximum number of retry attempts

  • initial_delay (float) – Initial delay between retries in seconds

Returns:

Decorated function with retry logic

Return type:

Callable
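Example

A usage sketch; the randomly failing function below is illustrative:

import random

from camel.utils.commons import retry_on_error

@retry_on_error(max_retries=3, initial_delay=0.5)
def flaky_fetch() -> str:
    # Fails about half the time to exercise the exponential backoff.
    if random.random() < 0.5:
        raise ConnectionError("transient failure")
    return "ok"

print(flaky_fetch())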

camel.utils.commons.text_extract_from_web(url: str) str[source]#

Get the text information from the given URL.

Parameters:

url (str) – The URL of the web page to extract text from.

Returns:

All text extracted from the web page.

Return type:

str

camel.utils.commons.to_pascal(snake: str) str[source]#

Convert a snake_case string to PascalCase.

Parameters:

snake (str) – The snake_case string to be converted.

Returns:

The converted PascalCase string.

Return type:

str
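Example

>>> to_pascal('batch_processor')
'BatchProcessor'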

camel.utils.commons.track_agent(*args, **kwargs)[source]#

Mock track agent decorator for AgentOps.

camel.utils.commons.with_timeout(timeout=None)[source]#

Decorator that adds timeout functionality to functions.

Executes functions with a specified timeout value. Returns a timeout message if execution time is exceeded.

Parameters:

timeout (float, optional) – The timeout duration in seconds. If None, will try to get timeout from the instance’s timeout attribute. (default: None)

Example

>>> @with_timeout(5)
... def my_function():
...     return "Success"
>>> my_function()
'Success'

>>> class MyClass:
...     timeout = 5
...     @with_timeout()
...     def my_method(self):
...         return "Success"

camel.utils.constants module#

class camel.utils.constants.Constants[source]#

Bases: object

A class containing constants used in CAMEL.

DEFAULT_SIMILARITY_THRESHOLD = 0.7#
DEFAULT_TOP_K_RESULTS = 1#
FUNC_NAME_FOR_STRUCTURED_OUTPUT = 'return_json_response'#
VIDEO_DEFAULT_IMAGE_SIZE = 768#
VIDEO_DEFAULT_PLUG_PYAV = 'pyav'#
VIDEO_IMAGE_EXTRACTION_INTERVAL = 50#

camel.utils.token_counting module#

class camel.utils.token_counting.AnthropicTokenCounter(model: str)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.token_counting.BaseTokenCounter[source]#

Bases: ABC

Base class for token counters of different kinds of models.

abstract count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.token_counting.LiteLLMTokenCounter(model_type: UnifiedModelType)[source]#

Bases: BaseTokenCounter

calculate_cost_from_response(response: dict) float[source]#

Calculate the cost of the given completion response.

Parameters:

response (dict) – The completion response from LiteLLM.

Returns:

The cost of the completion call in USD.

Return type:

float

property completion_cost#
count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in LiteLLM API format.

Returns:

Number of tokens in the messages.

Return type:

int

property token_counter#

class camel.utils.token_counting.MistralTokenCounter(model_type: ModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Total number of tokens in the messages.

Return type:

int

class camel.utils.token_counting.OpenAITokenCounter(model: UnifiedModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list with the help of package tiktoken.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int
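Example

A usage sketch; the ModelType member below is an assumption, so substitute any OpenAI model your CAMEL version supports:

from camel.types import ModelType
from camel.utils import OpenAITokenCounter

counter = OpenAITokenCounter(ModelType.GPT_4O_MINI)  # assumed member name
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]
print(counter.count_tokens_from_messages(messages))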

camel.utils.token_counting.get_model_encoding(value_for_tiktoken: str)[source]#

Get model encoding from tiktoken.

Parameters:

value_for_tiktoken – Model value for tiktoken.

Returns:

Model encoding.

Return type:

tiktoken.Encoding
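Example

A small sketch; the model name string is illustrative:

from camel.utils.token_counting import get_model_encoding

encoding = get_model_encoding("gpt-4o-mini")
print(len(encoding.encode("Hello, world!")))  # number of tokens in the string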

Module contents#

class camel.utils.AgentOpsMeta(name, bases, dct)[source]#

Bases: type

Metaclass that automatically decorates all callable attributes with the agentops_decorator, except for the ‘get_tools’ method.

Methods:

__new__(cls, name, bases, dct): Creates a new class with decorated methods.

class camel.utils.AnthropicTokenCounter(model: str)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.BaseTokenCounter[source]#

Bases: ABC

Base class for token counters of different kinds of models.

abstract count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

class camel.utils.BatchProcessor(max_workers: int | None = None, initial_batch_size: int | None = None, monitoring_interval: float = 5.0, cpu_threshold: float = 80.0, memory_threshold: float = 85.0)[source]#

Bases: object

Handles batch processing with dynamic sizing and error handling based on system load.

adjust_batch_size(success: bool, processing_time: float | None = None) None[source]#

Adjust batch size based on success/failure and system resources.

Parameters:
  • success (bool) – Whether the last batch completed successfully

  • processing_time (Optional[float]) – Time taken to process the last batch. (default: None)

get_performance_metrics() Dict[str, Any][source]#

Get current performance metrics.

Returns:

Dict containing performance metrics, including:

  • total_processed: Total number of batches processed

  • error_rate: Percentage of failed batches

  • avg_processing_time: Average time per batch

  • current_batch_size: Current batch size

  • current_workers: Current number of workers

  • current_cpu: Current CPU usage percentage

  • current_memory: Current memory usage percentage

Return type:

Dict[str, Any]

class camel.utils.Constants[source]#

Bases: object

A class containing constants used in CAMEL.

DEFAULT_SIMILARITY_THRESHOLD = 0.7#
DEFAULT_TOP_K_RESULTS = 1#
FUNC_NAME_FOR_STRUCTURED_OUTPUT = 'return_json_response'#
VIDEO_DEFAULT_IMAGE_SIZE = 768#
VIDEO_DEFAULT_PLUG_PYAV = 'pyav'#
VIDEO_IMAGE_EXTRACTION_INTERVAL = 50#

class camel.utils.DeduplicationResult(*, original_texts: List[str], unique_ids: List[int], unique_embeddings_dict: Dict[int, List[float]], duplicate_to_target_map: Dict[int, int])[source]#

Bases: BaseModel

The result of deduplication.

original_texts#

The original texts.

Type:

List[str]

unique_ids#

A list of ids that are unique (not duplicates).

Type:

List[int]

unique_embeddings_dict#

A mapping from the index of each unique text to its embedding.

Type:

Dict[int, List[float]]

duplicate_to_target_map#

A mapping from the index of the duplicate text to the index of the text it is considered a duplicate of.

Type:

Dict[int, int]

duplicate_to_target_map: Dict[int, int]#

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'duplicate_to_target_map': FieldInfo(annotation=Dict[int, int], required=True), 'original_texts': FieldInfo(annotation=List[str], required=True), 'unique_embeddings_dict': FieldInfo(annotation=Dict[int, List[float]], required=True), 'unique_ids': FieldInfo(annotation=List[int], required=True)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

original_texts: List[str]#

unique_embeddings_dict: Dict[int, List[float]]#

unique_ids: List[int]#

class camel.utils.LiteLLMTokenCounter(model_type: UnifiedModelType)[source]#

Bases: BaseTokenCounter

calculate_cost_from_response(response: dict) float[source]#

Calculate the cost of the given completion response.

Parameters:

response (dict) – The completion response from LiteLLM.

Returns:

The cost of the completion call in USD.

Return type:

float

property completion_cost#
count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in LiteLLM API format.

Returns:

Number of tokens in the messages.

Return type:

int

property token_counter#

class camel.utils.MistralTokenCounter(model_type: ModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list using the loaded tokenizer specific to this type of model.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Total number of tokens in the messages.

Return type:

int

class camel.utils.OpenAITokenCounter(model: UnifiedModelType)[source]#

Bases: BaseTokenCounter

count_tokens_from_messages(messages: List[OpenAIMessage]) int[source]#

Count number of tokens in the provided message list with the help of package tiktoken.

Parameters:

messages (List[OpenAIMessage]) – Message list with the chat history in OpenAI API format.

Returns:

Number of tokens in the messages.

Return type:

int

camel.utils.agentops_decorator(func)[source]#

Decorator that records the execution of a function if ToolEvent is available.

Parameters:

func (callable) – The function to be decorated.

Returns:

The wrapped function which records its execution details.

Return type:

callable

camel.utils.api_keys_required(param_env_list: List[Tuple[str | None, str]]) Callable[[F], F][source]#

A decorator to check if the required API keys are provided in the environment variables or as function arguments.

Parameters:

param_env_list (List[Tuple[Optional[str], str]]) – A list of tuples where each tuple contains a function argument name (as the first element, or None) and the corresponding environment variable name (as the second element) that holds the API key.

Returns:

The original function wrapped with the added check for the required API keys.

Return type:

Callable[[F], F]

Raises:

ValueError – If any of the required API keys are missing, either from the function arguments or environment variables.

Example

@api_keys_required([
    ('api_key_arg', 'API_KEY_1'),
    ('another_key_arg', 'API_KEY_2'),
    (None, 'API_KEY_3'),
])
def some_api_function(api_key_arg=None, another_key_arg=None):
    # Function implementation that requires API keys
    ...

camel.utils.check_server_running(server_url: str) bool[source]#

Check whether the port referred to by the server URL is open.

Parameters:

server_url (str) – The URL of the server running the LLM inference service.

Returns:

Whether the port is open for packets (server is running).

Return type:

bool

camel.utils.create_chunks(text: str, n: int) List[str][source]#

Returns successive n-sized chunks from the provided text, splitting it into smaller chunks of size at most n.

Parameters:
  • text (str) – The text to be split.

  • n (int) – The max length of a single chunk.

Returns:

A list of split texts.

Return type:

List[str]

camel.utils.deduplicate_internally(texts: List[str], threshold: float = 0.65, embedding_instance: BaseEmbedding[str] | None = None, embeddings: List[List[float]] | None = None, strategy: Literal['top1', 'llm-supervise'] = 'top1', batch_size: int = 1000) DeduplicationResult[source]#

Deduplicate a list of strings based on their cosine similarity.

You can either:

  1. Provide a CAMEL BaseEmbedding instance via embedding_instance to let this function handle the embedding internally, OR

  2. Directly pass a list of pre-computed embeddings to embeddings.

If both embedding_instance and embeddings are provided, the function will raise a ValueError to avoid ambiguous usage.

strategy specifies the deduplication strategy: 'top1' maps each duplicate to the most similar text, and 'llm-supervise' uses an LLM to determine whether texts are duplicates (not yet implemented).

Parameters:
  • texts (List[str]) – The list of texts to be deduplicated.

  • threshold (float, optional) – The similarity threshold for considering two texts as duplicates. (default: 0.65)

  • embedding_instance (Optional[BaseEmbedding[str]], optional) – A CAMEL embedding instance for automatic embedding. (default: None)

  • embeddings (Optional[List[List[float]]], optional) – Pre-computed embeddings of texts. Each element in the list corresponds to the embedding of the text in the same index of texts. (default: None)

  • strategy (Literal["top1", "llm-supervise"], optional) – The strategy to use for deduplication. (default: "top1")

  • batch_size (int, optional) – The size of the batch to use for calculating cosine similarities. (default: 1000)

Returns:

An object that contains:
  • original_texts: The original texts.

  • unique_ids: The unique ids after deduplication.

  • unique_embeddings_dict: A dict mapping from (unique) text id to its embedding.

  • duplicate_to_target_map: A dict mapping from the id of a duplicate text to the id of the text it is considered a duplicate of.

Return type:

DeduplicationResult

Raises:
  • NotImplementedError – If the strategy is not “top1”.

  • ValueError – If neither embeddings nor embedding_instance is provided, or if both are provided at the same time.

  • ValueError – If the length of embeddings does not match the length of texts.

Example

>>> from camel.embeddings.openai_embedding import OpenAIEmbedding
>>> # Suppose we have 5 texts, some of which may be duplicates
>>> texts = [
...     "What is AI?",
...     "Artificial Intelligence is about machines",
...     "What is AI?",
...     "Deep Learning is a subset of AI",
...     "What is artificial intelligence?"
... ]
>>> # or any other BaseEmbedding instance
>>> embedding_model = OpenAIEmbedding()
>>> result = deduplicate_internally(
...     texts=texts,
...     threshold=0.7,
...     embedding_instance=embedding_model
... )
>>> print("Unique ids:")
>>> for uid in result.unique_ids:
...     print(texts[uid])
Unique ids:
What is AI?
Artificial Intelligence is about machines
Deep Learning is a subset of AI
What is artificial intelligence?
>>> print("Duplicate map:")
>>> print(result.duplicate_to_target_map)
{2: 0}
# This indicates the text at index 2 is considered
# a duplicate of index 0.
camel.utils.dependencies_required(*required_modules: str) Callable[[F], F][source]#

A decorator to ensure that specified Python modules are available before a function executes.

Parameters:

required_modules (str) – The required modules to be checked for availability.

Returns:

The original function with the added check for required module dependencies.

Return type:

Callable[[F], F]

Raises:

ImportError – If any of the required modules are not available.

Example

@dependencies_required('numpy', 'pandas')
def data_processing_function():
    # Function implementation...
    ...

camel.utils.download_github_subdirectory(repo: str, subdir: str, data_dir: Path, branch='main')[source]#

Download a subdirectory of the benchmark's GitHub repo.

This function downloads all files and subdirectories from a specified subdirectory of a GitHub repository and saves them to a local directory.

Parameters:
  • repo (str) – The name of the GitHub repository in the format “owner/repo”.

  • subdir (str) – The path to the subdirectory within the repository to download.

  • data_dir (Path) – The local directory where the files will be saved.

  • branch (str, optional) – The branch of the repository to use. Defaults to “main”.

camel.utils.download_tasks(task: TaskType, folder_path: str) None[source]#

Downloads task-related files from a specified URL and extracts them.

This function downloads a zip file containing tasks based on the specified task type from a predefined URL, saves it to folder_path, and then extracts the contents of the zip file into the same folder. After extraction, the zip file is deleted.

Parameters:
  • task (TaskType) – An enum representing the type of task to download.

  • folder_path (str) – The path of the folder where the zip file will be downloaded and extracted.

camel.utils.func_string_to_callable(code: str)[source]#

Convert a function code string to a callable function object.

Parameters:

code (str) – The function code as a string.

Returns:

The callable function object extracted from the code string.

Return type:

Callable[..., Any]

camel.utils.get_first_int(string: str) int | None[source]#

Returns the first integer number found in the given string.

If no integer number is found, returns None.

Parameters:

string (str) – The input string.

Returns:

The first integer number found in the string, or None if no integer number is found.

Return type:

int or None

camel.utils.get_model_encoding(value_for_tiktoken: str)[source]#

Get model encoding from tiktoken.

Parameters:

value_for_tiktoken – Model value for tiktoken.

Returns:

Model encoding.

Return type:

tiktoken.Encoding

camel.utils.get_prompt_template_key_words(template: str) Set[str][source]#

Given a string template containing curly braces {}, return a set of the words inside the braces.

Parameters:

template (str) – A string containing curly braces.

Returns:

A set of the words inside the curly braces.

Return type:

Set[str]

Example

>>> get_prompt_template_key_words('Hi, {name}! How are you {status}?')
{'name', 'status'}

camel.utils.get_pydantic_major_version() int[source]#

Get the major version of Pydantic.

Returns:

The major version number of Pydantic if installed, otherwise 0.

Return type:

int

camel.utils.get_pydantic_model(input_data: str | Type[BaseModel] | Callable) Type[BaseModel][source]#

A multi-purpose function that can be used as a normal function, a class decorator, or a function decorator.

Parameters:

input_data (Union[str, Type[BaseModel], Callable]) –

  • If a string is provided, it should be a JSON-encoded string that will be converted into a BaseModel.

  • If a function is provided, it will be decorated such that its arguments are converted into a BaseModel.

  • If a BaseModel class is provided, it will be returned directly.

Returns:

The BaseModel class that will be used to structure the input data.

Return type:

Type[BaseModel]
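Example

A sketch of two of the documented usages; the JSON string is illustrative, and the fields of the generated model are inferred by the implementation:

from pydantic import BaseModel

from camel.utils import get_pydantic_model

# From a JSON-encoded string: a BaseModel subclass is inferred.
InferredModel = get_pydantic_model('{"name": "Alice", "age": 30}')

# A BaseModel class is returned directly.
class Person(BaseModel):
    name: str
    age: int

assert get_pydantic_model(Person) is Person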

camel.utils.get_pydantic_object_schema(pydantic_params: Type[BaseModel]) Dict[source]#

Get the JSON schema of a Pydantic model.

Parameters:

pydantic_params (Type[BaseModel]) – The Pydantic model class to retrieve the schema for.

Returns:

The JSON schema of the Pydantic model.

Return type:

dict

camel.utils.get_system_information()[source]#

Gathers information about the operating system.

Returns:

A dictionary containing various pieces of OS information.

Return type:

dict

camel.utils.get_task_list(task_response: str) List[str][source]#

Parse the response of the Agent and return the task list.

Parameters:

task_response (str) – The string response of the Agent.

Returns:

A list of the string tasks.

Return type:

List[str]

camel.utils.handle_http_error(response: Response) str[source]#

Handles the HTTP errors based on the status code of the response.

Parameters:

response (requests.Response) – The HTTP response from the API call.

Returns:

The error type, based on the status code.

Return type:

str

camel.utils.is_docker_running() bool[source]#

Check if the Docker daemon is running.

Returns:

True if the Docker daemon is running, False otherwise.

Return type:

bool

camel.utils.json_to_function_code(json_obj: Dict) str[source]#

Generate a Python function code from a JSON schema.

Parameters:

json_obj (dict) – The JSON schema object containing properties and required fields; the JSON format follows the OpenAI tools schema.

Returns:

The generated Python function code as a string.

Return type:

str

camel.utils.print_text_animated(text, delay: float = 0.02, end: str = '')[source]#

Prints the given text with an animated effect.

Parameters:
  • text (str) – The text to print.

  • delay (float, optional) – The delay between each character printed. (default: 0.02)

  • end (str, optional) – The end character to print after each character of text. (default: "")

camel.utils.retry_on_error(max_retries: int = 3, initial_delay: float = 1.0) Callable[source]#

Decorator to retry function calls on exception with exponential backoff.

Parameters:
  • max_retries (int) – Maximum number of retry attempts

  • initial_delay (float) – Initial delay between retries in seconds

Returns:

Decorated function with retry logic

Return type:

Callable

camel.utils.text_extract_from_web(url: str) str[source]#

Get the text information from the given URL.

Parameters:

url (str) – The URL of the web page to extract text from.

Returns:

All text extracted from the web page.

Return type:

str

camel.utils.to_pascal(snake: str) str[source]#

Convert a snake_case string to PascalCase.

Parameters:

snake (str) – The snake_case string to be converted.

Returns:

The converted PascalCase string.

Return type:

str

camel.utils.track_agent(*args, **kwargs)[source]#

Mock track agent decorator for AgentOps.

camel.utils.with_timeout(timeout=None)[source]#

Decorator that adds timeout functionality to functions.

Executes functions with a specified timeout value. Returns a timeout message if execution time is exceeded.

Parameters:

timeout (float, optional) – The timeout duration in seconds. If None, will try to get timeout from the instance’s timeout attribute. (default: None)

Example

>>> @with_timeout(5)
... def my_function():
...     return "Success"
>>> my_function()
'Success'

>>> class MyClass:
...     timeout = 5
...     @with_timeout()
...     def my_method(self):
...         return "Success"