camel.agents package

camel.agents package#

Subpackages#

Submodules#

camel.agents.base module#

class camel.agents.base.BaseAgent[source]#

Bases: ABC

An abstract base class for all CAMEL agents.

abstract reset(*args: Any, **kwargs: Any) Any[source]#

Resets the agent to its initial state.

abstract step(*args: Any, **kwargs: Any) Any[source]#

Performs a single step of the agent.
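
Since the interface consists only of reset() and step(), a custom agent is easy to sketch. The following toy subclass is illustrative only and not part of CAMEL:

    from typing import Any, List

    from camel.agents.base import BaseAgent

    class EchoAgent(BaseAgent):
        """A toy agent that records and echoes messages (illustrative)."""

        def __init__(self) -> None:
            self.history: List[str] = []

        def reset(self, *args: Any, **kwargs: Any) -> None:
            # Return to the initial state by clearing the accumulated history.
            self.history.clear()

        def step(self, message: str, *args: Any, **kwargs: Any) -> str:
            # One step: record the input and echo it back.
            self.history.append(message)
            return message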

camel.agents.chat_agent module#

class camel.agents.chat_agent.ChatAgent(system_message: BaseMessage | str | None = None, model: BaseModelBackend | List[BaseModelBackend] | None = None, memory: AgentMemory | None = None, message_window_size: int | None = None, token_limit: int | None = None, output_language: str | None = None, tools: List[FunctionTool | Callable] | None = None, external_tools: List[FunctionTool | Callable | Dict[str, Any]] | None = None, response_terminators: List[ResponseTerminator] | None = None, scheduling_strategy: str = 'round_robin', single_iteration: bool = False, agent_id: str | None = None)[source]#

Bases: BaseAgent

Class for managing conversations of CAMEL Chat Agents.

Parameters:
  • system_message (Union[BaseMessage, str], optional) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: ModelPlatformType.DEFAULT with ModelType.DEFAULT)

  • memory (AgentMemory, optional) – The agent memory for managing chat messages. If None, a ChatHistoryMemory will be used. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • token_limit (int, optional) – The maximum number of tokens in a context. The context will be automatically pruned to fulfill the limitation. If None, it will be set according to the backend model. (default: None)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • tools (Optional[List[Union[FunctionTool, Callable]]], optional) – List of available FunctionTool or Callable. (default: None)

  • external_tools (Optional[List[Union[FunctionTool, Callable, Dict[str, Any]]]], optional) – List of external tools (FunctionTool or Callable or Dict[str, Any]) bound to one chat agent. When these tools are called, the agent will directly return the request instead of processing it. (default: None)

  • response_terminators (List[ResponseTerminator], optional) – List of ResponseTerminator objects bound to one chat agent. (default: None)

  • scheduling_strategy (str) – Name of the function that defines how to select the next model in ModelManager. (default: 'round_robin')

  • single_iteration (bool) – Whether to let the agent perform only one model call at each step. (default: False)

  • agent_id (str, optional) – The ID of the agent. If not provided, a random UUID will be generated. (default: None)
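
A minimal usage sketch, assuming the default model platform settings and an API key configured in the environment; the prompt text is illustrative:

    from camel.agents import ChatAgent

    # With `model` omitted, ModelPlatformType.DEFAULT / ModelType.DEFAULT is used.
    agent = ChatAgent(system_message="You are a helpful assistant.")
    response = agent.step("Name three uses of the CAMEL framework.")
    print(response.msgs[0].content)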

add_external_tool(tool: FunctionTool | Callable | Dict[str, Any]) None[source]#
add_model_scheduling_strategy(name: str, strategy_fn: Callable)[source]#

Add a scheduling strategy method provided by the user to ModelManager.

Parameters:
  • name (str) – The name of the strategy.

  • strategy_fn (Callable) – The scheduling strategy function.

add_tool(tool: FunctionTool | Callable) None[source]#

Add a tool to the agent.
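
Plain callables are wrapped as FunctionTool objects, so a sketch like the following should work, reusing the agent from the construction sketch above; the add function is illustrative:

    def add(a: int, b: int) -> int:
        """Return the sum of two integers."""
        return a + b

    # The callable is converted to a FunctionTool internally.
    agent.add_tool(add)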

async astep(input_message: BaseMessage | str, response_format: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message. This agent step can call async function calls.

Parameters:
  • input_message (Union[BaseMessage, str]) – The input message to the agent. For BaseMessage input, its role field may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the self agent. For str input, the role_name will be User.

  • response_format (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse
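
A sketch of driving the async variant with asyncio, reusing the agent from the construction sketch above; the prompt is illustrative:

    import asyncio

    async def main() -> None:
        response = await agent.astep("Summarize our conversation so far.")
        print(response.msgs[0].content)

    asyncio.run(main())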

property chat_history: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]#
clear_memory() None[source]#

Clear the agent’s memory and reset to initial state.

Returns:

None

get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int][source]#

Get usage dictionary when using the stream mode.

Parameters:
  • output_messages (list) – List of output messages.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

Usage dictionary.

Return type:

dict

init_messages() None[source]#

Initializes the stored messages list with the current system message.

load_memory(memory: AgentMemory) None[source]#

Load the provided memory into the agent.

Parameters:

memory (AgentMemory) – The memory to load into the agent.

Returns:

None

load_memory_from_path(path: str) None[source]#

Loads memory records from a JSON file filtered by this agent’s ID.

Parameters:

path (str) – The file path to a JSON memory file that uses JsonStorage.

Raises:

ValueError – If no matching records for the agent_id are found.

property output_language: str | None#

Returns the output language for the agent.

record_message(message: BaseMessage) None[source]#

Records the externally provided message into the agent memory as if it were an answer produced by the ChatAgent from the backend. Currently, the choice of the critic is submitted with this method.

Parameters:

message (BaseMessage) – An external message to be recorded in the memory.

remove_external_tool(tool_name: str) bool[source]#

Remove an external tool from the agent by name.

Parameters:

tool_name (str) – The name of the tool to remove.

Returns:

Whether the tool was successfully removed.

Return type:

bool

remove_tool(tool_name: str) bool[source]#

Remove a tool from the agent by name.

Parameters:

tool_name (str) – The name of the tool to remove.

Returns:

Whether the tool was successfully removed.

Return type:

bool

reset()[source]#

Resets the ChatAgent to its initial state.

save_memory(path: str) None[source]#

Retrieves the current conversation data from memory and writes it into a JSON file using JsonStorage.

Parameters:

path (str) – Target file path to store JSON data.
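
Together with load_memory_from_path(), this enables a simple persistence round trip; a sketch with an illustrative path, reusing the agent from the sketches above:

    agent.save_memory("./agent_memory.json")            # write records via JsonStorage
    agent.clear_memory()                                # drop in-memory state
    agent.load_memory_from_path("./agent_memory.json")  # restore records for this agent_id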

step(input_message: BaseMessage | str, response_format: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Executes a single step in the chat session, generating a response to the input message.

Parameters:
  • input_message (Union[BaseMessage, str]) – The input message for the agent. If provided as a BaseMessage, the role is adjusted to user to indicate an external message.

  • response_format (Optional[Type[BaseModel]], optional) – A Pydantic model defining the expected structure of the response. Used to generate a structured response if provided. (default: None)

Returns:

Contains output messages, a termination status flag, and session information.

Return type:

ChatAgentResponse
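
A sketch of requesting structured output through response_format, assuming the backend supports schema-constrained generation; the CityInfo model is illustrative:

    from pydantic import BaseModel

    class CityInfo(BaseModel):
        name: str
        country: str
        population: int

    response = agent.step(
        "Give me basic facts about Paris.",
        response_format=CityInfo,
    )
    print(response.msgs[0].content)  # text conforming to the CityInfo schema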

property system_message: BaseMessage | None#

Returns the system message for the agent.

property tool_dict: Dict[str, FunctionTool]#

Returns a dictionary of internal tools.

update_memory(message: BaseMessage, role: OpenAIBackendRole) None[source]#

Updates the agent memory with a new message.

Parameters:
  • message (BaseMessage) – The new message to add to the stored messages.

  • role (OpenAIBackendRole) – The backend role type.

camel.agents.critic_agent module#

class camel.agents.critic_agent.CriticAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int = 6, retry_attempts: int = 2, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

A class for the critic agent that assists in selecting an option.

Parameters:
  • system_message (BaseMessage) – The system message for the critic agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: 6)

  • retry_attempts (int, optional) – The number of retry attempts if the critic fails to return a valid option. (default: 2)

  • verbose (bool, optional) – Whether to print the critic’s messages.

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

flatten_options(messages: Sequence[BaseMessage]) str[source]#

Flattens the options to the critic.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A string containing the flattened options to the critic.

Return type:

str

get_option(input_message: BaseMessage) str[source]#

Gets the option selected by the critic.

Parameters:

input_message (BaseMessage) – A BaseMessage object representing the input message.

Returns:

The option selected by the critic.

Return type:

str

parse_critic(critic_msg: BaseMessage) str | None[source]#

Parses the critic’s message and extracts the choice.

Parameters:

critic_msg (BaseMessage) – A BaseMessage object representing the critic’s response.

Returns:

The critic’s choice as a string, or None if the message could not be parsed.

Return type:

Optional[str]

reduce_step(input_messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by flattening options to the critic, getting the option, and parsing the choice.

Parameters:

input_messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object that includes the critic’s choice.

Return type:

ChatAgentResponse
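
A hedged sketch of the full flatten/select/parse cycle; the option texts are illustrative:

    from camel.agents import CriticAgent
    from camel.messages import BaseMessage

    critic = CriticAgent(
        system_message=BaseMessage.make_assistant_message(
            role_name="Critic", content="You select the best option.",
        ),
    )
    options = [
        BaseMessage.make_assistant_message(role_name="Assistant", content="Option A: use a heap."),
        BaseMessage.make_assistant_message(role_name="Assistant", content="Option B: sort the list."),
    ]
    choice = critic.reduce_step(options)
    print(choice.msgs[0].content)  # the selected option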

camel.agents.deductive_reasoner_agent module#

class camel.agents.deductive_reasoner_agent.DeductiveReasonerAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent responsible for deductive reasoning. Model of deductive reasoning:

  • L: A ⊕ C -> q * B

  • A represents the known starting state.

  • B represents the known target state.

  • C represents the conditions required to transition from A to B.

  • Q represents the quality or effectiveness of the transition from A to B.

  • L represents the path or process from A to B.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

deduce_conditions_and_quality(starting_state: str, target_state: str, role_descriptions_dict: Dict[str, str] | None = None) Dict[str, List[str] | Dict[str, str]][source]#

Derives the conditions and quality from the starting state and the target state based on the model of the deductive reasoning and the knowledge base. It can optionally consider the roles involved in the scenario, which allows tailoring the output more closely to the AI agent’s environment.

Parameters:
  • starting_state (str) – The initial or starting state from which conditions are deduced.

  • target_state (str) – The target state of the task.

  • role_descriptions_dict (Optional[Dict[str, str]], optional) – A dictionary describing the roles involved in the scenario. This is optional and can be used to provide context for CAMEL’s role-playing, enabling the generation of more relevant and tailored conditions and quality assessments. This could be generated using a RoleAssignmentAgent() or defined manually by the user. (default: None)

Returns:

A dictionary with the extracted data from the message. The dictionary contains three keys:

  • ‘conditions’: A dictionary where each key is a condition ID and each value is the corresponding condition text.

  • ‘labels’: A list of label strings extracted from the message.

  • ‘quality’: A string containing the quality assessment extracted from the message.

Return type:

Dict[str, Union[List[str], Dict[str, str]]]
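
A usage sketch; the states are illustrative:

    from camel.agents import DeductiveReasonerAgent

    reasoner = DeductiveReasonerAgent()
    result = reasoner.deduce_conditions_and_quality(
        starting_state="The database schema is empty.",
        target_state="The schema supports multi-tenant accounts.",
    )
    print(result["conditions"])  # condition ID -> condition text
    print(result["quality"])     # quality assessment of the A -> B transition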

camel.agents.embodied_agent module#

class camel.agents.embodied_agent.EmbodiedAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, message_window_size: int | None = None, tool_agents: List[BaseToolAgent] | None = None, code_interpreter: BaseInterpreter | None = None, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

Class for managing conversations of CAMEL Embodied Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • tool_agents (List[BaseToolAgent], optional) – The tool agents to use in the embodied agent. (default: None)

  • code_interpreter (BaseInterpreter, optional) – The code interpreter used to execute code. If code_interpreter and tool_agents are both None, defaults to SubProcessInterpreter. If code_interpreter is None and tool_agents is not None, defaults to InternalPythonInterpreter. (default: None)

  • verbose (bool, optional) – Whether to print the agent’s messages.

  • logger_color (Any) – The color of the logger displayed to the user. (default: Fore.MAGENTA)

get_tool_agent_names() List[str][source]#

Returns the names of tool agents.

Returns:

The names of tool agents.

Return type:

List[str]

step(input_message: BaseMessage) ChatAgentResponse[source]#

Performs a step in the conversation.

Parameters:

input_message (BaseMessage) – The input message.

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse
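
A hedged sketch: with no tool agents supplied, execution falls back to the default subprocess interpreter, per the parameter notes above; the messages are illustrative:

    from camel.agents import EmbodiedAgent
    from camel.messages import BaseMessage

    embodied = EmbodiedAgent(
        system_message=BaseMessage.make_assistant_message(
            role_name="Coder", content="You write and execute Python code.",
        ),
        verbose=True,
    )
    response = embodied.step(
        BaseMessage.make_user_message(
            role_name="User", content="Print the first five square numbers.",
        )
    )
    print(response.msgs[0].content)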

camel.agents.knowledge_graph_agent module#

class camel.agents.knowledge_graph_agent.KnowledgeGraphAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that can extract node and relationship information for different entities from given Element content.

task_prompt#

A prompt for the agent to extract node and relationship information for different entities.

Type:

TextPrompt

run(element: Element, parse_graph_elements: bool = False, prompt: str | None = None) str | GraphElement[source]#

Run the agent to extract node and relationship information.

Parameters:
  • element (Element) – The input element.

  • parse_graph_elements (bool, optional) – Whether to parse into GraphElement. Defaults to False.

  • prompt (str, optional) – The custom prompt to be used. Defaults to None.

Returns:

The extracted node and relationship information. If parse_graph_elements is True then return GraphElement, else return str.

Return type:

Union[str, GraphElement]
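
A hedged sketch that builds an Element from raw text via CAMEL’s UnstructuredIO loader (assumed available) before extraction; the text is illustrative:

    from camel.agents import KnowledgeGraphAgent
    from camel.loaders import UnstructuredIO

    uio = UnstructuredIO()
    element = uio.create_element_from_text(
        text="CAMEL is an open-source multi-agent framework.",
    )
    kg_agent = KnowledgeGraphAgent()
    graph = kg_agent.run(element, parse_graph_elements=True)
    print(graph.nodes, graph.relationships)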

camel.agents.role_assignment_agent module#

class camel.agents.role_assignment_agent.RoleAssignmentAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that generates role names based on the task prompt.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

role_assignment_prompt#

A prompt for the agent to generate role names.

Type:

TextPrompt

run(task_prompt: str | TextPrompt, num_roles: int = 2) Dict[str, str][source]#

Generate role names based on the input task prompt.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The prompt for the task based on which the roles are to be generated.

  • num_roles (int, optional) – The number of roles to generate. (default: 2)

Returns:

A dictionary mapping role names to their descriptions.

Return type:

Dict[str, str]
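
A usage sketch; the task prompt is illustrative:

    from camel.agents import RoleAssignmentAgent

    role_agent = RoleAssignmentAgent()
    roles = role_agent.run(
        task_prompt="Design a trading bot for the stock market.",
        num_roles=3,
    )
    for name, description in roles.items():
        print(name, "->", description)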

camel.agents.search_agent module#

class camel.agents.search_agent.SearchAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that summarizes text based on a query and evaluates the relevance of an answer.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

continue_search(query: str, answer: str) bool[source]#

Ask whether to continue the search or not based on the provided answer.

Parameters:
  • query (str) – The question.

  • answer (str) – The answer to the question.

Returns:

True if the search should continue, False otherwise.

Return type:

bool

summarize_text(text: str, query: str) str[source]#

Summarize the information from the text, based on the query.

Parameters:
  • text (str) – Text to summarize.

  • query (str) – What information you want.

Returns:

A string containing the summarized information.

Return type:

str
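
A usage sketch combining both methods; the article text and query are illustrative:

    from camel.agents import SearchAgent

    search_agent = SearchAgent()
    article = "CAMEL is an open-source framework for building multi-agent systems."  # illustrative
    summary = search_agent.summarize_text(text=article, query="What is CAMEL?")
    if search_agent.continue_search(query="What is CAMEL?", answer=summary):
        print("Answer judged insufficient; keep searching.")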

camel.agents.task_agent module#

class camel.agents.task_agent.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#

Bases: ChatAgent

An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it’s still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.

task_creation_prompt#

A prompt for the agent to create new tasks.

Type:

TextPrompt

Parameters:
  • role_name (str) – The role name of the Agent to create the task.

  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)

run(task_list: List[str]) List[str][source]#

Generate subtasks based on the previous task results and incomplete task list.

Parameters:

task_list (List[str]) – The completed or in-progress tasks which should not overlap with newly created tasks.

Returns:

The new task list generated by the Agent.

Return type:

List[str]
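
A usage sketch; the objective and task list are illustrative:

    from camel.agents import TaskCreationAgent

    creator = TaskCreationAgent(
        role_name="Researcher",
        objective="Write a literature review on multi-agent systems.",
    )
    new_tasks = creator.run(task_list=["Collect candidate papers"])
    print(new_tasks)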

class camel.agents.task_agent.TaskPlannerAgent(model: BaseModelBackend | None = None, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that helps divide a task into subtasks based on the input task prompt.

task_planner_prompt#

A prompt for the agent to divide the task into subtasks.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

run(task_prompt: str | TextPrompt) TextPrompt[source]#

Generate subtasks based on the input task prompt.

Parameters:

task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.

Returns:

A prompt for the subtasks generated by the agent.

Return type:

TextPrompt
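
A usage sketch; the task prompt is illustrative:

    from camel.agents import TaskPlannerAgent

    planner = TaskPlannerAgent()
    subtasks = planner.run(task_prompt="Build a personal finance tracker.")
    print(subtasks)  # a TextPrompt listing the generated subtasks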

class camel.agents.task_agent.TaskPrioritizationAgent(objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#

Bases: ChatAgent

An agent that helps re-prioritize the task list and returns a numbered, prioritized list. Modified from BabyAGI.

task_prioritization_prompt#

A prompt for the agent to prioritize tasks.

Type:

TextPrompt

Parameters:
  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

run(task_list: List[str]) List[str][source]#

Prioritize the task list given the agent objective.

Parameters:

task_list (List[str]) – The unprioritized tasks of the agent.

Returns:

The new prioritized task list generated by the Agent.

Return type:

List[str]
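
A usage sketch; the objective and tasks are illustrative:

    from camel.agents import TaskPrioritizationAgent

    prioritizer = TaskPrioritizationAgent(
        objective="Write a literature review on multi-agent systems.",
    )
    ordered = prioritizer.run(
        task_list=["Draft the outline", "Collect candidate papers", "Summarize findings"],
    )
    print(ordered)  # numbered, prioritized task list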

class camel.agents.task_agent.TaskSpecifyAgent(model: BaseModelBackend | None = None, task_type: TaskType = TaskType.AI_SOCIETY, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that specifies a given task prompt by prompting the user to provide more details.

DEFAULT_WORD_LIMIT#

The default word limit for the task prompt.

Type:

int

task_specify_prompt#

The prompt for specifying the task.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)

  • task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)

  • word_limit (int, optional) – The word limit for the task prompt. (default: 50)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

DEFAULT_WORD_LIMIT = 50#
run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt[source]#

Specify the given task prompt by providing more details.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The original task prompt.

  • meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default: None)

Returns:

The specified task prompt.

Return type:

TextPrompt
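
A hedged sketch; the meta_dict keys shown are assumed to match the placeholders of the AI_SOCIETY task-specify prompt:

    from camel.agents import TaskSpecifyAgent
    from camel.types import TaskType

    specifier = TaskSpecifyAgent(task_type=TaskType.AI_SOCIETY, word_limit=50)
    specified = specifier.run(
        task_prompt="Improving my chess skills.",
        meta_dict={"assistant_role": "Chess Coach", "user_role": "Student"},
    )
    print(specified)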

Module contents#

class camel.agents.BaseAgent[source]#

Bases: ABC

An abstract base class for all CAMEL agents.

abstract reset(*args: Any, **kwargs: Any) Any[source]#

Resets the agent to its initial state.

abstract step(*args: Any, **kwargs: Any) Any[source]#

Performs a single step of the agent.

class camel.agents.BaseToolAgent(name: str, description: str)[source]#

Bases: BaseAgent

Creates a BaseToolAgent object with the specified name and description.

Parameters:
  • name (str) – The name of the tool agent.

  • description (str) – The description of the tool agent.

reset() None[source]#

Resets the agent to its initial state.

step() None[source]#

Performs a single step of the agent.

class camel.agents.ChatAgent(system_message: BaseMessage | str | None = None, model: BaseModelBackend | List[BaseModelBackend] | None = None, memory: AgentMemory | None = None, message_window_size: int | None = None, token_limit: int | None = None, output_language: str | None = None, tools: List[FunctionTool | Callable] | None = None, external_tools: List[FunctionTool | Callable | Dict[str, Any]] | None = None, response_terminators: List[ResponseTerminator] | None = None, scheduling_strategy: str = 'round_robin', single_iteration: bool = False, agent_id: str | None = None)[source]#

Bases: BaseAgent

Class for managing conversations of CAMEL Chat Agents.

Parameters:
  • system_message (Union[BaseMessage, str], optional) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: ModelPlatformType.DEFAULT with ModelType.DEFAULT)

  • memory (AgentMemory, optional) – The agent memory for managing chat messages. If None, a ChatHistoryMemory will be used. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • token_limit (int, optional) – The maximum number of tokens in a context. The context will be automatically pruned to fulfill the limitation. If None, it will be set according to the backend model. (default: None)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • tools (Optional[List[Union[FunctionTool, Callable]]], optional) – List of available FunctionTool or Callable. (default: None)

  • external_tools (Optional[List[Union[FunctionTool, Callable, Dict[str, Any]]]], optional) – List of external tools (FunctionTool or Callable or Dict[str, Any]) bound to one chat agent. When these tools are called, the agent will directly return the request instead of processing it. (default: None)

  • response_terminators (List[ResponseTerminator], optional) – List of ResponseTerminator objects bound to one chat agent. (default: None)

  • scheduling_strategy (str) – Name of the function that defines how to select the next model in ModelManager. (default: 'round_robin')

  • single_iteration (bool) – Whether to let the agent perform only one model call at each step. (default: False)

  • agent_id (str, optional) – The ID of the agent. If not provided, a random UUID will be generated. (default: None)

add_external_tool(tool: FunctionTool | Callable | Dict[str, Any]) None[source]#
add_model_scheduling_strategy(name: str, strategy_fn: Callable)[source]#

Add a scheduling strategy method provided by the user to ModelManager.

Parameters:
  • name (str) – The name of the strategy.

  • strategy_fn (Callable) – The scheduling strategy function.

add_tool(tool: FunctionTool | Callable) None[source]#

Add a tool to the agent.

async astep(input_message: BaseMessage | str, response_format: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message. This agent step can call async function calls.

Parameters:
  • input_message (Union[BaseMessage, str]) – The input message to the agent. For BaseMessage input, its role field may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the self agent. For str input, the role_name will be User.

  • response_format (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse

property chat_history: List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam]#
clear_memory() None[source]#

Clear the agent’s memory and reset to initial state.

Returns:

None

get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int][source]#

Get usage dictionary when using the stream mode.

Parameters:
  • output_messages (list) – List of output messages.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

Usage dictionary.

Return type:

dict

init_messages() None[source]#

Initializes the stored messages list with the current system message.

load_memory(memory: AgentMemory) None[source]#

Load the provided memory into the agent.

Parameters:

memory (AgentMemory) – The memory to load into the agent.

Returns:

None

load_memory_from_path(path: str) None[source]#

Loads memory records from a JSON file filtered by this agent’s ID.

Parameters:

path (str) – The file path to a JSON memory file that uses JsonStorage.

Raises:

ValueError – If no matching records for the agent_id are found.

property output_language: str | None#

Returns the output language for the agent.

record_message(message: BaseMessage) None[source]#

Records the externally provided message into the agent memory as if it were an answer produced by the ChatAgent from the backend. Currently, the choice of the critic is submitted with this method.

Parameters:

message (BaseMessage) – An external message to be recorded in the memory.

remove_external_tool(tool_name: str) bool[source]#

Remove an external tool from the agent by name.

Parameters:

tool_name (str) – The name of the tool to remove.

Returns:

Whether the tool was successfully removed.

Return type:

bool

remove_tool(tool_name: str) bool[source]#

Remove a tool from the agent by name.

Parameters:

tool_name (str) – The name of the tool to remove.

Returns:

Whether the tool was successfully removed.

Return type:

bool

reset()[source]#

Resets the ChatAgent to its initial state.

save_memory(path: str) None[source]#

Retrieves the current conversation data from memory and writes it into a JSON file using JsonStorage.

Parameters:

path (str) – Target file path to store JSON data.

step(input_message: BaseMessage | str, response_format: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Executes a single step in the chat session, generating a response to the input message.

Parameters:
  • input_message (Union[BaseMessage, str]) – The input message for the agent. If provided as a BaseMessage, the role is adjusted to user to indicate an external message.

  • response_format (Optional[Type[BaseModel]], optional) – A Pydantic model defining the expected structure of the response. Used to generate a structured response if provided. (default: None)

Returns:

Contains output messages, a termination status flag, and session information.

Return type:

ChatAgentResponse

property system_message: BaseMessage | None#

Returns the system message for the agent.

property tool_dict: Dict[str, FunctionTool]#

Returns a dictionary of internal tools.

update_memory(message: BaseMessage, role: OpenAIBackendRole) None[source]#

Updates the agent memory with a new message.

Parameters:
  • message (BaseMessage) – The new message to add to the stored messages.

  • role (OpenAIBackendRole) – The backend role type.

class camel.agents.CriticAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int = 6, retry_attempts: int = 2, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

A class for the critic agent that assists in selecting an option.

Parameters:
  • system_message (BaseMessage) – The system message for the critic agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: 6)

  • retry_attempts (int, optional) – The number of retry attempts if the critic fails to return a valid option. (default: 2)

  • verbose (bool, optional) – Whether to print the critic’s messages.

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

flatten_options(messages: Sequence[BaseMessage]) str[source]#

Flattens the options to the critic.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A string containing the flattened options to the critic.

Return type:

str

get_option(input_message: BaseMessage) str[source]#

Gets the option selected by the critic.

Parameters:

input_message (BaseMessage) – A BaseMessage object representing the input message.

Returns:

The option selected by the critic.

Return type:

str

parse_critic(critic_msg: BaseMessage) str | None[source]#

Parses the critic’s message and extracts the choice.

Parameters:

critic_msg (BaseMessage) – A BaseMessage object representing the critic’s response.

Returns:

The critic’s choice as a string, or None if the message could not be parsed.

Return type:

Optional[str]

reduce_step(input_messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by flattening options to the critic, getting the option, and parsing the choice.

Parameters:

input_messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object that includes the critic’s choice.

Return type:

ChatAgentResponse

class camel.agents.EmbodiedAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, message_window_size: int | None = None, tool_agents: List[BaseToolAgent] | None = None, code_interpreter: BaseInterpreter | None = None, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

Class for managing conversations of CAMEL Embodied Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • tool_agents (List[BaseToolAgent], optional) – The tool agents to use in the embodied agent. (default: None)

  • code_interpreter (BaseInterpreter, optional) – The code interpreter used to execute code. If code_interpreter and tool_agents are both None, defaults to SubProcessInterpreter. If code_interpreter is None and tool_agents is not None, defaults to InternalPythonInterpreter. (default: None)

  • verbose (bool, optional) – Whether to print the agent’s messages.

  • logger_color (Any) – The color of the logger displayed to the user. (default: Fore.MAGENTA)

get_tool_agent_names() List[str][source]#

Returns the names of tool agents.

Returns:

The names of tool agents.

Return type:

List[str]

step(input_message: BaseMessage) ChatAgentResponse[source]#

Performs a step in the conversation.

Parameters:

input_message (BaseMessage) – The input message.

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse

class camel.agents.HuggingFaceToolAgent(name: str, *args: Any, remote: bool = True, **kwargs: Any)[source]#

Bases: BaseToolAgent

Tool agent for calling HuggingFace models. This agent is a wrapper around agents from the transformers library. For more information about the available models, please see the transformers documentation at https://huggingface.co/docs/transformers/transformers_agents.

Parameters:
  • name (str) – The name of the agent.

  • *args (Any) – Additional positional arguments to pass to the underlying Agent class.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. (default: True)

  • **kwargs (Any) – Additional keyword arguments to pass to the underlying Agent class.

chat(*args: Any, remote: bool | None = None, **kwargs: Any) Any[source]#

Runs the agent in a chat conversation mode.

Parameters:
  • *args (Any) – Positional arguments to pass to the agent.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)

  • **kwargs (Any) – Keyword arguments to pass to the agent.

Returns:

The response from the agent.

Return type:

str

reset() None[source]#

Resets the chat history of the agent.

step(*args: Any, remote: bool | None = None, **kwargs: Any) Any[source]#

Runs the agent in single execution mode.

Parameters:
  • *args (Any) – Positional arguments to pass to the agent.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)

  • **kwargs (Any) – Keyword arguments to pass to the agent.

Returns:

The response from the agent.

Return type:

str
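
A heavily hedged sketch: it assumes the transformers agents extras are installed and the required API credentials are configured, and the prompt is illustrative:

    from camel.agents import HuggingFaceToolAgent

    hf_agent = HuggingFaceToolAgent(name="hf_agent", remote=True)
    result = hf_agent.step("Generate an image of a camel in a desert.")
    print(result)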

class camel.agents.KnowledgeGraphAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that can extract node and relationship information for different entities from given Element content.

task_prompt#

A prompt for the agent to extract node and relationship information for different entities.

Type:

TextPrompt

run(element: Element, parse_graph_elements: bool = False, prompt: str | None = None) str | GraphElement[source]#

Run the agent to extract node and relationship information.

Parameters:
  • element (Element) – The input element.

  • parse_graph_elements (bool, optional) – Whether to parse into GraphElement. Defaults to False.

  • prompt (str, optional) – The custom prompt to be used. Defaults to None.

Returns:

The extracted node and relationship information. If parse_graph_elements is True then return GraphElement, else return str.

Return type:

Union[str, GraphElement]

class camel.agents.RepoAgent(vector_retriever: VectorRetriever, system_message: str | None = 'You are a code assistant with repo context.', repo_paths: List[str] | None = None, model: BaseModelBackend | None = None, max_context_tokens: int = 2000, github_auth_token: str | None = None, chunk_size: int | None = 8192, top_k: int | None = 5, similarity: float | None = 0.6, collection_name: str | None = None, **kwargs)[source]#

Bases: ChatAgent

A specialized agent designed to interact with GitHub repositories for code generation tasks. The RepoAgent enhances a base ChatAgent by integrating context from one or more GitHub repositories. It supports two processing modes:

  • FULL_CONTEXT: loads and injects full repository content into the prompt.

  • RAG (Retrieval-Augmented Generation): retrieves relevant code/documentation chunks using a vector store when context length exceeds a specified token limit.

vector_retriever#

Retriever used to perform semantic search in RAG mode. Required if repo content exceeds context limit.

Type:

VectorRetriever

system_message#

The system message for the chat agent. (default: "You are a code assistant with repo context.")

Type:

Optional[str]

repo_paths#

List of GitHub repository URLs to load during initialization. (default: None)

Type:

Optional[List[str]]

model#

The model backend to use for generating responses. (default: ModelPlatformType.DEFAULT with ModelType.DEFAULT)

Type:

BaseModelBackend

max_context_tokens#

Maximum number of tokens allowed before switching to RAG mode. (default: 2000)

Type:

Optional[int]

github_auth_token#

GitHub personal access token for accessing private or rate-limited repositories. (default: None)

Type:

Optional[str]

chunk_size#

Maximum number of characters per code chunk when indexing files for RAG. (default: 8192)

Type:

Optional[int]

top_k#

Number of top-matching chunks to retrieve from the vector store in RAG mode. (default: 5)

Type:

int

similarity#

Minimum similarity score required to include a chunk in the RAG context. (default: 0.6)

Type:

Optional[float]

collection_name#

Name of the vector database collection to use for storing and retrieving chunks. (default: None)

Type:

Optional[str]

**kwargs#

Additional keyword arguments inherited from ChatAgent.

Note

The current implementation of RAG mode requires using Qdrant as the vector storage backend. The VectorRetriever defaults to QdrantStorage if no storage is explicitly provided. Other vector storage backends are not currently supported for the RepoAgent’s RAG functionality.
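
A hedged construction sketch, assuming a default VectorRetriever (which, per the note above, falls back to QdrantStorage); the repository URL and prompt are illustrative:

    from camel.agents import RepoAgent
    from camel.retrievers import VectorRetriever

    retriever = VectorRetriever()
    repo_agent = RepoAgent(
        vector_retriever=retriever,
        repo_paths=["https://github.com/camel-ai/camel"],
        max_context_tokens=2000,
    )
    response = repo_agent.step("How does ChatAgent manage its memory?")
    print(response.msgs[0].content)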

add_repositories(repo_urls: List[str])[source]#

Add GitHub repositories to the list of repositories.

Parameters:

repo_urls (List[str]) – The repository URLs to be added.

check_switch_mode() bool[source]#

Check if the current context exceeds the context window; if so, switch to RAG mode.

Returns:

True if the mode was switched, False otherwise.

Return type:

bool

construct_full_text()[source]#

Construct full context text from repositories by concatenation.

count_tokens() int[source]#

Count the tokens currently stored in the agent’s memory.

Returns:

The number of tokens

Return type:

int

load_repositories(repo_urls: List[str]) List[RepositoryInfo][source]#

Load the content of multiple GitHub repositories.

Parameters:

repo_urls (List[str]) – The list of repository URLs.

Returns:

A list of objects containing information about all the repositories, including the contents.

Return type:

List[RepositoryInfo]

load_repository(repo_url: str, github_client: Github) RepositoryInfo[source]#

Load the content of a GitHub repository.

Parameters:
  • repo_url (str) – The repository URL to be loaded.

  • github_client (Github) – The established GitHub client.

Returns:

The object containing information about the repository, including the contents.

Return type:

RepositoryInfo

parse_url(url: str) Tuple[str, str][source]#

Parse the GitHub URL and return the (owner, repo_name) tuple.

Parameters:

url (str) – The URL to be parsed.

Returns:

The (owner, repo_name) tuple.

Return type:

Tuple[str, str]

reset()[source]#

Resets the ChatAgent to its initial state.

search_by_file_path(file_path: str) str[source]#

Search for all payloads in the vector database where file_path matches the given value (i.e., the same file), then sort the results by piece_num and concatenate their text fields to return a complete result.

Parameters:

file_path (str) – The file_path value to filter the payloads.

Returns:

A concatenated string of the text fields sorted by piece_num.

Return type:

str

step(input_message: BaseMessage | str, *args, **kwargs) ChatAgentResponse[source]#

Overrides ChatAgent.step() to first retrieve relevant context from the vector store before passing the input to the language model.

class camel.agents.RoleAssignmentAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that generates role names based on the task prompt.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

role_assignment_prompt#

A prompt for the agent to generate role names.

Type:

TextPrompt

run(task_prompt: str | TextPrompt, num_roles: int = 2) Dict[str, str][source]#

Generate role names based on the input task prompt.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The prompt for the task based on which the roles are to be generated.

  • num_roles (int, optional) – The number of roles to generate. (default: 2)

Returns:

A dictionary mapping role names to their descriptions.

Return type:

Dict[str, str]

class camel.agents.SearchAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that summarizes text based on a query and evaluates the relevance of an answer.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

continue_search(query: str, answer: str) bool[source]#

Ask whether to continue the search or not based on the provided answer.

Parameters:
  • query (str) – The question.

  • answer (str) – The answer to the question.

Returns:

True if the search should continue, False otherwise.

Return type:

bool

summarize_text(text: str, query: str) str[source]#

Summarize the information from the text, based on the query.

Parameters:
  • text (str) – Text to summarize.

  • query (str) – What information you want.

Returns:

A string containing the summarized information.

Return type:

str

class camel.agents.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#

Bases: ChatAgent

An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it’s still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.

task_creation_prompt#

A prompt for the agent to create new tasks.

Type:

TextPrompt

Parameters:
  • role_name (str) – The role name of the Agent to create the task.

  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)

run(task_list: List[str]) List[str][source]#

Generate subtasks based on the previous task results and incomplete task list.

Parameters:

task_list (List[str]) – The completed or in-progress tasks which should not overlap with newly created tasks.

Returns:

The new task list generated by the Agent.

Return type:

List[str]

class camel.agents.TaskPlannerAgent(model: BaseModelBackend | None = None, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that helps divide a task into subtasks based on the input task prompt.

task_planner_prompt#

A prompt for the agent to divide the task into subtasks.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

run(task_prompt: str | TextPrompt) TextPrompt[source]#

Generate subtasks based on the input task prompt.

Parameters:

task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.

Returns:

A prompt for the subtasks generated by the agent.

Return type:

TextPrompt

class camel.agents.TaskPrioritizationAgent(objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#

Bases: ChatAgent

An agent that helps re-prioritize the task list and returns a numbered, prioritized list. Modified from BabyAGI.

task_prioritization_prompt#

A prompt for the agent to prioritize tasks.

Type:

TextPrompt

Parameters:
  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

run(task_list: List[str]) List[str][source]#

Prioritize the task list given the agent objective.

Parameters:

task_list (List[str]) – The unprioritized tasks of the agent.

Returns:

The new prioritized task list generated by the Agent.

Return type:

List[str]

class camel.agents.TaskSpecifyAgent(model: BaseModelBackend | None = None, task_type: TaskType = TaskType.AI_SOCIETY, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that specifies a given task prompt by prompting the user to provide more details.

DEFAULT_WORD_LIMIT#

The default word limit for the task prompt.

Type:

int

task_specify_prompt#

The prompt for specifying the task.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)

  • task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)

  • word_limit (int, optional) – The word limit for the task prompt. (default: 50)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

DEFAULT_WORD_LIMIT = 50#
memory: AgentMemory#
role_name: str#
role_type: RoleType#
run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt[source]#

Specify the given task prompt by providing more details.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The original task prompt.

  • meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default: None)

Returns:

The specified task prompt.

Return type:

TextPrompt

task_specify_prompt: str | TextPrompt#