
camel.agents package#

Subpackages#

Submodules#

camel.agents.base module#

class camel.agents.base.BaseAgent[source]#

Bases: ABC

An abstract base class for all CAMEL agents.

abstract reset(*args: Any, **kwargs: Any) Any[source]#

Resets the agent to its initial state.

abstract step(*args: Any, **kwargs: Any) Any[source]#

Performs a single step of the agent.
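
A concrete agent only needs to implement reset and step. A minimal sketch of a custom subclass (the EchoAgent class and its behavior are illustrative, not part of CAMEL):

from typing import Any, List

from camel.agents.base import BaseAgent


class EchoAgent(BaseAgent):
    """Illustrative agent that echoes its input and keeps a history."""

    def __init__(self) -> None:
        self.history: List[str] = []

    def reset(self, *args: Any, **kwargs: Any) -> None:
        # Return to the initial state by clearing the accumulated history.
        self.history.clear()

    def step(self, *args: Any, **kwargs: Any) -> str:
        # One unit of work: record and echo the first positional argument.
        message = str(args[0]) if args else ""
        self.history.append(message)
        return message


agent = EchoAgent()
assert agent.step("hello") == "hello"
agent.reset()
assert agent.history == []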

camel.agents.chat_agent module#

class camel.agents.chat_agent.ChatAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int | None = None, token_limit: int | None = None, output_language: str | None = None, tools: List[OpenAIFunction] | None = None, external_tools: List[OpenAIFunction] | None = None, response_terminators: List[ResponseTerminator] | None = None)[source]#

Bases: BaseAgent

Class for managing conversations of CAMEL Chat Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • memory (AgentMemory, optional) – The agent memory for managing chat messages. If None, a ChatHistoryMemory will be used. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • token_limit (int, optional) – The maximum number of tokens in a context. The context will be automatically pruned to fulfill the limitation. If None, it will be set according to the backend model. (default: None)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • tools (List[OpenAIFunction], optional) – List of available OpenAIFunction. (default: None)

  • external_tools (List[OpenAIFunction], optional) – List of external tools (OpenAIFunction) bound to one chat agent. When these tools are called, the agent will return the request directly instead of processing it. (default: None)

  • response_terminators (List[ResponseTerminator], optional) – List of ResponseTerminator objects bound to one chat agent. (default: None)
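
A minimal usage sketch. It assumes the default OpenAI backend (which requires OPENAI_API_KEY in the environment) and uses the BaseMessage factory methods from camel.messages; the role names and contents are illustrative:

from camel.agents import ChatAgent
from camel.messages import BaseMessage

# With model=None, the default backend (OpenAIModel with GPT_4O_MINI) is used.
system_msg = BaseMessage.make_assistant_message(
    role_name="Assistant",
    content="You are a helpful assistant.",
)
agent = ChatAgent(system_message=system_msg, message_window_size=10)

user_msg = BaseMessage.make_user_message(
    role_name="User",
    content="Name three uses of abstract base classes in Python.",
)
response = agent.step(user_msg)
print(response.msgs[0].content)  # the generated reply
print(response.terminated)       # whether the chat session has terminated
print(response.info)             # session metadata (usage, tool calls, ...)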

get_info(session_id: str | None, usage: Dict[str, int] | None, termination_reasons: List[str], num_tokens: int, tool_calls: List[FunctionCallingRecord], external_tool_request: ChatCompletionMessageToolCall | None = None) Dict[str, Any][source]#

Returns a dictionary containing information about the chat session.

Parameters:
  • session_id (str, optional) – The ID of the chat session.

  • usage (Dict[str, int], optional) – Information about the usage of the LLM model.

  • termination_reasons (List[str]) – The reasons for the termination of the chat session.

  • num_tokens (int) – The number of tokens used in the chat session.

  • tool_calls (List[FunctionCallingRecord]) – The list of function calling records, containing the information of called tools.

  • external_tool_request (Optional[ChatCompletionMessageToolCall], optional) – The tool calling request of external tools from the model. These requests are returned directly to the user instead of being processed by the agent automatically. (default: None)

Returns:

The chat session information.

Return type:

Dict[str, Any]

get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int][source]#

Get usage dictionary when using the stream mode.

Parameters:
  • output_messages (list) – List of output messages.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

Usage dictionary.

Return type:

dict

handle_batch_response(response: ChatCompletion) Tuple[List[BaseMessage], List[str], Dict[str, int], str][source]#

Processes a batch response from the model and extracts the output information.

Parameters:

response (ChatCompletion) – Model response.

Returns:

A tuple of the list of output ChatMessage, the list of finish reasons, the usage dictionary, and the response ID.

Return type:

tuple

handle_stream_response(response: Stream[ChatCompletionChunk], prompt_tokens: int) Tuple[List[BaseMessage], List[str], Dict[str, int], str][source]#

Processes a stream response from the model and extracts the output information.

Parameters:
  • response (Stream[ChatCompletionChunk]) – Model response.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

A tuple of the list of output ChatMessage, the list of finish reasons, the usage dictionary, and the response ID.

Return type:

tuple

init_messages() None[source]#

Initializes the stored messages list with the initial system message.

is_tools_added() bool[source]#

Whether OpenAI function calling is enabled for this agent.

Returns:

Whether OpenAI function calling is enabled for this agent, determined by whether the dictionary of tools is non-empty.

Return type:

bool

record_message(message: BaseMessage) None[source]#

Records the externally provided message into the agent memory as if it were a response generated by the ChatAgent backend. Currently, the choice of the critic is submitted with this method.

Parameters:

message (BaseMessage) – An external message to be recorded in the memory.

reset()[source]#

Resets the ChatAgent to its initial state and returns the stored messages.

Returns:

The stored messages.

Return type:

List[BaseMessage]

set_output_language(output_language: str) BaseMessage[source]#

Sets the output language for the system message. The output language determines the language in which the generated text should be produced.

Parameters:

output_language (str) – The desired output language.

Returns:

The updated system message object.

Return type:

BaseMessage

step(input_message: BaseMessage, output_schema: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message.

Parameters:
  • input_message (BaseMessage) – The input message to the agent. Its role field, which specifies the role at the backend, may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the agent itself.

  • output_schema (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse
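
A sketch of requesting structured output through output_schema, assuming a plain pydantic model; the CityFacts schema and its fields are illustrative:

from pydantic import BaseModel, Field

from camel.agents import ChatAgent
from camel.messages import BaseMessage


class CityFacts(BaseModel):
    # Field descriptions guide the LLM toward the expected output format.
    name: str = Field(description="Name of the city")
    population: int = Field(description="Approximate population")


agent = ChatAgent(
    system_message=BaseMessage.make_assistant_message(
        role_name="Assistant", content="You answer with concise facts."
    )
)
user_msg = BaseMessage.make_user_message(
    role_name="User", content="Tell me about Tokyo."
)
response = agent.step(user_msg, output_schema=CityFacts)
print(response.msgs[0].content)  # response shaped by the CityFacts schema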

async step_async(input_message: BaseMessage, output_schema: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message. This agent step supports asynchronous tool (function) calls.

Parameters:
  • input_message (BaseMessage) – The input message to the agent. Its role field, which specifies the role at the backend, may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the agent itself.

  • output_schema (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse
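
step_async can be awaited from an event loop; a sketch assuming the same default backend as above, with illustrative contents:

import asyncio

from camel.agents import ChatAgent
from camel.messages import BaseMessage


async def main() -> None:
    agent = ChatAgent(
        system_message=BaseMessage.make_assistant_message(
            role_name="Assistant", content="You are concise."
        )
    )
    user_msg = BaseMessage.make_user_message(
        role_name="User", content="Summarize the CAP theorem in one sentence."
    )
    response = await agent.step_async(user_msg)
    print(response.msgs[0].content)


asyncio.run(main())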

step_tool_call(response: ChatCompletion) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord][source]#

Execute the function with arguments following the model’s response.

Parameters:

response (ChatCompletion) – The response obtained by calling the model.

Returns:

A tuple consisting of two FunctionCallingMessage objects, one carrying the function arguments and the other the execution result, and a struct for logging information about this function call.

Return type:

tuple

async step_tool_call_async(response: ChatCompletion) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord][source]#

Execute the async function with arguments following the model’s response.

Parameters:

response (ChatCompletion) – The response obtained by calling the model.

Returns:

A tuple consisting of two FunctionCallingMessage objects, one carrying the function arguments and the other the execution result, and a struct for logging information about this function call.

Return type:

tuple

property system_message: BaseMessage#

The getter method for the property system_message.

Returns:

The system message of this agent.

Return type:

BaseMessage

update_memory(message: BaseMessage, role: OpenAIBackendRole) None[source]#

Updates the agent memory with a new message.

Parameters:
  • message (BaseMessage) – The new message to add to the stored messages.

  • role (OpenAIBackendRole) – The backend role type of the message.

class camel.agents.chat_agent.FunctionCallingRecord(*, func_name: str, args: Dict[str, Any], result: Any)[source]#

Bases: BaseModel

Historical records of functions called in the conversation.

func_name#

The name of the function being called.

Type:

str

args#

The dictionary of arguments passed to the function.

Type:

Dict[str, Any]

result#

The execution result of calling this function.

Type:

Any

args: Dict[str, Any]#

as_dict() dict[str, Any][source]#

Returns the function calling record as a dictionary.

func_name: str#

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}#

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_config: ClassVar[ConfigDict] = {}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_fields: ClassVar[Dict[str, FieldInfo]] = {'args': FieldInfo(annotation=Dict[str, Any], required=True), 'func_name': FieldInfo(annotation=str, required=True), 'result': FieldInfo(annotation=Any, required=True)}#

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.__fields__ from Pydantic V1.

result: Any#
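
Since FunctionCallingRecord is a plain pydantic model, it can be constructed and serialized directly; the values below are illustrative:

from camel.agents.chat_agent import FunctionCallingRecord

record = FunctionCallingRecord(
    func_name="add",
    args={"a": 2, "b": 3},
    result=5,
)
print(record.as_dict())
# e.g. {'func_name': 'add', 'args': {'a': 2, 'b': 3}, 'result': 5}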

camel.agents.critic_agent module#

class camel.agents.critic_agent.CriticAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int = 6, retry_attempts: int = 2, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

A class for the critic agent that assists in selecting an option.

Parameters:
  • system_message (BaseMessage) – The system message for the critic agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: 6)

  • retry_attempts (int, optional) – The number of retry attempts if the critic fails to return a valid option. (default: 2)

  • verbose (bool, optional) – Whether to print the critic’s messages. (default: False)

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

flatten_options(messages: Sequence[BaseMessage]) str[source]#

Flattens the options to the critic.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A string containing the flattened options to the critic.

Return type:

str

get_option(input_message: BaseMessage) str[source]#

Gets the option selected by the critic.

Parameters:

input_message (BaseMessage) – A BaseMessage object representing the input message.

Returns:

The option selected by the critic.

Return type:

str

parse_critic(critic_msg: BaseMessage) str | None[source]#

Parses the critic’s message and extracts the choice.

Parameters:

critic_msg (BaseMessage) – A BaseMessage object representing the critic’s response.

Returns:

The critic’s choice as a string, or None if the message could not be parsed.

Return type:

Optional[str]

reduce_step(input_messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by flattening options to the critic, getting the option, and parsing the choice.

Parameters:

input_messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object that includes the critic’s choice.

Return type:

ChatAgentResponse
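
A usage sketch: flatten_options assembles the candidate messages into a single prompt, and reduce_step asks the model backend to pick one. It assumes the default backend; the option contents are illustrative:

from camel.agents import CriticAgent
from camel.messages import BaseMessage

critic_sys = BaseMessage.make_assistant_message(
    role_name="Critic",
    content="You select the best option among the proposals.",
)
critic = CriticAgent(system_message=critic_sys, verbose=True)

options = [
    BaseMessage.make_assistant_message(
        role_name="Planner", content="Option A: take the train."
    ),
    BaseMessage.make_assistant_message(
        role_name="Planner", content="Option B: rent a car."
    ),
]
print(critic.flatten_options(options))  # the flattened options prompt
response = critic.reduce_step(options)  # queries the model for a choice
print(response.msgs[0].content)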

camel.agents.deductive_reasoner_agent module#

class camel.agents.deductive_reasoner_agent.DeductiveReasonerAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent responsible for deductive reasoning. Model of deductive reasoning:

  • L: A ⊕ C -> q * B

  • A represents the known starting state.

  • B represents the known target state.

  • C represents the conditions required to transition from A to B.

  • Q represents the quality or effectiveness of the transition from A to B.

  • L represents the path or process from A to B.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

deduce_conditions_and_quality(starting_state: str, target_state: str, role_descriptions_dict: Dict[str, str] | None = None) Dict[str, List[str] | Dict[str, str]][source]#

Derives the conditions and quality from the starting state and the target state based on the model of the deductive reasoning and the knowledge base. It can optionally consider the roles involved in the scenario, which allows tailoring the output more closely to the AI agent’s environment.

Parameters:
  • starting_state (str) – The initial or starting state from which conditions are deduced.

  • target_state (str) – The target state of the task.

  • role_descriptions_dict (Optional[Dict[str, str]], optional) – A dictionary describing the roles involved in the scenario. It can be used to provide context for CAMEL’s role-playing, enabling the generation of more relevant and tailored conditions and quality assessments. It could be generated with a RoleAssignmentAgent() or defined manually by the user. (default: None)

Returns:

A dictionary with the extracted data from the message. The dictionary contains three keys:

  • ’conditions’: A dictionary where each key is a condition ID and each value is the corresponding condition text.

  • ’labels’: A list of label strings extracted from the message.

  • ’quality’: A string with the quality assessment extracted from the message.

Return type:

Dict[str, Union[List[str], Dict[str, str]]]
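
A usage sketch, assuming the default backend; the states are illustrative:

from camel.agents.deductive_reasoner_agent import DeductiveReasonerAgent

agent = DeductiveReasonerAgent()
result = agent.deduce_conditions_and_quality(
    starting_state="A user has a raw CSV file of sales data.",
    target_state="The user has an interactive dashboard of sales trends.",
)
print(result["conditions"])  # {condition ID: condition text, ...}
print(result["labels"])      # list of label strings
print(result["quality"])     # quality assessment string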

camel.agents.embodied_agent module#

class camel.agents.embodied_agent.EmbodiedAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, message_window_size: int | None = None, tool_agents: List[BaseToolAgent] | None = None, code_interpreter: BaseInterpreter | None = None, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

Class for managing conversations of CAMEL Embodied Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • tool_agents (List[BaseToolAgent], optional) – The tool agents to use in the embodied agent. (default: None)

  • code_interpreter (BaseInterpreter, optional) – The code interpreter used to execute code. If code_interpreter and tool_agents are both None, it defaults to SubProcessInterpreter. If code_interpreter is None and tool_agents is not None, it defaults to InternalPythonInterpreter. (default: None)

  • verbose (bool, optional) – Whether to print the agent’s messages. (default: False)

  • logger_color (Any) – The color of the logger displayed to the user. (default: Fore.MAGENTA)

get_tool_agent_names() List[str][source]#

Returns the names of tool agents.

Returns:

The names of tool agents.

Return type:

List[str]

step(input_message: BaseMessage) ChatAgentResponse[source]#

Performs a step in the conversation.

Parameters:

input_message (BaseMessage) – The input message.

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse
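
A usage sketch: with neither tool_agents nor code_interpreter supplied, generated code is executed with the default SubProcessInterpreter, so run it only where that is acceptable. The role names and task are illustrative:

from camel.agents import EmbodiedAgent
from camel.messages import BaseMessage

sys_msg = BaseMessage.make_assistant_message(
    role_name="Programmer",
    content="You write and execute Python code to solve tasks.",
)
agent = EmbodiedAgent(system_message=sys_msg, verbose=True)

user_msg = BaseMessage.make_user_message(
    role_name="User",
    content="Compute the sum of the first 100 positive integers.",
)
response = agent.step(user_msg)
print(response.msgs[0].content)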

camel.agents.knowledge_graph_agent module#

class camel.agents.knowledge_graph_agent.KnowledgeGraphAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that can extract node and relationship information for different entities from given Element content.

task_prompt#

A prompt for the agent to extract node and relationship information for different entities.

Type:

TextPrompt

run(element: str | Element, parse_graph_elements: bool = False) str | GraphElement[source]#

Run the agent to extract node and relationship information.

Parameters:
  • element (Union[str, Element]) – The input element or string.

  • parse_graph_elements (bool, optional) – Whether to parse into GraphElement. (default: False)

Returns:

The extracted node and relationship information. If parse_graph_elements is True, a GraphElement is returned; otherwise, a str is returned.

Return type:

Union[str, GraphElement]
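
A usage sketch, assuming the default backend and that GraphElement exposes nodes and relationships lists; the input text is illustrative:

from camel.agents import KnowledgeGraphAgent

agent = KnowledgeGraphAgent()
text = "CAMEL-AI develops the CAMEL framework for multi-agent systems."
graph = agent.run(text, parse_graph_elements=True)
print(graph.nodes)          # extracted entity nodes
print(graph.relationships)  # extracted relationships between entities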

camel.agents.role_assignment_agent module#

class camel.agents.role_assignment_agent.RoleAssignmentAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that generates role names based on the task prompt.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

role_assignment_prompt#

A prompt for the agent to generate role names.

Type:

TextPrompt

run(task_prompt: str | TextPrompt, num_roles: int = 2) Dict[str, str][source]#

Generate role names based on the input task prompt.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The prompt for the task based on which the roles are to be generated.

  • num_roles (int, optional) – The number of roles to generate. (default: 2)

Returns:

A dictionary mapping role names to their descriptions.

Return type:

Dict[str, str]
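
A usage sketch, assuming the default backend; the task prompt is illustrative:

from camel.agents import RoleAssignmentAgent

agent = RoleAssignmentAgent()
roles = agent.run(
    task_prompt="Design a marketing campaign for a new e-bike.",
    num_roles=3,
)
for name, description in roles.items():
    print(f"{name}: {description}")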

camel.agents.search_agent module#

class camel.agents.search_agent.SearchAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that summarizes text based on a query and evaluates the relevance of an answer.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

continue_search(query: str, answer: str) bool[source]#

Asks whether to continue the search or not based on the provided answer.

Parameters:
  • query (str) – The question.

  • answer (str) – The answer to the question.

Returns:

True if the search should continue, False otherwise.

Return type:

bool

summarize_text(text: str, query: str) str[source]#

Summarizes the information from the text, based on the query.

Parameters:
  • text (str) – Text to summarize.

  • query (str) – The query specifying what information to extract.

Returns:

A string with the summarized information.

Return type:

str
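
A usage sketch combining both methods, assuming the default backend; the text and query are illustrative:

from camel.agents import SearchAgent

agent = SearchAgent()
text = (
    "The Eiffel Tower is 330 metres tall and was completed in 1889 "
    "for the Exposition Universelle held in Paris."
)
query = "When was the Eiffel Tower completed?"
answer = agent.summarize_text(text=text, query=query)
print(answer)
# False once the answer satisfies the query, True if more search is needed.
print(agent.continue_search(query=query, answer=answer))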

camel.agents.task_agent module#

class camel.agents.task_agent.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#

Bases: ChatAgent

An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it is still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.

task_creation_prompt#

A prompt for the agent to create new tasks.

Type:

TextPrompt

Parameters:
  • role_name (str) – The role name of the Agent to create the task.

  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)

run(task_list: List[str]) List[str][source]#

Generate subtasks based on the previous task results and incomplete task list.

Parameters:

task_list (List[str]) – The completed or in-progress tasks, which should not overlap with newly created tasks.

Returns:

The new task list generated by the Agent.

Return type:

List[str]
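
A usage sketch, assuming the default backend; the objective and tasks are illustrative:

from camel.agents import TaskCreationAgent

agent = TaskCreationAgent(
    role_name="Researcher",
    objective="Write a literature review on multi-agent LLM systems.",
    max_task_num=3,
)
new_tasks = agent.run(task_list=["Collect 20 candidate papers."])
for task in new_tasks:
    print(task)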

class camel.agents.task_agent.TaskPlannerAgent(model: BaseModelBackend | None = None, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that helps divide a task into subtasks based on the input task prompt.

task_planner_prompt#

A prompt for the agent to divide the task into subtasks.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

run(task_prompt: str | TextPrompt) TextPrompt[source]#

Generate subtasks based on the input task prompt.

Parameters:

task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.

Returns:

A prompt for the subtasks generated by the agent.

Return type:

TextPrompt
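
A usage sketch, assuming the default backend; the task prompt is illustrative:

from camel.agents import TaskPlannerAgent

planner = TaskPlannerAgent()
subtasks = planner.run(task_prompt="Build a personal finance tracker app.")
print(subtasks)  # a TextPrompt listing the generated subtasks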

class camel.agents.task_agent.TaskPrioritizationAgent(objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#

Bases: ChatAgent

An agent that helps re-prioritize the task list and returns a numbered prioritized list. Modified from BabyAGI.

task_prioritization_prompt#

A prompt for the agent to prioritize tasks.

Type:

TextPrompt

Parameters:
  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

run(task_list: List[str]) List[str][source]#

Prioritize the task list given the agent objective.

Parameters:

task_list (List[str]) – The unprioritized tasks of the agent.

Returns:

The new prioritized task list generated by the Agent.

Return type:

List[str]
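
A usage sketch, assuming the default backend; the objective and tasks are illustrative:

from camel.agents import TaskPrioritizationAgent

agent = TaskPrioritizationAgent(
    objective="Launch a weekly data-science newsletter.",
)
prioritized = agent.run(
    task_list=[
        "Design the newsletter template.",
        "Pick a topic for the first issue.",
        "Set up the mailing-list provider.",
    ]
)
for task in prioritized:
    print(task)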

class camel.agents.task_agent.TaskSpecifyAgent(model: BaseModelBackend | None = None, task_type: TaskType = TaskType.AI_SOCIETY, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that specifies a given task prompt by prompting the user to provide more details.

DEFAULT_WORD_LIMIT#

The default word limit for the task prompt.

Type:

int

task_specify_prompt#

The prompt for specifying the task.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)

  • task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)

  • word_limit (int, optional) – The word limit for the task prompt. (default: 50)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

DEFAULT_WORD_LIMIT = 50#

run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt[source]#

Specify the given task prompt by providing more details.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The original task prompt.

  • meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default: None)

Returns:

The specified task prompt.

Return type:

TextPrompt
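
A usage sketch, assuming the default backend. For TaskType.AI_SOCIETY the default template is assumed to take assistant and user roles through meta_dict; the keys and values below are illustrative:

from camel.agents import TaskSpecifyAgent
from camel.types import TaskType

agent = TaskSpecifyAgent(task_type=TaskType.AI_SOCIETY, word_limit=50)
specified = agent.run(
    task_prompt="Improve my chess openings.",
    meta_dict={"assistant_role": "Chess Coach", "user_role": "Beginner"},
)
print(specified)  # the more detailed task prompt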

Module contents#

class camel.agents.BaseAgent[source]#

Bases: ABC

An abstract base class for all CAMEL agents.

abstract reset(*args: Any, **kwargs: Any) Any[source]#

Resets the agent to its initial state.

abstract step(*args: Any, **kwargs: Any) Any[source]#

Performs a single step of the agent.

class camel.agents.BaseToolAgent(name: str, description: str)[source]#

Bases: BaseAgent

Creates a BaseToolAgent object with the specified name and description.

Parameters:
  • name (str) – The name of the tool agent.

  • description (str) – The description of the tool agent.

reset() None[source]#

Resets the agent to its initial state.

step() None[source]#

Performs a single step of the agent.
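
A minimal subclass sketch (the CalculatorToolAgent class is illustrative, not part of CAMEL):

from camel.agents import BaseToolAgent


class CalculatorToolAgent(BaseToolAgent):
    """Illustrative tool agent that evaluates arithmetic expressions."""

    def __init__(self) -> None:
        super().__init__(
            name="calculator",
            description="Evaluates simple arithmetic expressions.",
        )

    def step(self, expression: str) -> str:  # type: ignore[override]
        # eval() is used for brevity; never call it on untrusted input.
        return str(eval(expression))


tool = CalculatorToolAgent()
print(tool.name, "-", tool.description)
print(tool.step("2 + 3 * 4"))  # 14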

class camel.agents.ChatAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int | None = None, token_limit: int | None = None, output_language: str | None = None, tools: List[OpenAIFunction] | None = None, external_tools: List[OpenAIFunction] | None = None, response_terminators: List[ResponseTerminator] | None = None)[source]#

Bases: BaseAgent

Class for managing conversations of CAMEL Chat Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • memory (AgentMemory, optional) – The agent memory for managing chat messages. If None, a ChatHistoryMemory will be used. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • token_limit (int, optional) – The maximum number of tokens in a context. The context will be automatically pruned to fulfill the limitation. If None, it will be set according to the backend model. (default: None)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • tools (List[OpenAIFunction], optional) – List of available OpenAIFunction. (default: None)

  • external_tools (List[OpenAIFunction], optional) – List of external tools (OpenAIFunction) bound to one chat agent. When these tools are called, the agent will return the request directly instead of processing it. (default: None)

  • response_terminators (List[ResponseTerminator], optional) – List of ResponseTerminator objects bound to one chat agent. (default: None)

get_info(session_id: str | None, usage: Dict[str, int] | None, termination_reasons: List[str], num_tokens: int, tool_calls: List[FunctionCallingRecord], external_tool_request: ChatCompletionMessageToolCall | None = None) Dict[str, Any][source]#

Returns a dictionary containing information about the chat session.

Parameters:
  • session_id (str, optional) – The ID of the chat session.

  • usage (Dict[str, int], optional) – Information about the usage of the LLM model.

  • termination_reasons (List[str]) – The reasons for the termination of the chat session.

  • num_tokens (int) – The number of tokens used in the chat session.

  • tool_calls (List[FunctionCallingRecord]) – The list of function calling records, containing the information of called tools.

  • external_tool_request (Optional[ChatCompletionMessageToolCall], optional) – The tool calling request of external tools from the model. These requests are returned directly to the user instead of being processed by the agent automatically. (default: None)

Returns:

The chat session information.

Return type:

Dict[str, Any]

get_usage_dict(output_messages: List[BaseMessage], prompt_tokens: int) Dict[str, int][source]#

Get usage dictionary when using the stream mode.

Parameters:
  • output_messages (list) – List of output messages.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

Usage dictionary.

Return type:

dict

handle_batch_response(response: ChatCompletion) Tuple[List[BaseMessage], List[str], Dict[str, int], str][source]#

Processes a batch response from the model and extracts the output information.

Parameters:

response (ChatCompletion) – Model response.

Returns:

A tuple of the list of output ChatMessage, the list of finish reasons, the usage dictionary, and the response ID.

Return type:

tuple

handle_stream_response(response: Stream[ChatCompletionChunk], prompt_tokens: int) Tuple[List[BaseMessage], List[str], Dict[str, int], str][source]#

Processes a stream response from the model and extracts the output information.

Parameters:
  • response (Stream[ChatCompletionChunk]) – Model response.

  • prompt_tokens (int) – Number of input prompt tokens.

Returns:

A tuple of the list of output ChatMessage, the list of finish reasons, the usage dictionary, and the response ID.

Return type:

tuple

init_messages() None[source]#

Initializes the stored messages list with the initial system message.

is_tools_added() bool[source]#

Whether OpenAI function calling is enabled for this agent.

Returns:

Whether OpenAI function calling is enabled for this agent, determined by whether the dictionary of tools is non-empty.

Return type:

bool

record_message(message: BaseMessage) None[source]#

Records the externally provided message into the agent memory as if it were a response generated by the ChatAgent backend. Currently, the choice of the critic is submitted with this method.

Parameters:

message (BaseMessage) – An external message to be recorded in the memory.

reset()[source]#

Resets the ChatAgent to its initial state and returns the stored messages.

Returns:

The stored messages.

Return type:

List[BaseMessage]

set_output_language(output_language: str) BaseMessage[source]#

Sets the output language for the system message. The output language determines the language in which the generated text should be produced.

Parameters:

output_language (str) – The desired output language.

Returns:

The updated system message object.

Return type:

BaseMessage

step(input_message: BaseMessage, output_schema: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message.

Parameters:
  • input_message (BaseMessage) – The input message to the agent. Its role field, which specifies the role at the backend, may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the agent itself.

  • output_schema (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse

async step_async(input_message: BaseMessage, output_schema: Type[BaseModel] | None = None) ChatAgentResponse[source]#

Performs a single step in the chat session by generating a response to the input message. This agent step supports asynchronous tool (function) calls.

Parameters:
  • input_message (BaseMessage) – The input message to the agent. Its role field, which specifies the role at the backend, may be either user or assistant, but it will be set to user regardless, since any incoming message is external to the agent itself.

  • output_schema (Optional[Type[BaseModel]], optional) – A pydantic model class that includes value types and field descriptions used to generate a structured response by LLM. This schema helps in defining the expected output format. (default: None)

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse

step_tool_call(response: ChatCompletion) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord][source]#

Execute the function with arguments following the model’s response.

Parameters:

response (ChatCompletion) – The response obtained by calling the model.

Returns:

A tuple consisting of two FunctionCallingMessage objects, one carrying the function arguments and the other the execution result, and a struct for logging information about this function call.

Return type:

tuple

async step_tool_call_async(response: ChatCompletion) Tuple[FunctionCallingMessage, FunctionCallingMessage, FunctionCallingRecord][source]#

Execute the async function with arguments following the model’s response.

Parameters:

response (ChatCompletion) – The response obtained by calling the model.

Returns:

A tuple consisting of two FunctionCallingMessage objects, one carrying the function arguments and the other the execution result, and a struct for logging information about this function call.

Return type:

tuple

property system_message: BaseMessage#

The getter method for the property system_message.

Returns:

The system message of this agent.

Return type:

BaseMessage

update_memory(message: BaseMessage, role: OpenAIBackendRole) None[source]#

Updates the agent memory with a new message.

Parameters:
  • message (BaseMessage) – The new message to add to the stored messages.

  • role (OpenAIBackendRole) – The backend role type of the message.

class camel.agents.CriticAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, memory: AgentMemory | None = None, message_window_size: int = 6, retry_attempts: int = 2, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

A class for the critic agent that assists in selecting an option.

Parameters:
  • system_message (BaseMessage) – The system message for the critic agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: 6)

  • retry_attempts (int, optional) – The number of retry attempts if the critic fails to return a valid option. (default: 2)

  • verbose (bool, optional) – Whether to print the critic’s messages. (default: False)

  • logger_color (Any) – The color of the menu options displayed to the user. (default: Fore.MAGENTA)

flatten_options(messages: Sequence[BaseMessage]) str[source]#

Flattens the options to the critic.

Parameters:

messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A string containing the flattened options to the critic.

Return type:

str

get_option(input_message: BaseMessage) str[source]#

Gets the option selected by the critic.

Parameters:

input_message (BaseMessage) – A BaseMessage object representing the input message.

Returns:

The option selected by the critic.

Return type:

str

parse_critic(critic_msg: BaseMessage) str | None[source]#

Parses the critic’s message and extracts the choice.

Parameters:

critic_msg (BaseMessage) – A BaseMessage object representing the critic’s response.

Returns:

The critic’s choice as a string, or None if the message could not be parsed.

Return type:

Optional[str]

reduce_step(input_messages: Sequence[BaseMessage]) ChatAgentResponse[source]#

Performs one step of the conversation by flattening options to the critic, getting the option, and parsing the choice.

Parameters:

input_messages (Sequence[BaseMessage]) – A list of BaseMessage objects.

Returns:

A ChatAgentResponse object that includes the critic’s choice.

Return type:

ChatAgentResponse

class camel.agents.EmbodiedAgent(system_message: BaseMessage, model: BaseModelBackend | None = None, message_window_size: int | None = None, tool_agents: List[BaseToolAgent] | None = None, code_interpreter: BaseInterpreter | None = None, verbose: bool = False, logger_color: Any = '\x1b[35m')[source]#

Bases: ChatAgent

Class for managing conversations of CAMEL Embodied Agents.

Parameters:
  • system_message (BaseMessage) – The system message for the chat agent.

  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • tool_agents (List[BaseToolAgent], optional) – The tool agents to use in the embodied agent. (default: None)

  • code_interpreter (BaseInterpreter, optional) – The code interpreter used to execute code. If code_interpreter and tool_agents are both None, it defaults to SubProcessInterpreter. If code_interpreter is None and tool_agents is not None, it defaults to InternalPythonInterpreter. (default: None)

  • verbose (bool, optional) – Whether to print the agent’s messages. (default: False)

  • logger_color (Any) – The color of the logger displayed to the user. (default: Fore.MAGENTA)

get_tool_agent_names() List[str][source]#

Returns the names of tool agents.

Returns:

The names of tool agents.

Return type:

List[str]

step(input_message: BaseMessage) ChatAgentResponse[source]#

Performs a step in the conversation.

Parameters:

input_message (BaseMessage) – The input message.

Returns:

A struct containing the output messages, a boolean indicating whether the chat session has terminated, and information about the chat session.

Return type:

ChatAgentResponse

class camel.agents.HuggingFaceToolAgent(name: str, *args: Any, remote: bool = True, **kwargs: Any)[source]#

Bases: BaseToolAgent

Tool agent for calling HuggingFace models. This agent is a wrapper around agents from the transformers library. For more information about the available models, please see the transformers documentation at https://huggingface.co/docs/transformers/transformers_agents.

Parameters:
  • name (str) – The name of the agent.

  • *args (Any) – Additional positional arguments to pass to the underlying Agent class.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. (default: True)

  • **kwargs (Any) – Additional keyword arguments to pass to the underlying Agent class.

chat(*args: Any, remote: bool | None = None, **kwargs: Any) Any[source]#

Runs the agent in a chat conversation mode.

Parameters:
  • *args (Any) – Positional arguments to pass to the agent.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)

  • **kwargs (Any) – Keyword arguments to pass to the agent.

Returns:

The response from the agent.

Return type:

str

reset() None[source]#

Resets the chat history of the agent.

step(*args: Any, remote: bool | None = None, **kwargs: Any) Any[source]#

Runs the agent in single execution mode.

Parameters:
  • *args (Any) – Positional arguments to pass to the agent.

  • remote (bool, optional) – Flag indicating whether to run the agent remotely. Overrides the default setting. (default: None)

  • **kwargs (Any) – Keyword arguments to pass to the agent.

Returns:

The response from the agent.

Return type:

str
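
A usage sketch, assuming the transformers agent extras are installed; the prompts are illustrative:

from camel.agents import HuggingFaceToolAgent

agent = HuggingFaceToolAgent(name="hf_agent", remote=True)

# Single execution mode.
image = agent.step("Draw a picture of a river next to a forest.")

# Chat conversation mode keeps history between calls; reset() clears it.
agent.reset()
agent.chat("Show me an image of a capybara.")
agent.chat("Now transform it into a watercolor painting.")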

class camel.agents.KnowledgeGraphAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that can extract node and relationship information for different entities from given Element content.

task_prompt#

A prompt for the agent to extract node and relationship information for different entities.

Type:

TextPrompt

run(element: str | Element, parse_graph_elements: bool = False) str | GraphElement[source]#

Run the agent to extract node and relationship information.

Parameters:
  • element (Union[str, Element]) – The input element or string.

  • parse_graph_elements (bool, optional) – Whether to parse into GraphElement. (default: False)

Returns:

The extracted node and relationship information. If parse_graph_elements is True, a GraphElement is returned; otherwise, a str is returned.

Return type:

Union[str, GraphElement]

class camel.agents.RoleAssignmentAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that generates role names based on the task prompt.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

role_assignment_prompt#

A prompt for the agent to generate role names.

Type:

TextPrompt

run(task_prompt: str | TextPrompt, num_roles: int = 2) Dict[str, str][source]#

Generate role names based on the input task prompt.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The prompt for the task based on which the roles are to be generated.

  • num_roles (int, optional) – The number of roles to generate. (default: 2)

Returns:

A dictionary mapping role names to their descriptions.

Return type:

Dict[str, str]

class camel.agents.SearchAgent(model: BaseModelBackend | None = None)[source]#

Bases: ChatAgent

An agent that summarizes text based on a query and evaluates the relevance of an answer.

Parameters:

model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

continue_search(query: str, answer: str) bool[source]#

Asks whether to continue the search or not based on the provided answer.

Parameters:
  • query (str) – The question.

  • answer (str) – The answer to the question.

Returns:

True if the search should continue, False otherwise.

Return type:

bool

summarize_text(text: str, query: str) str[source]#

Summarizes the information from the text, based on the query.

Parameters:
  • text (str) – Text to summarize.

  • query (str) – The query specifying what information to extract.

Returns:

A string with the summarized information.

Return type:

str

class camel.agents.TaskCreationAgent(role_name: str, objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None, max_task_num: int | None = 3)[source]#

Bases: ChatAgent

An agent that helps create new tasks based on the objective and last completed task. Compared to TaskPlannerAgent, it is still a task planner, but it has more context information, such as the last task and the incomplete task list. Modified from BabyAGI.

task_creation_prompt#

A prompt for the agent to create new tasks.

Type:

TextPrompt

Parameters:
  • role_name (str) – The role name of the Agent to create the task.

  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

  • max_task_num (int, optional) – The maximum number of planned tasks in one round. (default: 3)

run(task_list: List[str]) List[str][source]#

Generate subtasks based on the previous task results and incomplete task list.

Parameters:

task_list (List[str]) – The completed or in-progress tasks, which should not overlap with newly created tasks.

Returns:

The new task list generated by the Agent.

Return type:

List[str]

class camel.agents.TaskPlannerAgent(model: BaseModelBackend | None = None, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that helps divide a task into subtasks based on the input task prompt.

task_planner_prompt#

A prompt for the agent to divide the task into subtasks.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

run(task_prompt: str | TextPrompt) TextPrompt[source]#

Generate subtasks based on the input task prompt.

Parameters:

task_prompt (Union[str, TextPrompt]) – The prompt for the task to be divided into subtasks.

Returns:

A prompt for the subtasks generated by the agent.

Return type:

TextPrompt

class camel.agents.TaskPrioritizationAgent(objective: str | TextPrompt, model: BaseModelBackend | None = None, output_language: str | None = None, message_window_size: int | None = None)[source]#

Bases: ChatAgent

An agent that helps re-prioritize the task list and returns a numbered prioritized list. Modified from BabyAGI.

task_prioritization_prompt#

A prompt for the agent to prioritize tasks.

Type:

TextPrompt

Parameters:
  • objective (Union[str, TextPrompt]) – The objective of the Agent to perform the task.

  • model (BaseModelBackend, optional) – The LLM backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

  • message_window_size (int, optional) – The maximum number of previous messages to include in the context window. If None, no windowing is performed. (default: None)

run(task_list: List[str]) List[str][source]#

Prioritize the task list given the agent objective.

Parameters:

task_list (List[str]) – The unprioritized tasks of the agent.

Returns:

The new prioritized task list generated by the Agent.

Return type:

List[str]

class camel.agents.TaskSpecifyAgent(model: BaseModelBackend | None = None, task_type: TaskType = TaskType.AI_SOCIETY, task_specify_prompt: str | TextPrompt | None = None, word_limit: int = 50, output_language: str | None = None)[source]#

Bases: ChatAgent

An agent that specifies a given task prompt by prompting the user to provide more details.

DEFAULT_WORD_LIMIT#

The default word limit for the task prompt.

Type:

int

task_specify_prompt#

The prompt for specifying the task.

Type:

TextPrompt

Parameters:
  • model (BaseModelBackend, optional) – The model backend to use for generating responses. (default: OpenAIModel with GPT_4O_MINI)

  • task_type (TaskType, optional) – The type of task for which to generate a prompt. (default: TaskType.AI_SOCIETY)

  • task_specify_prompt (Union[str, TextPrompt], optional) – The prompt for specifying the task. (default: None)

  • word_limit (int, optional) – The word limit for the task prompt. (default: 50)

  • output_language (str, optional) – The language to be output by the agent. (default: None)

DEFAULT_WORD_LIMIT = 50#

memory: AgentMemory#

model_backend: BaseModelBackend#

model_type: ModelType#

orig_sys_message: BaseMessage#

output_language: str | None#

role_name: str#

role_type: RoleType#

run(task_prompt: str | TextPrompt, meta_dict: Dict[str, Any] | None = None) TextPrompt[source]#

Specify the given task prompt by providing more details.

Parameters:
  • task_prompt (Union[str, TextPrompt]) – The original task prompt.

  • meta_dict (Dict[str, Any], optional) – A dictionary containing additional information to include in the prompt. (default: None)

Returns:

The specified task prompt.

Return type:

TextPrompt

task_specify_prompt: str | TextPrompt#

terminated: bool#