StreamContentAccumulator
__init__
set_base_content
add_streaming_content
add_tool_status
get_full_content
get_content_with_new_status
reset_streaming_content
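The accumulator's job is to stitch base content, tool-status notices, and streamed deltas into one string. A minimal sketch with the same method surface (the internals here are assumptions for illustration, not CAMEL's implementation):

```python
# Illustrative sketch of a stream-content accumulator matching the
# method list above. Internal representation is an assumption.
class StreamContentAccumulator:
    def __init__(self):
        self.base_content = ""       # content carried over from earlier turns
        self.streaming_content = ""  # deltas of the in-progress response
        self.tool_statuses = []      # status lines emitted around tool calls

    def set_base_content(self, content: str) -> None:
        self.base_content = content

    def add_streaming_content(self, delta: str) -> None:
        self.streaming_content += delta

    def add_tool_status(self, status: str) -> None:
        self.tool_statuses.append(status)

    def get_full_content(self) -> str:
        return self.base_content + "".join(self.tool_statuses) + self.streaming_content

    def get_content_with_new_status(self, status: str) -> str:
        # Preview the full content as it would look with one more status line.
        return (
            self.base_content
            + "".join(self.tool_statuses + [status])
            + self.streaming_content
        )

    def reset_streaming_content(self) -> None:
        self.streaming_content = ""
```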
StreamingChatAgentResponse
__init__
_ensure_latest_response
msgs
terminated
info
msg
__iter__
__getattr__
AsyncStreamingChatAgentResponse
__init__
__await__
__aiter__
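Both wrappers follow the same lazy pattern: iterate the underlying stream on demand, remember the latest response, and proxy unknown attribute access to it. A synchronous sketch of that pattern (class and attribute names are illustrative assumptions, not CAMEL's code):

```python
# Sketch of the lazy streaming-response wrapper pattern: consume the
# generator only when needed, keep the latest item, and delegate
# attribute lookups to it. Names here are assumptions.
from typing import Any, Generator


class StreamingResponseWrapper:
    def __init__(self, generator: Generator[Any, None, None]):
        self._generator = generator
        self._latest = None

    def _ensure_latest_response(self) -> None:
        # Drain the stream so attributes reflect the final response.
        for response in self._generator:
            self._latest = response

    def __iter__(self):
        for response in self._generator:
            self._latest = response
            yield response

    def __getattr__(self, name: str) -> Any:
        # Only called when normal lookup fails; delegate to the
        # latest (final) response object.
        self._ensure_latest_response()
        return getattr(self._latest, name)
```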
ChatAgent
Parameters:
- system_message (Union[BaseMessage, str], optional): The system message for the chat agent. (default: :obj:`None`)
- model (Union[BaseModelBackend, Tuple[str, str], str, ModelType, Tuple[ModelPlatformType, ModelType], List[BaseModelBackend], List[str], List[ModelType], List[Tuple[str, str]], List[Tuple[ModelPlatformType, ModelType]]], optional): The model backend(s) to use. Can be a single instance, a specification (string, enum, tuple), or a list of instances or specifications to be managed by `ModelManager`. If a list of specifications (not `BaseModelBackend` instances) is provided, they will be instantiated using `ModelFactory`. (default: :obj:`ModelPlatformType.DEFAULT` with :obj:`ModelType.DEFAULT`)
- memory (AgentMemory, optional): The agent memory for managing chat messages. If `None`, a :obj:`ChatHistoryMemory` will be used. (default: :obj:`None`)
- message_window_size (int, optional): The maximum number of previous messages to include in the context window. If `None`, no windowing is performed. (default: :obj:`None`)
- token_limit (int, optional): The maximum number of tokens in a context. The context will be automatically pruned to fulfill the limitation. If `None`, it will be set according to the backend model. (default: :obj:`None`)
- output_language (str, optional): The language to be output by the agent. (default: :obj:`None`)
- tools (Optional[List[Union[FunctionTool, Callable]]], optional): List of available :obj:`FunctionTool` or :obj:`Callable`. (default: :obj:`None`)
- toolkits_to_register_agent (Optional[List[RegisteredAgentToolkit]], optional): List of toolkit instances that inherit from :obj:`RegisteredAgentToolkit`. The agent will register itself with these toolkits, allowing them to access the agent instance. Note: this does NOT add the toolkit's tools to the agent. To use tools from these toolkits, pass them explicitly via the `tools` parameter. (default: :obj:`None`)
- external_tools (Optional[List[Union[FunctionTool, Callable, Dict[str, Any]]]], optional): List of external tools (:obj:`FunctionTool`, :obj:`Callable`, or :obj:`Dict[str, Any]`) bound to one chat agent. When these tools are called, the agent will directly return the request instead of processing it. (default: :obj:`None`)
- response_terminators (List[ResponseTerminator], optional): List of :obj:`ResponseTerminator` bound to one chat agent. (default: :obj:`None`)
- scheduling_strategy (str): Name of the function that defines how to select the next model in `ModelManager`. (default: :str:`round_robin`)
- max_iteration (Optional[int], optional): Maximum number of model-calling iterations allowed per step. If `None` (default), there is no explicit limit. If `1`, it performs a single model call. If `N > 1`, it allows up to N model calls. (default: :obj:`None`)
- agent_id (str, optional): The ID of the agent. If not provided, a random UUID will be generated. (default: :obj:`None`)
- stop_event (Optional[threading.Event], optional): Event to signal termination of the agent's operation. When set, the agent will terminate its execution. (default: :obj:`None`)
- tool_execution_timeout (Optional[float], optional): Timeout for individual tool execution. If `None`, wait indefinitely.
- mask_tool_output (Optional[bool]): Whether to return a sanitized placeholder instead of the raw tool output. (default: :obj:`False`)
- pause_event (Optional[asyncio.Event]): Event to signal pause of the agent's operation. When clear, the agent will pause its execution. (default: :obj:`None`)
- prune_tool_calls_from_memory (bool): Whether to clean tool call messages from memory after response generation to save token usage. When enabled, removes FUNCTION/TOOL role messages and ASSISTANT messages with tool_calls after each step. (default: :obj:`False`)
- retry_attempts (int, optional): Maximum number of retry attempts for rate limit errors. (default: :obj:`3`)
- retry_delay (float, optional): Initial delay in seconds between retries. Uses exponential backoff. (default: :obj:`1.0`)
- step_timeout (Optional[float], optional): Timeout in seconds for the entire step operation. If `None`, no timeout is applied. (default: :obj:`None`)
- stream_accumulate (bool, optional): When `True`, partial streaming updates return accumulated content (current behavior). When `False`, partial updates return only the incremental delta. (default: :obj:`True`)
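The `retry_attempts`/`retry_delay` semantics above (exponential backoff on rate-limit errors) can be sketched as follows; `RateLimitError` and the injectable `sleep` hook are illustrative assumptions, not CAMEL's API:

```python
# Sketch of retrying a call with exponential backoff, mirroring the
# documented retry_attempts/retry_delay behavior. RateLimitError and
# the sleep parameter are assumptions made for this illustration.
import time
from typing import Callable, TypeVar

T = TypeVar("T")


class RateLimitError(Exception):
    pass


def call_with_retry(
    fn: Callable[[], T],
    retry_attempts: int = 3,
    retry_delay: float = 1.0,
    sleep: Callable[[float], None] = time.sleep,
) -> T:
    delay = retry_delay
    for attempt in range(retry_attempts):
        try:
            return fn()
        except RateLimitError:
            if attempt == retry_attempts - 1:
                raise  # out of attempts: surface the error
            sleep(delay)
            delay *= 2  # exponential backoff
    raise RuntimeError("unreachable")
```

Injecting `sleep` keeps the backoff schedule testable without real waiting.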
__init__
reset
Resets the `ChatAgent` to its initial state.
_resolve_models
- model: Model specification in various formats including single model, list of models, or model type specifications.
_resolve_model_list
- model_list (list): List of model specifications in various formats.
system_message
tool_dict
output_language
output_language
memory
memory
- value (AgentMemory): The new agent memory to use.
_get_full_tool_schemas
_get_external_tool_names
add_tool
add_tools
add_external_tool
remove_tool
- tool_name (str): The name of the tool to remove.
remove_tools
remove_external_tool
- tool_name (str): The name of the tool to remove.
update_memory
If a single message exceeds the available context, it is split into smaller chunks before being written into memory; this matters for `ScoreBasedContextCreator`, where an over-sized message cannot fit into the available token budget at all. This slicing logic handles both regular text messages (in the `content` field) and long tool call results (in the `result` field of a `FunctionCallingMessage`).
Parameters:
- message (BaseMessage): The new message to add to the stored messages.
- role (OpenAIBackendRole): The backend role type.
- timestamp (Optional[float], optional): Custom timestamp for the memory record. If `None`, the current time will be used. (default: :obj:`None`)
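The slicing idea described above can be sketched as follows, with a whitespace word count standing in for the real tokenizer; this is an illustration of the technique, not CAMEL's implementation:

```python
# Sketch: split an over-sized message into chunks that each fit a
# token budget. count_tokens is a stand-in tokenizer (word count).
from typing import List


def count_tokens(text: str) -> int:
    return len(text.split())


def slice_oversized(text: str, token_budget: int) -> List[str]:
    words = text.split()
    if len(words) <= token_budget:
        return [text]  # already fits: store as-is
    # Otherwise emit budget-sized chunks that can each be scored
    # and stored independently.
    return [
        " ".join(words[i : i + token_budget])
        for i in range(0, len(words), token_budget)
    ]
```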
load_memory
- memory (AgentMemory): The memory to load into the agent.
load_memory_from_path
- path (str): The file path to a JSON memory file that uses JsonStorage.
save_memory
- path (str): Target file path to store JSON data.
summarize
- filename (Optional[str]): The base filename (without extension) to use for the markdown file. Defaults to a timestamped name when not provided.
- summary_prompt (Optional[str]): Custom prompt for the summarizer. When omitted, a default prompt highlighting key decisions, action items, and open questions is used.
- working_directory (Optional[str|Path]): Optional directory to save the markdown summary file. If provided, overrides the default directory used by ContextUtility.
clear_memory
_generate_system_message_for_output_language
init_messages
record_message
Records the externally provided message into the agent memory as if it were an answer of the `ChatAgent` from the backend. Currently, the choice of the critic is submitted with this method.
Parameters:
- message (BaseMessage): An external message to be recorded in the memory.
_try_format_message
_check_tools_strict_compatibility
_convert_response_format_to_prompt
- response_format (Type[BaseModel]): The Pydantic model class.
_handle_response_format_with_non_strict_tools
- input_message: The original input message.
- response_format: The requested response format.
_is_called_from_registered_toolkit
_apply_prompt_based_parsing
- response: The model response to parse.
- original_response_format: The original response format class.
_format_response_if_needed
- The response format is None (not provided)
- The response is empty
step
Parameters:
- input_message (Union[BaseMessage, str]): The input message for the agent. If provided as a BaseMessage, the `role` is adjusted to `user` to indicate an external message.
- response_format (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response. Used to generate a structured response if provided. (default: :obj:`None`)
_step_impl
chat_history
_create_token_usage_tracker
_update_token_usage_tracker
- tracker (Dict[str, int]): The token usage tracker to update.
- usage_dict (Dict[str, int]): The usage dictionary with new values.
_convert_to_chatagent_response
_record_final_output
_get_model_response
_sanitize_messages_for_logging
- messages (List[OpenAIMessage]): The OpenAI messages to sanitize.
- prev_num_openai_messages (int): The number of OpenAI messages logged in the previous iteration.
_step_get_info
- output_messages (List[BaseMessage]): The messages generated in this step.
- finish_reasons (List[str]): The reasons for finishing the generation for each message.
- usage_dict (Dict[str, int]): Dictionary containing token usage information.
- response_id (str): The ID of the response from the model.
- tool_calls (List[ToolCallingRecord]): Records of function calls made during this step.
- num_tokens (int): The number of tokens used in this step.
- external_tool_call_request (Optional[ToolCallRequest]): The request for external tool call.
_handle_batch_response
- response (ChatCompletion): Model response.
_step_terminate
- num_tokens (int): Number of tokens in the messages.
- tool_calls (List[ToolCallingRecord]): List of information objects of functions called in the current step.
- termination_reason (str): String describing the reason for termination.
_execute_tool
- tool_call_request (_ToolCallRequest): The tool call request.
_record_tool_calling
- func_name (str): The name of the tool function called.
- args (Dict[str, Any]): The arguments passed to the tool.
- result (Any): The result returned by the tool execution.
- tool_call_id (str): A unique identifier for the tool call.
- mask_output (bool, optional): Whether to return a sanitized placeholder instead of the raw tool output. (default: :obj:
False
)
_stream
- input_message (Union[BaseMessage, str]): The input message for the agent.
- response_format (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response.
Yields:
- ChatAgentResponse: Intermediate responses containing partial content, tool calls, and other information as they become available.
_get_token_count
_stream_response
_process_stream_chunks_with_accumulator
_accumulate_tool_calls
- tool_call_deltas (List[Any]): List of tool call deltas.
- accumulated_tool_calls (Dict[str, Any]): Dictionary of accumulated tool calls.
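Streaming APIs deliver tool calls as partial deltas that must be merged by index before execution. A sketch of that accumulation (field names mirror the OpenAI streaming format; the logic is an assumption for illustration, not CAMEL's code):

```python
# Sketch: merge streamed tool-call deltas into complete calls.
# Each delta carries an index, optionally an id and function name,
# and a fragment of the JSON arguments string to concatenate.
from typing import Any, Dict, List


def accumulate_tool_calls(
    tool_call_deltas: List[Dict[str, Any]],
    accumulated: Dict[int, Dict[str, str]],
) -> None:
    for delta in tool_call_deltas:
        entry = accumulated.setdefault(
            delta["index"], {"id": "", "name": "", "arguments": ""}
        )
        if delta.get("id"):
            entry["id"] = delta["id"]
        fn = delta.get("function", {})
        if fn.get("name"):
            entry["name"] = fn["name"]
        # Argument fragments arrive piecewise and are concatenated.
        entry["arguments"] += fn.get("arguments", "")
```

Once a stream finishes, each accumulated entry holds a complete id, name, and arguments string ready for execution.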
_execute_tools_sync_with_status_accumulator
_execute_tool_from_stream_data
_create_error_response
_record_assistant_tool_calls_message
_create_streaming_response_with_accumulator
get_usage_dict
- output_messages (list): List of output messages.
- prompt_tokens (int): Number of input prompt tokens.
add_model_scheduling_strategy
- name (str): The name of the strategy.
- strategy_fn (Callable): The scheduling strategy function.
clone
Creates a new instance of `ChatAgent` with the same configuration as the current instance.
Parameters:
- with_memory (bool): Whether to copy the memory (conversation history) to the new agent. If `True`, the new agent will have the same conversation history. If `False`, the new agent will have a fresh memory with only the system message. (default: :obj:`False`)
Returns a new `ChatAgent` with the same configuration.
_clone_tools
- List of cloned tools/functions
- List of RegisteredAgentToolkit instances that need registration
__repr__
Returns the string representation of the `ChatAgent`.
to_mcp
Parameters:
- name (str): Name of the MCP server. (default: :obj:`CAMEL-ChatAgent`)
- description (Optional[List[str]]): Description of the agent. If `None`, a generic description is used. (default: :obj:`A helpful assistant using the CAMEL AI framework.`)
- dependencies (Optional[List[str]]): Additional dependencies for the MCP server. (default: :obj:`None`)
- host (str): Host to bind to for HTTP transport. (default: :obj:`localhost`)
- port (int): Port to bind to for HTTP transport. (default: :obj:`8000`)