_cleanup_temp_files
StreamContentAccumulator
init
set_base_content
add_streaming_content
add_reasoning_content
add_tool_status
get_full_content
get_full_reasoning_content
get_content_with_new_status
reset_streaming_content
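The method names above suggest an accumulator that keeps a fixed base, appended streaming deltas, and transient tool-status lines, and can rebuild the full view on demand. A minimal standalone sketch of that idea (names and semantics are assumptions for illustration, not CAMEL's implementation):

```python
class ContentAccumulator:
    """Toy accumulator: base content + streamed deltas + status lines."""

    def __init__(self):
        self.base = ""
        self.pieces = []    # streamed text deltas, in arrival order
        self.statuses = []  # tool-status lines appended to the view

    def set_base_content(self, text):
        self.base = text

    def add_streaming_content(self, delta):
        self.pieces.append(delta)

    def add_tool_status(self, status):
        self.statuses.append(status)

    def get_full_content(self):
        # Rebuild the complete view from its parts.
        return self.base + "".join(self.pieces) + "".join(self.statuses)

    def reset_streaming_content(self):
        # Keep the base, discard the streamed deltas.
        self.pieces.clear()


acc = ContentAccumulator()
acc.set_base_content("Answer: ")
acc.add_streaming_content("Hel")
acc.add_streaming_content("lo")
print(acc.get_full_content())  # Answer: Hello
```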
StreamingChatAgentResponse
init
_ensure_latest_response
msgs
terminated
info
msg
iter
getattr
AsyncStreamingChatAgentResponse
init
await
aiter
ChatAgent
Parameters:
- system_message (Union[BaseMessage, str], optional): The system message for the chat agent. (default: :obj:`None`)
- model (Union[BaseModelBackend, Tuple[str, str], str, ModelType, Tuple[ModelPlatformType, ModelType], List[BaseModelBackend], List[str], List[ModelType], List[Tuple[str, str]], List[Tuple[ModelPlatformType, ModelType]]], optional): The model backend(s) to use. Can be a single instance, a specification (string, enum, tuple), or a list of instances or specifications to be managed by `ModelManager`. If a list of specifications (not `BaseModelBackend` instances) is provided, they will be instantiated using `ModelFactory`. (default: :obj:`ModelPlatformType.DEFAULT` with :obj:`ModelType.DEFAULT`)
- memory (AgentMemory, optional): The agent memory for managing chat messages. If `None`, a :obj:`ChatHistoryMemory` will be used. (default: :obj:`None`)
- message_window_size (int, optional): The maximum number of previous messages to include in the context window. If `None`, no windowing is performed. (default: :obj:`None`)
- summarize_threshold (int, optional): The percentage of the context window that triggers summarization. If `None`, summarization is triggered when the context window is full. (default: :obj:`None`)
- token_limit (int, optional): The maximum number of tokens allowed for the context window. If `None`, uses the model's default token limit. This can be used to restrict the context size below the model's maximum capacity. (default: :obj:`None`)
- output_language (str, optional): The language to be output by the agent. (default: :obj:`None`)
- tools (Optional[List[Union[FunctionTool, Callable]]], optional): List of available :obj:`FunctionTool` or :obj:`Callable`. (default: :obj:`None`)
- toolkits_to_register_agent (Optional[List[RegisteredAgentToolkit]], optional): List of toolkit instances that inherit from :obj:`RegisteredAgentToolkit`. The agent will register itself with these toolkits, allowing them to access the agent instance. Note: this does NOT add the toolkit's tools to the agent. To use tools from these toolkits, pass them explicitly via the `tools` parameter. (default: :obj:`None`)
- external_tools (Optional[List[Union[FunctionTool, Callable, Dict[str, Any]]]], optional): List of external tools (:obj:`FunctionTool`, :obj:`Callable`, or :obj:`Dict[str, Any]`) bound to the chat agent. When these tools are called, the agent directly returns the request instead of processing it. (default: :obj:`None`)
- response_terminators (List[ResponseTerminator], optional): List of :obj:`ResponseTerminator` instances bound to the chat agent. (default: :obj:`None`)
- scheduling_strategy (str): Name of the function that defines how to select the next model in `ModelManager`. (default: :obj:`round_robin`)
- max_iteration (Optional[int], optional): Maximum number of model calling iterations allowed per step. If `None` (default), there is no explicit limit. If `1`, it performs a single model call. If `N > 1`, it allows up to N model calls. (default: :obj:`None`)
- agent_id (str, optional): The ID of the agent. If not provided, a random UUID will be generated. (default: :obj:`None`)
- stop_event (Optional[threading.Event], optional): Event to signal termination of the agent's operation. When set, the agent will terminate its execution. (default: :obj:`None`)
- tool_execution_timeout (Optional[float], optional): Timeout for individual tool execution. If `None`, wait indefinitely. (default: :obj:`None`)
- mask_tool_output (Optional[bool]): Whether to return a sanitized placeholder instead of the raw tool output. (default: :obj:`False`)
- pause_event (Optional[Union[threading.Event, asyncio.Event]]): Event to signal pause of the agent's operation. When cleared, the agent will pause its execution. Use `threading.Event` for sync operations or `asyncio.Event` for async operations. (default: :obj:`None`)
- prune_tool_calls_from_memory (bool): Whether to clean tool call messages from memory after response generation to save token usage. When enabled, removes FUNCTION/TOOL role messages and ASSISTANT messages with `tool_calls` after each step. (default: :obj:`False`)
- enable_snapshot_clean (bool, optional): Whether to clean snapshot markers and references from historical tool outputs in memory. This removes verbose DOM markers (like `[ref=…]`) from older tool results while keeping the latest output intact for immediate use. (default: :obj:`False`)
- retry_attempts (int, optional): Maximum number of retry attempts for rate limit errors. (default: :obj:`3`)
- retry_delay (float, optional): Initial delay in seconds between retries. Uses exponential backoff. (default: :obj:`1.0`)
- step_timeout (Optional[float], optional): Timeout in seconds for the entire step operation. If `None`, no timeout is applied. (default: :obj:`None`)
- stream_accumulate (bool, optional): When `True`, partial streaming updates return accumulated content (current behavior). When `False`, partial updates return only the incremental delta. (default: :obj:`True`)
- summary_window_ratio (float, optional): Maximum fraction of the total context window that can be occupied by summary information. Used to limit how much of the model's context is reserved for summarization results. (default: :obj:`0.6`)
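The `summarize_threshold` parameter above is described as a percentage of the context window that triggers summarization, falling back to "summarize when the window is full" when unset. A standalone sketch of such a trigger check (assumed semantics for illustration, not CAMEL's code):

```python
from typing import Optional


def should_summarize(
    used_tokens: int, token_limit: int, threshold_pct: Optional[int]
) -> bool:
    """Trigger summarization once usage crosses the threshold percentage.

    threshold_pct=None falls back to 'summarize when the window is full',
    mirroring the documented default behaviour.
    """
    if threshold_pct is None:
        return used_tokens >= token_limit
    return used_tokens >= token_limit * threshold_pct / 100


print(should_summarize(700, 1000, 60))    # True: 70% used, 60% threshold
print(should_summarize(500, 1000, 60))    # False: below threshold
print(should_summarize(1000, 1000, None))  # True: window full
```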
init
reset
Resets the ChatAgent to its initial state.
_resolve_models
- model: Model specification in various formats including single model, list of models, or model type specifications.
_resolve_model_list
- model_list (list): List of model specifications in various formats.
system_message
tool_dict
token_limit
output_language
output_language
memory
memory
- value (AgentMemory): The new agent memory to use.
set_context_utility
- context_utility (ContextUtility, optional): The context utility to use. If None, the agent will create its own when needed.
_get_full_tool_schemas
_is_token_limit_error
_is_tool_related_record
_find_indices_to_remove_for_last_tool_pair
_serialize_tool_args
_build_tool_signature
_describe_tool_call
_update_last_tool_call_state
_format_tool_limit_notice
_append_user_messages_section
_reset_summary_state
_calculate_next_summary_threshold
_update_memory_with_summary
_get_external_tool_names
add_tool
add_tools
_serialize_tool_result
_truncate_tool_result
- func_name (str): The name of the tool function called.
- result (Any): The result returned by the tool execution.
- The (possibly truncated) result
- A boolean indicating whether truncation occurred
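The return contract above (the possibly-truncated result plus a truncation flag) can be sketched as a standalone helper. This assumes a simple length-based policy for illustration; the actual helper's rules are not shown here:

```python
def truncate_tool_result(result, max_chars=2048):
    """Truncate long tool results; return (text, was_truncated)."""
    # Non-string results are rendered first so length checks apply uniformly.
    text = result if isinstance(result, str) else repr(result)
    if len(text) <= max_chars:
        return text, False
    return text[:max_chars] + "... [truncated]", True


short, cut = truncate_tool_result("ok")
print(short, cut)  # ok False
long_text, cut = truncate_tool_result("x" * 5000, max_chars=10)
print(cut)         # True
```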
_clean_snapshot_line
- [prefix] “quoted text” [attributes] [ref=…]: description
- Quoted text content (including brackets inside quotes)
- Description text after the colon
- Line prefixes (e.g., “- button”, “- tooltip”, “generic:”)
- Attribute markers (e.g., [disabled], [ref=e47])
- Lines with only element types
- All indentation
- line: The original line content.
_clean_snapshot_content
- content: The original snapshot content.
_clean_text_snapshot
- Removes all indentation
- Deletes empty lines
- Deduplicates all lines
- Cleans snapshot-specific markers
- content: The original snapshot text.
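The bullet points above describe stripping indentation, deleting empty lines, deduplicating, and cleaning snapshot markers. A standalone sketch of that kind of cleanup (a plausible reading of the description, not the actual `_clean_text_snapshot` code):

```python
import re


def clean_text_snapshot(content: str) -> str:
    """Strip indentation, drop empty/duplicate lines, remove [ref=...] markers."""
    seen = set()
    out = []
    for raw in content.splitlines():
        line = raw.strip()                             # remove all indentation
        line = re.sub(r"\s*\[ref=[^\]]*\]", "", line)  # drop [ref=...] markers
        if not line:                                   # delete empty lines
            continue
        if line in seen:                               # deduplicate lines
            continue
        seen.add(line)
        out.append(line)
    return "\n".join(out)


snapshot = """
- button "Save" [ref=e47]
    - button "Save" [ref=e47]
- tooltip "Saved"
"""
print(clean_text_snapshot(snapshot))
```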
_register_tool_output_for_cache
_process_tool_output_cache
_clean_snapshot_in_memory
add_external_tool
remove_tool
- tool_name (str): The name of the tool to remove.
remove_tools
remove_external_tool
- tool_name (str): The name of the tool to remove.
update_memory
- message (BaseMessage): The new message to add to the stored messages.
- role (OpenAIBackendRole): The backend role type.
- timestamp (Optional[float], optional): Custom timestamp for the memory record. If `None`, the current time will be used. (default: :obj:`None`)
- return_records (bool, optional): When `True`, the created memory records are returned. (default: :obj:`False`)
load_memory
- memory (AgentMemory): The memory to load into the agent.
load_memory_from_path
- path (str): The file path to a JSON memory file that uses JsonStorage.
save_memory
- path (str): Target file path to store JSON data.
summarize
See `asummarize` for async/await support and better performance in parallel summarization workflows.
Parameters:
- filename (Optional[str]): The base filename (without extension) to use for the markdown file. Defaults to a timestamped name when not provided.
- summary_prompt (Optional[str]): Custom prompt for the summarizer. When omitted, a default prompt highlighting key decisions, action items, and open questions is used.
- response_format (Optional[Type[BaseModel]]): A Pydantic model defining the expected structure of the response. If provided, the summary will be generated as structured output and included in the result.
- include_summaries (bool): Whether to include previously generated summaries in the content to be summarized. If `False` (default), only non-summary messages will be summarized. If `True`, all messages including previous summaries will be summarized (full compression). (default: :obj:`False`)
- working_directory (Optional[str|Path]): Optional directory to save the markdown summary file. If provided, overrides the default directory used by ContextUtility.
- add_user_messages (bool): Whether to add user messages to the summary. (default: :obj:`True`)
asummarize: Async version for non-blocking LLM calls.
_build_conversation_text_from_messages
- messages (List[Any]): List of messages to convert.
- include_summaries (bool): Whether to include messages starting with [CONTEXT_SUMMARY]. (default: :obj:`False`)
- Formatted conversation text
- List of user messages extracted from the conversation
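A standalone sketch of building conversation text while filtering [CONTEXT_SUMMARY] messages and collecting user messages, matching the contract described above (the `(role, content)` message shape is an assumption for illustration, not the CAMEL helper itself):

```python
def build_conversation_text(messages, include_summaries=False):
    """messages: list of (role, content) tuples.

    Returns the formatted conversation text and the extracted user messages.
    """
    lines, user_messages = [], []
    for role, content in messages:
        if not include_summaries and content.startswith("[CONTEXT_SUMMARY]"):
            continue  # skip prior summaries unless full compression is wanted
        lines.append(f"{role}: {content}")
        if role == "user":
            user_messages.append(content)
    return "\n".join(lines), user_messages


text, users = build_conversation_text(
    [
        ("user", "Hi"),
        ("assistant", "[CONTEXT_SUMMARY] earlier conversation..."),
        ("assistant", "Hello!"),
    ]
)
print(text)   # user: Hi\nassistant: Hello!
print(users)  # ['Hi']
```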
clear_memory
_generate_system_message_for_output_language
init_messages
update_system_message
- system_message (Union[BaseMessage, str]): The new system message. Can be either a BaseMessage object or a string. If a string is provided, it will be converted into a BaseMessage object.
- reset_memory (bool): Whether to reinitialize conversation messages after updating the system message. Defaults to True.
append_to_system_message
- content (str): The additional system message.
- reset_memory (bool): Whether to reinitialize conversation messages after appending additional context. Defaults to True.
reset_to_original_system_message
record_message
Records the message in the agent memory as if it were an answer of the ChatAgent from the backend. Currently, the choice of the critic is submitted with this method.
Parameters:
- message (BaseMessage): An external message to be recorded in the memory.
_try_format_message
_check_tools_strict_compatibility
_convert_response_format_to_prompt
- response_format (Type[BaseModel]): The Pydantic model class.
_handle_response_format_with_non_strict_tools
- input_message: The original input message.
- response_format: The requested response format.
_is_called_from_registered_toolkit
_apply_prompt_based_parsing
- response: The model response to parse.
- original_response_format: The original response format class.
_format_response_if_needed
- The response format is None (not provided)
- The response is empty
step
- input_message (Union[BaseMessage, str]): The input message for the agent. If provided as a BaseMessage, the `role` is adjusted to `user` to indicate an external message.
- response_format (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response. Used to generate a structured response if provided. (default: :obj:`None`)
_step_impl
chat_history
_create_token_usage_tracker
_update_token_usage_tracker
- tracker (Dict[str, int]): The token usage tracker to update.
- usage_dict (Dict[str, int]): The usage dictionary with new values.
_convert_to_chatagent_response
_record_final_output
_get_model_response
_sanitize_messages_for_logging
- messages (List[OpenAIMessage]): The OpenAI messages to sanitize.
- prev_num_openai_messages (int): The number of openai messages logged in the previous iteration.
_step_get_info
- output_messages (List[BaseMessage]): The messages generated in this step.
- finish_reasons (List[str]): The reasons for finishing the generation for each message.
- usage_dict (Dict[str, int]): Dictionary containing token usage information.
- response_id (str): The ID of the response from the model.
- tool_calls (List[ToolCallingRecord]): Records of function calls made during this step.
- num_tokens (int): The number of tokens used in this step.
- external_tool_call_request (Optional[ToolCallRequest]): The request for external tool call.
_handle_batch_response
- response (ChatCompletion): Model response.
_step_terminate
- num_tokens (int): Number of tokens in the messages.
- tool_calls (List[ToolCallingRecord]): List of information objects of functions called in the current step.
- termination_reason (str): String describing the reason for termination.
_execute_tool
- tool_call_request (_ToolCallRequest): The tool call request.
_record_tool_calling
- func_name (str): The name of the tool function called.
- args (Dict[str, Any]): The arguments passed to the tool.
- result (Any): The result returned by the tool execution.
- tool_call_id (str): A unique identifier for the tool call.
- mask_output (bool, optional): Whether to return a sanitized placeholder instead of the raw tool output. (default: :obj:`False`)
- extra_content (Optional[Dict[str, Any]], optional): Additional content associated with the tool call. (default: :obj:`None`)
_stream
- input_message (Union[BaseMessage, str]): The input message for the agent.
- response_format (Optional[Type[BaseModel]], optional): A Pydantic model defining the expected structure of the response.
- Yields:
- ChatAgentResponse: Intermediate responses containing partial content, tool calls, and other information as they become available.
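The `stream_accumulate` parameter documented earlier switches these partial updates between accumulated content and raw incremental deltas. A standalone sketch of the difference (assumed semantics for illustration):

```python
from typing import Iterable, Iterator


def partial_updates(deltas: Iterable[str], accumulate: bool = True) -> Iterator[str]:
    """Yield accumulated snapshots when accumulate=True, raw deltas otherwise."""
    buffer = ""
    for delta in deltas:
        buffer += delta
        yield buffer if accumulate else delta


print(list(partial_updates(["Hel", "lo", "!"], accumulate=True)))
# ['Hel', 'Hello', 'Hello!']
print(list(partial_updates(["Hel", "lo", "!"], accumulate=False)))
# ['Hel', 'lo', '!']
```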
_get_token_count
_stream_response
_process_stream_chunks_with_accumulator
_accumulate_tool_calls
- tool_call_deltas (List[Any]): List of tool call deltas.
- accumulated_tool_calls (Dict[str, Any]): Dictionary of accumulated tool calls.
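Streaming APIs deliver tool calls as fragments that must be merged back into complete calls. A minimal standalone sketch of such accumulation (the delta shape is an assumption modeled on OpenAI-style chunks, not CAMEL's internals):

```python
def accumulate_tool_calls(deltas, accumulated):
    """Merge streamed tool-call fragments into complete calls.

    Each delta is a dict like {"index": 0, "id": ..., "name": ...,
    "arguments": "partial json"}; argument strings arrive in pieces
    and are concatenated in order.
    """
    for delta in deltas:
        entry = accumulated.setdefault(
            delta["index"], {"id": None, "name": None, "arguments": ""}
        )
        if delta.get("id"):
            entry["id"] = delta["id"]
        if delta.get("name"):
            entry["name"] = delta["name"]
        entry["arguments"] += delta.get("arguments", "")
    return accumulated


calls = {}
accumulate_tool_calls(
    [{"index": 0, "id": "c1", "name": "search", "arguments": '{"q":'}], calls
)
accumulate_tool_calls([{"index": 0, "arguments": ' "camel"}'}], calls)
print(calls[0]["arguments"])  # {"q": "camel"}
```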
_execute_tools_sync_with_status_accumulator
_execute_tool_from_stream_data
_create_error_response
_record_assistant_tool_calls_message
_create_streaming_response_with_accumulator
get_usage_dict
- output_messages (list): List of output messages.
- prompt_tokens (int): Number of input prompt tokens.
add_model_scheduling_strategy
- name (str): The name of the strategy.
- strategy_fn (Callable): The scheduling strategy function.
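A scheduling strategy here is a function that picks the next model, such as the `round_robin` default mentioned in the constructor parameters. A standalone sketch of what such a strategy function might look like (illustrative only, not the `ModelManager` API):

```python
import itertools


def make_round_robin(models):
    """Return a strategy function that cycles through models in order."""
    cycle = itertools.cycle(models)
    return lambda: next(cycle)


next_model = make_round_robin(["model-a", "model-b"])
print([next_model() for _ in range(3)])  # ['model-a', 'model-b', 'model-a']
```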
clone
Creates a new instance of ChatAgent with the same configuration as the current instance.
Parameters:
- with_memory (bool): Whether to copy the memory (conversation history) to the new agent. If True, the new agent will have the same conversation history. If False, the new agent will have a fresh memory with only the system message. (default: :obj:
False)
A new ChatAgent instance with the same configuration.
_clone_tools
- List of cloned tools/functions
- List of RegisteredAgentToolkit instances that need registration
repr
Returns a string representation of the ChatAgent.
to_mcp
- name (str): Name of the MCP server. (default: :obj:`CAMEL-ChatAgent`)
- description (Optional[List[str]]): Description of the agent. If `None`, a generic description is used. (default: :obj:`A helpful assistant using the CAMEL AI framework.`)
- dependencies (Optional[List[str]]): Additional dependencies for the MCP server. (default: :obj:`None`)
- host (str): Host to bind to for HTTP transport. (default: :obj:`localhost`)
- port (int): Port to bind to for HTTP transport. (default: :obj:`8000`)