generate_tool_prompt

def generate_tool_prompt(tool_schema_list: List[Dict[str, Any]]):

Generates a tool prompt based on the provided list of tool schemas.

Parameters:

  • tool_schema_list (List[Dict[str, Any]]): The list of tool schemas to include in the prompt.

Returns:

str: A string representing the tool prompt.
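
The sketch below illustrates the kind of input this expects: a list of OpenAI-style tool schemas rendered into a single prompt string. The schema layout and the prompt wording here are assumptions for the example, not the exact output of generate_tool_prompt.

import json
from typing import Any, Dict, List

def example_tool_prompt(tool_schema_list: List[Dict[str, Any]]) -> str:
    # Hypothetical stand-in: list each tool schema as a JSON block so the
    # model can see the available tools and their parameters.
    lines = ["You have access to the following tools:"]
    for schema in tool_schema_list:
        lines.append(json.dumps(schema, indent=2))
    return "\n\n".join(lines)

# Example tool schema in the OpenAI function-calling format (assumed).
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

print(example_tool_prompt([weather_tool]))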

extract_tool_call

def extract_tool_call(content: str):

Extract the tool call from the model response, if present.

Parameters:

  • content (str): The content of the model's response.

Returns:

Optional[Dict[str, Any]]: The parsed tool call if present, otherwise None.
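
As a rough illustration of what such extraction can look like, the sketch below pulls a JSON object out of the response text. The assumed format (a {"name": ..., "arguments": ...} object embedded in the content) is an example; the actual parsing logic may differ.

import json
import re
from typing import Any, Dict, Optional

def example_extract_tool_call(content: str) -> Optional[Dict[str, Any]]:
    # Assumed format: the model emits the tool call as an inline JSON object,
    # e.g. {"name": "get_weather", "arguments": {"city": "Paris"}}.
    match = re.search(r"\{.*\}", content, re.DOTALL)
    if match is None:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None

print(example_extract_tool_call('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
print(example_extract_tool_call("No tool call here."))  # None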

safe_model_dump

def safe_model_dump(obj):

Safely dump a Pydantic model to a dictionary.

This function attempts to use the model_dump method if it is available; otherwise, it falls back to the dict method.
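
The fallback amounts to roughly the following sketch (not the actual implementation): model_dump is the Pydantic v2 serializer, while dict is its deprecated v1 counterpart.

from typing import Any, Dict

def example_safe_model_dump(obj: Any) -> Dict[str, Any]:
    # Prefer model_dump, the Pydantic v2 serializer, when it is present.
    if hasattr(obj, "model_dump"):
        return obj.model_dump()
    # Otherwise fall back to dict, the Pydantic v1 serializer.
    return obj.dict()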

convert_to_function_tool

def convert_to_function_tool(tool: Union[FunctionTool, Callable]):

Convert a tool given as a plain Callable into a FunctionTool.
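
A minimal sketch of the normalization this implies, using a simplified stand-in for FunctionTool and assuming that an object which is already a FunctionTool is returned unchanged:

from typing import Callable, Union

class ExampleFunctionTool:
    # Simplified stand-in for the real FunctionTool wrapper.
    def __init__(self, func: Callable):
        self.func = func

def example_convert_to_function_tool(
    tool: Union[ExampleFunctionTool, Callable],
) -> ExampleFunctionTool:
    # Pass an existing wrapper through; wrap a plain Callable.
    if isinstance(tool, ExampleFunctionTool):
        return tool
    return ExampleFunctionTool(tool)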

convert_to_schema

def convert_to_schema(tool: Union[FunctionTool, Callable, Dict[str, Any]]):

Convert a tool given as a Callable, FunctionTool, or dictionary into a schema dictionary.
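
Conceptually this brings the accepted forms down to a plain schema dictionary. The sketch below assumes the OpenAI function-calling layout and derives a minimal schema from a Callable's name and docstring (the FunctionTool case is omitted for brevity); the real conversion is more thorough.

from typing import Any, Callable, Dict, Union

def example_convert_to_schema(
    tool: Union[Callable, Dict[str, Any]],
) -> Dict[str, Any]:
    # A dict is assumed to already be a schema and is returned as-is.
    if isinstance(tool, dict):
        return tool
    # For a Callable, build a minimal OpenAI-style schema (assumed layout).
    return {
        "type": "function",
        "function": {
            "name": tool.__name__,
            "description": tool.__doc__ or "",
            "parameters": {"type": "object", "properties": {}},
        },
    }

def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b

print(example_convert_to_schema(add)["function"]["name"])  # "add"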

get_info_dict

def get_info_dict(
    session_id: Optional[str],
    usage: Optional[Dict[str, int]],
    termination_reasons: List[str],
    num_tokens: int,
    tool_calls: List[ToolCallingRecord],
    external_tool_call_requests: Optional[List[ToolCallRequest]] = None
):

Returns a dictionary containing information about the chat session.

Parameters:

  • session_id (str, optional): The ID of the chat session.
  • usage (Dict[str, int], optional): Information about the usage of the LLM.
  • termination_reasons (List[str]): The reasons for the termination of the chat session.
  • num_tokens (int): The number of tokens used in the chat session.
  • tool_calls (List[ToolCallingRecord]): The list of function-calling records, containing information about the tools that were called.
  • external_tool_call_requests (List[ToolCallRequest], optional): The requests for external tool calls.

Returns:

Dict[str, Any]: The chat session information.
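
The result plausibly mirrors the parameter names, along the lines of the sketch below; the exact key names are an assumption based on the signature.

from typing import Any, Dict, List, Optional

def example_get_info_dict(
    session_id: Optional[str],
    usage: Optional[Dict[str, int]],
    termination_reasons: List[str],
    num_tokens: int,
    tool_calls: List[Any],
    external_tool_call_requests: Optional[List[Any]] = None,
) -> Dict[str, Any]:
    # Bundle per-session bookkeeping into a single dictionary
    # (key names assumed to follow the parameter names).
    return {
        "session_id": session_id,
        "usage": usage,
        "termination_reasons": termination_reasons,
        "num_tokens": num_tokens,
        "tool_calls": tool_calls,
        "external_tool_call_requests": external_tool_call_requests,
    }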

handle_logprobs

def handle_logprobs(choice: Choice):