camel.memories.context_creators package#
Submodules#
camel.memories.context_creators.score_based module#
- class camel.memories.context_creators.score_based.ScoreBasedContextCreator(token_counter: BaseTokenCounter, token_limit: int)[source]#
Bases:
BaseContextCreator
A default implementation of a context creation strategy, inheriting from
BaseContextCreator
. This class generates a conversational context from a list of chat history records while ensuring that the total token count of the context does not exceed a specified limit. If the limit is exceeded, it prunes the lowest-scoring messages.
- Parameters:
token_counter (BaseTokenCounter) – An instance responsible for counting tokens in a message.
token_limit (int) – The maximum number of tokens allowed in the generated context.
- create_context(records: List[ContextRecord]) → Tuple[List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam], int][source]#
Creates conversational context from chat history while respecting token limits.
Constructs the context from the provided records, pruning the lowest-scoring messages if necessary so that the total token count does not exceed the specified limit.
- Parameters:
records (List[ContextRecord]) – A list of message records from which to generate the context.
- Returns:
- A tuple containing the constructed context in OpenAIMessage format and the total token count.
- Return type:
Tuple[List[OpenAIMessage], int]
- Raises:
RuntimeError – If it’s impossible to create a valid context without exceeding the token limit.
- property token_counter: BaseTokenCounter#
- property token_limit: int#
Module contents#
- class camel.memories.context_creators.ScoreBasedContextCreator(token_counter: BaseTokenCounter, token_limit: int)[source]#
Bases:
BaseContextCreator
A default implementation of a context creation strategy, inheriting from
BaseContextCreator
. This class generates a conversational context from a list of chat history records while ensuring that the total token count of the context does not exceed a specified limit. If the limit is exceeded, it prunes the lowest-scoring messages.
- Parameters:
token_counter (BaseTokenCounter) – An instance responsible for counting tokens in a message.
token_limit (int) – The maximum number of tokens allowed in the generated context.
- create_context(records: List[ContextRecord]) → Tuple[List[ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam], int][source]#
Creates conversational context from chat history while respecting token limits.
Constructs the context from the provided records, pruning the lowest-scoring messages if necessary so that the total token count does not exceed the specified limit.
- Parameters:
records (List[ContextRecord]) – A list of message records from which to generate the context.
- Returns:
- A tuple containing the constructed context in OpenAIMessage format and the total token count.
- Return type:
Tuple[List[OpenAIMessage], int]
- Raises:
RuntimeError – If it’s impossible to create a valid context without exceeding the token limit.
- property token_counter: BaseTokenCounter#
- property token_limit: int#