camel.memories.context_creators package#
Submodules#
camel.memories.context_creators.score_based module#
- class camel.memories.context_creators.score_based.ScoreBasedContextCreator(token_counter: BaseTokenCounter, token_limit: int)[source]#
Bases:
BaseContextCreator
A default implementation of a context creation strategy, which inherits from BaseContextCreator. This class generates a conversational context from a list of chat history records while ensuring that the total token count of the context does not exceed a specified limit. If the total token count exceeds the limit, it prunes messages based on their score.
- Parameters:
token_counter (BaseTokenCounter) – An instance responsible for counting tokens in a message.
token_limit (int) – The maximum number of tokens allowed in the generated context.
- create_context(records: List[ContextRecord]) → Tuple[List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam], int][source]#
Constructs conversation context from chat history while respecting token limits.
Key strategies:
1. The system message is always prioritized and preserved.
2. Truncation removes low-score messages first.
3. The final output maintains chronological order.
In history memory, the score of each message decays according to keep_rate: the newer the message, the higher its score.
- Parameters:
records (List[ContextRecord]) – List of context records with scores and timestamps.
- Returns:
Ordered list of OpenAI messages
Total token count of the final context
- Return type:
Tuple[List[OpenAIMessage], int]
- Raises:
RuntimeError – If the system message alone exceeds the token limit.
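The truncation strategy described above can be sketched as follows. This is a simplified, self-contained illustration, not camel's actual implementation: the `Record` class and word-based token counter are stand-ins for `ContextRecord` and `BaseTokenCounter`, and the greedy selection is one plausible reading of "remove low-score messages first".

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Record:
    """Stand-in for ContextRecord: a message with a score and timestamp."""
    text: str
    score: float
    timestamp: int
    is_system: bool = False


def count_tokens(text: str) -> int:
    # Stand-in token counter: one token per whitespace-separated word.
    return len(text.split())


def create_context(records: List[Record], token_limit: int) -> Tuple[List[str], int]:
    # The system message is always preserved, off-budget from pruning.
    system = [r for r in records if r.is_system]
    others = [r for r in records if not r.is_system]

    used = sum(count_tokens(r.text) for r in system)
    if used > token_limit:
        raise RuntimeError("System message alone exceeds token limit")

    # Consider high-score messages first; skip any that would overflow.
    selected = []
    for r in sorted(others, key=lambda r: r.score, reverse=True):
        n = count_tokens(r.text)
        if used + n <= token_limit:
            selected.append(r)
            used += n

    # Restore chronological order for the final context.
    ordered = system + sorted(selected, key=lambda r: r.timestamp)
    return [r.text for r in ordered], used
```

Low-score messages are the first casualties of the budget, yet whatever survives is re-sorted by timestamp so the model sees a coherent conversation.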
- property token_counter: BaseTokenCounter#
- property token_limit: int#
Module contents#
- class camel.memories.context_creators.ScoreBasedContextCreator(token_counter: BaseTokenCounter, token_limit: int)[source]#
Bases:
BaseContextCreator
A default implementation of a context creation strategy, which inherits from BaseContextCreator. This class generates a conversational context from a list of chat history records while ensuring that the total token count of the context does not exceed a specified limit. If the total token count exceeds the limit, it prunes messages based on their score.
- Parameters:
token_counter (BaseTokenCounter) – An instance responsible for counting tokens in a message.
token_limit (int) – The maximum number of tokens allowed in the generated context.
- create_context(records: List[ContextRecord]) → Tuple[List[ChatCompletionDeveloperMessageParam | ChatCompletionSystemMessageParam | ChatCompletionUserMessageParam | ChatCompletionAssistantMessageParam | ChatCompletionToolMessageParam | ChatCompletionFunctionMessageParam], int][source]#
Constructs conversation context from chat history while respecting token limits.
Key strategies:
1. The system message is always prioritized and preserved.
2. Truncation removes low-score messages first.
3. The final output maintains chronological order.
In history memory, the score of each message decays according to keep_rate: the newer the message, the higher its score.
- Parameters:
records (List[ContextRecord]) – List of context records with scores and timestamps.
- Returns:
Ordered list of OpenAI messages
Total token count of the final context
- Return type:
Tuple[List[OpenAIMessage], int]
- Raises:
RuntimeError – If the system message alone exceeds the token limit.
- property token_counter: BaseTokenCounter#
- property token_limit: int#
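The keep_rate decay mentioned in the create_context documentation can be illustrated with a small sketch. The exact formula camel uses is not shown in this reference, so the exponential decay below (newest message scores 1.0; each step back multiplies by keep_rate) is an assumption chosen to match the stated property that newer messages score higher.

```python
def assign_scores(n_messages: int, keep_rate: float = 0.9) -> list[float]:
    # Assumed decay rule: the newest message gets score 1.0, and each
    # older message's score is multiplied by keep_rate once more, so
    # older messages rank lower and are pruned first.
    return [keep_rate ** (n_messages - 1 - i) for i in range(n_messages)]
```

With keep_rate = 0.5 and four messages, the scores are [0.125, 0.25, 0.5, 1.0]: under token pressure, the two oldest messages would be the first removed.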