ScoreBasedContextCreator
class ScoreBasedContextCreator(BaseContextCreator):
A context creation strategy that orders records chronologically.
This class supports token count estimation to reduce expensive repeated
token counting. When a cached token count is available, it estimates
new message tokens using character-based approximation instead of
calling the token counter for every message.
Parameters:
- token_counter (BaseTokenCounter): Token counter instance used to compute the combined token count of the returned messages.
- token_limit (int): Retained for API compatibility. No longer used to filter records.
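A minimal usage sketch of the counter interface this class depends on. The `WordCountCounter` below is a hypothetical toy stand-in for `BaseTokenCounter`, and `count_tokens_from_messages` is the assumed counting method; in practice a real counter (e.g. one backed by a tokenizer) should be passed in.

```python
class WordCountCounter:
    """Toy stand-in for BaseTokenCounter: one token per whitespace word.

    Hypothetical example only; a real BaseTokenCounter implementation
    should be used when constructing ScoreBasedContextCreator.
    """

    def count_tokens_from_messages(self, messages):
        # Sum word counts across all message contents.
        return sum(len(str(m.get("content", "")).split()) for m in messages)


counter = WordCountCounter()
# token_limit is retained for API compatibility but no longer filters records:
# creator = ScoreBasedContextCreator(token_counter=counter, token_limit=4096)
print(counter.count_tokens_from_messages(
    [{"role": "user", "content": "hello there world"}]
))  # 3
```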
__init__
def __init__(self, token_counter: BaseTokenCounter, token_limit: int):
Parameters:
- token_counter (BaseTokenCounter)
- token_limit (int)
set_cached_token_count
def set_cached_token_count(self, token_count: int, message_count: int):
Set the cached token count from LLM response usage.
Parameters:
- token_count (int): The total token count (prompt + completion) from LLM response usage.
- message_count (int): The number of messages including the assistant response that will be added to memory.
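A sketch of how the cache might be primed after an LLM call. The `usage.total_tokens` attribute follows the OpenAI chat-completions usage schema; `_FakeCreator` is a hypothetical stand-in for a `ScoreBasedContextCreator` instance used only to make the example runnable.

```python
from types import SimpleNamespace


class _FakeCreator:
    """Hypothetical stand-in recording what set_cached_token_count receives."""

    def set_cached_token_count(self, token_count, message_count):
        self.cached = (token_count, message_count)


def cache_from_response(creator, response, history_length: int) -> None:
    """Prime the creator's token-count cache after a chat-completion call.

    history_length counts the messages already sent; +1 accounts for the
    assistant reply that will also be appended to memory.
    """
    usage = response.usage  # prompt + completion tokens, per OpenAI schema
    creator.set_cached_token_count(
        token_count=usage.total_tokens,
        message_count=history_length + 1,
    )


creator = _FakeCreator()
response = SimpleNamespace(usage=SimpleNamespace(total_tokens=150))
cache_from_response(creator, response, history_length=4)
print(creator.cached)  # (150, 5)
```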
clear_cache
def clear_cache(self):
Clear the cached token count, so subsequent calls fall back to full token counting.
_estimate_message_tokens
def _estimate_message_tokens(self, message: OpenAIMessage):
Estimate token count for a single message.
Uses ~2 chars/token as a conservative approximation to handle both
ASCII (~4 chars/token) and CJK text (~1-2 chars/token).
Parameters:
- message: The OpenAI message to estimate.
Returns:
Estimated token count (intentionally conservative).
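The heuristic described above can be sketched as a standalone helper. This is a minimal illustration of the ~2 chars/token approximation, not the library's internal implementation; the handling of list-valued (multimodal) content is an assumption.

```python
def estimate_message_tokens(message: dict) -> int:
    """Estimate tokens for one OpenAI-style message using ~2 chars/token.

    Deliberately conservative: ASCII text runs ~4 chars/token and CJK
    text ~1-2 chars/token, so dividing by 2 over-estimates for ASCII
    while staying safe for CJK.
    """
    content = message.get("content") or ""
    if isinstance(content, list):  # assumed multimodal content parts
        content = "".join(
            part.get("text", "") for part in content if isinstance(part, dict)
        )
    return max(1, len(content) // 2)


print(estimate_message_tokens({"role": "user", "content": "Hello, world!"}))  # 6
```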
create_context
def create_context(self, records: List[ContextRecord]):
Returns messages sorted by timestamp and their total token count.
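The chronological ordering can be sketched as below. The `Record` dataclass is a hypothetical stand-in for `ContextRecord` (a `timestamp` plus an embedded message is assumed), and the token-counting callback replaces the real `BaseTokenCounter`; check both against the actual CAMEL source.

```python
from dataclasses import dataclass


@dataclass
class Record:  # hypothetical stand-in for ContextRecord
    timestamp: float
    message: dict


def create_context(records, count_tokens):
    """Return messages sorted by timestamp and their combined token count."""
    ordered = sorted(records, key=lambda r: r.timestamp)
    messages = [r.message for r in ordered]
    return messages, count_tokens(messages)


records = [
    Record(2.0, {"role": "assistant", "content": "hi"}),
    Record(1.0, {"role": "user", "content": "hello"}),
]
messages, total = create_context(
    records, lambda msgs: sum(len(m["content"]) // 2 for m in msgs)
)
print([m["content"] for m in messages], total)  # ['hello', 'hi'] 3
```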