get_model_encoding
- value_for_tiktoken: Model value for tiktoken.
BaseTokenCounter
count_tokens_from_messages
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
encode
- text (str): The text to encode.
decode
- token_ids (List[int]): List of token IDs to decode.
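The interface outlined above can be sketched as an abstract base class. The signatures are inferred from the parameter lists in this reference; `OpenAIMessage` is approximated here as a plain dict, and the whitespace tokenizer is a toy stand-in for a real encoding:

```python
from abc import ABC, abstractmethod
from typing import Dict, List

# OpenAIMessage is approximated as a plain dict for illustration.
OpenAIMessage = Dict[str, str]


class BaseTokenCounter(ABC):
    """Abstract interface implied by the methods listed above."""

    @abstractmethod
    def count_tokens_from_messages(self, messages: List[OpenAIMessage]) -> int:
        """Count tokens in a chat history in OpenAI API format."""

    @abstractmethod
    def encode(self, text: str) -> List[int]:
        """Encode text into token IDs."""

    @abstractmethod
    def decode(self, token_ids: List[int]) -> str:
        """Decode token IDs back into text."""


class WhitespaceTokenCounter(BaseTokenCounter):
    """Toy subclass: treats whitespace-separated words as tokens."""

    def __init__(self) -> None:
        self._vocab: Dict[int, str] = {}

    def encode(self, text: str) -> List[int]:
        ids = []
        for word in text.split():
            token_id = hash(word) & 0xFFFF
            self._vocab[token_id] = word
            ids.append(token_id)
        return ids

    def decode(self, token_ids: List[int]) -> str:
        return " ".join(self._vocab[i] for i in token_ids)

    def count_tokens_from_messages(self, messages: List[OpenAIMessage]) -> int:
        return sum(len(self.encode(m["content"])) for m in messages)
```

A real subclass would back `encode`/`decode` with an actual tokenizer (e.g. tiktoken for OpenAI models); the toy class only demonstrates the contract.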
OpenAITokenCounter
init
- model (UnifiedModelType): Model type for which tokens will be counted.
count_tokens_from_messages
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
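One widely used counting scheme (popularized by OpenAI's cookbook) sums the encoded message contents plus a fixed per-message overhead and a reply primer. The constants below are assumptions that vary by model, and the encoder is passed in as a callable so any tokenizer can be plugged in:

```python
from typing import Callable, Dict, List


def count_tokens_from_messages(
    messages: List[Dict[str, str]],
    encode: Callable[[str], List[int]],
    tokens_per_message: int = 3,  # per-message overhead (model-dependent assumption)
    reply_primer: int = 3,  # every reply is primed with an assistant header
) -> int:
    """Approximate chat token count: encoded fields plus fixed overheads."""
    total = 0
    for message in messages:
        total += tokens_per_message
        for value in message.values():
            total += len(encode(value))
    return total + reply_primer
```

With a real model, `encode` would be something like `tiktoken.encoding_for_model("gpt-4o").encode`; for a rough estimate, a word-splitting stub also works.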
_count_tokens_from_image
"auto"
resolution model will be treated as :obj:"high"
. All images with
:obj:"low"
detail cost 85 tokens each. Images with :obj:"high"
detail
are first scaled to fit within a 2048 x 2048 square, maintaining their
aspect ratio. Then, they are scaled such that the shortest side of the
image is 768px long. Finally, we count how many 512px squares the image
consists of. Each of those squares costs 170 tokens. Another 85 tokens are
always added to the final total. For more details please refer to OpenAI
vision docs
Parameters:
- image (PIL.Image.Image): Image for which to count tokens.
- detail (OpenAIVisionDetailType): Image detail level used when counting tokens.
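The pricing rule described above reduces to arithmetic on the image dimensions. One detail is an assumption in this sketch: images whose shortest side is already under 768px are left unscaled rather than upscaled.

```python
import math


def count_image_tokens(width: int, height: int, detail: str = "auto") -> int:
    """Token cost of an image per the scaling rules described above.

    "auto" is treated as "high"; "low" costs a flat 85 tokens.
    """
    if detail == "low":
        return 85
    # Scale down to fit within a 2048 x 2048 square, keeping aspect ratio.
    if max(width, height) > 2048:
        scale = 2048 / max(width, height)
        width, height = width * scale, height * scale
    # Scale down so the shortest side is 768px (assumption: no upscaling).
    if min(width, height) > 768:
        scale = 768 / min(width, height)
        width, height = width * scale, height * scale
    # Count 512px squares: 170 tokens each, plus a flat 85 tokens.
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return 170 * tiles + 85
```

For example, a 1024 x 1024 high-detail image is scaled to 768 x 768, covered by four 512px tiles, and costs 4 * 170 + 85 = 765 tokens.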
encode
- text (str): The text to encode.
decode
- token_ids (List[int]): List of token IDs to decode.
AnthropicTokenCounter
init
- model (str): The name of the Anthropic model being used.
- api_key (Optional[str], optional): The API key for authenticating with the Anthropic service. If not provided, it will use the ANTHROPIC_API_KEY environment variable. (default: :obj:`None`)
- base_url (Optional[str], optional): The URL of the Anthropic service. If not provided, it will use the default Anthropic URL. (default: :obj:`None`)
count_tokens_from_messages
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
encode
- text (str): The text to encode.
decode
- token_ids (List[int]): List of token IDs to decode.
LiteLLMTokenCounter
init
- model_type (UnifiedModelType): Model type for which tokens will be counted.
token_counter
completion_cost
count_tokens_from_messages
- messages (List[OpenAIMessage]): Message list with the chat history in LiteLLM API format.
calculate_cost_from_response
- response (dict): The completion response from LiteLLM.
encode
- text (str): The text to encode.
decode
- token_ids (List[int]): List of token IDs to decode.
MistralTokenCounter
init
- model_type (ModelType): Model type for which tokens will be counted.
count_tokens_from_messages
- messages (List[OpenAIMessage]): Message list with the chat history in OpenAI API format.
_convert_response_from_openai_to_mistral
- openai_msg (OpenAIMessage): An individual message with OpenAI format.
encode
- text (str): The text to encode.
decode
- token_ids (List[int]): List of token IDs to decode.