Memory
What is Memory?
The CAMEL Memory module gives your AI agents a flexible, persistent way to store, retrieve, and manage information across conversations and tasks.
With memory, agents can maintain context, recall key details from previous chats, and deliver much more coherent, context-aware responses.
Memory is what transforms a “chatbot” into a smart, adaptable assistant.
Basic Usage: LongtermAgentMemory
This is the fastest way to enable true memory for your agents: store, retrieve, and leverage context across interactions.
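Below is a minimal end-to-end sketch based on the public CAMEL API; the token limit, model-type enum, and message contents are illustrative, and the default VectorDBBlock backend assumes an OpenAI key is available for embeddings.

```python
from camel.memories import (
    ChatHistoryBlock,
    LongtermAgentMemory,
    MemoryRecord,
    ScoreBasedContextCreator,
    VectorDBBlock,
)
from camel.messages import BaseMessage
from camel.types import ModelType, OpenAIBackendRole
from camel.utils import OpenAITokenCounter

# Initialize a hybrid memory: recent chat history + semantic vector recall.
memory = LongtermAgentMemory(
    context_creator=ScoreBasedContextCreator(
        token_counter=OpenAITokenCounter(ModelType.GPT_4O_MINI),
        token_limit=1024,
    ),
    chat_history_block=ChatHistoryBlock(),
    vector_db_block=VectorDBBlock(),  # default backend embeds via OpenAI
)

# Write a couple of records into memory.
records = [
    MemoryRecord(
        message=BaseMessage.make_user_message(
            role_name="User",
            content="What is CAMEL AI?",
        ),
        role_at_backend=OpenAIBackendRole.USER,
    ),
    MemoryRecord(
        message=BaseMessage.make_assistant_message(
            role_name="Agent",
            content="CAMEL is an open-source multi-agent framework.",
        ),
        role_at_backend=OpenAIBackendRole.ASSISTANT,
    ),
]
memory.write_records(records)

# Get a token-budgeted context for the agent.
context, token_count = memory.get_context()
print(token_count)
print(context)
```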
Integrate Memory into a ChatAgent
Assign memory to any agent and watch your AI recall and reason like a pro.
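A short sketch of attaching the `memory` object from the previous example to a ChatAgent; the role names and question text are illustrative, and the agent uses the default model backend (API key required).

```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage

# Define the agent's system message.
sys_msg = BaseMessage.make_assistant_message(
    role_name="Agent",
    content="You are a helpful assistant with long-term memory.",
)

agent = ChatAgent(system_message=sys_msg)
agent.memory = memory  # attach the memory so past records feed into the context

usr_msg = BaseMessage.make_user_message(
    role_name="User",
    content="Based on what we discussed earlier, what is CAMEL AI?",
)

response = agent.step(usr_msg)
print(response.msgs[0].content)
```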
Core Components of CAMEL Memory
MemoryRecord
What it is:
The basic data unit in CAMEL’s memory system—everything stored/retrieved flows through this structure.
Attributes:
- message: The message content, as a BaseMessage
- role_at_backend: Backend role (OpenAIBackendRole)
- uuid: Unique identifier for the record
- extra_info: Optional metadata (key-value pairs)
Key methods:
- from_dict(): Build a record from a Python dict
- to_dict(): Convert a record to a dict for saving/serialization
- to_openai_message(): Transform a record into an OpenAI message object
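For illustration, a small sketch that creates a record and exercises these methods; the extra_info keys are made up for the example.

```python
from camel.memories import MemoryRecord
from camel.messages import BaseMessage
from camel.types import OpenAIBackendRole

record = MemoryRecord(
    message=BaseMessage.make_user_message(
        role_name="User",
        content="Please remember that my favourite language is Python.",
    ),
    role_at_backend=OpenAIBackendRole.USER,
    extra_info={"source": "onboarding"},  # hypothetical metadata
)

# Round-trip through a plain dict (e.g. for JSON persistence).
as_dict = record.to_dict()
restored = MemoryRecord.from_dict(as_dict)

# Convert into the OpenAI chat-message format used by the backend.
print(record.to_openai_message())
```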
ContextRecord
What it is:
The result of a memory retrieval from an AgentMemory, scored for context relevance.
Attributes:
- memory_record: The original MemoryRecord
- score: How important/relevant the record is (float)
MemoryBlock (Abstract Base Class)
What it is:
The core “building block” for agent memory, following the Composite design pattern (supports tree structures).
Key methods:
- write_records(): Store multiple records
- write_record(): Store a single record
- clear(): Remove all stored records
BaseContextCreator (Abstract Base Class)
What it is:
Defines strategies for generating agent context when data exceeds model limits.
Key methods/properties:
- token_counter: Counts message tokens
- token_limit: Maximum number of tokens allowed in the context
- create_context(): Algorithm for building the context from chat history
AgentMemory (Abstract Base Class)
What it is:
A specialized MemoryBlock for direct agent use.
Key methods:
- retrieve(): Get a list of ContextRecord objects
- get_context_creator(): Return the associated context creator
- get_context(): Return a properly sized chat context
Memory Block Implementations
ChatHistoryBlock
What it does:
Stores and retrieves recent chat history (like a conversation timeline).
Initialization:
- storage: Optional key-value storage (default InMemoryKeyValueStorage)
- keep_rate: Historical message score weighting (default 0.9)
Methods:
- retrieve(): Get recent chats (windowed)
- write_records(): Add new records
- clear(): Remove all chat history
Use Case:
Best for maintaining the most recent conversation flow/context.
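A sketch of writing two turns into a ChatHistoryBlock and reading back the recent window; the window size and message contents are arbitrary examples.

```python
from camel.memories import ChatHistoryBlock, MemoryRecord
from camel.messages import BaseMessage
from camel.types import OpenAIBackendRole

# In-memory key-value storage by default; keep_rate down-weights older turns.
history_block = ChatHistoryBlock(keep_rate=0.9)

history_block.write_records([
    MemoryRecord(
        message=BaseMessage.make_user_message(
            role_name="User", content="Hi, I'm planning a trip to Japan."
        ),
        role_at_backend=OpenAIBackendRole.USER,
    ),
    MemoryRecord(
        message=BaseMessage.make_assistant_message(
            role_name="Agent", content="Great! When are you travelling?"
        ),
        role_at_backend=OpenAIBackendRole.ASSISTANT,
    ),
])

# Retrieve the most recent turns; window_size caps how many come back.
for ctx_record in history_block.retrieve(window_size=4):
    print(ctx_record.score, ctx_record.memory_record.message.content)
```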
VectorDBBlock
What it does:
Uses vector embeddings for storing and retrieving information based on semantic similarity.
Initialization:
- storage: Optional vector DB (QdrantStorage by default)
- embedding: Embedding model (default OpenAIEmbedding)
Methods:
- retrieve(): Get similar records based on a query/keyword
- write_records(): Add new records (converted to vectors)
- clear(): Remove all vector records
Use Case:
Ideal for large histories or when semantic search is needed.
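The same flow with a VectorDBBlock, assuming the default QdrantStorage and OpenAIEmbedding backends (which need an OpenAI API key); the retrieve() arguments follow the query/limit style described above.

```python
from camel.memories import MemoryRecord, VectorDBBlock
from camel.messages import BaseMessage
from camel.types import OpenAIBackendRole

# Defaults to QdrantStorage + OpenAIEmbedding; both can be swapped out.
vector_block = VectorDBBlock()

vector_block.write_records([
    MemoryRecord(
        message=BaseMessage.make_user_message(
            role_name="User",
            content="The project deadline moved to the 15th of March.",
        ),
        role_at_backend=OpenAIBackendRole.USER,
    ),
])

# Retrieve the records whose embeddings are closest to the query keyword.
for ctx_record in vector_block.retrieve("When is the deadline?", limit=3):
    print(ctx_record.score, ctx_record.memory_record.message.content)
```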
Key Differences:
- Storage: ChatHistoryBlock uses key-value storage. VectorDBBlock uses vector DBs.
- Retrieval: ChatHistoryBlock retrieves by recency. VectorDBBlock retrieves by similarity.
- Data: ChatHistoryBlock stores raw messages. VectorDBBlock stores embeddings.
Agent Memory Implementations & Advanced Usage
ChatHistoryMemory
What is it?
An AgentMemory implementation that wraps ChatHistoryBlock.
Best for: Sequential, recent chat context (simple conversation memory).
Initialization:
- context_creator: BaseContextCreator
- storage: Optional BaseKeyValueStorage
- window_size: Optional int (retrieval window)
Methods:
- retrieve(): Get recent chat messages
- write_records(): Write new records to chat history
- get_context_creator(): Get the context creator
- clear(): Remove all chat messages
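A minimal initialization sketch; the token limit and window size here are arbitrary choices, not recommended defaults.

```python
from camel.memories import (
    ChatHistoryMemory,
    MemoryRecord,
    ScoreBasedContextCreator,
)
from camel.messages import BaseMessage
from camel.types import ModelType, OpenAIBackendRole
from camel.utils import OpenAITokenCounter

memory = ChatHistoryMemory(
    context_creator=ScoreBasedContextCreator(
        token_counter=OpenAITokenCounter(ModelType.GPT_4O_MINI),
        token_limit=1024,
    ),
    window_size=10,  # only the last 10 records are considered for context
)

memory.write_records([
    MemoryRecord(
        message=BaseMessage.make_user_message(
            role_name="User", content="My name is Ada."
        ),
        role_at_backend=OpenAIBackendRole.USER,
    ),
])

context, token_count = memory.get_context()
print(context, token_count)
```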
VectorDBMemory
What is it?
An AgentMemory implementation that wraps VectorDBBlock.
Best for: Semantic search—find relevant messages by meaning, not just recency.
Initialization:
- context_creator: BaseContextCreator
- storage: Optional BaseVectorStorage
- retrieve_limit: int (default 3)
Methods:
- retrieve(): Get relevant messages from the vector DB
- write_records(): Write new records and update the current topic
- get_context_creator(): Get the context creator
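A comparable sketch for VectorDBMemory, again assuming the default vector store and embedding model are available; values are illustrative.

```python
from camel.memories import (
    MemoryRecord,
    ScoreBasedContextCreator,
    VectorDBMemory,
)
from camel.messages import BaseMessage
from camel.types import ModelType, OpenAIBackendRole
from camel.utils import OpenAITokenCounter

memory = VectorDBMemory(
    context_creator=ScoreBasedContextCreator(
        token_counter=OpenAITokenCounter(ModelType.GPT_4O_MINI),
        token_limit=1024,
    ),
    retrieve_limit=2,  # at most 2 semantically similar records per lookup
)

memory.write_records([
    MemoryRecord(
        message=BaseMessage.make_assistant_message(
            role_name="Agent",
            content="The user prefers vegetarian restaurants.",
        ),
        role_at_backend=OpenAIBackendRole.ASSISTANT,
    ),
])

# Retrieval is driven by the most recently written records, so the context
# reflects semantic similarity to the current topic rather than recency.
context, token_count = memory.get_context()
print(context, token_count)
```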
LongtermAgentMemory
What is it?
Combines ChatHistoryMemory and VectorDBMemory for hybrid memory.
Best for: Production bots that need both recency & semantic search.
Initialization:
- context_creator: BaseContextCreator
- chat_history_block: Optional ChatHistoryBlock
- vector_db_block: Optional VectorDBBlock
- retrieve_limit: int (default 3)
Methods:
- retrieve(): Get context from both chat history & vector DB
- write_records(): Write to both chat history & vector DB
- get_context_creator(): Get the context creator
- clear(): Remove all records from both memory blocks
Mem0Storage Integration
Add Mem0 for cloud-based memory with automatic sync.
Initialization Params:
- api_key: (optional) Mem0 API authentication
- agent_id: (optional) Agent association
- user_id: (optional) User association
- metadata: (optional) Dict of metadata applied to all memories
Why use this?
- Cloud persistence of chat history
- Simple setup and config
- Sequential retrieval—conversation order preserved
- Syncs across sessions automatically
Use when: you need reliable, persistent chat history in the cloud (not advanced semantic search).
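A sketch of plugging Mem0 in as the storage backend of a ChatHistoryMemory; the agent_id is a placeholder, the API key is read from the environment, and the exact import path may vary by CAMEL version.

```python
import os

from camel.memories import ChatHistoryMemory, ScoreBasedContextCreator
from camel.storages import Mem0Storage
from camel.types import ModelType
from camel.utils import OpenAITokenCounter

# Hypothetical agent id; records written through this memory are grouped
# under it in the Mem0 cloud.
mem0_storage = Mem0Storage(
    api_key=os.environ["MEM0_API_KEY"],
    agent_id="support-bot-001",
)

memory = ChatHistoryMemory(
    context_creator=ScoreBasedContextCreator(
        token_counter=OpenAITokenCounter(ModelType.GPT_4O_MINI),
        token_limit=1024,
    ),
    storage=mem0_storage,
)
# From here the memory behaves like any ChatHistoryMemory, with records
# persisted and synced across sessions by Mem0.
```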
Advanced Topics
Customizing Context Creator
You can subclass BaseContextCreator for advanced control.
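As a sketch of what a custom strategy can look like, the hypothetical creator below simply keeps the newest records that fit the token budget; CAMEL's built-in ScoreBasedContextCreator is more sophisticated.

```python
from typing import List, Tuple

from camel.memories import BaseContextCreator, ContextRecord
from camel.types import ModelType
from camel.utils import OpenAITokenCounter


class KeepLatestContextCreator(BaseContextCreator):
    """Hypothetical creator: keep the newest records that fit the budget."""

    def __init__(self, token_limit: int = 1024):
        self._token_counter = OpenAITokenCounter(ModelType.GPT_4O_MINI)
        self._token_limit = token_limit

    @property
    def token_counter(self):
        return self._token_counter

    @property
    def token_limit(self) -> int:
        return self._token_limit

    def create_context(self, records: List[ContextRecord]) -> Tuple[list, int]:
        # Assumes records arrive in chronological order; walk newest-to-oldest
        # and keep messages until the token budget is exhausted.
        selected, total = [], 0
        for record in reversed(records):
            msg = record.memory_record.to_openai_message()
            cost = self.token_counter.count_tokens_from_messages([msg])
            if total + cost > self.token_limit:
                break
            selected.append(msg)
            total += cost
        return list(reversed(selected)), total
```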
Customizing VectorDBBlock
You can use custom embeddings or vector DBs.
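For example, a sketch that swaps in a local sentence-transformers embedding and an on-disk Qdrant collection; the model and collection names are placeholders.

```python
from camel.embeddings import SentenceTransformerEncoder
from camel.memories import VectorDBBlock
from camel.storages import QdrantStorage

# A local sentence-transformers model instead of the default OpenAIEmbedding.
embedding = SentenceTransformerEncoder(model_name="all-MiniLM-L6-v2")

# An on-disk Qdrant collection instead of the default in-memory instance.
storage = QdrantStorage(
    vector_dim=embedding.get_output_dim(),
    path="./qdrant_data",              # persist vectors locally
    collection_name="agent_memories",  # placeholder collection name
)

vector_block = VectorDBBlock(storage=storage, embedding=embedding)
```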
Performance Considerations
- For production, use persistent storage (not just in-memory).
- Optimize your context creator for both relevance and token count.