Concept

Agents in CAMEL are autonomous entities capable of performing specific tasks through interaction with language models and other components. Each agent is designed with a particular role and capability, allowing them to work independently or collaboratively to achieve complex goals.
Think of an agent as an AI-powered teammate: one that brings a defined role, memory, and tool-using abilities to every workflow. CAMEL’s agents are composable, robust, and can be extended with custom logic.

Base Agent Architecture

All CAMEL agents inherit from the BaseAgent abstract class, which defines two essential methods:
  • reset() (State Management): Resets the agent to its initial state
  • step() (Task Execution): Performs a single step of the agent’s operation
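
To make the contract concrete, here is a minimal custom agent. This is only a sketch; it assumes BaseAgent is importable from camel.agents and that reset() and step() are the only methods you must implement.
from camel.agents import BaseAgent

class EchoAgent(BaseAgent):
    """A toy agent that records and echoes incoming messages."""

    def __init__(self):
        self.history = []

    def reset(self):
        # State management: return the agent to its initial state
        self.history = []

    def step(self, message: str) -> str:
        # Task execution: perform one unit of work
        self.history.append(message)
        return f"Echo: {message}"

agent = EchoAgent()
print(agent.step("hello"))  # Echo: hello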

Types

ChatAgent

The ChatAgent is the primary implementation that handles conversations with language models. It supports:
  • System message configuration for role definition
  • Memory management for conversation history
  • Tool/function calling capabilities
  • Response formatting and structured outputs
  • Multiple model backend support with scheduling strategies
  • Async operation support (see the example below)
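
For example, the asynchronous interface can be used like this (a minimal sketch, assuming your CAMEL version exposes the astep coroutine alongside step):
import asyncio

from camel.agents import ChatAgent

agent = ChatAgent("You are a helpful assistant.")

async def main():
    # astep is the asynchronous counterpart of step
    response = await agent.astep("Hello, can you help me?")
    print(response.msgs[0].content)

asyncio.run(main())
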
CriticAgent
Specialized agent for evaluating and critiquing responses or solutions. Used in scenarios requiring quality assessment or validation.

DeductiveReasonerAgent
Focused on logical reasoning and deduction. Breaks down complex problems into smaller, manageable steps.

EmbodiedAgent
Designed for embodied AI scenarios, capable of understanding and responding to physical world contexts.

KnowledgeGraphAgent
Specialized in building and utilizing knowledge graphs for enhanced reasoning and information management.

MultiHopGeneratorAgent
Handles multi-hop reasoning tasks, generating intermediate steps to reach conclusions.

SearchAgent
Focused on information retrieval and search tasks across various data sources.

TaskAgent
Handles task decomposition and management, breaking down complex tasks into manageable subtasks.

Usage

Basic ChatAgent Usage

from camel.agents import ChatAgent

# Create a chat agent with a system message
agent = ChatAgent(system_message="You are a helpful assistant.")

# Step through a conversation
response = agent.step("Hello, can you help me?")
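
The step() call returns a response object; the assistant's reply is in its msgs list, alongside termination status and auxiliary metadata:
# Inspect the result of the step
print(response.msgs[0].content)  # the assistant's text reply
print(response.terminated)       # whether the session has terminated
print(response.info)             # auxiliary metadata such as usage and tool calls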

Simplified Agent Creation

The ChatAgent supports multiple ways to specify the model:
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Method 1: Using just a string for the model name (default model platform is used)
agent_1 = ChatAgent("You are a helpful assistant.", model="gpt-4o-mini")

# Method 2: Using a ModelType enum (default model platform is used)
agent_2 = ChatAgent("You are a helpful assistant.", model=ModelType.GPT_4O_MINI)

# Method 3: Using a tuple of strings (platform, model)
agent_3 = ChatAgent("You are a helpful assistant.", model=("openai", "gpt-4o-mini"))

# Method 4: Using a tuple of enums
agent_4 = ChatAgent(
    "You are a helpful assistant.",
    model=(ModelPlatformType.ANTHROPIC, ModelType.CLAUDE_3_5_SONNET),
)

# Method 5: Using default model platform and default model type when none is specified
agent_5 = ChatAgent("You are a helpful assistant.")

# Method 6: Using a pre-created model with ModelFactory (original approach)
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,  # Using enum
    model_type=ModelType.GPT_4O_MINI,         # Using enum
)
agent_6 = ChatAgent("You are a helpful assistant.", model=model)

# Method 7: Using ModelFactory with string parameters
model = ModelFactory.create(
    model_platform="openai",     # Using string
    model_type="gpt-4o-mini",    # Using string
)
agent_7 = ChatAgent("You are a helpful assistant.", model=model)
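
Whichever method you use, the API key for the selected platform (for example OPENAI_API_KEY or ANTHROPIC_API_KEY) is expected to be set as an environment variable before the agent makes its first call.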

Using Tools with Chat Agent

from camel.agents import ChatAgent
from camel.toolkits import FunctionTool

# Define a tool (the docstring tells the model what the tool does)
def calculator(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

# Create agent with the tool, wrapped as a FunctionTool
agent = ChatAgent(tools=[FunctionTool(calculator)])

# The agent can now use the calculator tool in conversations
response = agent.step("What is 5 + 3?")

Structured Output

CAMEL’s ChatAgent can produce structured output by leveraging Pydantic models. This feature is especially useful when you need the agent to return data in a specific format, such as JSON. By defining a Pydantic model, you can ensure that the agent’s output is predictable and easy to parse.
Here’s how you can get a structured response from a ChatAgent. First, define a BaseModel that specifies the desired output fields. You can add descriptions to each field to guide the model.
from pydantic import BaseModel, Field

from camel.agents import ChatAgent

class JokeResponse(BaseModel):
    joke: str = Field(description="A joke")
    funny_level: int = Field(description="Funny level, from 1 to 10")

# Create an agent and request structured output for this step
agent = ChatAgent(model="gpt-4o-mini")
response = agent.step("Tell me a joke.", response_format=JokeResponse)

# The response content is a JSON string
print(response.msgs[0].content)
# '{"joke": "Why don't scientists trust atoms? Because they make up everything!", "funny_level": 8}'

# Access the parsed Pydantic object
parsed_response = response.msgs[0].parsed
print(parsed_response.joke)
# "Why don't scientists trust atoms? Because they make up everything!"
print(parsed_response.funny_level)
# 8
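
Because parsed is an ordinary Pydantic model instance, the standard Pydantic API applies when you need plain Python data (Pydantic v2 shown):
data = parsed_response.model_dump()           # {'joke': '...', 'funny_level': 8}
json_str = parsed_response.model_dump_json()  # the same data as a JSON string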

Best Practices

  • Use appropriate window sizes to manage conversation history (see the sketch after this list)
  • Consider token limits when dealing with long conversations
  • Utilize the memory system for maintaining context
  • Keep tool functions focused and well-documented
  • Handle tool errors gracefully
  • Use external tools for operations that should be executed outside the agent, so the tool request is returned to the caller instead of being run by the agent
  • Implement appropriate response terminators for conversation control
  • Use structured outputs when specific response formats are needed
  • Handle async operations properly when dealing with long-running tasks
  • Use the simplified model specification methods for cleaner code
  • For default platform models, just specify the model name as a string
  • For specific platforms, use the tuple format (platform, model)
  • Use enums for better type safety and IDE support
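
Several of these practices map directly onto ChatAgent constructor arguments. The sketch below assumes the message_window_size and token_limit parameters available in recent CAMEL releases:
from camel.agents import ChatAgent

agent = ChatAgent(
    "You are a helpful assistant.",
    model="gpt-4o-mini",
    message_window_size=10,  # keep only the 10 most recent messages in context
    token_limit=4096,        # cap the tokens sent to the model backend
)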

Advanced Features

You can dynamically select which model an agent uses for each step by adding your own scheduling strategy.
def custom_strategy(models):
    # `models` is the list of model backends configured for the agent;
    # return the backend that should handle the current step.
    return models[0]

# Register the strategy under a name so the agent can switch to it
agent.add_model_scheduling_strategy("custom", custom_strategy)
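
Scheduling strategies matter most when an agent is configured with more than one model backend. The sketch below assumes ChatAgent accepts a list of models, which the scheduling API implies:
from camel.agents import ChatAgent
from camel.models import ModelFactory

model_a = ModelFactory.create(model_platform="openai", model_type="gpt-4o-mini")
model_b = ModelFactory.create(model_platform="openai", model_type="gpt-4o")

# With several backends registered, the active scheduling strategy
# decides which one serves each step.
agent = ChatAgent("You are a helpful assistant.", model=[model_a, model_b])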