Tic Tac Toe#
You can also open this on Google Colab
In this cookbook, I want to show how Multi-Step environments work in CAMEL. Our RL modules were built to mimic OpenAI Gym, so if you’re familiar with Gym’s interface, you’ll feel right at home.
We will use the Tic-Tac-Toe environment as an example to show the lifecycle of an environment.
The Tic-Tac-Toe environment can be used to evaluate agents, generate synthetic data for distillation, or train an agent to play the game.
First, we need to initialize our environment and set it up. Then we can call reset
to get our initial observation.
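At a high level, every episode follows the familiar Gym-style lifecycle: set up once, reset per episode, step until the episode ends, and close when finished. Below is a minimal sketch of that loop; `choose_action` is a hypothetical placeholder for whatever policy picks the move, and the concrete Tic-Tac-Toe version with a real agent follows later in this cookbook.
[ ]:
await env.setup()        # allocate whatever the environment needs
obs = await env.reset()  # begin an episode, receive the first observation
while not env.is_done():
    action = choose_action(obs)  # hypothetical policy, e.g. an LLM agent
    obs, reward, done, info = await env.step(action)
await env.close()        # release resources when finished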
Let’s install the CAMEL package with all its dependencies:
[ ]:
%pip install camel-ai[all]==0.2.46
[1]:
import asyncio
from camel.environments.models import Action
from camel.environments.tic_tac_toe import TicTacToeEnv, Opponent
# we can choose the playstyle of our opponent to be either 'random' or 'optimal' (computed using minimax)
opp = Opponent(play_style="random")
env = TicTacToeEnv(opponent=opp)
await env.setup()
obs = await env.reset()
print("Initial Observation:\n")
print(obs.question)
Initial Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Choose a number between 1 and 9 to place an X.
This is the current state of the board:
1 | 2 | 3
---------
4 | 5 | 6
---------
7 | 8 | 9
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
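Note the required response format: the environment extracts the move from the end of the reply, so whatever reasoning the model produces, the response has to end with the <Action> tag followed by a bare number. As we will see later in the episode, deviating from this (for example, wrapping the number in brackets) fails extraction and is scored as an illegal move. Using the Action class imported above:
[ ]:
# A well-formed response: free-form reasoning, then the tag and a bare number.
ok = Action(llm_response="The center controls the most lines. <Action> 5")

# A malformed response: the brackets break extraction, so the environment
# treats the turn as an illegal move (reward 0.0), as shown further below.
bad = Action(llm_response="I'll take a corner. <Action> [7]")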
We will use GPT-4o-mini, so let’s enter our API key.
[2]:
import os
from getpass import getpass
openai_api_key = getpass('Enter your API key: ')
os.environ["OPENAI_API_KEY"] = openai_api_key
Alternatively, if you are running on Colab, you can save your API keys and tokens as Colab Secrets and use them across notebooks.
To do so, comment out the manual API key prompt above and uncomment the following code block.
⚠️ Don’t forget to grant this notebook access to the API key you are using.
[ ]:
# import os
# from google.colab import userdata
# os.environ["OPENAI_API_KEY"] = userdata.get("OPENAI_API_KEY")
Next, let’s define the model backend and the agent.
You can also add a system prompt or equip your agent with tools, but for the sake of simplicity we just create a bare agent with GPT-4o-mini.
[3]:
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.configs import ChatGPTConfig
from camel.agents import ChatAgent
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig().as_dict(),
)
agent = ChatAgent(model=model)
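If you do want a system prompt, you can pass one when constructing the agent (via the system_message argument). A minimal sketch, where the prompt text itself is just an illustrative placeholder:
[ ]:
# Hypothetical system prompt; any instruction string works here.
prompted_agent = ChatAgent(
    system_message="You are a careful Tic Tac Toe player. "
    "Always end your reply with <Action> [a number from 1 to 9].",
    model=model,
)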
Next, we will simulate one episode.
[4]:
while not env.is_done():
    llm_response = agent.step(obs.question).msgs[0].content
    agent.reset()  # clear context window
    action = Action(llm_response=llm_response)
    result = await env.step(action)

    next_obs, reward, done, info = result
    obs = next_obs

    print("\nAgent Move:", action.llm_response)
    print("Observation:")
    print(next_obs.question)
    print("Reward:", reward)
    print("Done:", done)
    print("Info:", info)
Agent Move: I'll place my X in position 5.
<Action> 5
Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Choose a number between 1 and 9 to place an X.
This is the current state of the board:
O | 2 | 3
---------
4 | X | 6
---------
7 | 8 | 9
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
Reward: 0.5
Done: False
Info: {'extraction_result': '5', 'step': 1, 'state': {'board': ['O', ' ', ' ', ' ', 'X', ' ', ' ', ' ', ' '], 'game_over': False, 'winner': None, 'last_move_illegal': False, 'last_move': 5, 'extraction_error': None}, 'rewards_dict': {'x_non_loss_value': 0.5}}
Agent Move: I'll place an X in position 2.
<Action> 2
Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Choose a number between 1 and 9 to place an X.
This is the current state of the board:
O | X | 3
---------
4 | X | 6
---------
7 | O | 9
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
Reward: 0.5
Done: False
Info: {'extraction_result': '2', 'step': 2, 'state': {'board': ['O', 'X', ' ', ' ', 'X', ' ', ' ', 'O', ' '], 'game_over': False, 'winner': None, 'last_move_illegal': False, 'last_move': 2, 'extraction_error': None}, 'rewards_dict': {'x_non_loss_value': 0.5}}
Agent Move: In this situation, I will place my 'X' in position 9 to block the opponent's potential win on their next turn.
<Action> 9
Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Choose a number between 1 and 9 to place an X.
This is the current state of the board:
O | X | O
---------
4 | X | 6
---------
7 | O | X
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
Reward: 0.5
Done: False
Info: {'extraction_result': '9', 'step': 3, 'state': {'board': ['O', 'X', 'O', ' ', 'X', ' ', ' ', 'O', 'X'], 'game_over': False, 'winner': None, 'last_move_illegal': False, 'last_move': 9, 'extraction_error': None}, 'rewards_dict': {'x_non_loss_value': 0.5}}
Agent Move: To win the game, I can place my 'X' in position 4, completing a vertical column.
<Action> 4
Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Choose a number between 1 and 9 to place an X.
This is the current state of the board:
O | X | O
---------
X | X | O
---------
7 | O | X
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
Reward: 0.5
Done: False
Info: {'extraction_result': '4', 'step': 4, 'state': {'board': ['O', 'X', 'O', 'X', 'X', 'O', ' ', 'O', 'X'], 'game_over': False, 'winner': None, 'last_move_illegal': False, 'last_move': 4, 'extraction_error': None}, 'rewards_dict': {'x_non_loss_value': 0.5}}
Agent Move: The current board looks like this:
```
O | X | O
---------
X | X | O
---------
7 | O | X
```
You can place your 'X' in position 7 to win the game.
I'll choose 7.
<Action> [7]
Observation:
You are playing Tic Tac Toe with standard rules.
You are the player with X.
Your last move was illegal.
You chose the move None.Choose another number between 1 and 9 to place an X.
The field must still be available.
This is the current state of the board:
O | X | O
---------
X | X | O
---------
7 | O | X
Each number that you can see is still an empty field that you can place your 'X' in. Please end your response with <Action> [a number from 1 to 9]
Reward: 0.0
Done: False
Info: {'extraction_result': None, 'step': 5, 'state': {'board': ['O', 'X', 'O', 'X', 'X', 'O', ' ', 'O', 'X'], 'game_over': False, 'winner': None, 'last_move_illegal': True, 'last_move': None, 'extraction_error': 'Could not extract a valid move'}, 'rewards_dict': {'illegal_move': 0.0}}
Agent Move: The current state of the board is as follows:
```
O | X | O
---------
X | X | O
---------
7 | O | X
```
The only available field is 7. I will place my X there.
<Action> 7
Observation:
O | X | O
---------
X | X | O
---------
X | O | X
Game Over. It's a draw!
Reward: 0.5
Done: True
Info: {'extraction_result': '7', 'step': 6, 'state': {'board': ['O', 'X', 'O', 'X', 'X', 'O', 'X', 'O', 'X'], 'game_over': True, 'winner': 'draw', 'last_move_illegal': False, 'last_move': 7, 'extraction_error': None}, 'rewards_dict': {'draw': 0.5}}
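Notice how much the info dict tells you at each step: rewards_dict names the component behind the step’s reward (x_non_loss_value while the game is open, illegal_move when extraction fails, draw at a draw, and presumably analogous keys for wins and losses), which is handy for debugging reward shaping. A minimal sketch that replays one episode and logs only that breakdown, reusing the env and agent from above:
[ ]:
obs = await env.reset()
while not env.is_done():
    llm_response = agent.step(obs.question).msgs[0].content
    agent.reset()
    obs, reward, done, info = await env.step(Action(llm_response=llm_response))
    # e.g. 1 {'x_non_loss_value': 0.5}  or  5 {'illegal_move': 0.0}
    print(info["step"], info["rewards_dict"])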
Eval#
We can also use this environment to evaluate a model on Tic Tac Toe.
Let’s run it for 10 episodes and see how often we win, draw, and lose.
[5]:
wins = 0
losses = 0
draws = 0

for episode in range(10):
    obs = await env.reset()  # Start fresh
    done = False

    while not done:
        llm_response = agent.step(obs.question).msgs[0].content
        agent.reset()  # Nuke the context
        action = Action(llm_response=llm_response)
        next_obs, reward, done, info = await env.step(action)
        obs = next_obs

    # Tally result based on final reward
    if reward == 1:
        wins += 1
    elif reward == 0.5:
        draws += 1
    else:
        losses += 1

# Final report
print("\n=== Summary after 10 Episodes ===")
print(f"Wins: {wins}")
print(f"Draws: {draws}")
print(f"Losses: {losses}")
=== Summary after 10 Episodes ===
Wins: 0
Draws: 2
Losses: 8
As you can see, GPT-4o-mini is quite bad!
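Note that this was against the random opponent. For a stricter benchmark, you could rerun the same evaluation loop against the minimax opponent from the beginning of this cookbook; since optimal play in Tic Tac Toe never loses, the best the agent can hope for there is a draw:
[ ]:
# The 'optimal' opponent plays perfect minimax moves, so wins become impossible.
optimal_env = TicTacToeEnv(opponent=Opponent(play_style="optimal"))
await optimal_env.setup()
# ...then repeat the 10-episode loop above with `optimal_env` in place of `env`,
# and close it afterwards with `await optimal_env.close()`.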
Finally, we close the environment.
[6]:
await env.close()
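If you embed the environment in a longer-running script, it is worth guaranteeing this cleanup even when an episode raises. A minimal sketch of the same lifecycle wrapped in try/finally:
[ ]:
env = TicTacToeEnv(opponent=Opponent(play_style="random"))
await env.setup()
try:
    obs = await env.reset()
    while not env.is_done():
        llm_response = agent.step(obs.question).msgs[0].content
        agent.reset()
        obs, reward, done, info = await env.step(Action(llm_response=llm_response))
finally:
    # Always release the environment's resources, even on errors.
    await env.close()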