🤖 Customer Service Discord Bot with Agentic RAG Powered by a local model deployment, using 🐫 CAMEL, Firecrawl & Qdrant
To run this, press "Runtime" and then "Run all" on a free Tesla T4 Google Colab instance!
Join our Discord if you need help + ⭐ Star us on GitHub ⭐
Installation and Setup
First, install the CAMEL package with all its dependencies:
[ ]:
!pip install "camel-ai[all]==0.2.16"
!pip install starlette
!pip install nest_asyncio
Next, prepare the knowledge base with Firecrawl. Firecrawl is a versatile web scraping and crawling tool designed to extract data efficiently from websites, and it is integrated with CAMEL. For more information, you can check out our Firecrawl cookbook: https://colab.research.google.com/drive/1lOmM3VmgR1hLwDKdeLGFve_75RFW0R9I?usp=sharing#scrollTo=1Nj0Oqnoy6oJ
Let's set up Firecrawl! You may skip this part if you already have your knowledge file.
In order to run everything locally, we can use self-hosted Firecrawl.
For more details, please check out the Firecrawl documentation: https://docs.firecrawl.dev/contributing/guide
[ ]:
from getpass import getpass
firecrawl_api_url = getpass('Enter your API url: ')
Enter your API url: ········
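Optionally, you can check that the self-hosted Firecrawl server is reachable before crawling. This is a minimal sketch (not part of the original setup) that simply sends an HTTP request to the base URL you entered; any response at all means the server is up:

```python
import requests

# Rough reachability check for the self-hosted Firecrawl server.
# Any HTTP response (even a 404) means the server is up; a connection
# error means the URL or the deployment needs another look.
try:
    resp = requests.get(firecrawl_api_url, timeout=5)
    print(f"Firecrawl server responded with status {resp.status_code}")
except requests.exceptions.RequestException as e:
    print(f"Could not reach Firecrawl server: {e}")
```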
Local Setup
Please make a copy of this notebook (important), or run this notebook locally.
If you choose to make a copy of this notebook and stay in Google Colab, connect the copied notebook to your local runtime by following these steps:
Install Jupyter Notebook locally and launch it so that Colab is allowed to connect, by running the following commands in your terminal:

```bash
pip install notebook
jupyter notebook --NotebookApp.allow_origin='https://colab.research.google.com' \
    --port=8888 \
    --no-browser
```
You will see something like this in your terminal:
To access the server, open this file in a browser:
<some_path>
Or copy and paste one of these URLs:
<url1>
<url2>
Copy one of the URLs, click the "Connect to a local runtime" button in Google Colab, and paste the copied URL into the Backend URL field.
Click "Connect".
Basic Agent and Local Model Setup
1. Download Ollama for a local model at: https://ollama.com/download
2. After setting up Ollama, pull the QwQ model by typing the following command into the terminal:

```bash
ollama pull qwq
```

3. cd into a desired directory:

```bash
cd <target_directory_path>
```
4. Create a `ModelFile` similar to the one below in your project directory. (Optional)
```bash
FROM qwq

# Set parameters
PARAMETER temperature 0.8
PARAMETER stop Result

# Sets a custom system message to specify the behavior of the chat assistant
# Leaving it blank for now.
SYSTEM """ """
```
5. Create a script to pull the base model (qwq) and create a custom model using the ModelFile above. Save this as a .sh file: (Optional)

```bash
#!/bin/zsh

# variables
model_name="qwq"
custom_model_name="camel-qwq"

# get the base model
ollama pull $model_name

# create the custom model from the ModelFile
ollama create $custom_model_name -f ./ModelFile
```

6. Navigate to the directory where the script and ModelFile are located, and run the script. (Optional)
Now you have the local model deployed!
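Before wiring the model into CAMEL, you can confirm that the Ollama server is running and that the model has been pulled. A small sketch, assuming Ollama's default local endpoint (http://localhost:11434) and its /api/tags route, which lists locally available models:

```python
import requests

# Ask the local Ollama server which models it has available.
# "qwq" (or "camel-qwq", if you built the custom model) should appear here.
resp = requests.get("http://localhost:11434/api/tags", timeout=5)
models = [m["name"] for m in resp.json().get("models", [])]
print("Locally available models:", models)
```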
[2]:
from camel.models import ModelFactory
from camel.types import ModelPlatformType

ollama_model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="qwq",
    url="http://localhost:11434/v1",  # optional
    model_config_dict={"temperature": 0.4},
)
2024-12-29 11:15:47,983 - camel - INFO - Camel library logging has been configured.
[6]:
from camel.agents import ChatAgent
from camel.logger import disable_logging

disable_logging()
chat_agent = ChatAgent(
    system_message="You're a helpful assistant",
    message_window_size=10,
    model=ollama_model,
    token_limit=8192,  # change based on your input size
)
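As a quick smoke test before the full chat loop, you can send the agent a single message, the same way the loop below calls `step`:

```python
# One-off test call to confirm the local model responds through CAMEL.
test_response = chat_agent.step("Say hello in one short sentence.")
print(test_response.msgs[0].content)
```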
Knowledge Crawling and Storage
Use Firecrawl to crawl a website and store the content in a markdown file:
[ ]:
import os

from camel.loaders import Firecrawl
from camel.messages import BaseMessage

os.makedirs('local_data', exist_ok=True)

# Placeholder API key; a self-hosted Firecrawl deployment typically
# does not require a real one.
firecrawl = Firecrawl(api_url=firecrawl_api_url, api_key="_")

crawl_response = firecrawl.crawl(
    url="https://docs.camel-ai.org/"
)

with open('local_data/camel.md', 'w') as file:
    file.write(crawl_response["data"][0]["markdown"])
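Optionally, preview what was crawled to confirm the knowledge file is non-empty before feeding it to the agent:

```python
# Quick look at the size and beginning of the crawled knowledge file.
with open('local_data/camel.md', 'r') as file:
    content = file.read()
print(f"Crawled {len(content)} characters. Preview:")
print(content[:500])
```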
Insert the external knowledge into the agent:
[8]:
with open('local_data/camel.md', 'r') as file:
    knowledge = file.read()

knowledge_message = BaseMessage.make_user_message(
    role_name="User", content=f"Based on the following knowledge: {knowledge}"
)
chat_agent.update_memory(knowledge_message, "user")
Basic Chatbot Setup
[ ]:
print("Start chatting! Type 'exit' to end the conversation.")
while True:
user_input = input("User: ")
if user_input.lower() == "exit":
print("Ending conversation.")
break
assistant_response = chat_agent.step(user_input)
print(f"Assistant: {assistant_response.msgs[0].content}")
Start chatting! Type 'exit' to end the conversation.
User: what is camel?
2024-12-28 14:57:51,584 - httpx - INFO - HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
Assistant: CAMEL is a multi-agent framework that allows you to build and use large language model (LLM)-based agents for real-world task solving. It was introduced as one of the earliest LLM-based multi-agent frameworks in research, and it provides a generic platform for creating various types of agents, tasks, prompts, models, and simulated environments.
The primary goal of CAMEL is to facilitate large-scale studies on agent behaviors, capabilities, and potential risks by providing a comprehensive framework for building and interacting with LLM-based agents. It supports different modules such as models, messages, memory, tools, prompts, tasks, loaders, storages, societies, embeddings, retrievers, and workforce, each serving specific purposes in the agent ecosystem.
CAMEL offers a range of cookbooks and tutorials to help users get started with creating their first agents and agent societies, using tools, implementing memory and retrieval mechanisms, generating tasks, and more. Additionally, it provides API references and indices for developers looking to delve deeper into its functionalities.
If you're interested in contributing to CAMEL, whether through research, coding, or simply engaging with the community, there are various ways to get involved, including joining their Discord, WeChat group, or Slack channel.
User: exit
Ending conversation.
Basic Discord Bot Integration
To build a Discord bot, a Discord bot token is necessary.
If you don't have a bot token, you can obtain one by following these steps:
1. Go to the Discord Developer Portal: https://discord.com/developers/applications
2. Log in with your Discord account, or create an account if you don't have one.
3. Click on "New Application" to create a new bot.
4. Give your application a name and click "Create".
5. Navigate to the "Bot" tab on the left sidebar and click "Add Bot".
6. Once the bot is created, you will find a "Token" section. Click "Reset Token" to generate a new token.
7. Copy the generated token securely.
To invite the bot:
1. Navigate to the "OAuth2" tab, then to "URL Generator".
2. Under "Scopes", select "bot".
3. Under "Bot Permissions", select the permissions your bot will need (e.g., "Send Messages" and "Read Messages" for our bot).
4. Copy the generated URL and paste it into your browser to invite the bot to your server.
To grant the bot permissions:
1. Navigate to the "Bot" tab.
2. Under "Privileged Gateway Intents", check "Server Members Intent" and "Message Content Intent".
For more details, you can also check the official Discord bot documentation: https://discord.com/developers/docs/intro
[9]:
import os
from getpass import getpass
discord_bot_token = getpass('Enter your Discord bot token: ')
os.environ["DISCORD_BOT_TOKEN"] = discord_bot_token
Enter your Discord bot token: ········
This code cell sets up a simple Discord bot using the DiscordApp class from the camel.bots library. The bot listens for messages in any channel it has access to and provides a response based on the input message.
[ ]:
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()
discord_bot = DiscordApp(token=discord_bot_token)

@discord_bot.client.event
async def on_message(message: discord.Message):
    # Ignore the bot's own messages, non-default messages, and other bots
    if message.author == discord_bot.client.user:
        return
    if message.type != discord.MessageType.default:
        return
    if message.author.bot:
        return

    user_input = message.content
    # Reset the agent and re-insert the knowledge for every message
    chat_agent.reset()
    chat_agent.update_memory(knowledge_message, "user")
    assistant_response = chat_agent.step(user_input)
    response_content = assistant_response.msgs[0].content

    # Discord limits messages to 2000 characters, so split long replies
    if len(response_content) > 2000:
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_bot.run()
Integrating Qdrant for Large Files to Build a More Powerful Discord Bot
Qdrant is a vector similarity search engine and vector database, designed to perform fast and efficient similarity searches on large datasets of vectors. By storing knowledge as vectors, Qdrant enables efficient semantic search: the chatbot can find relevant information based on the meaning of the user's query rather than exact keywords, and use that external information to provide more comprehensive and accurate responses.
Set up an embedding model and retriever for Qdrant; feel free to switch to other embedding models supported by CAMEL.
[ ]:
from camel.embeddings import SentenceTransformerEncoder  # CAMEL also supports other embedding models

sentence_encoder = SentenceTransformerEncoder(model_name='intfloat/e5-large-v2')
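To see what vector similarity means in practice, here is a toy illustration (separate from the bot) that embeds a few sentences with the encoder above and compares them with cosine similarity; it assumes the encoder's `embed_list` method, which CAMEL's embedding classes provide:

```python
import numpy as np

# Toy illustration of semantic search: related sentences should score
# noticeably higher than unrelated ones.
texts = [
    "CAMEL is a multi-agent framework.",
    "You can build LLM agents with CAMEL.",
    "The weather is sunny today.",
]
vectors = np.array(sentence_encoder.embed_list(texts))

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print("related:  ", cosine(vectors[0], vectors[1]))
print("unrelated:", cosine(vectors[0], vectors[2]))
```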
Set up the AutoRetriever for automatically retrieving relevant information from a storage system.
[18]:
from camel.retrievers import AutoRetriever
from camel.types import StorageType

assistant_sys_msg = """You are a helpful assistant to answer questions.
I will give you the Original Query and Retrieved Context,
answer the Original Query based on the Retrieved Context,
if you can't answer the question just say I don't know.
Just give the answer to me directly, no other words needed.
"""

auto_retriever = AutoRetriever(
    vector_storage_local_path="local_data2/",
    storage_type=StorageType.QDRANT,
    embedding_model=sentence_encoder,
)

chat_agent_with_rag = ChatAgent(
    system_message=assistant_sys_msg,
    model=ollama_model,
    token_limit=8192,  # change based on your input size
)
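Before wiring the retriever into Discord, you can exercise the retrieve-then-answer flow on a single query; this mirrors the `run_vector_retriever` call used by the bot below:

```python
# Standalone test of the Auto RAG flow: retrieve context, then answer.
test_query = "What is CAMEL-AI?"
retrieved_info = auto_retriever.run_vector_retriever(
    query=test_query,
    contents=["https://docs.camel-ai.org/"],  # replace with your knowledge base
    top_k=3,
    return_detailed_info=False,
    similarity_threshold=0.5,
)
test_answer = chat_agent_with_rag.step(str(retrieved_info))
print(test_answer.msgs[0].content)
```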
Use Auto RAG to retrieve first, then answer the user's query with the CAMEL ChatAgent based on the retrieved info.
Note that if you are connecting this cookbook to a local runtime, adding local file paths to contents might cause an error.
[ ]:
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()
discord_q_bot = DiscordApp(token=discord_bot_token)

@discord_q_bot.client.event  # triggers when a message is sent in the channel
async def on_message(message: discord.Message):
    # Ignore the bot's own messages, non-default messages, and other bots
    if message.author == discord_q_bot.client.user:
        return
    if message.type != discord.MessageType.default:
        return
    if message.author.bot:
        return

    user_input = message.content
    # Retrieve the most relevant chunks from the knowledge base first
    query_and_retrieved_info = auto_retriever.run_vector_retriever(
        query=user_input,
        contents=[  # don't add a local path if you are connecting to a local runtime
            "https://docs.camel-ai.org/",  # replace with your knowledge base
        ],
        top_k=3,
        return_detailed_info=False,
        similarity_threshold=0.5
    )

    user_msg = str(query_and_retrieved_info)
    assistant_response = chat_agent_with_rag.step(user_msg)
    response_content = assistant_response.msgs[0].content

    # Discord limits messages to 2000 characters, so split long replies
    if len(response_content) > 2000:
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_q_bot.run()
That's everything: Got questions about 🐫 CAMEL-AI? Join us on Discord! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we'd love to have you in the community! 🤝
Check out some of our other work:
🐫 Creating Your First CAMEL Agent free Colab
Graph RAG Cookbook free Colab
🧑‍⚖️ Create A Hackathon Judge Committee with Workforce free Colab
🔥 3 ways to ingest data from websites with Firecrawl & CAMEL free Colab
🦥 Agentic SFT Data Generation with CAMEL and Mistral Models, Fine-Tuned with Unsloth free Colab
Thanks from everyone at 🐫 CAMEL-AI