Next, set up your API keys for Firecrawl and the model (Qwen or Mistral). If you don't have a Firecrawl API key, you can obtain one by following these steps:
```python
import os
from getpass import getpass

firecrawl_api_key = getpass('Enter your API key: ')
os.environ["FIRECRAWL_API_KEY"] = firecrawl_api_key
```
If you want to use Mistral as the model, skip the Qwen part below. If you don't have a Qwen API key, you can obtain one by following these steps:
Visit the Alibaba Cloud Model Studio Console (https://www.alibabacloud.com/en?_p_lc=1) and follow the on-screen instructions to activate the model services.
In the upper-right corner of the console, click on your account name and select API-KEY.
On the API Key management page, click on the Create API Key button to generate a new key.
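Once you have the key, export it the same way as the Firecrawl key above. This is a sketch mirroring the Firecrawl cell, assuming CAMEL reads the key from the `QWEN_API_KEY` environment variable:

```python
import os
from getpass import getpass

# Prompt for the Qwen API key and expose it via the QWEN_API_KEY
# environment variable so CAMEL can pick it up.
qwen_api_key = getpass('Enter your API key: ')
os.environ["QWEN_API_KEY"] = qwen_api_key
```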
If you chose Mistral instead, set its API key the same way:

```python
mistral_api_key = getpass('Enter your API key: ')
os.environ["MISTRAL_API_KEY"] = mistral_api_key
```
Alternatively, if you are running on Colab, you can save your API keys and tokens as Colab Secrets and reuse them across notebooks. To do so, comment out the manual API key prompt code block(s) above and uncomment the following code block. ⚠️ Don't forget to grant the current notebook access to the API keys you will be using.
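A sketch of what that Colab Secrets cell could look like; the secret names `FIRECRAWL_API_KEY`, `QWEN_API_KEY`, and `MISTRAL_API_KEY` are assumptions here, so use whichever names you saved your secrets under:

```python
# import os
# from google.colab import userdata

# os.environ["FIRECRAWL_API_KEY"] = userdata.get("FIRECRAWL_API_KEY")
# os.environ["QWEN_API_KEY"] = userdata.get("QWEN_API_KEY")
# os.environ["MISTRAL_API_KEY"] = userdata.get("MISTRAL_API_KEY")
```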
Qwen is a large language model developed by Alibaba Cloud. It is trained on a massive dataset of text and code, and can generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.

Use the Qwen model:
```python
from camel.configs import QwenConfig, MistralConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

qwen_model = ModelFactory.create(
    model_platform=ModelPlatformType.QWEN,
    model_type=ModelType.QWEN_TURBO,
    model_config_dict=QwenConfig(temperature=0.2).as_dict(),
)

mistral_model = ModelFactory.create(
    model_platform=ModelPlatformType.MISTRAL,
    model_type=ModelType.MISTRAL_LARGE,
    model_config_dict=MistralConfig(temperature=0.0).as_dict(),
)

# Use the Qwen model
model = qwen_model

# Replace with mistral_model if you want to use the Mistral model instead
# model = mistral_model
```
```python
from camel.agents import ChatAgent
from camel.messages import BaseMessage

agent = ChatAgent(
    system_message="You're a helpful assistant",
    message_window_size=10,
    model=model,
)

knowledge_message = BaseMessage.make_user_message(
    role_name="User", content=f"Based on the following knowledge: {knowledge}"
)
agent.update_memory(knowledge_message, "user")
```
```python
print("Start chatting! Type 'exit' to end the conversation.")

while True:
    user_input = input("User: ")

    if user_input.lower() == "exit":
        print("Ending conversation.")
        break

    assistant_response = agent.step(user_input)
    print(f"Assistant: {assistant_response.msgs[0].content}")
```
```python
# import os
# from google.colab import userdata

# os.environ["DISCORD_BOT_TOKEN"] = userdata.get("DISCORD_BOT_TOKEN")
```
This code cell sets up a simple Discord bot using the DiscordApp class from the camel.bots library. The bot listens for messages in any channel it has access to and provides a response based on the input message.
```python
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()

discord_bot = DiscordApp(token=discord_bot_token)

@discord_bot.client.event
async def on_message(message: discord.Message):
    # Ignore the bot's own messages
    if message.author == discord_bot.client.user:
        return

    # Only respond to regular messages
    if message.type != discord.MessageType.default:
        return

    # Ignore messages from other bots
    if message.author.bot:
        return

    user_input = message.content

    # Reset the agent and re-inject the knowledge for each new message
    agent.reset()
    agent.update_memory(knowledge_message, "user")
    assistant_response = agent.step(user_input)

    response_content = assistant_response.msgs[0].content

    # Discord enforces a 2000-character message length limit
    if len(response_content) > 2000:
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_bot.run()
```
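The 2000-character split inlined above can be sanity-checked on its own. In this small sketch the `split_message` helper is our own name, not part of `camel`; it shows that the chunks are each within Discord's limit and reassemble into the original reply:

```python
def split_message(text, limit=2000):
    # Split text into pieces no longer than Discord's message length limit.
    return [text[i:i + limit] for i in range(0, len(text), limit)]

long_reply = "x" * 4500
chunks = split_message(long_reply)
print([len(c) for c in chunks])  # three chunks: 2000, 2000, 500
```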
Integrating Qdrant for Large Files to build a more powerful Discord bot
Qdrant is a vector similarity search engine and vector database, designed to perform fast and efficient similarity searches over large collections of vectors. This lets the chatbot access and use external information to provide more comprehensive and accurate responses. By storing knowledge as vectors, Qdrant enables efficient semantic search, allowing the chatbot to find relevant information based on the meaning of the user's query.

Set up an embedding model and retriever for Qdrant:
```python
from camel.embeddings import SentenceTransformerEncoder

sentence_encoder = SentenceTransformerEncoder(model_name='intfloat/e5-large-v2')
```
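To make "semantic search" concrete before wiring up Qdrant, here is a toy, self-contained sketch of similarity ranking. The three-dimensional vectors stand in for real embeddings and are made up for illustration; in practice the encoder above produces much higher-dimensional vectors:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy "knowledge base": each entry pairs a text chunk with a made-up embedding.
store = [
    ("Qdrant is a vector database", [0.9, 0.1, 0.0]),
    ("Discord bots respond to messages", [0.1, 0.9, 0.1]),
    ("CAMEL builds multi-agent systems", [0.2, 0.3, 0.9]),
]

query_vector = [0.8, 0.2, 0.1]  # made-up embedding of the user's query

# Rank stored chunks by similarity to the query, highest first.
ranked = sorted(store, key=lambda item: cosine_similarity(query_vector, item[1]), reverse=True)
print(ranked[0][0])  # the chunk whose vector points closest to the query's
```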
Set up the AutoRetriever for automatically retrieving relevant information from a storage system.
```python
from camel.retrievers import AutoRetriever
from camel.types import StorageType

assistant_sys_msg = """You are a helpful assistant to answer questions.
I will give you the Original Query and Retrieved Context; answer the
Original Query based on the Retrieved Context. If you can't answer the
question, just say I don't know."""

auto_retriever = AutoRetriever(
    vector_storage_local_path="local_data2/",
    storage_type=StorageType.QDRANT,
    embedding_model=sentence_encoder,
)

qdrant_agent = ChatAgent(system_message=assistant_sys_msg, model=model)
```
Use Auto RAG to retrieve first and then answer the user’s query using CAMEL ChatAgent based on the retrieved info:
```python
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()

discord_q_bot = DiscordApp(token=discord_bot_token)

@discord_q_bot.client.event  # triggered when a message is sent in the channel
async def on_message(message: discord.Message):
    # Ignore the bot's own messages
    if message.author == discord_q_bot.client.user:
        return

    # Only respond to regular messages
    if message.type != discord.MessageType.default:
        return

    # Ignore messages from other bots
    if message.author.bot:
        return

    user_input = message.content

    # Retrieve the most relevant chunks for the user's query
    retrieved_info = auto_retriever.run_vector_retriever(
        query=user_input,
        contents=[
            "local_data/qdrant_overview.md",
        ],
        top_k=3,
        return_detailed_info=False,
        similarity_threshold=0.5,
    )

    user_msg = str(retrieved_info)
    assistant_response = qdrant_agent.step(user_msg)
    response_content = assistant_response.msgs[0].content

    # Discord enforces a 2000-character message length limit
    if len(response_content) > 2000:
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_q_bot.run()
```
That's everything! Got questions about 🐫 CAMEL-AI? Join us on Discord! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we'd love to have you in the community! 🤝

Check out some of our other work:
🧑⚖️ Create A Hackathon Judge Committee with Workforce free Colab
🔥 3 ways to ingest data from websites with Firecrawl & CAMEL free Colab
🦥 Agentic SFT Data Generation with CAMEL and Mistral Models, Fine-Tuned with Unsloth free Colab
Thanks from everyone at 🐫 CAMEL-AI!

⭐ Star the Repo

If you find CAMEL useful or interesting, please consider giving it a star on our CAMEL GitHub Repo! Your stars help others find this project and motivate us to continue improving it.