```python
import os
from getpass import getpass

samba_api_key = getpass('Enter your API key: ')
os.environ["SAMBA_API_KEY"] = samba_api_key
```
Alternatively, if you are running on Colab, you can save your API keys and tokens as Colab Secrets and reuse them across notebooks. To do so, comment out the manual API key prompt above and uncomment the following code block. ⚠️ Don’t forget to grant the current notebook access to the API key you are using.
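The Colab variant could look like the following (left commented out, mirroring the pattern used for the Discord token later in this tutorial; `google.colab.userdata` only works inside Colab, and `SAMBA_API_KEY` must first be added as a Colab Secret):

```python
# import os
# from google.colab import userdata
# os.environ["SAMBA_API_KEY"] = userdata.get("SAMBA_API_KEY")
```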
Qwen is a large language model developed by Alibaba. It is trained on a massive dataset of text and code, and it can generate text, translate languages, write various kinds of creative content, and answer questions in an informative way.

Use Qwen models with SambaNova Cloud to set up a CAMEL agent:
```python
from camel.configs import SambaCloudAPIConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType
from camel.agents import ChatAgent
from camel.messages import BaseMessage

#### Set up Agent using Qwen2.5-Coder-32B-Instruct ####
qwen_model = ModelFactory.create(
    model_platform=ModelPlatformType.SAMBA,
    model_type="Qwen2.5-Coder-32B-Instruct",
    model_config_dict=SambaCloudAPIConfig(max_tokens=4000).as_dict(),
)

# #### Set up Agent using Qwen2.5-72B-Instruct ####
# qwen_model = ModelFactory.create(
#     model_platform=ModelPlatformType.SAMBA,
#     model_type="Qwen2.5-72B-Instruct",
#     model_config_dict=SambaCloudAPIConfig(max_tokens=4000).as_dict(),
# )

chat_agent = ChatAgent(
    system_message="You're a helpful assistant",
    message_window_size=20,
    model=qwen_model,
)
```
Insert the external knowledge into the agent:
```python
knowledge_message = BaseMessage.make_user_message(
    role_name="User",
    content=f"Based on the following knowledge: {knowledge}",
)
chat_agent.update_memory(knowledge_message, "user")
```
Let’s set up a basic chatbot with the CAMEL agent and ask some questions! An example question you could ask: *How does SambaNova Cloud support Qwen 2.5 Coder, and how fast is it?*
```python
print("Start chatting! Type 'exit' to end the conversation.")

while True:
    user_input = input("User: ")
    if user_input.lower() == "exit":
        print("Ending conversation.")
        break
    assistant_response = chat_agent.step(user_input)
    print(f"Assistant: {assistant_response.msgs[0].content}")
```
```python
# import os
# from google.colab import userdata
# os.environ["DISCORD_BOT_TOKEN"] = userdata.get("DISCORD_BOT_TOKEN")
```
This code cell sets up a simple Discord bot using the DiscordApp class from the camel.bots library. The bot listens for messages in any channel it has access to and provides a response based on the input message.
```python
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()
discord_bot = DiscordApp(token=discord_bot_token)

@discord_bot.client.event
async def on_message(message: discord.Message):
    if message.author == discord_bot.client.user:
        return
    if message.type != discord.MessageType.default:
        return
    if message.author.bot:
        return

    user_input = message.content
    chat_agent.reset()
    chat_agent.update_memory(knowledge_message, "user")
    assistant_response = chat_agent.step(user_input)
    response_content = assistant_response.msgs[0].content

    if len(response_content) > 2000:  # Discord message length limit
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_bot.run()
```
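The 2000-character splitting at the end of the handler can be factored into a small standalone helper, which is easier to test in isolation. A minimal sketch (`split_message` is a hypothetical name, not part of CAMEL or discord.py):

```python
def split_message(text: str, limit: int = 2000) -> list[str]:
    """Split text into chunks that each fit under Discord's message length limit."""
    return [text[i:i + limit] for i in range(0, len(text), limit)]

# A 4500-character reply is sent as three messages of 2000, 2000, and 500 characters.
chunks = split_message("x" * 4500)
print([len(c) for c in chunks])  # [2000, 2000, 500]
```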
Integrating Qdrant to Handle More Files and Build a More Powerful Discord Bot
Qdrant is a vector similarity search engine and vector database, designed for fast and efficient similarity search over large collections of vectors. By storing knowledge as vectors, Qdrant enables efficient semantic search: the chatbot can find relevant information based on the meaning of the user’s query and draw on external information to give more comprehensive and accurate responses.

In this section, we will add more data sources, including CAMEL’s example code for using SambaNova Cloud, and then ask more complex questions.

Set up an embedding model and retriever for Qdrant:
You can use a Tesla T4 Google Colab instance to run open-source embedding models for the bot’s RAG functionality; feel free to switch to other embedding models supported by CAMEL.
```python
from camel.embeddings import SentenceTransformerEncoder  # CAMEL also supports other embedding models

sentence_encoder = SentenceTransformerEncoder(model_name='intfloat/e5-large-v2')
```
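To illustrate the semantic-search idea behind the encoder, here is a toy sketch using hand-made 3-dimensional vectors and cosine similarity (real `e5-large-v2` embeddings have 1024 dimensions; the vectors and document names below are made up for illustration):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query_vec = [0.9, 0.1, 0.0]
docs = {
    "sambanova_announcement": [0.8, 0.2, 0.1],  # points roughly the same way as the query
    "unrelated_recipe": [0.0, 0.1, 0.9],        # points in a very different direction
}
best = max(docs, key=lambda name: cosine_similarity(query_vec, docs[name]))
print(best)  # sambanova_announcement
```

A vector store like Qdrant does the same ranking at scale, over millions of stored embedding vectors.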
Set up the AutoRetriever for retrieving relevant information from a storage system.
```python
from camel.retrievers import AutoRetriever
from camel.types import StorageType

assistant_sys_msg = """You are a helpful assistant to answer questions.
I will give you the Original Query and Retrieved Context;
answer the Original Query based on the Retrieved Context.
If you can't answer the question, just say "I don't know".
Just give the answer to me directly, no other words needed."""

auto_retriever = AutoRetriever(
    vector_storage_local_path="local_data2/",
    storage_type=StorageType.QDRANT,
    embedding_model=sentence_encoder,
)

chat_agent_with_rag = ChatAgent(system_message=assistant_sys_msg, model=qwen_model)
```
Use Auto RAG to retrieve relevant information first, then answer the user’s query with the CAMEL ChatAgent based on the retrieved info:
```python
from camel.bots import DiscordApp
import nest_asyncio
import discord

nest_asyncio.apply()
discord_q_bot = DiscordApp(token=discord_bot_token)

@discord_q_bot.client.event  # triggers when a message is sent in the channel
async def on_message(message: discord.Message):
    if message.author == discord_q_bot.client.user:
        return
    if message.type != discord.MessageType.default:
        return
    if message.author.bot:
        return

    user_input = message.content
    query_and_retrieved_info = auto_retriever.run_vector_retriever(
        query=user_input,
        contents=[
            "local_data/sambanova_announcement.md",  # SambaNova's announcement
            "https://github.com/camel-ai/camel/blob/master/examples/models/samba_model_example.py",  # CAMEL's example code for SambaNova usage
        ],
        top_k=3,
        return_detailed_info=False,
        similarity_threshold=0.5,
    )

    user_msg = str(query_and_retrieved_info)
    assistant_response = chat_agent_with_rag.step(user_msg)
    response_content = assistant_response.msgs[0].content

    if len(response_content) > 2000:  # Discord message length limit
        for chunk in [response_content[i:i + 2000] for i in range(0, len(response_content), 2000)]:
            await message.channel.send(chunk)
    else:
        await message.channel.send(response_content)

discord_q_bot.run()
```
Start from the same query as before. Since we also added CAMEL’s example code to the RAG bot, you can now ask code-related questions: for example, ask the bot to guide you through setting up Qwen2.5-Coder-32B-Instruct. CAMEL’s bot, equipped with memory capabilities, can assist effectively by recalling related information from previous interactions!

That’s everything! Got questions about 🐫 CAMEL-AI? Join us on Discord! Whether you want to share feedback, explore the latest in multi-agent systems, get support, or connect with others on exciting projects, we’d love to have you in the community! 🤝

Check out some of our other work: