You can also check this cookbook in Colab.

Goal: Star a repository on GitHub with natural language & a CAMEL Agent
Integrate Composio with CAMEL agents to let them seamlessly interact with external apps.

Ensure you have the necessary packages installed and connect your GitHub account so that your CAMEL-AI agents can use GitHub functionality.
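The exact install command is not shown in this excerpt; a typical setup might look like the following (the package names `camel-ai` and `composio-camel` are assumptions based on the PyPI names of CAMEL and its Composio plugin):

```shell
# Install CAMEL and the Composio-CAMEL plugin
# (package names assumed: camel-ai and composio-camel)
!pip install "camel-ai[all]" composio-camel
```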
```shell
# Connect your GitHub account (this is a shell command, so it should be run in
# your terminal or with the '!' prefix in a Jupyter Notebook)
!composio add github

# Check all the different apps which you can connect with
!composio apps
```
```python
import os
from getpass import getpass

# Prompt for the API key securely
openai_api_key = getpass('Enter your API key: ')
os.environ["OPENAI_API_KEY"] = openai_api_key
```
Alternatively, if running on Colab, you can save your API keys and tokens as Colab Secrets and use them across notebooks. To do so, comment out the manual API key prompt above and uncomment the following code block. ⚠️ Don't forget to grant the current notebook access to the API key you are using.
```python
# import os
# from google.colab import userdata

# os.environ["OPENAI_API_KEY"] = userdata.get("OPENAI_API_KEY")
```
Let's run CAMEL agents with tools from Composio!
```python
# Set your task
task_prompt = (
    "I have created a new Github Repo,"
    "Please star my github repository: camel-ai/camel"
)
```
```python
from composio_camel import Action, ComposioToolSet

# Set up the Composio toolset and fetch the GitHub star action as a CAMEL tool
composio_toolset = ComposioToolSet()
tools = composio_toolset.get_actions(
    actions=[Action.GITHUB_STAR_A_REPOSITORY_FOR_THE_AUTHENTICATED_USER]
)
```
```python
from camel.configs import ChatGPTConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Set models for the user agent and assistant agent; give the tools to the assistant
assistant_agent_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_3_5_TURBO,
    model_config_dict=ChatGPTConfig(tools=tools).as_dict(),
)

user_agent_model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_3_5_TURBO,
    model_config_dict=ChatGPTConfig().as_dict(),
)
```
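The later cells reference a `role_play_session` that is never constructed in this excerpt. Below is a minimal sketch of how such a session might be set up with CAMEL's `RolePlaying` society; the role names and keyword arguments are assumptions based on CAMEL's role-playing examples, not code from the original notebook:

```python
from camel.societies import RolePlaying

# Sketch: construct the role-playing session used by the cells below.
# Role names and kwargs here are illustrative assumptions.
role_play_session = RolePlaying(
    assistant_role_name="Developer",  # assumed role name
    user_role_name="CAMEL User",      # assumed role name
    assistant_agent_kwargs=dict(
        model=assistant_agent_model,
        tools=tools,  # hand the Composio GitHub tools to the assistant
    ),
    user_agent_kwargs=dict(model=user_agent_model),
    task_prompt=task_prompt,
    with_task_specify=True,  # lets CAMEL refine the task into a specified prompt
)
```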
```python
from colorama import Fore

# Print the system messages and task prompts
print(
    Fore.GREEN
    + f"AI Assistant sys message:\n{role_play_session.assistant_sys_msg}\n"
)
print(Fore.BLUE + f"AI User sys message:\n{role_play_session.user_sys_msg}\n")
print(Fore.YELLOW + f"Original task prompt:\n{task_prompt}\n")
print(
    Fore.CYAN
    + "Specified task prompt:"
    + f"\n{role_play_session.specified_task_prompt}\n"
)
print(Fore.RED + f"Final task prompt:\n{role_play_session.task_prompt}\n")
```
```python
from typing import List

from camel.agents.chat_agent import FunctionCallingRecord
from camel.utils import print_text_animated

# Set the termination rule and print the chat messages
n = 0
input_msg = role_play_session.init_chat()
while n < 50:
    n += 1
    assistant_response, user_response = role_play_session.step(input_msg)

    if assistant_response.terminated:
        print(
            Fore.GREEN
            + (
                "AI Assistant terminated. Reason: "
                f"{assistant_response.info['termination_reasons']}."
            )
        )
        break
    if user_response.terminated:
        print(
            Fore.GREEN
            + (
                "AI User terminated. "
                f"Reason: {user_response.info['termination_reasons']}."
            )
        )
        break

    # Print output from the user
    print_text_animated(
        Fore.BLUE + f"AI User:\n\n{user_response.msg.content}\n"
    )

    # Print output from the assistant, including any function
    # execution information
    print_text_animated(Fore.GREEN + "AI Assistant:")
    tool_calls: List[FunctionCallingRecord] = assistant_response.info[
        'tool_calls'
    ]
    for func_record in tool_calls:
        print_text_animated(f"{func_record}")
    print_text_animated(f"{assistant_response.msg.content}\n")

    if "CAMEL_TASK_DONE" in user_response.msg.content:
        break

    input_msg = assistant_response.msg
```