In CAMEL, every model refers specifically to a Large Language Model (LLM): the intelligent core powering your agent’s understanding, reasoning, and conversational capabilities.
Large Language Models (LLMs)
LLMs are sophisticated AI systems trained on vast datasets to understand and generate human-like text. They reason, summarize, create content, and drive conversations effortlessly.
Flexible Model Integration
CAMEL allows quick integration and swapping of leading LLMs from providers like OpenAI, Gemini, Llama, Anthropic, Nebius, and more, helping you match the best model to your task.
Optimized for Customization
Customize performance parameters such as temperature, token limits, and response structures easily, balancing creativity, accuracy, and efficiency.
Rapid Experimentation
Experiment freely: CAMEL’s modular design lets you seamlessly compare and benchmark different LLMs, adapting swiftly as your project needs evolve.
Supported Model Platforms in CAMEL
CAMEL supports a wide range of models, including OpenAI’s GPT series, Meta’s Llama models, DeepSeek models (R1 and other variants), and more.
Direct Integrations
Model Platform | Model Type(s) |
---|---|
OpenAI | gpt-4.5-preview, gpt-4o, gpt-4o-mini, o1, o1-preview, o1-mini, o3-mini, o3-pro, gpt-4-turbo, gpt-4, gpt-3.5-turbo |
Azure OpenAI | gpt-4o, gpt-4-turbo, gpt-4, gpt-3.5-turbo |
Mistral AI | mistral-large-latest, pixtral-12b-2409, ministral-8b-latest, ministral-3b-latest, open-mistral-nemo, codestral-latest, open-mistral-7b, open-mixtral-8x7b, open-mixtral-8x22b, open-codestral-mamba, mistral-small-2506, mistral-medium-2508, magistral-small-1.2, magistral-medium-1.2 |
Moonshot | moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k |
Anthropic | claude-2.1, claude-2.0, claude-instant-1.2, claude-3-opus-latest, claude-3-sonnet-20240229, claude-3-haiku-20240307, claude-3-5-sonnet-latest, claude-3-5-haiku-latest |
Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-flash-lite |
Lingyiwanwu | yi-lightning, yi-large, yi-medium, yi-large-turbo, yi-vision, yi-medium-200k, yi-spark, yi-large-rag, yi-large-fc |
Qwen | qwen3-coder-plus, qwq-32b-preview, qwen-max, qwen-plus, qwen-turbo, qwen-long, qwen-vl-max, qwen-vl-plus, qwen-math-plus, qwen-math-turbo, qwen-coder-turbo, qwen2.5-coder-32b-instruct, qwen2.5-72b-instruct, qwen2.5-32b-instruct, qwen2.5-14b-instruct |
DeepSeek | deepseek-chat, deepseek-reasoner |
CometAPI | All models available on CometAPI, including: gpt-5-chat-latest, gpt-5, gpt-5-mini, gpt-5-nano, claude-opus-4-1-20250805, claude-sonnet-4-20250514, claude-3-7-sonnet-latest, gemini-2.5-pro, gemini-2.5-flash, grok-4-0709, grok-3, deepseek-v3.1, deepseek-v3, deepseek-r1-0528, qwen3-30b-a3b |
Nebius | All models available on Nebius AI Studio, including: gpt-oss-120b, gpt-oss-20b, GLM-4.5, DeepSeek V3 & R1, LLaMA, Mistral, and more |
ZhipuAI | glm-4, glm-4v, glm-4v-flash, glm-4v-plus-0111, glm-4-plus, glm-4-air, glm-4-air-0111, glm-4-airx, glm-4-long, glm-4-flashx, glm-zero-preview, glm-4-flash, glm-3-turbo |
InternLM | internlm3-latest, internlm3-8b-instruct, internlm2.5-latest, internlm2-pro-chat |
Reka | reka-core, reka-flash, reka-edge |
COHERE | command-r-plus, command-r, command-light, command, command-nightly |
API & Connector Platforms
Model Platform | Supported via API/Connector |
---|---|
GROQ | supported models |
TOGETHER AI | supported models |
SambaNova | supported models |
Ollama | supported models |
OpenRouter | supported models |
PPIO | supported models |
LiteLLM | supported models |
LMStudio | supported models |
vLLM | supported models |
SGLANG | supported models |
NetMind | supported models |
NOVITA | supported models |
NVIDIA | supported models |
AIML | supported models |
ModelScope | supported models |
AWS Bedrock | supported models |
IBM WatsonX | supported models |
Crynux | supported models |
Qianfan | supported models |
How to Use Models via API Calls
Integrate your favorite models into CAMEL-AI with straightforward Python calls. Choose a provider below to see how it’s done.
Here’s how you use OpenAI models such as GPT-4o-mini with CAMEL:
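A minimal sketch, assuming `camel-ai` is installed and `OPENAI_API_KEY` is exported in your environment; enum and parameter names may vary slightly between CAMEL releases:

```python
from camel.agents import ChatAgent
from camel.configs import ChatGPTConfig
from camel.models import ModelFactory
from camel.types import ModelPlatformType, ModelType

# Build an OpenAI-backed model; temperature and max_tokens are tunable knobs.
model = ModelFactory.create(
    model_platform=ModelPlatformType.OPENAI,
    model_type=ModelType.GPT_4O_MINI,
    model_config_dict=ChatGPTConfig(temperature=0.2, max_tokens=1024).as_dict(),
)

# Wrap the model in a ChatAgent and send a single message.
agent = ChatAgent(system_message="You are a helpful assistant.", model=model)
response = agent.step("Say hi to CAMEL AI.")
print(response.msgs[0].content)
```

Swapping providers usually only requires changing `model_platform`, `model_type`, and the matching config class; the agent code stays the same.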
Using On-Device Open Source Models
Run Open-Source LLMs Locally
Unlock true flexibility: CAMEL-AI supports running popular LLMs right on your own machine. Use Ollama, vLLM, or SGLang to experiment, prototype, or deploy privately (no cloud required).
1. Using Ollama for Llama 3
Step 1: Install Ollama
Download Ollama and follow the installation steps for your OS.
Step 2: Pull the Llama 3 model
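With Ollama installed, pulling the weights is typically a single terminal command such as `ollama pull llama3` (substitute the exact model tag you want to run).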
Step 3: (Optional) Create a Custom Model
Create a file named `Llama3ModelFile` with your custom model configuration. You can also create a shell script `setup_llama3.sh` to automate the setup.
Step 4: Integrate with CAMEL-AI
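A minimal sketch of the CAMEL side, assuming an Ollama server running on its default port with the pulled `llama3` model (parameter names reflect a recent `camel-ai` release):

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# Point CAMEL at the locally running Ollama server (OpenAI-compatible endpoint).
model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="llama3",                      # or the name of your custom model
    url="http://localhost:11434/v1",          # default Ollama endpoint
    model_config_dict={"temperature": 0.4},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=model)
print(agent.step("Hello from a local Llama 3!").msgs[0].content)
```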
2. Using vLLM for Phi-3
Step 1: Install vLLM
Follow the vLLM installation guide for your environment.
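In most environments this amounts to `pip install vllm` inside a Python virtual environment, but check the guide for GPU and CUDA requirements specific to your setup.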
Step 2: Start the vLLM server
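For Phi-3 this is usually something like `python -m vllm.entrypoints.openai.api_server --model microsoft/Phi-3-mini-4k-instruct` (exact command and flags depend on your vLLM version), which exposes an OpenAI-compatible endpoint on port 8000 by default.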
Step 3: Integrate with CAMEL-AI
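A minimal sketch, assuming the vLLM OpenAI-compatible server from the previous step is listening on `http://localhost:8000/v1`:

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# Connect to the locally running vLLM server via its OpenAI-compatible API.
model = ModelFactory.create(
    model_platform=ModelPlatformType.VLLM,
    model_type="microsoft/Phi-3-mini-4k-instruct",  # must match the served model
    url="http://localhost:8000/v1",                 # default vLLM server address
    model_config_dict={"temperature": 0.0},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=model)
print(agent.step("Summarize what vLLM does in one sentence.").msgs[0].content)
```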
3. Using SGLang for Meta-Llama
Step 1: Install SGLang
Follow the SGLang install instructions for your platform.
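This is commonly `pip install "sglang[all]"`, but follow the platform-specific instructions for GPU support and serving backends.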
Step 2: Integrate with CAMEL-AI
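A minimal sketch, assuming SGLang is installed and CAMEL can launch or reach an SGLang runtime for the named model (the model id shown is an assumption; check your CAMEL version for the exact platform enum and supported models):

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# Use an SGLang-served Meta-Llama model through CAMEL's model factory.
model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
    model_type="meta-llama/Meta-Llama-3.1-8B-Instruct",  # assumed model id
    model_config_dict={"temperature": 0.3},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=model)
print(agent.step("Hello from an SGLang-served Llama!").msgs[0].content)
```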
Looking for more examples?
Explore the full CAMEL-AI Examples library for advanced workflows, tool integrations, and multi-agent demos.
Next Steps
You’ve now seen how to connect, configure, and optimize models with CAMEL-AI.
Continue: Working with Messages
Learn how to create, format, and convert BaseMessage objects—the backbone of agent conversations in CAMEL-AI.