Model Platform | Model Type(s) |
---|---|
OpenAI | gpt-4.5-preview, gpt-4o, gpt-4o-mini, o1, o1-preview, o1-mini, o3-mini, o3-pro, gpt-4-turbo, gpt-4, gpt-3.5-turbo |
Azure OpenAI | gpt-4o, gpt-4-turbo, gpt-4, gpt-3.5-turbo |
Mistral AI | mistral-large-latest, pixtral-12b-2409, ministral-8b-latest, ministral-3b-latest, open-mistral-nemo, codestral-latest, open-mistral-7b, open-mixtral-8x7b, open-mixtral-8x22b, open-codestral-mamba, magistral-medium-2506, mistral-small-2506 |
Moonshot | moonshot-v1-8k, moonshot-v1-32k, moonshot-v1-128k |
Anthropic | claude-2.1, claude-2.0, claude-instant-1.2, claude-3-opus-latest, claude-3-sonnet-20240229, claude-3-haiku-20240307, claude-3-5-sonnet-latest, claude-3-5-haiku-latest |
Gemini | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash, gemini-2.0-flash-thinking, gemini-2.0-flash-lite |
Lingyiwanwu | yi-lightning, yi-large, yi-medium, yi-large-turbo, yi-vision, yi-medium-200k, yi-spark, yi-large-rag, yi-large-fc |
Qwen | qwen3-coder-plus, qwq-32b-preview, qwen-max, qwen-plus, qwen-turbo, qwen-long, qwen-vl-max, qwen-vl-plus, qwen-math-plus, qwen-math-turbo, qwen-coder-turbo, qwen2.5-coder-32b-instruct, qwen2.5-72b-instruct, qwen2.5-32b-instruct, qwen2.5-14b-instruct |
DeepSeek | deepseek-chat, deepseek-reasoner |
ZhipuAI | glm-4, glm-4v, glm-4v-flash, glm-4v-plus-0111, glm-4-plus, glm-4-air, glm-4-air-0111, glm-4-airx, glm-4-long, glm-4-flashx, glm-zero-preview, glm-4-flash, glm-3-turbo |
InternLM | internlm3-latest, internlm3-8b-instruct, internlm2.5-latest, internlm2-pro-chat |
Reka | reka-core, reka-flash, reka-edge |
COHERE | command-r-plus, command-r, command-light, command, command-nightly |
Model Platform | Supported via API/Connector |
---|---|
GROQ | supported models |
TOGETHER AI | supported models |
SambaNova | supported models |
Ollama | supported models |
OpenRouter | supported models |
PPIO | supported models |
LiteLLM | supported models |
LMStudio | supported models |
vLLM | supported models |
SGLANG | supported models |
NetMind | supported models |
NOVITA | supported models |
NVIDIA | supported models |
AIML | supported models |
ModelScope | supported models |
AWS Bedrock | supported models |
IBM WatsonX | supported models |
Crynux | supported models |
qianfan | supported models |
Using Ollama for Llama 3
Install Ollama
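On Linux, Ollama ships an official install script; macOS and Windows users can instead download the installer from ollama.com. A typical Linux install looks like this (the verification step simply prints the installed version):

```bash
# Download and run the official Ollama install script (Linux).
# macOS/Windows: use the installer from https://ollama.com instead.
curl -fsSL https://ollama.com/install.sh | sh

# Confirm the CLI is available
ollama --version
```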
Pull the Llama 3 model
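With Ollama installed, pulling the weights is a single command. The bare `llama3` tag resolves to the default instruct variant at the time of writing; append a size tag (e.g. `llama3:70b`) if you want a different one.

```bash
ollama pull llama3
```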
(Optional) Create a Custom Model
Llama3ModelFile
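A minimal sketch of such a Modelfile, using standard Ollama directives (`FROM`, `PARAMETER`, `SYSTEM`); the temperature value and the empty system prompt below are placeholders to adapt to your use case:

```
# Llama3ModelFile -- build on top of the base llama3 model
FROM llama3

# Sampling parameter (placeholder value)
PARAMETER temperature 0.8

# Custom system prompt; intentionally left blank here
SYSTEM """ """
```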
setup_llama3.sh
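A small helper script of this shape pulls the base model and registers the customized one from the Modelfile; the custom model name used below is an assumption, rename it as you like:

```bash
#!/bin/bash
# setup_llama3.sh -- pull the base model and build a custom model from the Modelfile

base_model="llama3"
custom_model="camel-llama3"   # assumed custom model name; change as needed

# Pull the base model (no-op if it is already present)
ollama pull "$base_model"

# Register the custom model from the Modelfile in this directory
ollama create "$custom_model" -f ./Llama3ModelFile
```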
Integrate with CAMEL-AI
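A minimal integration sketch, assuming a recent `camel-ai` release where `ModelFactory.create` accepts `ModelPlatformType.OLLAMA` together with the (possibly custom) model name and the local Ollama endpoint; exact keyword names may vary slightly between versions:

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

# Point CAMEL at the locally served Ollama model.
# "llama3" can be replaced with the custom model name created above.
ollama_model = ModelFactory.create(
    model_platform=ModelPlatformType.OLLAMA,
    model_type="llama3",
    url="http://localhost:11434/v1",          # default local Ollama endpoint
    model_config_dict={"temperature": 0.4},   # placeholder sampling settings
)

agent = ChatAgent(
    system_message="You are a helpful assistant.",
    model=ollama_model,
)

response = agent.step("Say hello to CAMEL-AI.")
print(response.msg.content)
```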
Using vLLM for Phi-3
Install vLLM
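vLLM is distributed on PyPI; the command below assumes a CUDA-capable GPU with a matching PyTorch build (see the vLLM docs for CPU or ROCm installs):

```bash
pip install vllm
```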
Start the vLLM server
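The command below starts vLLM's OpenAI-compatible server with one example Phi-3 checkpoint from Hugging Face; the model id and port are assumptions you can swap out for your own setup:

```bash
# Serve Phi-3 mini behind an OpenAI-compatible API on port 8000 (the default)
vllm serve microsoft/Phi-3-mini-4k-instruct --port 8000

# Equivalent older-style invocation:
# python -m vllm.entrypoints.openai.api_server --model microsoft/Phi-3-mini-4k-instruct
```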
Integrate with CAMEL-AI
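A sketch of the CAMEL-AI side, assuming the server above is reachable at `http://localhost:8000/v1` and that `ModelPlatformType.VLLM` is available in your `camel-ai` version:

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

vllm_model = ModelFactory.create(
    model_platform=ModelPlatformType.VLLM,
    model_type="microsoft/Phi-3-mini-4k-instruct",  # must match the served model id
    url="http://localhost:8000/v1",                 # vLLM's OpenAI-compatible endpoint
    model_config_dict={"temperature": 0.0},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=vllm_model)
print(agent.step("Briefly introduce vLLM.").msg.content)
```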
Using SGLang for Meta-Llama
Install SGLang
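SGLang is also pip-installable; the extras below pull in the server components, and the SGLang project documentation lists additional wheels (e.g. FlashInfer) that some GPU setups require:

```bash
pip install "sglang[all]"
```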
Integrate with CAMEL-AI
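A minimal sketch, assuming `ModelPlatformType.SGLANG` exists in your `camel-ai` version and that you have access to the gated `meta-llama/Meta-Llama-3-8B-Instruct` weights on Hugging Face (any other SGLang-servable model id works the same way):

```python
from camel.agents import ChatAgent
from camel.models import ModelFactory
from camel.types import ModelPlatformType

sglang_model = ModelFactory.create(
    model_platform=ModelPlatformType.SGLANG,
    model_type="meta-llama/Meta-Llama-3-8B-Instruct",
    model_config_dict={"temperature": 0.0},
)

agent = ChatAgent(system_message="You are a helpful assistant.", model=sglang_model)
print(agent.step("Say hello from SGLang.").msg.content)
```

If you prefer to manage the server yourself, SGLang can be launched separately with `python -m sglang.launch_server --model-path meta-llama/Meta-Llama-3-8B-Instruct --port 30000`, in which case you would point the model's `url` at that endpoint instead.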