# Supported Models
Ovexa provides access to models from 9+ AI providers through a single API endpoint. You specify the model in your request, and Ovexa routes it to the correct provider automatically.
## Free Plan Model Restrictions

The Free plan only allows access to `gpt-5.4-nano` and `google/gemini-2.5-flash-lite`. To use other models, upgrade to the Solo plan or higher.
## Model Naming Convention

Models can be referenced in two ways:

- **Bare name** (OpenAI models only): `gpt-4o`, `o1`
- **Provider-prefixed**: `provider/model-name` (e.g., `anthropic/claude-4.6-sonnet`, `google/gemini-2.5-pro`)

OpenAI models work with either format: `gpt-4o` and `openai/gpt-4o` are equivalent.
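Because the two formats are interchangeable for OpenAI models, a client may want to canonicalize model names before logging or caching. A minimal client-side sketch (the `normalize_model` helper is illustrative, not part of the Ovexa API):

```python
# Illustrative helper: canonicalize model names to the provider-prefixed
# form. A bare name is assumed to refer to an OpenAI model, matching the
# naming convention described above.
def normalize_model(name: str) -> str:
    """Return the provider-prefixed form of a model name."""
    if "/" in name:
        return name  # already prefixed, e.g. "anthropic/claude-4.6-sonnet"
    return f"openai/{name}"  # bare names are OpenAI models
```

With this, `gpt-4o` and `openai/gpt-4o` map to the same cache key.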
## OpenAI

| Model | Context Window | Description |
|---|---|---|
| `gpt-5.4` | 256K | Most capable OpenAI model |
| `gpt-5.4-mini` | 256K | Efficient version of GPT-5.4 |
| `gpt-4o` | 128K | Fast, multimodal flagship model |
| `gpt-5.4-nano` | 256K | Affordable small model for lightweight tasks |
| `o1` | 200K | Advanced reasoning model |
| `o1-mini` | 128K | Smaller, faster reasoning model |
## Anthropic

| Model | Context Window | Description |
|---|---|---|
| `anthropic/claude-4.6-opus` | 1M | Most powerful Claude model |
| `anthropic/claude-4.6-sonnet` | 1M | Balanced performance and speed |
| `anthropic/claude-4.6-haiku` | 1M | Fastest Claude model |
## Google

| Model | Context Window | Description |
|---|---|---|
| `google/gemini-3.1-pro` | 2M | Latest Gemini Pro model |
| `google/gemini-2.5-pro` | 1M | Previous generation Pro |
| `google/gemini-2.5-flash` | 1M | Fast and efficient |
| `google/gemini-2.5-flash-lite` | 1M | Lightest Gemini model |
## Mistral

| Model | Context Window | Description |
|---|---|---|
| `mistral/mistral-large-3` | 128K | Mistral flagship model |
| `mistral/mistral-small-4` | 128K | Efficient small model |
| `mistral/pixtral-large` | 128K | Multimodal with vision |
## Groq

| Model | Context Window | Description |
|---|---|---|
| `groq/llama-4-scout` | 128K | Llama 4 on Groq inference |
| `groq/llama-4-maverick` | 128K | Llama 4 Maverick variant |
| `groq/deepseek-v3` | 64K | DeepSeek V3 on Groq |
## DeepSeek

| Model | Context Window | Description |
|---|---|---|
| `deepseek/deepseek-v3.2` | 128K | Latest DeepSeek general model |
| `deepseek/deepseek-reasoner` | 128K | Specialized reasoning model |
## Cohere

| Model | Context Window | Description |
|---|---|---|
| `cohere/command-a` | 256K | Latest Command model |
| `cohere/command-r-plus` | 128K | RAG-optimized model |
## xAI

| Model | Context Window | Description |
|---|---|---|
| `xai/grok-4` | 256K | xAI flagship model |
## Perplexity

| Model | Context Window | Description |
|---|---|---|
| `perplexity/pplx-70b-online` | 128K | Search-augmented model with live data |
## Example: Using Different Models

```bash
# OpenAI (bare name)
curl -X POST https://api.ovexa.ai/v1/chat/completions \
  -H "Authorization: Bearer vpx_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}'

# Anthropic (provider-prefixed)
curl -X POST https://api.ovexa.ai/v1/chat/completions \
  -H "Authorization: Bearer vpx_live_YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "anthropic/claude-4.6-sonnet", "messages": [{"role": "user", "content": "Hello"}]}'
```
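The same request can be built from Python with the standard library. This sketch constructs the request object without sending it; substitute a real key and call `urllib.request.urlopen(req)` to actually send it:

```python
import json
import urllib.request

API_KEY = "vpx_live_YOUR_API_KEY"  # placeholder; use your real key

def build_chat_request(model: str, content: str) -> urllib.request.Request:
    """Build a chat-completions request for the Ovexa endpoint."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.ovexa.ai/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Works identically with a bare OpenAI name or a provider-prefixed name.
req = build_chat_request("anthropic/claude-4.6-sonnet", "Hello")
```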
> **Info:** You must have a valid provider key configured for the provider of the model you are requesting. If you request an Anthropic model but have not added an Anthropic provider key, the request will return a 400 error.
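It can help to surface this case clearly in client code. A sketch of error interpretation, assuming the error body is JSON of the form `{"error": {"message": ...}}` (this shape is an assumption, not documented above, and may differ):

```python
# Sketch: turn a failed gateway response into a human-readable hint.
# The error body shape ({"error": {"message": ...}}) is assumed.
def explain_error(status: int, body: dict) -> str:
    """Return a hint for a failed chat-completions request."""
    message = body.get("error", {}).get("message", "")
    if status == 400:
        # A 400 commonly means no provider key is configured for the
        # requested model's provider (per the note above).
        return (
            "Bad request - check that a provider key is configured for the "
            f"requested model's provider. Server said: {message!r}"
        )
    return f"HTTP {status}: {message!r}"

hint = explain_error(400, {"error": {"message": "no Anthropic key on file"}})
```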