Models
Discover and compare available AI models, their capabilities, rankings, and benchmark scores.
Endpoint
GET https://conduit.im/api/rankings

This endpoint is public and does not require authentication. Responses are cached for up to 1 hour.
Query Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| source | string | INTERNAL | Ranking source. One of: INTERNAL, LMSYS_ARENA, HUGGINGFACE, ARTIFICIAL_ANALYSIS, OPENROUTER |
| category | string | — | Filter by model category. One of: GENERAL, CODING, MATH, REASONING, CREATIVE, OPEN_SOURCE |
| limit | integer | 50 | Number of results to return (1–100) |
| offset | integer | 0 | Number of results to skip for pagination |
| sortBy | string | rank | Sort field. One of: rank, score, name, change |
| sortOrder | string | asc | Sort direction. One of: asc, desc |
| search | string | — | Search by model name, provider, or provider ID (case-insensitive) |
| commercialOnly | boolean | — | When true, only return models available for commercial use |
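The parameters above can be combined into a request URL programmatically. A minimal Python sketch using only the standard library (the base URL comes from the Endpoint section above; the helper function name is illustrative, not part of any SDK):

```python
from urllib.parse import urlencode

BASE_URL = "https://conduit.im/api/rankings"

def build_rankings_url(**params):
    """Build a rankings request URL, dropping any unset parameters."""
    query = {k: v for k, v in params.items() if v is not None}
    return f"{BASE_URL}?{urlencode(query)}" if query else BASE_URL

# Mirrors the example request below: top 10 CODING models by rank.
url = build_rankings_url(category="CODING", limit=10, sortBy="rank", sortOrder="asc")
```

Unset parameters are omitted rather than sent as empty strings, so server-side defaults (such as `limit=50`) apply.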
Example Request
Fetch the top 10 models in the CODING category, sorted by rank:
curl "https://conduit.im/api/rankings?category=CODING&limit=10&sortBy=rank&sortOrder=asc"
Response Format
The response contains ranked model data, pagination info, source health status, and a timestamp.
{
"rankings": [
{
"rank": 1,
"categoryRank": 1,
"change": 0,
"model": {
"id": "claude-opus-4-6",
"name": "Claude Opus 4.6",
"provider": "Anthropic",
"category": "CODING",
"contextWindow": 200000,
"imageSupport": true,
"functionCalling": true,
"commercialUse": true,
"license": "Commercial"
},
"scores": {
"overall": 94.2,
"quality": 96.1,
"speed": 78.5,
"cost": 62.0,
"elo": 1285,
"benchmarks": {
"mmlu": 92.3,
"hellaswag": 95.1,
"humanEval": 90.8
}
},
"metadata": {
"confidence": 0.95,
"dataFreshnessHours": 2,
"lastUpdated": "2026-03-09T12:00:00.000Z"
}
},
{
"rank": 2,
"categoryRank": 2,
"change": 1,
"model": {
"id": "gpt-4o",
"name": "GPT-4o",
"provider": "OpenAI",
"category": "CODING",
"contextWindow": 128000,
"imageSupport": true,
"functionCalling": true,
"commercialUse": true,
"license": "Commercial"
},
"scores": {
"overall": 91.8,
"quality": 93.4,
"speed": 85.2,
"cost": 70.5,
"elo": 1260,
"benchmarks": {
"mmlu": 90.1,
"hellaswag": 93.7,
"humanEval": 88.4
}
},
"metadata": {
"confidence": 0.93,
"dataFreshnessHours": 3,
"lastUpdated": "2026-03-09T11:30:00.000Z"
}
}
],
"pagination": {
"total": 42,
"limit": 10,
"offset": 0,
"hasMore": true
},
"sources": [
{
"source": "INTERNAL",
"isHealthy": true,
"lastSyncAt": "2026-03-09T10:00:00.000Z"
}
],
"timestamp": "2026-03-09T12:05:00.000Z"
}
Model Properties
| Field | Type | Description |
|---|---|---|
| id | string | Provider model ID (e.g., gpt-4o, claude-opus-4-6). Use this as the model parameter in Chat Completions. |
| name | string | Human-readable model name |
| provider | string | Model provider (e.g., OpenAI, Anthropic, Google) |
| category | string | Model category: GENERAL, CODING, MATH, REASONING, CREATIVE, or OPEN_SOURCE |
| contextWindow | integer | Maximum context window size in tokens |
| imageSupport | boolean | Whether the model supports image inputs |
| functionCalling | boolean | Whether the model supports function/tool calling |
| commercialUse | boolean | Whether the model is available for commercial use |
| license | string or null | Model license type (e.g., "Commercial", "Apache 2.0", "MIT") |
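Fields such as contextWindow, imageSupport, and functionCalling make it straightforward to filter a rankings response client-side. A minimal sketch, assuming the response has already been decoded from JSON (the entries here are abbreviated from the sample response above; the helper name is illustrative):

```python
# Abbreviated entries shaped like the sample response above.
rankings = [
    {"model": {"id": "claude-opus-4-6", "imageSupport": True,
               "functionCalling": True, "contextWindow": 200000}},
    {"model": {"id": "gpt-4o", "imageSupport": True,
               "functionCalling": True, "contextWindow": 128000}},
]

def models_with_capabilities(rankings, min_context=0, needs_images=False):
    """Return the model IDs that meet the given capability requirements."""
    return [
        entry["model"]["id"]
        for entry in rankings
        if entry["model"]["contextWindow"] >= min_context
        and (entry["model"]["imageSupport"] or not needs_images)
    ]

ids = models_with_capabilities(rankings, min_context=150000, needs_images=True)
```

With a 150k-token minimum context window, only claude-opus-4-6 qualifies from this sample.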
Scores & Benchmarks
Each ranked model includes composite scores and individual benchmark results. All scores range from 0 to 100 unless otherwise noted.
| Score | Range | Description |
|---|---|---|
| overall | 0–100 | Weighted composite score across all dimensions |
| quality | 0–100 | Output quality and accuracy score |
| speed | 0–100 | Inference speed and latency score |
| cost | 0–100 | Cost efficiency score (higher = more cost-effective) |
| elo | ~800–1400 | Elo rating from head-to-head model comparisons |
| benchmarks.mmlu | 0–100 | MMLU (Massive Multitask Language Understanding) score |
| benchmarks.hellaswag | 0–100 | HellaSwag commonsense reasoning score |
| benchmarks.humanEval | 0–100 | HumanEval code generation score |
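The overall score is computed server-side; its exact weighting is not published here. As an illustration of how a weighted composite over the 0–100 dimensions works, here is a sketch with assumed weights (the weights below are an assumption for demonstration, not the API's actual formula):

```python
# Illustrative only: these weights are an assumption, not the API's
# actual (unpublished) weighting for the `overall` score.
WEIGHTS = {"quality": 0.5, "speed": 0.25, "cost": 0.25}

def composite(scores):
    """Weighted average of the 0-100 dimension scores."""
    return sum(scores[k] * w for k, w in WEIGHTS.items())

# Dimension scores for claude-opus-4-6 from the sample response.
scores = {"quality": 96.1, "speed": 78.5, "cost": 62.0}
value = composite(scores)
```

Note that elo is on a different scale (roughly 800–1400) and would need normalizing before entering a composite like this.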
Filtering & Searching
Combine query parameters to narrow results. Here are some common patterns:
Search by name or provider:
curl "https://conduit.im/api/rankings?search=claude"
Top 5 commercially usable reasoning models:
curl "https://conduit.im/api/rankings?category=REASONING&commercialOnly=true&limit=5"
Models sorted by score (highest first):
curl "https://conduit.im/api/rankings?sortBy=score&sortOrder=desc&limit=20"
Paginate through results:
# Page 1
curl "https://conduit.im/api/rankings?limit=10&offset=0"
# Page 2
curl "https://conduit.im/api/rankings?limit=10&offset=10"
Using Models with Chat Completions
The model.id field in the rankings response (e.g., gpt-4o, claude-opus-4-6) is the value you pass as the model parameter to the Chat Completions API.
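The same request can be assembled in Python using only the standard library. A minimal sketch, mirroring the curl example below (YOUR_API_KEY is a placeholder; actually sending the request requires a valid key):

```python
import json
from urllib.request import Request

API_KEY = "YOUR_API_KEY"  # placeholder; substitute a real key before sending

payload = {
    "model": "claude-opus-4-6",  # any model.id from the rankings response
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = Request(
    "https://api.conduit.im/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# Sending is omitted here; urllib.request.urlopen(req) would perform
# the call once a valid key is supplied.
```
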
curl -X POST "https://api.conduit.im/v1/chat/completions" \
-H "Authorization: Bearer YOUR_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "claude-opus-4-6",
"messages": [{"role": "user", "content": "Hello!"}]
}'
Next Steps
- Chat Completions API — Send requests to the models you discover here
- Error Handling — Handle API errors gracefully
- Live Rankings — Explore model rankings interactively