Conduit.im

Models

Discover and compare available AI models, their capabilities, rankings, and benchmark scores.

Endpoint

GET https://conduit.im/api/rankings

This endpoint is public and does not require authentication. Responses are cached for up to 1 hour.

Query Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| source | string | INTERNAL | Ranking source. One of: INTERNAL, LMSYS_ARENA, HUGGINGFACE, ARTIFICIAL_ANALYSIS, OPENROUTER |
| category | string | — | Filter by model category. One of: GENERAL, CODING, MATH, REASONING, CREATIVE, OPEN_SOURCE |
| limit | integer | 50 | Number of results to return (1–100) |
| offset | integer | 0 | Number of results to skip for pagination |
| sortBy | string | rank | Sort field. One of: rank, score, name, change |
| sortOrder | string | asc | Sort direction. One of: asc, desc |
| search | string | — | Search by model name, provider, or provider ID (case-insensitive) |
| commercialOnly | boolean | — | When true, only return models available for commercial use |
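Requests combining these parameters can be assembled programmatically. A minimal sketch in Python using only the standard library (the endpoint URL is from this page; the helper name is mine):

```python
from urllib.parse import urlencode

BASE_URL = "https://conduit.im/api/rankings"

def rankings_url(**params):
    """Build a rankings request URL, dropping unset parameters.

    Booleans are lowercased so commercialOnly=True becomes "true" on the wire.
    """
    query = {
        k: (str(v).lower() if isinstance(v, bool) else v)
        for k, v in params.items()
        if v is not None
    }
    return f"{BASE_URL}?{urlencode(query)}" if query else BASE_URL

# Top 10 coding models, best rank first.
url = rankings_url(category="CODING", limit=10, sortBy="rank", sortOrder="asc")
```

Pass the resulting URL to any HTTP client; no authentication header is needed for this endpoint.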

Example Request

Fetch the top 10 models in the CODING category, sorted by rank:

curl "https://conduit.im/api/rankings?category=CODING&limit=10&sortBy=rank&sortOrder=asc"

Response Format

The response contains ranked model data, pagination info, source health status, and a timestamp.

{
  "rankings": [
    {
      "rank": 1,
      "categoryRank": 1,
      "change": 0,
      "model": {
        "id": "claude-opus-4-6",
        "name": "Claude Opus 4.6",
        "provider": "Anthropic",
        "category": "CODING",
        "contextWindow": 200000,
        "imageSupport": true,
        "functionCalling": true,
        "commercialUse": true,
        "license": "Commercial"
      },
      "scores": {
        "overall": 94.2,
        "quality": 96.1,
        "speed": 78.5,
        "cost": 62.0,
        "elo": 1285,
        "benchmarks": {
          "mmlu": 92.3,
          "hellaswag": 95.1,
          "humanEval": 90.8
        }
      },
      "metadata": {
        "confidence": 0.95,
        "dataFreshnessHours": 2,
        "lastUpdated": "2026-03-09T12:00:00.000Z"
      }
    },
    {
      "rank": 2,
      "categoryRank": 2,
      "change": 1,
      "model": {
        "id": "gpt-4o",
        "name": "GPT-4o",
        "provider": "OpenAI",
        "category": "CODING",
        "contextWindow": 128000,
        "imageSupport": true,
        "functionCalling": true,
        "commercialUse": true,
        "license": "Commercial"
      },
      "scores": {
        "overall": 91.8,
        "quality": 93.4,
        "speed": 85.2,
        "cost": 70.5,
        "elo": 1260,
        "benchmarks": {
          "mmlu": 90.1,
          "hellaswag": 93.7,
          "humanEval": 88.4
        }
      },
      "metadata": {
        "confidence": 0.93,
        "dataFreshnessHours": 3,
        "lastUpdated": "2026-03-09T11:30:00.000Z"
      }
    }
  ],
  "pagination": {
    "total": 42,
    "limit": 10,
    "offset": 0,
    "hasMore": true
  },
  "sources": [
    {
      "source": "INTERNAL",
      "isHealthy": true,
      "lastSyncAt": "2026-03-09T10:00:00.000Z"
    }
  ],
  "timestamp": "2026-03-09T12:05:00.000Z"
}
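Once decoded, the response is plain nested JSON and can be traversed with ordinary dict access. A sketch against a trimmed copy of the example payload above:

```python
import json

# Trimmed version of the example response shown above.
payload = json.loads("""
{
  "rankings": [
    {"rank": 1, "model": {"id": "claude-opus-4-6", "provider": "Anthropic"},
     "scores": {"overall": 94.2}},
    {"rank": 2, "model": {"id": "gpt-4o", "provider": "OpenAI"},
     "scores": {"overall": 91.8}}
  ],
  "pagination": {"total": 42, "limit": 10, "offset": 0, "hasMore": true}
}
""")

# Model IDs in rank order; these are the values accepted by Chat Completions.
ids = [entry["model"]["id"] for entry in payload["rankings"]]

# Whether another page of results exists.
more = payload["pagination"]["hasMore"]
```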

Model Properties

| Field | Type | Description |
|---|---|---|
| id | string | Provider model ID (e.g., gpt-4o, claude-opus-4-6). Use this as the model parameter in Chat Completions. |
| name | string | Human-readable model name |
| provider | string | Model provider (e.g., OpenAI, Anthropic, Google) |
| category | string | Model category: GENERAL, CODING, MATH, REASONING, CREATIVE, or OPEN_SOURCE |
| contextWindow | integer | Maximum context window size in tokens |
| imageSupport | boolean | Whether the model supports image inputs |
| functionCalling | boolean | Whether the model supports function/tool calling |
| commercialUse | boolean | Whether the model is available for commercial use |
| license | string \| null | Model license type (e.g., "Commercial", "Apache 2.0", "MIT") |
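The boolean capability fields make client-side filtering straightforward. A sketch (the predicate name and threshold are mine) that keeps only tool-calling models with a sufficiently large context window:

```python
def supports_tool_use(model, min_context=128000):
    """True if the model can call tools and meets the context requirement."""
    return model["functionCalling"] and model["contextWindow"] >= min_context

# Shapes match the "model" object documented above.
models = [
    {"id": "claude-opus-4-6", "contextWindow": 200000, "functionCalling": True},
    {"id": "tiny-model", "contextWindow": 8192, "functionCalling": False},
]
capable = [m["id"] for m in models if supports_tool_use(m)]
```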

Scores & Benchmarks

Each ranked model includes composite scores and individual benchmark results. All scores range from 0 to 100 unless otherwise noted.

| Score | Range | Description |
|---|---|---|
| overall | 0–100 | Weighted composite score across all dimensions |
| quality | 0–100 | Output quality and accuracy score |
| speed | 0–100 | Inference speed and latency score |
| cost | 0–100 | Cost efficiency score (higher = more cost-effective) |
| elo | ~800–1400 | Elo rating from head-to-head model comparisons |
| benchmarks.mmlu | 0–100 | MMLU (Massive Multitask Language Understanding) score |
| benchmarks.hellaswag | 0–100 | HellaSwag commonsense reasoning score |
| benchmarks.humanEval | 0–100 | HumanEval code generation score |
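The exact weighting behind overall is not documented on this page. Purely to illustrate what a weighted composite of the 0–100 component scores looks like (the weights below are invented for the example, not Conduit.im's actual formula):

```python
# Hypothetical weights -- the real formula behind "overall" is not published here.
WEIGHTS = {"quality": 0.5, "speed": 0.25, "cost": 0.25}

def composite(scores):
    """Weighted average of the 0-100 component scores."""
    return sum(scores[k] * w for k, w in WEIGHTS.items())

# Component scores from the first example entry above.
example = composite({"quality": 96.1, "speed": 78.5, "cost": 62.0})
```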

Filtering & Searching

Combine query parameters to narrow results. Here are some common patterns:

Search by name or provider:

curl "https://conduit.im/api/rankings?search=claude"

Top 5 commercially usable reasoning models:

curl "https://conduit.im/api/rankings?category=REASONING&commercialOnly=true&limit=5"

Models sorted by score (highest first):

curl "https://conduit.im/api/rankings?sortBy=score&sortOrder=desc&limit=20"

Paginate through results:

# Page 1
curl "https://conduit.im/api/rankings?limit=10&offset=0"

# Page 2
curl "https://conduit.im/api/rankings?limit=10&offset=10"
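The offset pattern above generalizes to a loop that advances until hasMore is false. A sketch where fetch_page stands in for an actual HTTP call to /api/rankings (the stub data below is invented for the example):

```python
def iter_rankings(fetch_page, limit=10):
    """Yield every ranking entry, advancing offset until hasMore is false.

    fetch_page(limit, offset) stands in for an HTTP GET against /api/rankings.
    """
    offset = 0
    while True:
        page = fetch_page(limit=limit, offset=offset)
        yield from page["rankings"]
        if not page["pagination"]["hasMore"]:
            break
        offset += limit

# Stub fetcher simulating a 3-entry result set served 2 at a time.
DATA = [{"rank": i} for i in range(1, 4)]

def fake_fetch(limit, offset):
    chunk = DATA[offset:offset + limit]
    return {"rankings": chunk,
            "pagination": {"hasMore": offset + limit < len(DATA)}}

entries = list(iter_rankings(fake_fetch, limit=2))
```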

Using Models with Chat Completions

The model.id field in the rankings response (e.g., gpt-4o, claude-opus-4-6) is the value you pass as the model parameter to the Chat Completions API.

curl -X POST "https://api.conduit.im/v1/chat/completions" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-opus-4-6",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
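The same request can be constructed in Python with only the standard library. A sketch that builds (but does not send) the request, with the model ID taken from a rankings entry and the API key left as a placeholder for your own:

```python
import json
import urllib.request

def chat_request(model_id, user_message, api_key):
    """Build (but do not send) a Chat Completions request for a ranked model."""
    body = json.dumps({
        "model": model_id,  # e.g. a model.id from the rankings response
        "messages": [{"role": "user", "content": user_message}],
    }).encode()
    return urllib.request.Request(
        "https://api.conduit.im/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

req = chat_request("claude-opus-4-6", "Hello!", "YOUR_API_KEY")
# To send: urllib.request.urlopen(req) -- requires a valid API key.
```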

Next Steps