staticdata.dev

LLM Models

A current snapshot of production language model metadata across the major providers — Anthropic, OpenAI, Google, Mistral, DeepSeek, xAI, Meta, Alibaba, Cohere — with each model's context window, max output tokens, input and output prices in USD per million tokens, vision and tool-use capabilities, knowledge cutoff date, release date, and deprecation date if announced.

v1.0.0 MIT 22 records Updated 2026-04-25 stable Source: Anthropic model documentation, OpenAI models, Google Gemini models

Quick fetch

Stable, immutable URL. CORS-enabled. No auth.

bash
curl https://staticdata.dev/v1/llm-models.json

Or use as a typed import: import { llmModels } from "https://staticdata.dev/v1/llm-models.ts"

Formats

  • JSON 9.2 KB

    /v1/llm-models.json


    min: 7.4 KB · /v1/llm-models.min.json

  • CSV 2.9 KB

    /v1/llm-models.csv

  • TypeScript 9.3 KB

    /v1/llm-models.ts

    export const llmModels

    type LlmModel = (typeof llmModels)[number]


Schema

Each record in the dataset has the following shape.

| Field | Type | Description | Example |
| --- | --- | --- | --- |
| id | string | API identifier exactly as accepted by the provider | claude-opus-4-7 |
| provider | string | Provider name | Anthropic |
| family | string | Model family the version belongs to | Claude 4 |
| contextWindow | number | Total context window in tokens (input + output) | 1000000 |
| maxOutputTokens | number | Maximum output tokens per request | 64000 |
| inputPricePerMillionUsd | number | Input price per million tokens in USD | 15 |
| outputPricePerMillionUsd | number | Output price per million tokens in USD | 75 |
| supportsVision | boolean | Whether the model accepts image inputs | true |
| supportsTools | boolean | Whether the model supports tool / function calling | true |
| knowledgeCutoff | string | ISO YYYY-MM month of the model's training cutoff | 2026-01 |
| released | string | ISO YYYY-MM month the model was released | 2026-04 |
| deprecated? | string or null | ISO YYYY-MM month the model is announced for deprecation, or null | null |
| sourceUrl | string | URL to the provider's documentation for the entry | https://docs.anthropic.com/claude/docs/models-overview |
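For consumers who want type checking without the TypeScript module, the documented record shape can be mirrored in Python. This is a sketch, not an official client: the `LlmModel` TypedDict name is our own, and the sample record is built from the example column of the schema table.

```python
from typing import Optional, TypedDict

class LlmModel(TypedDict):
    # Mirrors the record shape documented in the schema table.
    id: str
    provider: str
    family: str
    contextWindow: int
    maxOutputTokens: int
    inputPricePerMillionUsd: float
    outputPricePerMillionUsd: float
    supportsVision: bool
    supportsTools: bool
    knowledgeCutoff: str        # ISO YYYY-MM
    released: str               # ISO YYYY-MM
    deprecated: Optional[str]   # ISO YYYY-MM, or None if not announced
    sourceUrl: str

# One record assembled from the schema table's example values.
record: LlmModel = {
    "id": "claude-opus-4-7",
    "provider": "Anthropic",
    "family": "Claude 4",
    "contextWindow": 1000000,
    "maxOutputTokens": 64000,
    "inputPricePerMillionUsd": 15,
    "outputPricePerMillionUsd": 75,
    "supportsVision": True,
    "supportsTools": True,
    "knowledgeCutoff": "2026-01",
    "released": "2026-04",
    "deprecated": None,
    "sourceUrl": "https://docs.anthropic.com/claude/docs/models-overview",
}
```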

Preview

First 10 records.

| id | provider | family | contextWindow | maxOutputTokens | inputPricePerMillionUsd |
| --- | --- | --- | --- | --- | --- |
| claude-opus-4-7 | Anthropic | Claude 4 | 1000000 | 64000 | 15 |
| claude-sonnet-4-6 | Anthropic | Claude 4 | 200000 | 64000 | 3 |
| claude-haiku-4-5 | Anthropic | Claude 4 | 200000 | 8192 | 1 |
| claude-opus-4-5 | Anthropic | Claude 4 | 200000 | 32000 | 15 |
| claude-3-5-sonnet-20241022 | Anthropic | Claude 3.5 | 200000 | 8192 | 3 |
| gpt-5 | OpenAI | GPT-5 | 400000 | 128000 | 10 |
| gpt-5-mini | OpenAI | GPT-5 | 400000 | 128000 | 1.5 |
| gpt-4o | OpenAI | GPT-4 | 128000 | 16384 | 2.5 |
| gpt-4o-mini | OpenAI | GPT-4 | 128000 | 16384 | 0.15 |
| o3 | OpenAI | o-series | 200000 | 100000 | 2 |

Showing 10 of 22. View full data: JSON · CSV
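A common use of these fields is capability-aware model selection, such as picking the cheapest model that accepts images. A minimal sketch; the records are abbreviated to the relevant fields and the values are illustrative, not authoritative entries from the dataset:

```python
# Abbreviated records; in practice load these from llm-models.json.
models = [
    {"id": "model-a", "inputPricePerMillionUsd": 10.0, "supportsVision": True},
    {"id": "model-b", "inputPricePerMillionUsd": 0.15, "supportsVision": True},
    {"id": "model-c", "inputPricePerMillionUsd": 2.0, "supportsVision": False},
]

# Cheapest model (by input price) that accepts image inputs.
cheapest_vision = min(
    (m for m in models if m["supportsVision"]),
    key=lambda m: m["inputPricePerMillionUsd"],
)
print(cheapest_vision["id"])  # model-b
```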

Fetch examples

Drop-in snippets in five languages.

curl
curl -sSL https://staticdata.dev/v1/llm-models.json | jq '.[0]'
JavaScript
import type { LlmModel } from "https://staticdata.dev/v1/llm-models.ts";

const res = await fetch("https://staticdata.dev/v1/llm-models.min.json");
if (!res.ok) throw new Error(`Fetch failed: ${res.status}`);
const llmModels: LlmModel[] = await res.json();
console.log(llmModels[0]);
Python
import json
import urllib.request

with urllib.request.urlopen("https://staticdata.dev/v1/llm-models.min.json") as r:
    llm_models = json.load(r)

print(llm_models[0])
Go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("https://staticdata.dev/v1/llm-models.min.json")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		panic("fetch failed: " + resp.Status)
	}

	var data []map[string]any
	if err := json.NewDecoder(resp.Body).Decode(&data); err != nil {
		panic(err)
	}
	fmt.Println(data[0])
}
Rust
use serde_json::Value;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let body = ureq::get("https://staticdata.dev/v1/llm-models.min.json")
        .call()?
        .into_string()?;
    let data: Vec<Value> = serde_json::from_str(&body)?;
    println!("{:?}", data.first());
    Ok(())
}

Sources and methodology

This dataset moves faster than any other on the site. Providers ship new models, change prices, deprecate old endpoints, and revise context-window limits on a near-monthly cadence. Treat this snapshot as authoritative only as of the Updated date shown above, and verify against the provider's docs before making pricing or capability decisions in production.

Each entry’s sourceUrl points to the provider’s official documentation page. If a price or limit looks wrong, click through and check; we update from those pages.

Pricing is for standard, on-demand API access. Provisioned throughput, batch discounts, fine-tuned model surcharges, and prompt-caching discounts are not represented.
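Under those caveats, per-request cost can still be estimated directly from the two per-million prices. A sketch; the field names match the schema, and the token counts in the example are made up:

```python
def request_cost_usd(model: dict, input_tokens: int, output_tokens: int) -> float:
    # Standard on-demand pricing only: batch, caching, and
    # provisioned-throughput rates are not modeled.
    return (
        model["inputPricePerMillionUsd"] * input_tokens / 1_000_000
        + model["outputPricePerMillionUsd"] * output_tokens / 1_000_000
    )

opus = {"inputPricePerMillionUsd": 15, "outputPricePerMillionUsd": 75}
print(request_cost_usd(opus, 10_000, 2_000))  # 0.3
```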

Context windows are total input+output token capacity. maxOutputTokens is the per-request output cap, which is often much smaller than the full context window.
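Because input and output share one window, the usable output budget for a given prompt is the smaller of the per-request cap and the remaining room in the window. A sketch using the two schema fields:

```python
def output_budget(model: dict, prompt_tokens: int) -> int:
    # Output is capped by both maxOutputTokens and the room left
    # in the shared input+output context window.
    remaining = model["contextWindow"] - prompt_tokens
    return max(0, min(model["maxOutputTokens"], remaining))

m = {"contextWindow": 200000, "maxOutputTokens": 64000}
print(output_budget(m, 10000))   # 64000 (per-request cap binds)
print(output_budget(m, 180000))  # 20000 (window remainder binds)
```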

Prices for models from open-weight families (Llama, Qwen, DeepSeek, Mistral) reflect hosted-API pricing from the model's primary provider; running these models on third-party infrastructure or self-hosting will have different costs.

The deprecated field is set when the provider has announced an end-of-life date for the model. Models that are simply outdated but still served (gpt-4o, claude-3-5-sonnet-20241022) remain null until a deprecation date is published.
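To filter out models whose published end-of-life has already passed, compare the deprecated month against a reference date. A sketch; it assumes the ISO YYYY-MM format documented in the schema, and the reference date is arbitrary:

```python
from datetime import date

def is_sunset(model: dict, today: date) -> bool:
    # A null `deprecated` means no end-of-life has been announced.
    if model["deprecated"] is None:
        return False
    year, month = map(int, model["deprecated"].split("-"))
    return date(year, month, 1) <= today

ref = date(2026, 4, 25)
print(is_sunset({"deprecated": None}, ref))       # False
print(is_sunset({"deprecated": "2025-10"}, ref))  # True
```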

Versioning

URLs under /v1/ are immutable. The data they return will not change in a way that breaks consumers. Schema-incompatible updates ship under a new version path. See the llm-models changelog for this dataset's history.