
Kilo Gateway: Sync model list with upstream (2026-04-11)#1418

Open
Ardakilic wants to merge 1 commit into anomalyco:dev from Ardakilic:dev

Conversation

@Ardakilic
Contributor

Kilo Gateway: Sync model list with upstream (2026-04-11)

Summary

Reconciles providers/kilo/models/ against the live Kilo gateway model list at https://api.kilo.ai/api/gateway/models. The previous sync left 25 stale TOML files (models removed from the gateway) and 23 missing TOML files (models added to the gateway). This PR brings the local tree into alignment with the upstream JSON.

The trigger was the MiniMax M2.7 addition in commit 8d8521e0, which prompted a full audit of the directory. All new TOML values were verified directly against the live API response from https://api.kilo.ai/api/gateway/models.


Files removed (25)

Models that no longer appear in the Kilo gateway response:

| File | Notes |
| --- | --- |
| allenai/molmo-2-8b | Removed from gateway |
| allenai/olmo-3-7b-instruct | Removed from gateway |
| allenai/olmo-3-7b-think | Removed from gateway |
| allenai/olmo-3.1-32b-think | Removed from gateway |
| anthropic/claude-3.5-sonnet | Removed from gateway |
| arcee-ai/trinity-large-preview:free | Removed from gateway |
| corethink:free | Removed from gateway |
| giga-potato | Removed from gateway |
| giga-potato-thinking | Removed from gateway |
| google/gemini-3-pro-preview | Removed from gateway |
| kilo/auto | Superseded by kilo-auto/ subfolder |
| kilo/auto-free | Superseded by kilo-auto/ subfolder |
| kilo/auto-small | Superseded by kilo-auto/ subfolder |
| kwaipilot/kat-coder-pro | Removed from gateway |
| liquid/lfm-2.2-6b | Removed from gateway |
| liquid/lfm2-8b-a1b | Removed from gateway |
| meta-llama/llama-3.1-405b | Removed from gateway |
| meta-llama/llama-3.1-405b-instruct | Removed from gateway |
| morph-warp-grep-v2 | Removed from gateway |
| openrouter/healer-alpha | Removed from gateway |
| openrouter/hunter-alpha | Removed from gateway |
| qwen/qwen-2.5-vl-7b-instruct | Removed from gateway |
| stepfun/step-3.5-flash:free | Removed from gateway |
| x-ai/grok-4.20-beta | Superseded by x-ai/grok-4.20 |
| x-ai/grok-4.20-multi-agent-beta | Superseded by x-ai/grok-4.20-multi-agent |

The kilo/ subdirectory became empty after removing the three kilo/auto*.toml files and was deleted. The canonical kilo-auto/ subfolder (frontier, balanced, free) was already present and correct.
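Pruning subdirectories left empty by a sync can be automated; a minimal sketch using a throwaway demo tree (the paths below mirror the repo layout described in this PR, but the demo directory itself is invented for illustration):

```shell
# Build a scratch copy of the layout: kilo/ is empty after the sync,
# kilo-auto/ still holds a TOML file and must survive.
demo=$(mktemp -d)
mkdir -p "$demo/providers/kilo/models/kilo"
mkdir -p "$demo/providers/kilo/models/kilo-auto"
touch "$demo/providers/kilo/models/kilo-auto/frontier.toml"

# -type d -empty matches only directories with no entries, so populated
# subfolders are untouched; -delete removes the matches.
find "$demo/providers/kilo/models" -type d -empty -delete

ls "$demo/providers/kilo/models"   # prints: kilo-auto
```

Run against the real tree, the same `find` invocation would have removed the emptied kilo/ directory automatically.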


Files added (23)

A new rekaai/ provider subdirectory was created for the two Reka AI models.

For each added model, the table below shows the raw upstream JSON field value alongside the mapped TOML field value. Pricing JSON values are in USD per token; TOML values are USD per 1 million tokens. The release_date is derived from the upstream created Unix timestamp.
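Both derivations are mechanical. As a minimal sketch (the helper names here are invented for illustration, not part of the actual sync tooling):

```python
from datetime import datetime, timezone

def usd_per_million(usd_per_token: float) -> float:
    # Upstream pricing is USD per token; the TOML stores USD per 1M tokens.
    return usd_per_token * 1_000_000

def release_date(created: int) -> str:
    # Upstream `created` is a Unix timestamp; take the UTC calendar date.
    return datetime.fromtimestamp(created, tz=timezone.utc).strftime("%Y-%m-%d")

# Values from the anthropic/claude-opus-4.6-fast entry below:
print(round(usd_per_million(0.00003), 6))   # pricing.prompt -> input = 30.0
print(release_date(1775592472))             # created -> "2026-04-07"
```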


anthropic/claude-opus-4.6-fast

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Anthropic: Claude Opus 4.6 (Fast) | "Anthropic: Claude Opus 4.6 (Fast)" |
| created | 1775592472 | release_date = "2026-04-07" |
| last_updated | | "2026-04-11" |
| isFree | false | |
| pricing.prompt | 0.00003 | input = 30 |
| pricing.completion | 0.00015 | output = 150 |
| pricing.input_cache_read | 0.000003 | cache_read = 3 |
| pricing.input_cache_write | 0.0000375 | cache_write = 37.5 |
| top_provider.context_length | 1000000 | context = 1000000 |
| top_provider.max_completion_tokens | 128000 | output = 128000 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | closed (anthropic) | open_weights = false |
| attachment | image in input_modalities | attachment = true |

arcee-ai/trinity-large-thinking

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Arcee AI: Trinity Large Thinking | "Arcee AI: Trinity Large Thinking" |
| created | 1775058318 | release_date = "2026-04-01" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.00000022 | input = 0.22 |
| pricing.completion | 0.00000085 | output = 0.85 |
| pricing.input_cache_read | 0 | omitted (zero, no caching) |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | 262144 | output = 262144 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (arcee-ai) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

arcee-ai/trinity-large-thinking:free

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Arcee AI: Trinity Large Thinking (free) | "Arcee AI: Trinity Large Thinking (free)" |
| created | 1756238927 | release_date = "2025-08-26" |
| last_updated | | "2026-04-11" |
| isFree | true | |
| pricing.prompt | 0 | input = 0 |
| pricing.completion | 0 | output = 0 |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | 262144 | output = 262144 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (arcee-ai) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

bytedance-seed/dola-seed-2.0-pro:free

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | ByteDance Seed: Dola Seed 2.0 Pro (free) | "ByteDance Seed: Dola Seed 2.0 Pro (free)" |
| created | 1756238927 | release_date = "2025-08-26" |
| last_updated | | "2026-04-11" |
| isFree | true | |
| pricing.prompt | 0 | input = 0 |
| pricing.completion | 0 | output = 0 |
| top_provider.context_length | 256000 | context = 256000 |
| top_provider.max_completion_tokens | 128000 | output = 128000 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (bytedance-seed) | open_weights = true |
| attachment | image in input_modalities | attachment = true |

google/gemma-4-26b-a4b-it

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Google: Gemma 4 26B A4B | "Google: Gemma 4 26B A4B" |
| created | 1775227989 | release_date = "2026-04-03" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.00000012 | input = 0.12 |
| pricing.completion | 0.0000004 | output = 0.4 |
| pricing.input_cache_read | null | omitted |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | 262144 | output = 262144 |
| input_modalities | ["image", "text", "video"] | ["image", "text", "video"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (gemma) | open_weights = true |
| attachment | image+video in input_modalities | attachment = true |

google/gemma-4-31b-it

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Google: Gemma 4 31B | "Google: Gemma 4 31B" |
| created | 1775148486 | release_date = "2026-04-02" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.00000014 | input = 0.14 |
| pricing.completion | 0.0000004 | output = 0.4 |
| pricing.input_cache_read | null | omitted |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | 131072 | output = 131072 |
| input_modalities | ["image", "text", "video"] | ["image", "text", "video"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (gemma) | open_weights = true |
| attachment | image+video in input_modalities | attachment = true |

google/lyria-3-clip-preview

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Google: Lyria 3 Clip Preview | "Google: Lyria 3 Clip Preview" |
| created | 1774907255 | release_date = "2026-03-30" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0 | input = 0 |
| pricing.completion | 0 | output = 0 |
| top_provider.context_length | 1048576 | context = 1048576 |
| top_provider.max_completion_tokens | 65536 | output = 65536 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text", "audio"] | ["audio", "text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | false | reasoning = false |
| "tools" in supported_parameters | false | tool_call = false |
| open_weights | closed (lyria) | open_weights = false |
| attachment | image in input_modalities | attachment = true |

google/lyria-3-pro-preview

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Google: Lyria 3 Pro Preview | "Google: Lyria 3 Pro Preview" |
| created | 1774907286 | release_date = "2026-03-30" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0 | input = 0 |
| pricing.completion | 0 | output = 0 |
| top_provider.context_length | 1048576 | context = 1048576 |
| top_provider.max_completion_tokens | 65536 | output = 65536 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text", "audio"] | ["audio", "text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | false | reasoning = false |
| "tools" in supported_parameters | false | tool_call = false |
| open_weights | closed (lyria) | open_weights = false |
| attachment | image in input_modalities | attachment = true |

kwaipilot/kat-coder-pro-v2

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Kwaipilot: KAT-Coder-Pro V2 | "Kwaipilot: KAT-Coder-Pro V2" |
| created | 1774649310 | release_date = "2026-03-27" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000003 | input = 0.3 |
| pricing.completion | 0.0000012 | output = 1.2 |
| pricing.input_cache_read | 0.00000006 | cache_read = 0.06 |
| top_provider.context_length | 256000 | context = 256000 |
| top_provider.max_completion_tokens | 80000 | output = 80000 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | false | reasoning = false |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (kwaipilot) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

mistralai/mistral-small-2603

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Mistral: Mistral Small 4 | "Mistral: Mistral Small 4" |
| created | 1773695685 | release_date = "2026-03-16" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.00000015 | input = 0.15 |
| pricing.completion | 0.0000006 | output = 0.6 |
| pricing.input_cache_read | 0.000000015 | cache_read = 0.015 |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | null → fallback to ctx | output = 262144 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (mistralai) | open_weights = true |
| attachment | image in input_modalities | attachment = true |

nvidia/llama-3.1-nemotron-ultra-253b-v1

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | NVIDIA: Llama 3.1 Nemotron Ultra 253B v1 | "NVIDIA: Llama 3.1 Nemotron Ultra 253B v1" |
| created | 1744115059 | release_date = "2025-04-08" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000006 | input = 0.6 |
| pricing.completion | 0.0000018 | output = 1.8 |
| pricing.input_cache_read | null | omitted |
| top_provider.context_length | 131072 | context = 131072 |
| top_provider.max_completion_tokens | null → fallback to ctx | output = 131072 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | false | tool_call = false |
| open_weights | open (nvidia/llama) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

nvidia/nemotron-3-super-120b-a12b

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | NVIDIA: Nemotron 3 Super | "NVIDIA: Nemotron 3 Super" |
| created | 1773245239 | release_date = "2026-03-11" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000001 | input = 0.1 |
| pricing.completion | 0.0000005 | output = 0.5 |
| pricing.input_cache_read | 0.0000001 | cache_read = 0.1 |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | null → fallback to ctx | output = 262144 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (nvidia/nemotron) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

openai/gpt-5.4-mini

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | OpenAI: GPT-5.4 Mini | "OpenAI: GPT-5.4 Mini" |
| created | 1773748178 | release_date = "2026-03-17" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.00000075 | input = 0.75 |
| pricing.completion | 0.0000045 | output = 4.5 |
| pricing.input_cache_read | 0.000000075 | cache_read = 0.075 |
| top_provider.context_length | 400000 | context = 400000 |
| top_provider.max_completion_tokens | 128000 | output = 128000 |
| input_modalities | ["file", "image", "text"] | ["image", "pdf", "text"] (file → pdf) |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | false | temperature = false |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | closed (openai) | open_weights = false |
| attachment | image+file in input_modalities | attachment = true |

openai/gpt-5.4-nano

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | OpenAI: GPT-5.4 Nano | "OpenAI: GPT-5.4 Nano" |
| created | 1773748187 | release_date = "2026-03-17" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000002 | input = 0.2 |
| pricing.completion | 0.00000125 | output = 1.25 |
| pricing.input_cache_read | 0.00000002 | cache_read = 0.02 |
| top_provider.context_length | 400000 | context = 400000 |
| top_provider.max_completion_tokens | 128000 | output = 128000 |
| input_modalities | ["file", "image", "text"] | ["image", "pdf", "text"] (file → pdf) |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | false | temperature = false |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | closed (openai) | open_weights = false |
| attachment | image+file in input_modalities | attachment = true |

qwen/qwen3.6-plus

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Qwen: Qwen3.6 Plus | "Qwen: Qwen3.6 Plus" |
| created | 1756238927 | release_date = "2025-08-26" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.000000325 | input = 0.325 |
| pricing.completion | 0.00000195 | output = 1.95 |
| pricing.input_cache_read | 0.0000000325 | cache_read = 0.0325 |
| pricing.input_cache_write | 0.00000040625 | cache_write = 0.40625 |
| top_provider.context_length | 1000000 | context = 1000000 |
| top_provider.max_completion_tokens | 65536 | output = 65536 |
| input_modalities | ["text", "image"] | ["image", "text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | closed (qwen-plus) | open_weights = false |
| attachment | image in input_modalities | attachment = true |

rekaai/reka-edge

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Reka Edge | "Reka Edge" |
| created | 1774026965 | release_date = "2026-03-20" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000001 | input = 0.1 |
| pricing.completion | 0.0000001 | output = 0.1 |
| pricing.input_cache_read | null | omitted |
| top_provider.context_length | 16384 | context = 16384 |
| top_provider.max_completion_tokens | 16384 | output = 16384 |
| input_modalities | ["image", "text", "video"] | ["image", "text", "video"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | false | reasoning = false |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (rekaai) | open_weights = true |
| attachment | image+video in input_modalities | attachment = true |

rekaai/reka-flash-3

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Reka Flash 3 | "Reka Flash 3" |
| created | 1741812813 | release_date = "2025-03-12" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000001 | input = 0.1 |
| pricing.completion | 0.0000002 | output = 0.2 |
| pricing.input_cache_read | null | omitted |
| top_provider.context_length | 65536 | context = 65536 |
| top_provider.max_completion_tokens | 65536 | output = 65536 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | false | tool_call = false |
| open_weights | open (rekaai) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

x-ai/grok-4.20

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | xAI: Grok 4.20 | "xAI: Grok 4.20" |
| created | 1774979019 | release_date = "2026-03-31" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.000002 | input = 2 |
| pricing.completion | 0.000006 | output = 6 |
| pricing.input_cache_read | 0.0000002 | cache_read = 0.2 |
| top_provider.context_length | 2000000 | context = 2000000 |
| top_provider.max_completion_tokens | null → fallback to ctx | output = 2000000 |
| input_modalities | ["text", "image", "file"] | ["image", "pdf", "text"] (file → pdf) |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | closed (x-ai/grok) | open_weights = false |
| attachment | image+file in input_modalities | attachment = true |

x-ai/grok-4.20-multi-agent

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | xAI: Grok 4.20 Multi-Agent | "xAI: Grok 4.20 Multi-Agent" |
| created | 1774979158 | release_date = "2026-03-31" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.000002 | input = 2 |
| pricing.completion | 0.000006 | output = 6 |
| pricing.input_cache_read | 0.0000002 | cache_read = 0.2 |
| top_provider.context_length | 2000000 | context = 2000000 |
| top_provider.max_completion_tokens | null → fallback to ctx | output = 2000000 |
| input_modalities | ["text", "image", "file"] | ["image", "pdf", "text"] (file → pdf) |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | false | tool_call = false |
| open_weights | closed (x-ai/grok) | open_weights = false |
| attachment | image+file in input_modalities | attachment = true |

xiaomi/mimo-v2-omni

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Xiaomi: MiMo-V2-Omni | "Xiaomi: MiMo-V2-Omni" |
| created | 1773863703 | release_date = "2026-03-18" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000004 | input = 0.4 |
| pricing.completion | 0.000002 | output = 2 |
| pricing.input_cache_read | 0.00000008 | cache_read = 0.08 |
| top_provider.context_length | 262144 | context = 262144 |
| top_provider.max_completion_tokens | 65536 | output = 65536 |
| input_modalities | ["text", "audio", "image", "video"] | ["audio", "image", "text", "video"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (xiaomi) | open_weights = true |
| attachment | audio+image+video in input_modalities | attachment = true |

xiaomi/mimo-v2-pro

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Xiaomi: MiMo-V2-Pro | "Xiaomi: MiMo-V2-Pro" |
| created | 1773863643 | release_date = "2026-03-18" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.000001 | input = 1 |
| pricing.completion | 0.000003 | output = 3 |
| pricing.input_cache_read | 0.0000002 | cache_read = 0.2 |
| top_provider.context_length | 1048576 | context = 1048576 |
| top_provider.max_completion_tokens | 131072 | output = 131072 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (xiaomi) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

z-ai/glm-5-turbo

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Z.ai: GLM 5 Turbo | "Z.ai: GLM 5 Turbo" |
| created | 1773583573 | release_date = "2026-03-15" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000012 | input = 1.2 |
| pricing.completion | 0.000004 | output = 4 |
| pricing.input_cache_read | 0.00000024 | cache_read = 0.24 |
| top_provider.context_length | 202752 | context = 202752 |
| top_provider.max_completion_tokens | 131072 | output = 131072 |
| input_modalities | ["text"] | ["text"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (z-ai/glm) | open_weights = true |
| attachment | no image/audio/video | attachment = false |

z-ai/glm-5v-turbo

| Property | JSON value | TOML value |
| --- | --- | --- |
| name | Z.ai: GLM 5V Turbo | "Z.ai: GLM 5V Turbo" |
| created | 1775061458 | release_date = "2026-04-01" |
| last_updated | | "2026-04-11" |
| pricing.prompt | 0.0000012 | input = 1.2 |
| pricing.completion | 0.000004 | output = 4 |
| pricing.input_cache_read | 0.00000024 | cache_read = 0.24 |
| top_provider.context_length | 202752 | context = 202752 |
| top_provider.max_completion_tokens | 131072 | output = 131072 |
| input_modalities | ["image", "text", "video"] | ["image", "text", "video"] |
| output_modalities | ["text"] | ["text"] |
| "temperature" in supported_parameters | true | temperature = true |
| "reasoning" in supported_parameters | true | reasoning = true |
| "tools" in supported_parameters | true | tool_call = true |
| open_weights | open (z-ai/glm) | open_weights = true |
| attachment | image+video in input_modalities | attachment = true |

Field-mapping conventions

| Rule | Detail |
| --- | --- |
| pricing × 1,000,000 | All cost fields converted from USD/token to USD/1M tokens |
| file → pdf | architecture.input_modalities uses file; the schema uses pdf |
| max_completion_tokens = null | Falls back to top_provider.context_length for [limit].output |
| cache_read = 0 or null | Omitted from TOML (no caching advertised) |
| cache_write = null | Omitted from TOML |
| isFree = true | Cost fields set to 0, cache fields omitted |
| temperature | Emitted as false when absent from supported_parameters (mirrors OpenAI o-series convention) |
| structured_output | Never emitted; Kilo TOML convention does not use this field |
| knowledge | Never emitted; not present in upstream |
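Taken together, the null/zero handling and the modality rules can be sketched as a small mapping function. This is an illustration of the conventions above, not the actual sync tooling; the function name, dict layout, and sample record are invented (the sample uses the x-ai/grok-4.20 values):

```python
def map_model(model: dict) -> dict:
    """Apply the field-mapping conventions to one upstream gateway record."""
    top = model["top_provider"]
    pricing = model["pricing"]

    toml = {
        "limit": {
            "context": top["context_length"],
            # max_completion_tokens = null falls back to the context length
            "output": top.get("max_completion_tokens") or top["context_length"],
        },
        "cost": {
            # USD/token -> USD/1M tokens
            "input": float(pricing["prompt"]) * 1_000_000,
            "output": float(pricing["completion"]) * 1_000_000,
        },
    }

    # Cache fields: zero or null means no caching advertised, so omit them.
    for json_key, toml_key in (("input_cache_read", "cache_read"),
                               ("input_cache_write", "cache_write")):
        if pricing.get(json_key):
            toml["cost"][toml_key] = float(pricing[json_key]) * 1_000_000

    # Modalities: sort, map upstream "file" to the schema's "pdf", and set
    # attachment when any non-text modality is present.
    mods = sorted("pdf" if m == "file" else m for m in model["input_modalities"])
    toml["modalities"] = mods
    toml["attachment"] = any(m != "text" for m in mods)
    return toml

sample = {  # x-ai/grok-4.20, per the table above
    "top_provider": {"context_length": 2000000, "max_completion_tokens": None},
    "pricing": {"prompt": 0.000002, "completion": 0.000006,
                "input_cache_read": 0.0000002, "input_cache_write": None},
    "input_modalities": ["text", "image", "file"],
}
result = map_model(sample)
print(result["limit"])       # output falls back to 2000000
print(result["modalities"])  # ["image", "pdf", "text"]
```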

How to sync in the future

Run the script below any time you want to check whether the local TOML tree has drifted from the live Kilo gateway. It prints three sections: models to add (upstream only), models to remove (local only), and a count summary.

#!/usr/bin/env python3
"""
Usage:
    python3 sync_check.py

Compares providers/kilo/models/**/*.toml against the live Kilo gateway API
and prints which models need to be added or removed.

Run from the repo root.
"""

import json
import subprocess
from pathlib import Path

GATEWAY_URL = "https://api.kilo.ai/api/gateway/models"
MODELS_DIR  = Path("providers/kilo/models")


def fetch_upstream() -> set[str]:
    result = subprocess.run(
        ["curl", "-sS", GATEWAY_URL],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)
    return {m["id"] for m in data["data"]}


def fetch_local() -> set[str]:
    return {
        str(p.relative_to(MODELS_DIR)).removesuffix(".toml")
        for p in MODELS_DIR.rglob("*.toml")
    }


def main() -> None:
    print("Fetching upstream model list …")
    upstream = fetch_upstream()
    local    = fetch_local()

    to_add    = sorted(upstream - local)
    to_remove = sorted(local - upstream)

    if to_add:
        print(f"\n### Models to ADD ({len(to_add)}) — present upstream, missing locally ###")
        for mid in to_add:
            print(f"  + {mid}")
    else:
        print("\nNo models to add.")

    if to_remove:
        print(f"\n### Models to REMOVE ({len(to_remove)}) — local only, gone from gateway ###")
        for mid in to_remove:
            print(f"  - {mid}")
    else:
        print("No models to remove.")

    print(f"\nSummary: upstream={len(upstream)}  local={len(local)}  "
          f"to_add={len(to_add)}  to_remove={len(to_remove)}")

    if not to_add and not to_remove:
        print("✓ Local tree is in sync with the upstream gateway.")


if __name__ == "__main__":
    main()

To fetch full details (pricing, modalities, limits) for a specific model ID so you can write the TOML:

curl -sS https://api.kilo.ai/api/gateway/models | python3 -c "
import json, sys
data = json.load(sys.stdin)
model_id = 'REPLACE_WITH_MODEL_ID'
match = next((m for m in data['data'] if m['id'] == model_id), None)
if match:
    print(json.dumps(match, indent=2))
else:
    print(f'Model {model_id!r} not found in gateway response.')
"

Verification

Post-sync file count:

find providers/kilo/models -name '*.toml' | wc -l
# → 334  (matches upstream gateway count)

To confirm the diff is clean against live upstream:

curl -sS https://api.kilo.ai/api/gateway/models \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); print("\n".join(sorted(m["id"] for m in d["data"])))' \
  > /tmp/upstream.txt
(cd providers/kilo/models && find . -name '*.toml' | sed 's|^\./||; s|\.toml$||' | sort) > /tmp/local.txt
diff /tmp/local.txt /tmp/upstream.txt
# Expected: empty output
