feat(providers): add Novita AI as OpenAI-compatible provider #451
Alex-wuhu wants to merge 1 commit into moltis-org:main
Conversation
Add Novita AI (https://api.novita.ai/openai) as an OpenAI-compatible LLM provider with three models: moonshotai/kimi-k2.5, deepseek/deepseek-v3.2, and zai-org/glm-5. Configured via NOVITA_API_KEY environment variable.
Greptile Summary

This PR integrates Novita AI as an OpenAI-compatible provider following the existing `OpenAiCompatDef` pattern. Key observations:
Confidence Score: 4/5
Important Files Changed
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant Moltis
    participant NovitaAPI as Novita AI (api.novita.ai/openai)
    User->>Moltis: Set NOVITA_API_KEY / [providers.novita]
    Moltis->>NovitaAPI: GET /models (model discovery)
    NovitaAPI-->>Moltis: model list (merged with NOVITA_MODELS static catalog)
    Moltis-->>User: novita::moonshotai/kimi-k2.5, novita::deepseek/deepseek-v3.2, novita::zai-org/glm-5 appear in picker
    User->>Moltis: Send message with novita::moonshotai/kimi-k2.5
    Moltis->>NovitaAPI: POST /chat/completions (model: moonshotai/kimi-k2.5)
    NovitaAPI-->>Moltis: stream SSE chunks
    Moltis-->>User: streamed response
```
Last reviewed commit: "feat(providers): add..."
```rust
#[test]
fn novita_context_windows() {
    // moonshotai/kimi-k2.5 — capability ID is "kimi-k2.5" → 128k
    assert_eq!(
        context_window_for_model("moonshotai/kimi-k2.5"),
        128_000
    );
    // zai-org/glm-5 — capability ID is "glm-5" → 128k
    assert_eq!(context_window_for_model("zai-org/glm-5"), 128_000);
}
```
Missing context-window assertion for `deepseek/deepseek-v3.2`
The novita_context_windows test covers two of the three Novita models (with explanatory comments about their capability IDs) but silently omits deepseek/deepseek-v3.2.
capability_model_id("deepseek/deepseek-v3.2") strips the org prefix to "deepseek-v3.2", which matches none of the rules in context_window_for_model, so the function falls through to the 200k default. DeepSeek V3 is typically deployed with a 64k context window on most platforms; reporting 200k to the UI means users can construct prompts that the actual API will reject.
Either add a deepseek- prefix rule to context_window_for_model (consistent with how kimi-, glm-, etc. are handled) or add a test assertion that explicitly documents the intended fallback value, so the behaviour is visible and deliberate:
Suggested change:

```rust
#[test]
fn novita_context_windows() {
    // moonshotai/kimi-k2.5 — capability ID is "kimi-k2.5" → 128k
    assert_eq!(
        context_window_for_model("moonshotai/kimi-k2.5"),
        128_000
    );
    // zai-org/glm-5 — capability ID is "glm-5" → 128k
    assert_eq!(context_window_for_model("zai-org/glm-5"), 128_000);
    // deepseek/deepseek-v3.2 — capability ID is "deepseek-v3.2" → falls
    // back to 200k default; update if a tighter window is confirmed.
    assert_eq!(context_window_for_model("deepseek/deepseek-v3.2"), 200_000);
}
```
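The first option (a `deepseek-` prefix rule) can be sketched as below. This is an illustrative standalone sketch, not the crate's code: the bodies of `capability_model_id` and `context_window_for_model`, the exact prefix rules, and the 64k figure are all assumptions.

```rust
// Hypothetical sketch of prefix-based context-window lookup with a
// `deepseek-` rule added, mirroring how `kimi-` and `glm-` are matched.

/// Strip an org prefix like "deepseek/" and keep the model segment.
fn capability_model_id(model: &str) -> &str {
    model.rsplit('/').next().unwrap_or(model)
}

fn context_window_for_model(model: &str) -> u32 {
    let id = capability_model_id(model);
    if id.starts_with("kimi-") || id.starts_with("glm-") {
        128_000
    } else if id.starts_with("deepseek-") {
        64_000 // assumed: DeepSeek V3-class deployments commonly expose 64k
    } else {
        200_000 // existing fallback
    }
}

fn main() {
    assert_eq!(context_window_for_model("moonshotai/kimi-k2.5"), 128_000);
    assert_eq!(context_window_for_model("zai-org/glm-5"), 128_000);
    assert_eq!(context_window_for_model("deepseek/deepseek-v3.2"), 64_000);
    println!("ok");
}
```

With this shape, adding a model only requires a new prefix arm, and the fallback stays visible at the end of the chain.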
```rust
OpenAiCompatDef {
    config_name: "novita",
    env_key: "NOVITA_API_KEY",
    env_base_url_key: "NOVITA_BASE_URL",
    default_base_url: "https://api.novita.ai/openai",
    models: NOVITA_MODELS,
    supports_model_discovery: true,
    requires_api_key: true,
    local_only: false,
},
```
Provider ordering inconsistent with `known_providers()`
In provider-setup/src/lib.rs, the new KnownProvider entry is inserted between kimi-code and venice (matching the alphabetical-ish ordering of the surrounding list). Here in OPENAI_COMPAT_PROVIDERS the same provider is appended at the very end, after gemini.
Keeping the two lists in the same relative order makes it easier to cross-reference them and reduces the risk of accidentally missing a provider when adding the next one. Consider moving this entry to follow zai / moonshot, where the other same-region providers live:
```rust
// after the existing `zai` entry and before `venice`
OpenAiCompatDef {
    config_name: "novita",
    ...
},
```
Summary
- Adds Novita AI (`https://api.novita.ai/openai`) as an OpenAI-compatible provider with three models: `moonshotai/kimi-k2.5`, `deepseek/deepseek-v3.2`, `zai-org/glm-5`
- Configured via the `NOVITA_API_KEY` environment variable (or `[providers.novita]` in config)
- Follows the existing `OpenAiCompatDef` pattern; no changes to existing providers

Validation
Completed
- `cargo check -p moltis-providers -p moltis-provider-setup -p moltis-config` passes
- `cargo test -p moltis-providers -p moltis-provider-setup`: 278 tests pass
- New tests: `novita_provider_is_registered`, `novita_model_ids_are_chat_capable`, `novita_context_windows`
- `known_providers()` and all existing provider test lists updated

Manual QA
- Set `NOVITA_API_KEY=<key>` and start moltis; Novita models should appear in the model picker
- Alternatively, add `[providers.novita]` with `api_key = "..."` to `moltis.toml`
- Select `novita::moonshotai/kimi-k2.5` and send a message; the response should stream correctly
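The config-file alternative from the QA steps might look like the fragment below. Only `[providers.novita]` and `api_key` are named in this PR; the `base_url` key is an assumption inferred from `NOVITA_BASE_URL` / `env_base_url_key`, so check the actual config schema.

```toml
# moltis.toml: configure Novita without the NOVITA_API_KEY env var
[providers.novita]
api_key = "sk-..."
# Optional override; defaults to the provider's default_base_url.
# base_url = "https://api.novita.ai/openai"
```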