Add MiniMax as a first-class LLM provider #1738
octo-patch wants to merge 1 commit into NVIDIA-NeMo:develop from
Conversation
Add MiniMax model support via their OpenAI-compatible API endpoint. MiniMax (https://www.minimax.io/) offers models such as MiniMax-M2.7, MiniMax-M2.5, and MiniMax-M2.5-highspeed with up to 1M tokens of context.

Changes:
- Add `_init_minimax_model()` provider initializer using `ChatOpenAI` with MiniMax's `base_url`, temperature clamping to [0, 1], and `MINIMAX_API_KEY` env var support
- Register `"minimax"` in `_PROVIDER_INITIALIZERS` for automatic dispatch
- Add configuration example at `examples/configs/llm/minimax/`
- Add 16 unit + integration tests covering registry, initialization, temperature clamping, API key handling, and multiple model variants
Pouyanpi
left a comment
Thank you @octo-patch for the PR.
Is there any reason you are avoiding the following config?
```yaml
models:
  - type: main
    engine: openai
    model: MiniMax-M2.7
    parameters:
      api_key: ${MINIMAX_API_KEY}
      base_url: https://api.minimax.io/v1
      temperature: 0.5
```
Great point @Pouyanpi! You're right: since MiniMax's API is OpenAI-compatible, the `openai` engine with a custom `base_url` would work. The main reason for the dedicated engine was to handle MiniMax-specific behavior like temperature clamping (MiniMax requires temperature > 0) and think-tag stripping from reasoning model responses. But these could also be handled as pre/post-processing within the `openai` engine config. If you'd prefer the simpler `openai`-based config, I'm happy to update the PR.
Summary
Add MiniMax as a first-class LLM provider in NeMo Guardrails, enabling users to configure MiniMax models (M2.7, M2.5, M2.5-highspeed) via the standard `config.yml` with `engine: minimax`.

MiniMax provides an OpenAI-compatible API, so the integration uses `ChatOpenAI` from `langchain-openai` under the hood with MiniMax's API endpoint (https://api.minimax.io/v1). This follows the same pattern used by the existing NIM provider initializer.

Changes
- `nemoguardrails/llm/models/langchain_initializer.py`: Add `_init_minimax_model()` function and register `"minimax"` in `_PROVIDER_INITIALIZERS`
  - Uses `ChatOpenAI` with MiniMax's base URL
  - Supports the `MINIMAX_API_KEY` env var and the `api_key` parameter
  - Supports `base_url` override
- `examples/configs/llm/minimax/config.yml`: Configuration example
- `examples/configs/llm/minimax/README.md`: Documentation with available models and usage
- `tests/llm/models/test_minimax_provider.py`: 16 tests (unit + integration)
- `_handle_model_special_cases`: updated to account for MiniMax models
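Based on the files listed above, the example `config.yml` would look something like the following sketch; the model name and temperature are chosen for illustration and may differ from the shipped example:

```yaml
models:
  - type: main
    engine: minimax
    model: MiniMax-M2.5
    parameters:
      temperature: 0.5
```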
Set the API key via environment variable:
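For example (the key value is a placeholder for your real key):

```shell
export MINIMAX_API_KEY="your-api-key"
```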
Test Plan
- Unit tests pass (`pytest tests/llm/models/test_minimax_provider.py`)
- Special-case tests pass (`pytest tests/llm/models/test_langchain_special_cases.py`: 6 passed, 4 skipped)
- Integration tests run when `MINIMAX_API_KEY` is set