MiniMax M2
MiniMax
minimax/minimax-m2
Context Window: 196K (196,000 tokens)
Max Output: 16K (16,384 tokens)
About this model
MiniMax M2 is a solid general-purpose model served via Bedrock. It supports up to 196K tokens of context and provides strong code generation and debugging capabilities.
Access it through Chuizi.AI with a single ck- API key; no separate MiniMax account needed.
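Before sending very large prompts, it can help to sanity-check that they fit the 196K-token window while leaving room for the reply. A minimal sketch, assuming a rough ~4-characters-per-token heuristic (an approximation, not the model's actual tokenizer):

```python
# Limits from this model card.
CONTEXT_LIMIT = 196_000
MAX_OUTPUT = 16_384

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """Rough check that a prompt plus the reserved output budget fits the window.

    Uses a ~4 chars/token heuristic; real token counts depend on the tokenizer.
    """
    approx_tokens = len(prompt) / 4
    return approx_tokens + reserved_output <= CONTEXT_LIMIT

print(fits_in_context("Hello!"))         # small prompt fits
print(fits_in_context("x" * 1_000_000))  # ~250K estimated tokens, too large
```

For precise budgeting, count tokens with the provider's tokenizer instead of the character heuristic.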
Highlights
196K context window
16K max output
Strong code generation
Best For
Code generation, Refactoring, Debugging, Documentation
2025-10-27
Capabilities
Chat, Code, Tools
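Since the model lists tool support, requests can include OpenAI-style function tools. A minimal sketch of one tool definition; the `get_weather` name and its schema are illustrative assumptions, not part of this model card:

```python
# Hypothetical tool definition in the OpenAI function-calling schema.
# Pass a list of such dicts as `tools=` in chat.completions.create.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative name, not a real API
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

print(get_weather_tool["function"]["name"])
```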
Aliases
minimax-m2, MiniMax-M2
Pricing (per 1M tokens)
| Type | Price / 1M tokens |
|---|---|
| Input | $0.23 |
| Output | $0.93 |
Final prices shown
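The listed rates make per-request cost easy to estimate from token counts. A small sketch using the prices above:

```python
def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in USD from the listed MiniMax M2 rates."""
    INPUT_PER_M = 0.23   # $ per 1M input tokens
    OUTPUT_PER_M = 0.93  # $ per 1M output tokens
    return (input_tokens / 1_000_000) * INPUT_PER_M \
         + (output_tokens / 1_000_000) * OUTPUT_PER_M

# A full-context call (196,000 input tokens) with the max 16,384-token reply:
print(round(estimate_cost(196_000, 16_384), 4))  # → 0.0603
```

In practice, read the real counts from `response.usage` rather than estimating.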
Quick Start
main.py
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",
)

response = client.chat.completions.create(
    model="minimax/minimax-m2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```