DeepSeek V3.2
DeepSeek
deepseek/deepseek-v3.2
Open-source flagship, hybrid reasoning
Context Window
131K
131,072 tokens
Max Output
66K
65,536 tokens
About this model
DeepSeek V3.2 is DeepSeek's latest flagship model, with a hybrid reasoning mode that automatically adjusts reasoning depth to task complexity. It achieves top-tier performance in programming, mathematics, and general conversation.
The 128K context window comes with automatic KV cache disk caching, which saves 90% of input costs on requests with repeated prefixes. Open-source weights are available for research.
It unifies the deepseek-chat and deepseek-reasoner endpoints.
Highlights
Hybrid reasoning mode
Auto disk caching
128K context
Open-source weights
Best For
Programming & code review
Math reasoning
Long document analysis
General conversation
Released: 2025-09-29
Parameters: 671B (MoE)
Architecture: MoE Transformer
License: DeepSeek License (Open Source)
Capabilities
Chat
Reasoning
Code
Tools
Cache
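Since the capability list includes tool use, here is a minimal sketch of tool calling through the same OpenAI-compatible SDK used in Quick Start, assuming the endpoint accepts the SDK's standard `tools` request format. The `get_weather` tool and its schema are hypothetical examples, not part of this card.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" schema.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def ask_with_tools(question: str, api_key: str):
    """Send a tool-enabled chat request; return (name, args) per tool call."""
    from openai import OpenAI  # same SDK as the Quick Start example
    client = OpenAI(base_url="https://api.chuizi.ai/v1", api_key=api_key)
    response = client.chat.completions.create(
        model="deepseek/deepseek-v3.2",
        messages=[{"role": "user", "content": question}],
        tools=TOOLS,
    )
    # Tool-call arguments arrive as a JSON string and must be decoded.
    calls = response.choices[0].message.tool_calls or []
    return [(c.function.name, json.loads(c.function.arguments)) for c in calls]
```

Run your own function with the decoded arguments, append the result as a `tool` role message, and call the endpoint again to let the model finish its answer.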
Aliases
deepseek-v3.2
deepseek-latest
Pricing (per 1M tokens)
| Type | Price / 1M tokens |
|---|---|
| Input | $0.29 |
| Output | $0.44 |
| Cache Read | $0.03 |
Final prices shown
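A back-of-envelope cost estimate from the listed prices. The token counts in the usage note are hypothetical example values; note that the cache-read rate ($0.03) relative to the input rate ($0.29) matches the roughly 90% input-cost saving claimed above.

```python
# Listed per-1M-token prices from the table above (USD).
PRICE_INPUT = 0.29       # uncached input tokens
PRICE_OUTPUT = 0.44      # output tokens
PRICE_CACHE_READ = 0.03  # input tokens served from the KV cache

def request_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Cost in USD for one request; cached_tokens bill at the cache-read rate."""
    uncached = input_tokens - cached_tokens
    return (uncached * PRICE_INPUT
            + cached_tokens * PRICE_CACHE_READ
            + output_tokens * PRICE_OUTPUT) / 1_000_000

# A fully cached prefix is billed at $0.03 instead of $0.29 per 1M tokens:
input_saving = 1 - PRICE_CACHE_READ / PRICE_INPUT  # ≈ 0.90
```

For example, `request_cost(100_000, 2_000, cached_tokens=90_000)` prices 90K cached plus 10K fresh input tokens and 2K output tokens for a single call.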
Quick Start
main.py
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v3.2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
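For long replies you will usually want streaming. This sketch assumes the endpoint honors the SDK's standard `stream=True` flag (typical of OpenAI-compatible APIs, though not stated on this card); the helper that joins the incremental deltas is plain Python.

```python
def collect_stream(chunks) -> str:
    """Join the incremental deltas of a streamed chat completion into one string."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta.content  # None for non-content chunks
        if delta:
            parts.append(delta)
    return "".join(parts)

def stream_reply(question: str, api_key: str) -> str:
    """Stream a reply from the same endpoint as the Quick Start example."""
    from openai import OpenAI  # same SDK as the Quick Start example
    client = OpenAI(base_url="https://api.chuizi.ai/v1", api_key=api_key)
    stream = client.chat.completions.create(
        model="deepseek/deepseek-v3.2",
        messages=[{"role": "user", "content": question}],
        stream=True,
    )
    return collect_stream(stream)
```

In an interactive app you would print each delta as it arrives instead of collecting them, which is the main point of streaming.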