GPT-OSS 120B
Bedrock
openai/gpt-oss-120b
Context Window: 128K (128,000 tokens)
Max Output: 16K (16,384 tokens)
About this model
GPT-OSS 120B open-weight model via Bedrock
This model supports up to 128K tokens of context. It provides strong code generation and debugging capabilities.
Access it through Chuizi.AI with a single ck- API key; no separate OpenAI account needed.
Highlights
128K context window
16K max output
Strong code generation
Best For
Code generation, Refactoring, Debugging, Documentation
Released: 2025-08-05
Capabilities
Chat, Reasoning, Code, Tools, Cache
Aliases
gpt-oss-120b
Pricing (per 1M tokens)
| Type | Price (USD per 1M tokens) |
|---|---|
| Input | $0.63 |
| Output | $1.89 |
| Cache Read | $0.16 |
Final prices shown
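To see what these rates mean in practice, here is a minimal cost-estimation sketch. It assumes (as is typical with prompt caching, though this page does not spell it out) that cached input tokens are billed at the cache-read rate instead of the full input rate; the helper names are illustrative, not part of any SDK.

```python
# Rough cost estimator using the listed per-1M-token prices for
# openai/gpt-oss-120b. Illustrative helper -- not an official API.
PRICE_PER_M = {"input": 0.63, "output": 1.89, "cache_read": 0.16}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """Return estimated USD cost. Assumes cached_tokens (a subset of
    input_tokens) are billed at the cache-read rate."""
    fresh_input = input_tokens - cached_tokens
    cost = (
        fresh_input * PRICE_PER_M["input"]
        + output_tokens * PRICE_PER_M["output"]
        + cached_tokens * PRICE_PER_M["cache_read"]
    )
    return cost / 1_000_000

# Example: a 100K-token prompt (80K cached) with a 4K-token reply.
print(round(estimate_cost(100_000, 4_000, cached_tokens=80_000), 4))
```

For heavy reuse of a long system prompt, the cache-read rate ($0.16 vs $0.63 per 1M) cuts the input side of the bill by roughly 75%.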
Quick Start
main.py
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",
)

response = client.chat.completions.create(
    model="openai/gpt-oss-120b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```
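For multi-turn use, the 128K-token context window and 16K-token output cap mean long histories eventually need trimming before each request. A minimal sketch, assuming the common ~4 characters per token rule of thumb (for exact counts you would use a real tokenizer); the helper names here are hypothetical:

```python
# Trim chat history to fit the model's context window, keeping the
# system message and the most recent turns. The 4-chars-per-token
# estimate is a rough heuristic, not an exact count.
CONTEXT_TOKENS = 128_000
MAX_OUTPUT_TOKENS = 16_384

def approx_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def trim_history(messages: list[dict], reserve_output: int = MAX_OUTPUT_TOKENS) -> list[dict]:
    """Keep the system message plus the newest turns that fit within
    the context budget, leaving room for the model's reply."""
    budget = CONTEXT_TOKENS - reserve_output
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    used = sum(approx_tokens(m["content"]) for m in system)
    kept = []
    for m in reversed(rest):  # walk newest-first
        cost = approx_tokens(m["content"])
        if used + cost > budget:
            break
        kept.append(m)
        used += cost
    return system + list(reversed(kept))
```

The trimmed list can be passed directly as `messages` in the Quick Start call above; dropping oldest turns first preserves the system prompt and the most recent context.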