Llama 4 Maverick
Meta
meta/llama-4-maverick
1M context, vision
Context Window
1.0M
1,000,000 tokens
Max Output
33K
32,768 tokens
About this model
Latest Llama with 1M context and multimodal support
This model supports up to 1M tokens of context, includes native vision understanding for analyzing images and documents, and provides strong code generation and debugging capabilities.
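Since the endpoint is OpenAI-compatible, an image can presumably be passed using the standard `image_url` content-part format. A minimal sketch (the question and image URL are placeholders, and the exact content-part support is an assumption, not confirmed by this page):

```python
# Sketch: pairing an image with a text question, assuming the endpoint
# accepts standard OpenAI-style "image_url" content parts.
def vision_messages(question: str, image_url: str) -> list:
    """Build a chat message that combines a text question with an image."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ]

# The resulting list is passed as `messages` to
# client.chat.completions.create(model="meta/llama-4-maverick", ...).
```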
Access it through Chuizi.AI with a single ck- API key; no separate Meta account needed.
Highlights
1M context window
33K max output
Native vision support
Strong code generation
Best For
Code generation, Refactoring, Debugging, Documentation
2025-04-05
Capabilities
Chat, Vision, Code, Tools
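The Tools capability presumably follows OpenAI-style function calling. A hedged sketch of a tool definition (the `get_weather` function and its parameters are made up for illustration, not part of this API):

```python
# Sketch of an OpenAI-style tool schema, assuming the endpoint supports
# standard function calling. get_weather is a hypothetical example tool.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Passed as tools=[weather_tool] to client.chat.completions.create(...);
# the model may then respond with a tool_call instead of plain text.
```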
Aliases
llama-4-maverick
Pricing (per 1M tokens)
| Token type | Price per 1M tokens |
|---|---|
| Input | $0.28 |
| Output | $0.89 |
Final prices shown
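At the listed rates, the cost of a request can be estimated from its token counts. A small sketch (the sample token counts are illustrative):

```python
# Estimate request cost from the listed per-1M-token prices.
INPUT_PER_1M = 0.28   # USD per 1M input tokens (from the pricing table)
OUTPUT_PER_1M = 0.89  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_PER_1M + output_tokens * OUTPUT_PER_1M) / 1_000_000

# e.g. a 100K-token prompt with a 2K-token reply:
# estimate_cost(100_000, 2_000) is about $0.0298
```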
Quick Start
main.py
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",
)

response = client.chat.completions.create(
    model="meta/llama-4-maverick",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)