Grok 2

xAI
xai/grok-2

128K context

Context Window

128K

128,000 tokens

Max Output

8K

8,192 tokens

About this model

Previous generation Grok model

This model supports up to 128K tokens of context. It provides strong code generation and debugging capabilities.

Access it through Chuizi.AI with a single ck- API key; no separate xAI account is needed.

Highlights

128K context window
8K max output
Strong code generation
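Because the 128K context window covers both the prompt and the completion, it helps to budget tokens explicitly. A minimal sketch (the helper name and limits below simply restate the figures from this page):

```python
# Rough token budgeting for xai/grok-2: the prompt plus the reserved
# completion must fit inside the 128K context window.
CONTEXT_WINDOW = 128_000   # total tokens the model can attend to
MAX_OUTPUT = 8_192         # maximum completion tokens

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT) -> int:
    """Tokens left for the prompt after reserving room for the reply."""
    if not 0 < reserved_output <= MAX_OUTPUT:
        raise ValueError("reserved_output must be within the model's output limit")
    return CONTEXT_WINDOW - reserved_output

print(max_prompt_tokens())       # 119808 tokens available for the prompt
print(max_prompt_tokens(1_024))  # 126976 when only 1K of output is reserved
```

Reserving less than the full 8K output limit frees the difference for the prompt, which matters when packing long code files into a request.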

Best For

Code generation, Refactoring, Debugging, Documentation
Released: 2024-08-13

Capabilities

Chat, Code, Caching

Aliases

grok-2

Pricing (per 1M tokens)

Input: $2.10
Output: $10.50
Cache Read: $1.05

Final prices shown
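The per-1M-token rates above translate into per-request costs as follows. This is an illustrative sketch, assuming cached input tokens are billed at the cache-read rate instead of the full input rate; check your billing dashboard for authoritative figures:

```python
# Estimate request cost in USD from the pricing table (rates per 1M tokens).
PRICE_PER_M = {"input": 2.10, "output": 10.50, "cache_read": 1.05}

def estimate_cost(input_tokens: int, output_tokens: int, cached_tokens: int = 0) -> float:
    """USD cost of one request; cached tokens billed at the cache-read rate (assumption)."""
    billable_input = input_tokens - cached_tokens
    usd = (
        billable_input * PRICE_PER_M["input"]
        + cached_tokens * PRICE_PER_M["cache_read"]
        + output_tokens * PRICE_PER_M["output"]
    )
    return usd / 1_000_000

# Example: 100K input tokens (40K served from cache) plus a 4K-token reply.
print(round(estimate_cost(100_000, 4_000, 40_000), 2))  # 0.21
```

Cache reads cost half the input rate, so prompts with large repeated prefixes (system prompts, shared code context) get noticeably cheaper on subsequent calls.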

Quick Start

main.py
from openai import OpenAI

# Point the OpenAI-compatible client at the Chuizi.AI endpoint.
client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",  # your Chuizi.AI ck- key
)

response = client.chat.completions.create(
    model="xai/grok-2",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
