Codestral

Mistral
mistral/codestral

256K context

Context Window

256K

256,000 tokens

Max Output

16K

16,384 tokens

About this model

Mistral Codestral: a specialized code generation model

Codestral supports up to 256K tokens of context and is tuned for code generation, completion, and debugging.

Access it through Chuizi.AI with a single ck- API key; no separate Mistral AI account is needed.

Highlights

256K context window
16K max output
Strong code generation

Best For

Code generation
Refactoring
Debugging
Documentation

Released: 2025-01-13

Capabilities

Chat
Code
Tools

Aliases

codestral
codestral-2501

Pricing (per 1M tokens)

Input: $0.32
Output: $0.94

Final prices shown
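
The rates above can be turned into a per-request cost estimate. A minimal sketch; the helper name and example token counts are illustrative, and the rates are copied from the pricing table:

```python
def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at Codestral's listed rates."""
    INPUT_RATE = 0.32   # USD per 1M input tokens
    OUTPUT_RATE = 0.94  # USD per 1M output tokens
    return (input_tokens / 1_000_000) * INPUT_RATE + (output_tokens / 1_000_000) * OUTPUT_RATE

# e.g. a 200K-token prompt producing a 4K-token completion:
print(f"${request_cost(200_000, 4_000):.4f}")  # → $0.0678
```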

Quick Start

main.py
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",  # Chuizi.AI's OpenAI-compatible endpoint
    api_key="ck-your-key-here",           # your ck- API key
)

response = client.chat.completions.create(
    model="mistral/codestral",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
