Mistral Large

Mistral
mistral/mistral-large

Context Window: 262K (262,000 tokens)

Max Output: 8K (8,192 tokens)

About this model

Mistral's flagship model, with strong multilingual and reasoning performance.

This model supports up to 262K tokens of context. It provides strong code generation and debugging capabilities.

Access it through Chuizi.AI with a single ck- API key; no separate Mistral AI account needed.
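The 262K window still has to leave room for the model's reply, so long prompts should be budgeted against both limits above. A minimal sketch, assuming a rough 4-characters-per-token heuristic (an illustration, not Mistral's actual tokenizer):

```python
# Limits taken from this model card.
CONTEXT_WINDOW = 262_000  # total tokens (prompt + reply)
MAX_OUTPUT = 8_192        # max tokens the model can generate

def estimate_tokens(text: str) -> int:
    """Crude estimate: roughly 4 characters per token (heuristic, not exact)."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, reserved_output: int = MAX_OUTPUT) -> bool:
    """True if the prompt plus the reserved reply length fits in the window."""
    return estimate_tokens(prompt) + reserved_output <= CONTEXT_WINDOW

print(fits_in_context("Hello!"))         # a short prompt fits easily
print(fits_in_context("x" * 2_000_000))  # ~500K estimated tokens does not
```

For exact counts you would use the model's own tokenizer; the point is only that input and output share one 262K budget.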

Highlights

262K context window
8K max output
Strong code generation

Best For

Code generation
Refactoring
Debugging
Documentation

2024-11-18

Capabilities

Chat
Code
Tools

Aliases

mistral-large

Pricing (per 1M tokens)

Input: $2.10
Output: $6.30

Final prices shown
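At these rates, the cost of a request scales linearly with token counts. A small sketch using the prices listed above (the helper name is illustrative):

```python
# Per-1M-token rates from the pricing table above.
INPUT_PER_M = 2.10   # USD per 1M input tokens
OUTPUT_PER_M = 6.30  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed rates."""
    return (input_tokens * INPUT_PER_M + output_tokens * OUTPUT_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 1,000-token reply:
print(round(estimate_cost(10_000, 1_000), 4))  # 0.0273
```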

Quick Start

main.py
from openai import OpenAI

client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",
)

response = client.chat.completions.create(
    model="mistral/mistral-large",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
