Llama 3.2 11B

Meta
meta/llama-3.2-11b

128K context, vision

Context Window

128K (128,000 tokens)

Max Output

4K (4,096 tokens)

About this model

A Llama 3.2 vision model with 11B parameters.

This model supports up to 128K tokens of context. It includes native vision understanding for analyzing images and documents.

Access it through Chuizi.AI with a single ck- API key; no separate Meta account needed.
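Since the model accepts image input through the OpenAI-compatible API, a multimodal request can be built with image content parts. A minimal sketch, assuming the standard OpenAI chat format for images; the image URL and prompt are placeholders:

```python
# Placeholder image URL for illustration only.
IMAGE_URL = "https://example.com/invoice.png"

# A multimodal message: one text part plus one image part.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "What is the total on this invoice?"},
            {"type": "image_url", "image_url": {"url": IMAGE_URL}},
        ],
    }
]
```

Pass `messages` to `client.chat.completions.create` with `model="meta/llama-3.2-11b"`, exactly as in the Quick Start below.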

Highlights

128K context window
4K max output
Native vision support

Best For

Image analysis
Document OCR
Visual Q&A
Multimodal chat

Release date: 2024-09-25

Capabilities

Chat, Vision, Tools

Aliases

llama-3.2-11b

Pricing (per 1M tokens)

Input: $0.37
Output: $0.37

Final prices shown
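From the listed rates, per-request cost is straightforward to estimate. A minimal helper (the token counts in the example are hypothetical):

```python
# Listed rates: $0.37 per 1M tokens for both input and output.
INPUT_PRICE_PER_M = 0.37
OUTPUT_PRICE_PER_M = 0.37

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# e.g. a 10,000-token prompt with a 2,000-token reply:
cost = estimate_cost(10_000, 2_000)  # 0.00444 USD
```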

Quick Start

main.py
from openai import OpenAI

# Point the OpenAI SDK at Chuizi.AI's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",  # your Chuizi.AI API key
)

response = client.chat.completions.create(
    model="meta/llama-3.2-11b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
