Llama 3.2 90B

Meta
meta/llama-3.2-90b

128K context, vision

Context Window

128K

128,000 tokens

Max Output

4K

4,096 tokens

About this model

Llama 3.2 vision model with 90B parameters

Llama 3.2 90B supports up to 128K tokens of context, includes native vision understanding for analyzing images and documents, and provides strong code generation and debugging capabilities.

Access it through Chuizi.AI with a single ck- API key; no separate Meta account needed.

Highlights

128K context window
4K max output
Native vision support
Strong code generation

Best For

Code generation · Refactoring · Debugging · Documentation

Released: 2024-09-25

Capabilities

Chat · Vision · Code · Tools
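The Tools capability is exposed through the OpenAI function-calling schema, which the OpenAI-compatible endpoint is assumed to accept via the standard `tools` parameter. A minimal sketch of a tool definition (the `get_weather` function here is a hypothetical example, not part of this model's API):

```python
# A hypothetical tool definition in the OpenAI function-calling format.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example function
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

# Passed alongside messages in the usual way, e.g.:
# response = client.chat.completions.create(
#     model="meta/llama-3.2-90b",
#     messages=messages,
#     tools=tools,
# )
```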

Aliases

llama-3.2-90b

Pricing (per 1M tokens)

Input: $2.10 / 1M tokens
Output: $2.10 / 1M tokens

Final prices shown
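With symmetric input and output pricing at $2.10 per 1M tokens, estimating the cost of a request is simple multiplication. A quick sketch using the figures from the table above:

```python
INPUT_PRICE = 2.10   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE = 2.10  # USD per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single request."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# Worst case: a full 128K-token context plus the 4,096-token max output.
print(round(estimate_cost(128_000, 4_096), 4))  # → 0.2774
```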

Quick Start

main.py
from openai import OpenAI

# Point the OpenAI SDK at Chuizi.AI's OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://api.chuizi.ai/v1",
    api_key="ck-your-key-here",  # your Chuizi.AI key
)

response = client.chat.completions.create(
    model="meta/llama-3.2-90b",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
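Because the model has native vision support, image input can be sent using the OpenAI multimodal message format, which the OpenAI-compatible endpoint is assumed to accept for vision models. The image URL below is a hypothetical placeholder:

```python
# Build a multimodal message in the OpenAI content-parts format,
# assumed to be accepted by the OpenAI-compatible endpoint.
def build_vision_messages(prompt: str, image_url: str) -> list:
    return [{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }]

messages = build_vision_messages(
    "Describe this chart.",
    "https://example.com/chart.png",  # hypothetical placeholder URL
)

# Sent the same way as a text-only request, e.g.:
# response = client.chat.completions.create(
#     model="meta/llama-3.2-90b",
#     messages=messages,
#     max_tokens=4096,  # the model's maximum output
# )
```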
