Anthropic Messages API

Chuizi.AI proxies the Anthropic Messages API as a native passthrough. Your request body goes directly to Anthropic with zero format conversion, and automatic failover ensures high availability. The response comes back in Anthropic's own format, including streaming SSE event types.

This matters if you use Claude Code, Cursor, Cline, or OpenCode. These tools expect the Anthropic Messages API, not OpenAI's format. With Chuizi.AI, you set two environment variables and everything works.
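
As a sketch, setting those two variables from Python might look like the following. The base URL `https://api.chuizi.ai/anthropic` is illustrative, and `ANTHROPIC_BASE_URL` / `ANTHROPIC_AUTH_TOKEN` are the variables Claude Code reads; check your tool's documentation for its exact names.

```python
import os

# Illustrative values -- substitute your real Chuizi.AI base URL and ck- key.
os.environ["ANTHROPIC_BASE_URL"] = "https://api.chuizi.ai/anthropic"
os.environ["ANTHROPIC_AUTH_TOKEN"] = "ck-your-key-here"

# Tools launched from this process (e.g. a Claude Code subprocess)
# inherit the environment and route Messages API calls through the proxy.
```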

Endpoints

| Method | Path | Description |
| --- | --- | --- |
| POST | `/anthropic/v1/messages` | Create a message (streaming and non-streaming) |
| GET | `/anthropic/v1/models` | List available Anthropic models |

Authentication

Chuizi.AI accepts your ck- API key through two headers. Use whichever your tool expects:

| Header | Format | Notes |
| --- | --- | --- |
| `x-api-key` | `ck-your-key-here` | Anthropic SDK default |
| `Authorization` | `Bearer ck-your-key-here` | OpenAI convention |

Both resolve to the same user account, balance, and rate limits.

Required Headers

| Header | Value | Notes |
| --- | --- | --- |
| `anthropic-version` | `2023-06-01` | Required. Passed through to upstream. |
| `content-type` | `application/json` | Required for POST requests. |
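
Putting the two tables together, a helper that assembles a complete header set might look like this. `build_headers` is an illustrative name, not part of any SDK; both auth styles hit the same account.

```python
def build_headers(api_key: str, style: str = "x-api-key") -> dict:
    """Assemble headers for a proxied Messages request.

    Both auth styles resolve to the same user account, balance, and
    rate limits; pick whichever your client library emits.
    """
    headers = {
        "anthropic-version": "2023-06-01",   # required, passed through upstream
        "content-type": "application/json",  # required for POST
    }
    if style == "x-api-key":
        headers["x-api-key"] = api_key                   # Anthropic SDK default
    else:
        headers["authorization"] = f"Bearer {api_key}"   # OpenAI convention
    return headers
```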

How It Differs from OpenAI /v1/chat/completions

If you are used to the OpenAI Chat Completions format, these are the key structural differences in Anthropic's Messages API:

| Concept | OpenAI Chat Completions | Anthropic Messages |
| --- | --- | --- |
| System prompt | `messages` array entry with `role: "system"` | Top-level `system` field (string or array of blocks) |
| Content format | String or array of content parts | Always an array of content blocks (`[{"type": "text", "text": "..."}]`) |
| Stop indicator | `finish_reason: "stop"` | `stop_reason: "end_turn"` |
| Max tokens | Optional (`max_tokens` or `max_completion_tokens`) | Required (`max_tokens`) |
| Token usage | `usage.prompt_tokens`, `usage.completion_tokens` | `usage.input_tokens`, `usage.output_tokens` |
| Streaming events | `data: {"choices": [...]}` chunks | Typed events: `message_start`, `content_block_delta`, `message_delta` |
| Model prefix | `anthropic/claude-sonnet-4-6` | `claude-sonnet-4-6` (bare name) or `anthropic/claude-sonnet-4-6` |
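
A rough sketch of translating an OpenAI-style request body into Anthropic Messages shape, covering the differences above. This is illustrative only (not an official shim) and handles just the structural moves listed in the table:

```python
def openai_to_anthropic(body: dict) -> dict:
    """Translate an OpenAI Chat Completions request body into
    Anthropic Messages shape (structural differences only)."""
    messages = []
    system = None
    for m in body["messages"]:
        if m["role"] == "system":
            system = m["content"]          # becomes the top-level system field
            continue
        content = m["content"]
        if isinstance(content, str):       # Anthropic content is always blocks
            content = [{"type": "text", "text": content}]
        messages.append({"role": m["role"], "content": content})
    out = {
        "model": body["model"].removeprefix("anthropic/"),  # bare name works too
        "max_tokens": body.get("max_tokens", 1024),         # required downstream
        "messages": messages,
    }
    if system is not None:
        out["system"] = system
    return out
```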

Request Format

```json
{
  "model": "claude-sonnet-4-6",
  "max_tokens": 1024,
  "system": "You are a helpful assistant.",
  "messages": [
    {
      "role": "user",
      "content": "Explain how TCP handshakes work."
    }
  ]
}
```
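
Sending that body needs nothing beyond the standard library. The sketch below builds the request with `urllib` but stops short of sending it; the host `api.chuizi.ai` is illustrative.

```python
import json
import urllib.request

body = {
    "model": "claude-sonnet-4-6",
    "max_tokens": 1024,
    "system": "You are a helpful assistant.",
    "messages": [{"role": "user", "content": "Explain how TCP handshakes work."}],
}

req = urllib.request.Request(
    "https://api.chuizi.ai/anthropic/v1/messages",  # illustrative host
    data=json.dumps(body).encode(),
    headers={
        "x-api-key": "ck-your-key-here",
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would perform the call; omitted here.
```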

Supported Parameters

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `model` | string | Yes | Model name. Bare names (`claude-sonnet-4-6`) and prefixed names (`anthropic/claude-sonnet-4-6`) both work. |
| `max_tokens` | integer | Yes | Maximum tokens to generate. |
| `messages` | array | Yes | Conversation messages. |
| `system` | string or array | No | System prompt. Can be a string or an array of content blocks (useful for caching). |
| `stream` | boolean | No | Enable SSE streaming. Default: `false`. |
| `temperature` | number | No | Sampling temperature, 0-1. |
| `top_p` | number | No | Nucleus sampling threshold. |
| `top_k` | integer | No | Top-K sampling. |
| `stop_sequences` | array | No | Custom stop sequences. |
| `tools` | array | No | Tool definitions for function calling. |
| `tool_choice` | object | No | Tool selection strategy. |
| `metadata` | object | No | Request metadata (e.g., `user_id` for abuse tracking). |
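
With `stream: true`, text arrives as the typed SSE events listed earlier (`message_start`, `content_block_delta`, `message_delta`). A minimal sketch of accumulating text from the `data:` lines of such a stream, assuming the standard `text_delta` payload shape:

```python
import json

def collect_text(sse_lines) -> str:
    """Accumulate assistant text from Anthropic-style SSE lines.

    Events arrive as 'event: <type>' / 'data: <json>' line pairs; text
    is carried by content_block_delta events with text_delta payloads.
    """
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue
        event = json.loads(line[len("data: "):])
        if event.get("type") == "content_block_delta":
            delta = event.get("delta", {})
            if delta.get("type") == "text_delta":
                parts.append(delta["text"])
    return "".join(parts)
```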

Response Format

Non-streaming

```json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "TCP uses a three-way handshake to establish a connection..."
    }
  ],
  "model": "claude-sonnet-4-6",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  }
}
```
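
Since `content` is always an array of blocks, client code concatenates the `text` blocks rather than reading a single string. A hypothetical helper (`summarize` is not an SDK function) that pulls out the text and token counts:

```python
def summarize(response: dict) -> tuple:
    """Extract assistant text and token counts from a Messages response."""
    text = "".join(
        block["text"] for block in response["content"] if block["type"] == "text"
    )
    usage = response["usage"]
    return text, usage["input_tokens"], usage["output_tokens"]
```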

Next Steps