LangChain / LlamaIndex
Use 200+ models in LangChain and LlamaIndex by pointing their OpenAI-compatible clients at the Chuizi.AI API endpoint.
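Both frameworks work here because Chuizi.AI speaks the OpenAI chat-completions wire format. As a minimal sketch of what any OpenAI-compatible client sends under the hood, using only the Python standard library (the request is built but not sent; substitute your real key):

```python
import json
from urllib import request

API_BASE = "https://api.chuizi.ai/v1"
API_KEY = "ck-your-key-here"  # placeholder

def build_chat_request(model: str, user_message: str) -> request.Request:
    """Build an OpenAI-style /chat/completions request against Chuizi.AI."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return request.Request(
        f"{API_BASE}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("anthropic/claude-sonnet-4-6", "What is RAG?")
# with request.urlopen(req) as resp:  # requires a valid API key
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any library that can emit this request shape can use Chuizi.AI; LangChain and LlamaIndex below just need the base URL and key.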
Prerequisites
- Python 3.9+
- A Chuizi.AI account with an API key (starts with `ck-`)
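To keep the key out of source code, you can also supply it via environment variables; `langchain-openai` falls back to the standard OpenAI client variables when no explicit arguments are passed (variable names follow the OpenAI client convention):

```shell
export OPENAI_API_KEY="ck-your-key-here"
export OPENAI_API_BASE="https://api.chuizi.ai/v1"
```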
LangChain Configuration
Installation
```bash
pip install langchain-openai
```
Basic Usage
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    openai_api_base="https://api.chuizi.ai/v1",
    openai_api_key="ck-your-key-here",
    model="anthropic/claude-sonnet-4-6",
)

response = llm.invoke("What is RAG?")
print(response.content)
```
Using in a Chain
```python
from langchain_core.prompts import ChatPromptTemplate

# Reuses the `llm` configured in the basic usage example above.
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful AI assistant."),
    ("user", "{input}"),
])

chain = prompt | llm
response = chain.invoke({"input": "Explain LangChain"})
print(response.content)
```
Next Steps
- Embeddings API — vector embedding reference for RAG pipelines
- Choose a Model — compare models for RAG and agent tasks
- Structured Output — get JSON responses for data extraction