curl https://api.apimart.ai/v1/messages \
  -H "x-api-key: $API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, world"}
    ]
  }'
{
  "code": 200,
  "data": {
    "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
    "type": "message",
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Hello! I'm Claude. Nice to meet you."
      }
    ],
    "model": "claude-sonnet-4-5-20250929",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 12,
      "output_tokens": 18
    }
  }
}
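Note that this gateway wraps the native Anthropic message inside a code/data envelope, so the reply text sits one level deeper than in the official API. A minimal sketch of extracting it from the response above:

```python
import json

# Abbreviated copy of the wrapped response shown above.
raw = '''{"code": 200, "data": {"content": [{"type": "text",
          "text": "Hello! I'm Claude."}], "stop_reason": "end_turn"}}'''

resp = json.loads(raw)
# The message body lives under "data"; take the first text content block.
reply = resp["data"]["content"][0]["text"]
```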
Authentication API key Visit the API key management page to obtain an API key, then pass it in the x-api-key request header.
API version Specifies the Claude API version to use. Example: 2023-06-01
Request Body
Model name
claude-haiku-4-5-20251001 - Claude 4.5, fast-response model
claude-sonnet-4-5-20250929 - Claude 4.5, balanced model
claude-opus-4-1-20250805 - The most capable Claude 4.1 flagship model
claude-opus-4-1-20250805-thinking - Claude 4.1 Opus, extended-thinking model
claude-sonnet-4-5-20250929-thinking - Claude 4.5 Sonnet, extended-thinking model
Message list An array of messages from which the model generates the next response. Alternating user and assistant roles are supported. Each message contains:
role: the role (user or assistant)
content: the content (a string or an array of content blocks)
A single user message: [{"role": "user", "content": "Hello, Claude"}]
A multi-turn conversation: [
  {"role": "user", "content": "Hello."},
  {"role": "assistant", "content": "Hello, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in simple terms?"}
]
A prefilled assistant response: [
  {"role": "user", "content": "What is the Greek name for the sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The correct answer is ("}
]
Maximum tokens to generate The maximum number of tokens to generate before stopping. The model may stop before reaching this limit. The maximum value differs by model. Minimum: 1
System prompt The system prompt sets Claude's role, personality, goals, and instructions. String form: {
  "system": "You are a professional Python programming instructor"
}
Structured form: {
  "system": [
    {
      "type": "text",
      "text": "You are a professional Python programming instructor"
    }
  ]
}
Temperature parameter, range 0-1 Controls output randomness:
Lower values (e.g., 0.2): more deterministic, conservative
Higher values (e.g., 0.8): more random, creative
Default: 1.0
Nucleus sampling parameter, range 0-1 Uses nucleus sampling. Use either temperature or top_p, not both. Default: 1.0
Top-K sampling Samples only from the top K options, removing the "long tail" of low-probability responses. Recommended only for advanced use cases.
Enable streaming When true, the response is streamed using Server-Sent Events (SSE). Default: false
Stop sequences Custom text sequences that cause the model to stop generating. Up to 4 sequences. Example: ["\n\nHuman:", "\n\nAssistant:"]
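The sampling and stop parameters above are all top-level request-body fields. A minimal sketch of a payload combining them (the prompt text is ours; note that temperature is set and top_p is left at its default, per the guidance above):

```python
# Request payload sketch: sampling and stop parameters sit alongside
# model, max_tokens, and messages at the top level of the body.
payload = {
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "temperature": 0.2,                 # low: deterministic, conservative
    "stop_sequences": ["\n\nHuman:"],   # generation halts if this appears
    "messages": [{"role": "user", "content": "List three primary colors."}],
}
# Use either temperature or top_p, not both, so top_p is omitted here.
```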
Metadata Metadata object for the request. Includes:
user_id: an external identifier for the end user associated with the request
Tool definitions List of tools the model can use to complete tasks. Function tool example: {
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "Temperature unit"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
Supported tool types:
Custom function tools
Computer use tool (computer_20241022)
Text editor tool (text_editor_20241022)
Bash tool (bash_20241022)
Tool choice strategy Controls how the model uses tools:
{"type": "auto"}: Auto-decide (default)
{"type": "any"}: Must use a tool
{"type": "tool", "name": "tool_name"}: Use specific tool
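Putting tools and tool_choice together, a sketch of a request body that forces a specific tool call (the tool definition mirrors the get_weather example above; the user prompt is ours):

```python
# Minimal tool definition, trimmed from the get_weather example above.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather in a given location",
    "input_schema": {
        "type": "object",
        "properties": {"location": {"type": "string"}},
        "required": ["location"],
    },
}]

# tool_choice of type "tool" obliges the model to call that named tool.
request = {
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "tools": tools,
    "tool_choice": {"type": "tool", "name": "get_weather"},
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
}
```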
Response
Unique message identifier Example: "msg_013Zva2CMHLNnXjNJJKqJ2EF"
Object type Always "message"
Content blocks array Content generated by the model, as an array of content blocks. Text content: [{"type": "text", "text": "Hello! I'm Claude."}]
Tool use: [
  {
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",
    "name": "get_weather",
    "input": {"location": "San Francisco, CA", "unit": "celsius"}
  }
]
Content types:
text: Text content
tool_use: Tool invocation
Model that handled the request Example: "claude-sonnet-4-5-20250929"
Stop reason Possible values:
end_turn: Natural completion
max_tokens: Reached maximum tokens
stop_sequence: Hit stop sequence
tool_use: Invoked a tool
Stop sequence triggered The stop sequence that was generated, if any; otherwise null
Usage Examples
Basic Conversation
import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.apimart.ai"
)

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain quantum computing basics"}
    ]
)
print(message.content[0].text)
Multi-turn Conversation
messages = [
    {"role": "user", "content": "What is machine learning?"},
    {"role": "assistant", "content": "Machine learning is a branch of AI..."},
    {"role": "user", "content": "Can you give a practical example?"}
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=messages
)
Using System Prompts
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    system="You are a senior Python developer expert in code review and optimization.",
    messages=[
        {"role": "user", "content": "How to optimize this code?\n\n[code]"}
    ]
)
Streaming Response
with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short essay about AI"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
Tool Use
tools = [
    {
        "name": "get_stock_price",
        "description": "Get real-time stock price",
        "input_schema": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g., AAPL"
                }
            },
            "required": ["ticker"]
        }
    }
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's Tesla's stock price?"}
    ]
)

# Handle tool calls
if message.stop_reason == "tool_use":
    tool_use = next(block for block in message.content if block.type == "tool_use")
    print(f"Calling tool: {tool_use.name}")
    print(f"Arguments: {tool_use.input}")
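The snippet above stops at detecting the tool call. To complete the loop, you run the tool yourself and send its output back in a tool_result content block, then call the API again. A sketch of building that follow-up message list (the helper name is ours, not part of the SDK):

```python
def build_tool_result_messages(prior_messages, assistant_content,
                               tool_use_id, result_text):
    """Append the assistant's tool_use turn plus a user turn carrying
    the tool's output as a tool_result block."""
    return prior_messages + [
        {"role": "assistant", "content": assistant_content},
        {"role": "user", "content": [{
            "type": "tool_result",
            "tool_use_id": tool_use_id,  # must match the tool_use block's id
            "content": result_text,
        }]},
    ]

# The returned list is what you pass as `messages` to a second
# client.messages.create() call so the model can finish its answer.
```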
Vision Understanding
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "url",
                        "url": "https://example.com/image.jpg"
                    }
                },
                {
                    "type": "text",
                    "text": "Describe this image"
                }
            ]
        }
    ]
)
Base64 Image
import base64

with open("image.jpg", "rb") as image_file:
    image_data = base64.b64encode(image_file.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "Analyze this image"
                }
            ]
        }
    ]
)
Best Practices
1. Prompt Engineering
Clear role definition:
system = """You are an experienced data scientist specializing in:
- Statistical analysis and data visualization
- Machine learning model development
- Python and R programming
Provide professional, accurate advice."""
Structured output:
message = "Please return the analysis results in JSON format with summary, key_findings, and recommendations fields."
2. Error Handling
from anthropic import APIError, RateLimitError

try:
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError:
    print("Rate limit exceeded, please retry later")
except APIError as e:
    print(f"API error: {e}")
3. Token Optimization
# Use shorter prompts
messages = [
    {"role": "user", "content": "Summarize key points:\n\n[long text]"}
]

# Limit output length
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=500,  # Limit output
    messages=messages
)
4. Prefilling Responses
# Guide model to specific format
messages = [
    {"role": "user", "content": "List 5 Python best practices"},
    {"role": "assistant", "content": "Here are 5 Python best practices:\n\n1."}
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=messages
)
Streaming Response Handling
Python Streaming
import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.apimart.ai"
)

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a Python decorator example"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)
JavaScript Streaming
import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.API_KEY,
  baseURL: 'https://api.apimart.ai'
});

const stream = await client.messages.stream({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Write a React component example' }
  ]
});

for await (const chunk of stream) {
  if (chunk.type === 'content_block_delta' &&
      chunk.delta.type === 'text_delta') {
    process.stdout.write(chunk.delta.text);
  }
}
Important Notes
API Key Security :
Store API keys in environment variables
Never hardcode keys in source code
Rotate keys regularly
Rate Limiting :
Be aware of API rate limits
Implement retry mechanisms
Use exponential backoff
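A retry wrapper with exponential backoff can be sketched generically. This is a minimal illustration, not SDK code; in real use you would catch anthropic.RateLimitError rather than the bare Exception used here to keep the sketch dependency-free:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` with exponential backoff plus random jitter.

    Delays grow as base_delay * 2**attempt (1s, 2s, 4s, ...),
    with up to 0.5s of jitter to avoid synchronized retries.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:  # in practice: anthropic.RateLimitError
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Usage: wrap the API call in a zero-argument function, e.g. `with_retries(lambda: client.messages.create(...))`.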
Token Management :
Monitor token usage
Optimize prompt length
Use appropriate max_tokens values
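Monitoring token usage can be as simple as accumulating the usage counts returned with each response (the field names match the usage object in the response example earlier; the tracker class itself is a hypothetical helper, not part of the SDK):

```python
class UsageTracker:
    """Accumulate token counts across successive API responses."""

    def __init__(self):
        self.input_tokens = 0
        self.output_tokens = 0

    def record(self, usage):
        # `usage` is a dict shaped like the response's usage object,
        # e.g. {"input_tokens": 12, "output_tokens": 18}
        self.input_tokens += usage["input_tokens"]
        self.output_tokens += usage["output_tokens"]

    @property
    def total(self):
        return self.input_tokens + self.output_tokens
```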
Model Selection :
Opus: Complex tasks, deep thinking required
Sonnet: Balanced performance and cost
Haiku: Fast response, simple tasks
Content Filtering :
Validate user input
Filter sensitive information
Implement content moderation