POST /v1/messages
curl https://api.apimart.ai/v1/messages \
  -H "x-api-key: $API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "messages": [
      {"role": "user", "content": "Hello, world"}
    ]
  }'
{
  "code": 200,
  "data": {
    "id": "msg_013Zva2CMHLNnXjNJJKqJ2EF",
    "type": "message",
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "Hello! I'm Claude. Nice to meet you."
      }
    ],
    "model": "claude-sonnet-4-5-20250929",
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 12,
      "output_tokens": 18
    }
  }
}

Authentication

x-api-key
string
required
API key for authentication. Visit the API key management page to obtain your key, then add it to the request header:
x-api-key: YOUR_API_KEY
anthropic-version
string
required
API version. Specifies which Claude API version to use. Example: 2023-06-01

Request Body

model
string
required
Model name
  • claude-haiku-4-5-20251001 - Claude 4.5, fast-response version
  • claude-sonnet-4-5-20250929 - Claude 4.5, balanced version
  • claude-opus-4-1-20250805 - Claude 4.1, the most powerful flagship model
  • claude-opus-4-1-20250805-thinking - Claude 4.1 Opus, extended thinking version
  • claude-sonnet-4-5-20250929-thinking - Claude 4.5 Sonnet, extended thinking version
messages
array
required
List of messages. The array of input messages from which the model generates its next response. Alternating user and assistant roles are supported. Each message contains:
  • role: Role (user or assistant)
  • content: Content (a string or an array of content blocks)
A single user message:
[{"role": "user", "content": "Hello, Claude"}]
A multi-turn conversation:
[
  {"role": "user", "content": "Hello."},
  {"role": "assistant", "content": "Hello, I'm Claude. How can I help you?"},
  {"role": "user", "content": "Can you explain LLMs in simple terms?"}
]
A prefilled assistant response:
[
  {"role": "user", "content": "What is the Greek name for the sun? (A) Sol (B) Helios (C) Sun"},
  {"role": "assistant", "content": "The correct answer is ("}
]
max_tokens
integer
required
Maximum number of tokens to generate before stopping. The model may stop before reaching this limit, and the maximum value differs by model. Minimum: 1
system
string | array
System prompt. A system prompt sets Claude's role, personality, goals, and instructions. String form:
{
  "system": "You are a professional Python programming instructor"
}
Structured form:
{
  "system": [
    {
      "type": "text",
      "text": "You are a professional Python programming instructor"
    }
  ]
}
temperature
number
Temperature parameter, in the range 0-1. Controls output randomness:
  • Lower values (e.g. 0.2): more deterministic and conservative
  • Higher values (e.g. 0.8): more random and creative
Default: 1.0
top_p
number
Nucleus sampling parameter, in the range 0-1. Enables nucleus sampling. Using either temperature or top_p, but not both, is recommended. Default: 1.0
top_k
integer
Top-K sampling. Samples only from the top K options, removing the "long tail" of low-probability responses. Recommended only for advanced use cases.
stream
boolean
Enable streaming. When true, the response is streamed using Server-Sent Events (SSE). Default: false
stop_sequences
array
Stop sequences. Custom text sequences that cause the model to stop generating. Up to 4 sequences. Example: ["\n\nHuman:", "\n\nAssistant:"]
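As a sketch of how this parameter appears in a request body (the END_OF_ANSWER marker and the prompt are illustrative, not part of the API):

```python
# Illustrative request body using a custom stop marker.
request_body = {
    "model": "claude-sonnet-4-5-20250929",
    "max_tokens": 1024,
    "stop_sequences": ["END_OF_ANSWER"],
    "messages": [
        {"role": "user", "content": "Answer briefly, then write END_OF_ANSWER."}
    ],
}

# The API accepts at most 4 stop sequences; a simple client-side check:
assert len(request_body["stop_sequences"]) <= 4
```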
metadata
object
Metadata object for the request. Includes:
  • user_id: User identifier
tools
array
Tool definitions. List of tools the model can use to complete tasks. Function tool example:
{
  "tools": [
    {
      "name": "get_weather",
      "description": "Get the current weather in a given location",
      "input_schema": {
        "type": "object",
        "properties": {
          "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA"
          },
          "unit": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "Temperature unit"
          }
        },
        "required": ["location"]
      }
    }
  ]
}
Supported tool types:
  • Custom function tools
  • Computer use tool (computer_20241022)
  • Text editor tool (text_editor_20241022)
  • Bash tool (bash_20241022)
tool_choice
object
Tool choice strategy. Controls how the model uses tools:
  • {"type": "auto"}: Auto-decide (default)
  • {"type": "any"}: Must use a tool
  • {"type": "tool", "name": "tool_name"}: Use specific tool
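For reference, the three strategies written as Python request-body fragments (the get_weather name matches the tool defined above):

```python
# The three tool_choice strategies as request-body fragments.
auto_choice = {"type": "auto"}    # let the model decide (default)
any_choice = {"type": "any"}      # the model must call some tool
forced_choice = {"type": "tool", "name": "get_weather"}  # must call get_weather
```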

Response

id
string
Unique message identifier. Example: "msg_013Zva2CMHLNnXjNJJKqJ2EF"
type
string
Object type. Always "message"
role
string
Role. Always "assistant"
content
array
Content blocks array. Content generated by the model, as an array of content blocks. Text content:
[{"type": "text", "text": "Hello! I'm Claude."}]
Tool use:
[
  {
    "type": "tool_use",
    "id": "toolu_01A09q90qw90lq917835lq9",
    "name": "get_weather",
    "input": {"location": "San Francisco, CA", "unit": "celsius"}
  }
]
Content types:
  • text: Text content
  • tool_use: Tool invocation
model
string
Model that handled the request. Example: "claude-sonnet-4-5-20250929"
stop_reason
string
Stop reason. Possible values:
  • end_turn: Natural completion
  • max_tokens: Reached maximum tokens
  • stop_sequence: Hit stop sequence
  • tool_use: Invoked a tool
stop_sequence
string | null
Stop sequence triggered. The stop sequence that was generated, if any; otherwise null
usage
object
Token usage statistics

Usage Examples

Basic Conversation

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.apimart.ai"
)

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Explain quantum computing basics"}
    ]
)

print(message.content[0].text)

Multi-turn Conversation

messages = [
    {"role": "user", "content": "What is machine learning?"},
    {"role": "assistant", "content": "Machine learning is a branch of AI..."},
    {"role": "user", "content": "Can you give a practical example?"}
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=messages
)

Using System Prompts

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    system="You are a senior Python developer expert in code review and optimization.",
    messages=[
        {"role": "user", "content": "How to optimize this code?\n\n[code]"}
    ]
)

Streaming Response

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a short essay about AI"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

Tool Use

tools = [
    {
        "name": "get_stock_price",
        "description": "Get real-time stock price",
        "input_schema": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g., AAPL"
                }
            },
            "required": ["ticker"]
        }
    }
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    tools=tools,
    messages=[
        {"role": "user", "content": "What's Tesla's stock price?"}
    ]
)

# Handle tool calls
if message.stop_reason == "tool_use":
    tool_use = next(block for block in message.content if block.type == "tool_use")
    print(f"Calling tool: {tool_use.name}")
    print(f"Arguments: {tool_use.input}")
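The snippet above only inspects the call. To complete the loop, execute the tool locally and send its output back as a tool_result block in a follow-up user message, keyed by the tool_use block's id. A self-contained sketch of that message construction, using a simulated tool_use block and a placeholder price lookup:

```python
from types import SimpleNamespace

# Simulated tool_use block, standing in for the one returned in message.content.
tool_use = SimpleNamespace(
    type="tool_use",
    id="toolu_01A09q90qw90lq917835lq9",
    name="get_stock_price",
    input={"ticker": "TSLA"},
)

def run_tool(name, args):
    # Placeholder dispatcher; a real one would call a market-data API.
    if name == "get_stock_price":
        return f'{{"ticker": "{args["ticker"]}", "price": 242.84}}'
    raise ValueError(f"unknown tool: {name}")

# The tool's output goes back in a user message, referencing the tool_use id.
tool_result_message = {
    "role": "user",
    "content": [
        {
            "type": "tool_result",
            "tool_use_id": tool_use.id,
            "content": run_tool(tool_use.name, tool_use.input),
        }
    ],
}
# Append tool_result_message after the assistant's tool_use turn, then call
# client.messages.create again to get the model's final answer.
```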

Vision Understanding

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "url",
                        "url": "https://example.com/image.jpg"
                    }
                },
                {
                    "type": "text",
                    "text": "Describe this image"
                }
            ]
        }
    ]
)

Base64 Image

import base64

with open("image.jpg", "rb") as image_file:
    image_data = base64.b64encode(image_file.read()).decode("utf-8")

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "image",
                    "source": {
                        "type": "base64",
                        "media_type": "image/jpeg",
                        "data": image_data
                    }
                },
                {
                    "type": "text",
                    "text": "Analyze this image"
                }
            ]
        }
    ]
)

Best Practices

1. Prompt Engineering

Clear role definition:
system = """You are an experienced data scientist specializing in:
- Statistical analysis and data visualization
- Machine learning model development
- Python and R programming
Provide professional, accurate advice."""
Structured output:
message = "Please return the analysis results in JSON format with summary, key_findings, and recommendations fields."
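Models sometimes wrap JSON output in a Markdown code fence, so it is worth stripping one before parsing. A small helper along these lines (the sample response string is invented for illustration):

```python
import json

def parse_json_response(text):
    """Parse a JSON response, tolerating an optional ```json fence."""
    text = text.strip()
    if text.startswith("```"):
        # Drop the opening fence line and the trailing closing fence.
        text = text.split("\n", 1)[1]
        text = text.rsplit("```", 1)[0]
    return json.loads(text)

sample = '```json\n{"summary": "ok", "key_findings": [], "recommendations": []}\n```'
result = parse_json_response(sample)
print(result["summary"])  # -> ok
```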

2. Error Handling

from anthropic import APIError, RateLimitError

try:
    message = client.messages.create(
        model="claude-sonnet-4-5-20250929",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello"}]
    )
except RateLimitError:
    print("Rate limit exceeded, please retry later")
except APIError as e:
    print(f"API error: {e}")

3. Token Optimization

# Use shorter prompts
messages = [
    {"role": "user", "content": "Summarize key points:\n\n[long text]"}
]

# Limit output length
message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=500,  # Limit output
    messages=messages
)

4. Prefilling Responses

# Guide model to specific format
messages = [
    {"role": "user", "content": "List 5 Python best practices"},
    {"role": "assistant", "content": "Here are 5 Python best practices:\n\n1."}
]

message = client.messages.create(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=messages
)

Streaming Response Handling

Python Streaming

import anthropic

client = anthropic.Anthropic(
    api_key="YOUR_API_KEY",
    base_url="https://api.apimart.ai"
)

with client.messages.stream(
    model="claude-sonnet-4-5-20250929",
    max_tokens=1024,
    messages=[
        {"role": "user", "content": "Write a Python decorator example"}
    ]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

JavaScript Streaming

import Anthropic from '@anthropic-ai/sdk';

const client = new Anthropic({
  apiKey: process.env.API_KEY,
  baseURL: 'https://api.apimart.ai'
});

const stream = await client.messages.stream({
  model: 'claude-sonnet-4-5-20250929',
  max_tokens: 1024,
  messages: [
    { role: 'user', content: 'Write a React component example' }
  ]
});

for await (const chunk of stream) {
  if (chunk.type === 'content_block_delta' && 
      chunk.delta.type === 'text_delta') {
    process.stdout.write(chunk.delta.text);
  }
}

Important Notes

  1. API Key Security:
    • Store API keys in environment variables
    • Never hardcode keys in source code
    • Rotate keys regularly
  2. Rate Limiting:
    • Be aware of API rate limits
    • Implement retry mechanisms
    • Use exponential backoff
  3. Token Management:
    • Monitor token usage
    • Optimize prompt length
    • Use appropriate max_tokens values
  4. Model Selection:
    • Opus: Complex tasks, deep thinking required
    • Sonnet: Balanced performance and cost
    • Haiku: Fast response, simple tasks
  5. Content Filtering:
    • Validate user input
    • Filter sensitive information
    • Implement content moderation
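The retry advice above can be sketched as a small exponential-backoff wrapper. The delay schedule, jitter, and exception choice here are illustrative, not part of the API:

```python
import random
import time

def with_backoff(call, max_retries=5, base_delay=1.0, retryable=(Exception,)):
    """Retry `call` with exponential backoff plus a little jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except retryable:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Usage (hypothetical): wrap a rate-limited call.
# message = with_backoff(
#     lambda: client.messages.create(
#         model="claude-sonnet-4-5-20250929",
#         max_tokens=1024,
#         messages=[{"role": "user", "content": "Hello"}],
#     ),
#     retryable=(RateLimitError,),
# )
```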