POST /v1/messages
Example request:

curl --request POST \
  --url https://api.foxapi.cc/v1/messages \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data @- <<EOF
{
  "model": "claude-sonnet-4-20250514",
  "max_tokens": 1024,
  "messages": [
    {
      "role": "user",
      "content": "What's the weather like in San Francisco now? Is it good for the beach?"
    }
  ],
  "tools": [
    {
      "name": "get_weather",
      "description": "Get current weather for a specified city",
      "input_schema": {
        "type": "object",
        "properties": {
          "city": {
            "type": "string",
            "description": "City name"
          },
          "unit": {
            "type": "string",
            "enum": [
              "celsius",
              "fahrenheit"
            ],
            "description": "Temperature unit"
          }
        },
        "required": [
          "city"
        ]
      }
    }
  ]
}
EOF
Example response:

{
  "id": "msg_abc123",
  "type": "message",
  "role": "assistant",
  "content": [
    {
      "type": "text",
      "text": "Hello! How can I help you today?"
    }
  ],
  "model": "claude-sonnet-4-20250514",
  "stop_reason": "end_turn",
  "usage": {
    "input_tokens": 25,
    "output_tokens": 150
  }
}

Authorizations

Authorization
string
header
required

All API endpoints require Bearer token authentication

Add to request header:

Authorization: Bearer YOUR_API_KEY
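As a sketch, the same authenticated request can be built with the Python standard library (the API key below is a placeholder; substitute your own):

```python
import json
from urllib import request

API_KEY = "YOUR_API_KEY"  # placeholder: substitute your real key

def build_request(payload: dict) -> request.Request:
    """Build an authenticated POST request to the Messages endpoint."""
    return request.Request(
        "https://api.foxapi.cc/v1/messages",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request({
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Hi"}],
})
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) is omitted here; the point is only how the `Authorization` header is attached.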

Body

application/json
model
string
required

Model name, e.g. claude-sonnet-4-20250514

Example:

"claude-sonnet-4-20250514"

messages
object[]
required

List of messages in the conversation
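A minimal sketch of a multi-turn messages array; per the Claude convention, roles alternate and the conversation starts with a user turn:

```python
# Each message is an object with "role" and "content".
messages = [
    {"role": "user", "content": "What's the weather like in San Francisco now?"},
    {"role": "assistant", "content": "I'll check the current conditions for you."},
    {"role": "user", "content": "Is it good for the beach?"},
]

# Roles alternate, starting with "user".
assert all(m["role"] == ("user" if i % 2 == 0 else "assistant")
           for i, m in enumerate(messages))
```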

max_tokens
integer
required

Maximum number of tokens to generate

Example:

1024

system
string

System prompt

temperature
number

Sampling temperature (0-1)

Required range: 0 <= x <= 1
top_p
number

Nucleus sampling parameter (0-1)

Required range: 0 <= x <= 1
top_k
integer

Top-K sampling parameter

stream
boolean
default:false

Whether to stream the response

stop_sequences
string[]

Custom stop sequences that will cause the model to stop generating

tools
object[]

List of tools the model may use

tool_choice
object

How the model should use the provided tools
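The `tool_choice` shapes below (`auto`, `any`, and forcing a named tool) follow the Claude convention and are an assumption here; verify the accepted values against actual responses:

```python
# Assumed Claude-style tool_choice values.
auto_choice = {"type": "auto"}    # model decides whether to call a tool
any_choice = {"type": "any"}      # model must call some tool
forced_choice = {"type": "tool", "name": "get_weather"}  # force this specific tool

payload = {
    "model": "claude-sonnet-4-20250514",
    "max_tokens": 1024,
    "messages": [{"role": "user", "content": "Weather in San Francisco?"}],
    "tools": [{
        "name": "get_weather",
        "description": "Get current weather for a specified city",
        "input_schema": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    }],
    "tool_choice": forced_choice,
}
```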

metadata
object

Request metadata

Response

Claude Messages API response

id
string

Unique message ID

Example:

"msg_abc123"

type
enum<string>
Available options:
message
role
enum<string>
Available options:
assistant
content
object[]

Response content blocks
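Content blocks are discriminated by their `type` field. Following the Claude convention (text blocks carry `text`; `tool_use` blocks carry `id`, `name`, and `input`), a caller might separate them like this:

```python
def split_blocks(content):
    """Separate text and tool_use blocks in a response's content array."""
    texts = [b["text"] for b in content if b["type"] == "text"]
    tool_calls = [b for b in content if b["type"] == "tool_use"]
    return texts, tool_calls

# Hypothetical content array for illustration.
content = [
    {"type": "text", "text": "Let me check the weather."},
    {"type": "tool_use", "id": "toolu_01", "name": "get_weather",
     "input": {"city": "San Francisco"}},
]
texts, tool_calls = split_blocks(content)
```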

model
string

Model used

stop_reason
enum<string>

Reason the generation stopped

Available options:
end_turn,
max_tokens,
stop_sequence,
tool_use
Example:

"end_turn"

usage
object

Token usage statistics
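Using the example response above, total consumption can be computed from the `usage` object:

```python
response = {
    "id": "msg_abc123",
    "usage": {"input_tokens": 25, "output_tokens": 150},
}

usage = response["usage"]
total_tokens = usage["input_tokens"] + usage["output_tokens"]
```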