Use the returned task ID to query the task for the final result.
Documentation Index
Fetch the complete documentation index at: https://docs.foxapi.cc/llms.txt
Use this file to discover all available pages before exploring further.
All endpoints require Bearer Token authentication. Add to the request header:
Authorization: Bearer YOUR_API_KEY
YOUR_API_KEY is the API Token (sk-... format).
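The header above can be attached with a small helper; this is a sketch, with "sk-your-api-key" as a placeholder for your real API Token:

```python
def auth_headers(api_key: str) -> dict:
    """Build the headers required on every FoxAPI request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

headers = auth_headers("sk-your-api-key")
print(headers["Authorization"])  # Bearer sk-your-api-key
```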
Model name. Common vision models:
- claude-opus-4-7
- gemini-2.5-pro
- gpt-5.5 (single-image set only; does not support video / audio)
- nemotron-3-nano-omni (single image only)
Example values: "claude-opus-4-7", "gemini-2.5-pro", "nemotron-3-nano-omni"
User prompt, up to 100,000 characters.
Maximum length: 100000
Example: "Describe this image in one sentence."
Array of image sources (1–10 images). Each element accepts one of the following two forms:
- a direct HTTP(S) image URL
- a data:image/<type>;base64,<payload> data URI (base64 inline)
Model constraints:
- nemotron-3-nano-omni: single image only; image_urls.length > 1 returns 422. When a URL is intermittently unreachable, fall back to an inline data URI.
- Base64 data is not size-validated; oversized payloads may trigger 422.
Constraint: 1–10 elements
Example:
[
  "https://fal.media/files/lion/AOtzfcyHpx-MOITAUeMrK.jpeg"
]
Synchronous mode (see llm-text schema).
Default: false
Whether to stream (see llm-text schema).
Default: false
Generation token limit. Optional.
Constraint: x >= 1
Example: 64
Sampling temperature, range [0, 2]. Optional.
Constraint: 0 <= x <= 2
Example: 0.3
System instruction. Optional.
Maximum length: 10000
Example: "You are a vision assistant."
Whether to include reasoning tokens. Some thinking models require this to be set to true.
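Putting the parameters above together, a request body might look like the following sketch. JSON key names other than image_urls are inferred from the field descriptions and may differ from the actual schema:

```python
import json

# Illustrative request body assembled from the parameters above.
body = {
    "model": "gemini-2.5-pro",
    "prompt": "Describe this image in one sentence.",
    "image_urls": [
        "https://fal.media/files/lion/AOtzfcyHpx-MOITAUeMrK.jpeg",
    ],
    "sync": False,       # async: submit returns a task to poll
    "stream": False,
    "max_tokens": 64,
    "temperature": 0.3,  # must stay within [0, 2]
    "system": "You are a vision assistant.",
}
print(json.dumps(body, indent=2))
```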
Task created (async mode) / full response (sync mode)
Submit response, conforming to the unified task standard shape. results and error are fixed at null during submit; they are returned via GET /v1/tasks/{task_id} after the task completes or fails. With sync=true and stream=false, the endpoint instead returns the full OpenAI ChatCompletion JSON directly.
Task ID, formatted as task-llmrouter-{timestamp}-{8random}.
"task-llmrouter-1776874565-yq3szvcu"
Fixed value: "llm.generation.task"
Fixed value: "llm"
The model name submitted by the client (echoed verbatim).
"claude-opus-4-7"
Initial status at submit: "pending"
0
Task creation time (Unix timestamp).
Example: 1776874565
Returns {url: ...} when stream=true; null when stream=false.
Fixed at null during submit; returned via GET /v1/tasks/{task_id} after the task completes — results[0] is the full OpenAI ChatCompletion response.
null
Fixed at null during submit; returned via GET /v1/tasks/{task_id} when the task fails.
null
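The fields above imply a simple completion check: a task is finished once results or error becomes non-null. A minimal polling sketch; BASE_URL is an assumption (the docs give only the /v1/tasks/{task_id} path), and the helper names are illustrative:

```python
import json
import time
import urllib.request

BASE_URL = "https://api.foxapi.cc"  # assumed host; adjust to your deployment

def is_finished(task: dict) -> bool:
    """results/error stay null until the task completes or fails,
    so either becoming non-null means the task is done."""
    return task.get("results") is not None or task.get("error") is not None

def poll_task(task_id: str, api_key: str, interval: float = 2.0) -> dict:
    """Poll GET /v1/tasks/{task_id} until the task completes or fails."""
    url = f"{BASE_URL}/v1/tasks/{task_id}"
    while True:
        req = urllib.request.Request(
            url, headers={"Authorization": f"Bearer {api_key}"}
        )
        with urllib.request.urlopen(req) as resp:
            task = json.load(resp)
        if is_finished(task):
            # On success, results[0] is the full OpenAI ChatCompletion JSON.
            return task
        time.sleep(interval)
```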