# Completions
Given a prompt, the model will return one or more predicted completions.
## Create Completion

`POST https://api.sybil.com/v1/completions`
Creates a completion for the provided prompt and parameters.
### Request Body
| Name | Type | Description |
|---|---|---|
| model | string | Required. The name of the model to use. |
| prompt | string | Required. The prompt to generate a completion for. |
| max_tokens | integer | The maximum number of tokens to generate in the completion. |
| temperature | number | The sampling temperature to use, between 0 and 2. Higher values make the output more random. |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling: the model considers only the tokens comprising the top `top_p` probability mass. |
| stream | boolean | Whether to stream back partial progress as the completion is generated. |
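To make the `top_p` parameter concrete, here is a minimal sketch of the nucleus-sampling idea it refers to: keep the smallest set of tokens whose cumulative probability reaches `top_p`, then sample only from that set. This illustrates the concept; it is not the server's implementation.

```python
def nucleus_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative probability
    reaches top_p, then renormalize. Nucleus (top-p) sampling draws
    the next token only from this filtered set."""
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for token, p in ranked:
        kept[token] = p
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalize so the kept probabilities sum to 1
    total = sum(kept.values())
    return {token: p / total for token, p in kept.items()}

# With top_p = 0.8, only the two most likely tokens survive
filtered = nucleus_filter({"a": 0.5, "b": 0.3, "c": 0.15, "d": 0.05}, 0.8)
```

Lowering `top_p` therefore narrows the candidate pool, which is why it is offered as an alternative to lowering `temperature`.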
### Example Request
```bash
curl https://api.sybil.com/v1/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $SYBIL_API_KEY" \
  -d '{
    "model": "DeepSeek-R3",
    "prompt": "Say this is a test",
    "max_tokens": 7,
    "temperature": 0
  }'
```
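The same request can be built from Python's standard library. This sketch only constructs the request so its parts can be inspected; the endpoint URL, JSON body, and bearer-token header are taken from the curl example above, and reading the key from a `SYBIL_API_KEY` environment variable is an assumption.

```python
import json
import os
import urllib.request

def build_completion_request(model, prompt, **params):
    """Build (but do not send) a POST request mirroring the curl example.
    The SYBIL_API_KEY environment variable is an assumption."""
    body = {"model": model, "prompt": prompt, **params}
    return urllib.request.Request(
        "https://api.sybil.com/v1/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('SYBIL_API_KEY', '')}",
        },
        method="POST",
    )

req = build_completion_request(
    "DeepSeek-R3", "Say this is a test", max_tokens=7, temperature=0
)
# urllib.request.urlopen(req) would actually send it
```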
### Response Format
```json
{
  "model": "DeepSeek-R3",
  "choices": [
    {
      "text": "\n\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}
```
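A quick sketch of consuming this response, assuming the schema shown above: the generated text lives in `choices[0].text`, a `finish_reason` of `"length"` indicates generation stopped at `max_tokens`, and the `usage` counts are additive.

```python
import json

# The example response from above, as a raw JSON string
raw = """{
  "model": "DeepSeek-R3",
  "choices": [
    {
      "text": "\\n\\nThis is indeed a test",
      "index": 0,
      "logprobs": null,
      "finish_reason": "length"
    }
  ],
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 7,
    "total_tokens": 12
  }
}"""

response = json.loads(raw)

# The generated text is in choices[0]; index orders multiple choices
text = response["choices"][0]["text"]

# finish_reason "length" means the completion was cut off at max_tokens
hit_limit = response["choices"][0]["finish_reason"] == "length"

# usage is additive: prompt_tokens + completion_tokens = total_tokens
billed = response["usage"]["total_tokens"]
```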