POST /chat/completions
Chat Completions
Example request:

curl --request POST \
  --url https://api.lumegates.com/v1/chat/completions \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "gpt-4o",
  "messages": [
    {
      "role": "user",
      "content": "Hello LumeGates!"
    }
  ],
  "stream": false
}
'
Example response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 12,
    "total_tokens": 21
  }
}
LumeGates provides an OpenAI-compatible interface for all models. Use this endpoint to send a list of messages and get a model-generated response.
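The curl example above can be reproduced with the Python standard library. This is a minimal sketch: the LUMEGATES_API_KEY environment variable name is an assumption for illustration, not something these docs specify.

```python
import json
import os
import urllib.request

API_URL = "https://api.lumegates.com/v1/chat/completions"

def build_request(model: str, messages: list, stream: bool = False) -> urllib.request.Request:
    """Assemble the POST request shown in the curl example."""
    body = json.dumps({"model": model, "messages": messages, "stream": stream})
    return urllib.request.Request(
        API_URL,
        data=body.encode("utf-8"),
        headers={
            # LUMEGATES_API_KEY is an assumed variable name for your auth token.
            "Authorization": f"Bearer {os.environ.get('LUMEGATES_API_KEY', '')}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("gpt-4o", [{"role": "user", "content": "Hello LumeGates!"}])
# urllib.request.urlopen(req) would send it; the JSON body of the response
# follows the schema shown in the example response above.
```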

Request Headers

Requests are authenticated with a standard Bearer token passed in the Authorization header.

Body Parameters

Supported parameters include model, messages, stream, and temperature; the core fields are documented under Body below.

Authorizations

Authorization (string, header, required)
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Body (application/json)

model (string). Example: "gpt-4o"
messages (object[])
stream (boolean, default: false)

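When stream is true, OpenAI-compatible endpoints typically return Server-Sent Events: lines of the form `data: {json chunk}`, terminated by `data: [DONE]`. These docs do not spell out the chunk format, so the helper below is a sketch under that assumption.

```python
import json

def parse_sse_line(line: str):
    """Decode one SSE line; return the chunk dict, or None for blanks and [DONE]."""
    line = line.strip()
    if not line.startswith("data:"):
        return None
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        return None
    return json.loads(payload)

# Assumed chunk shape (OpenAI convention): choices[0].delta.content holds
# the next text fragment.
chunk = parse_sse_line('data: {"choices": [{"delta": {"content": "Hel"}}]}')
```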
Response (200 - application/json)

Successful response.

id (string). Example: "chatcmpl-123"
object (string). Example: "chat.completion"
created (integer). Example: 1677652288
model (string). Example: "gpt-4o"
choices (object[])
usage (object)
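For a non-streaming call, the reply text and token accounting come straight out of the fields listed above. A short sketch using the example response from this page:

```python
# The example response body documented above, as a Python dict.
response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "gpt-4o",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",
                "content": "Hello! How can I help you today?",
            },
            "finish_reason": "stop",
        }
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 12, "total_tokens": 21},
}

# The assistant's reply lives in choices[0].message.content; token counts in usage.
reply = response["choices"][0]["message"]["content"]
total = response["usage"]["total_tokens"]
# reply == "Hello! How can I help you today?", total == 21
```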