✂️ Tokenize

Use the ✂️ Tokenize endpoint to see how the model slices text into tokens.

Available at https://api.lighton.ai/muse/v1/tokenize.


Example request

curl -X 'POST' \
'https://api.lighton.ai/muse/v1/tokenize' \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-H 'X-API-KEY: YOUR_API_KEY' \
-H 'X-Model: orion-fr' \
-d '{"text": "Il était une fois"}'
Response (JSON)
{
  "request_id": "46dfb88e-812f-4424-96a9-3dec57caf8da",
  "outputs": [
    [
      {
        "execution_metadata": {
          "cost": {
            "tokens_used": 0,
            "tokens_input": 4,
            "tokens_generated": 0,
            "cost_type": "orion-fr@default",
            "batch_size": 1
          },
          "finish_reason": "length"
        },
        "text": "Il était une fois",
        "n_tokens": 4,
        "tokens": [" Il", " était", " une", " fois"]
      }
    ]
  ],
  "costs": {
    "orion-fr@default": {
      "total_tokens_used": 0,
      "total_tokens_input": 4,
      "total_tokens_generated": 0,
      "batch_size": 1
    }
  }
}
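
The same request can also be sent programmatically. The sketch below is illustrative only, written in Python with the third-party requests library; the URL, headers, and payload mirror the curl example above, and YOUR_API_KEY is a placeholder for a valid key.

import requests

# Mirror of the curl example above; the API key and model header are placeholders.
url = "https://api.lighton.ai/muse/v1/tokenize"
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-API-KEY": "YOUR_API_KEY",
    "X-Model": "orion-fr",
}
payload = {"text": "Il était une fois"}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()
print(response.json()["outputs"][0][0]["tokens"])  # [' Il', ' était', ' une', ' fois']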

Parameters

  • text string/array[string] ⚠️ required

    The input(s) to tokenize, also known as the prompt. They can be provided either as a single string or as an array of strings for batch processing (see the sketch after this list).
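
To illustrate batch processing, here is a minimal Python sketch (not an official example) that sends an array of strings; the second input string is made-up sample text, and the response's outputs array then holds one entry per input, as described in the next section.

import requests

# Hypothetical batch request: "text" is an array of strings.
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json",
    "X-API-KEY": "YOUR_API_KEY",
    "X-Model": "orion-fr",
}
payload = {"text": ["Il était une fois", "Bonjour le monde"]}
response = requests.post("https://api.lighton.ai/muse/v1/tokenize", headers=headers, json=payload)

# "outputs" is shaped like the batch: one entry per input string.
for batch_entry in response.json()["outputs"]:
    for output in batch_entry:
        print(output["n_tokens"], output["tokens"])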

Response (outputs)

An array of outputs shaped like your batch; a parsing sketch follows this list.

  • execution_metadata ExecutionMetadata

    An execution metadata structure, as shown in the example response above.

  • text string

    The input text.

  • n_tokens int

    The number of tokens in the input text.

  • tokens array[string]

    The tokens of the input text, as an array of strings.
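
As an illustration of these fields, the short Python sketch below walks a parsed response; the data literal is a trimmed copy of the example response above, so the snippet runs without calling the API.

# Trimmed copy of the example response above, as returned by response.json().
data = {
    "outputs": [
        [
            {
                "execution_metadata": {"cost": {"tokens_input": 4}, "finish_reason": "length"},
                "text": "Il était une fois",
                "n_tokens": 4,
                "tokens": [" Il", " était", " une", " fois"],
            }
        ]
    ]
}

for batch_entry in data["outputs"]:
    for output in batch_entry:
        print(output["text"])                                         # the input text
        print(output["n_tokens"])                                     # number of tokens
        print(output["tokens"])                                       # the tokens themselves
        print(output["execution_metadata"]["cost"]["tokens_input"])   # from the metadata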