OpenAI Embeddings

Creates embeddings for the given input text.

Note: In dedicated deployments, api.langdock.com maps to <Base URL>/api/public.


To use the API you need an API key. Admins can create API keys in the settings.


Endpoint

POST https://api.langdock.com/openai/{region}/v1/embeddings

Path parameter:

  • region (required) — The region of the API to use. Available options: eu, us.

Headers:

  • Authorization (required) — API key as Bearer token. Format: Bearer YOUR_API_KEY

  • Content-Type: application/json

Body

Content type: application/json

  • input (required) — Input text to get embeddings for, encoded as a string or array of tokens. To get embeddings for multiple inputs in a single request, pass an array of strings or array of tokens (e.g. ["text1", "text2"]). Each input must not exceed 8192 tokens in length.

  • model (required) — ID of the model to use.

  • encoding_format (optional, default: float) — The format to return the embeddings in. Available options: float, base64.

  • dimensions (optional) — The number of dimensions the resulting output embeddings should have. Supported only by text-embedding-3 and later models. Must be at least 1.

  • user (optional) — A unique identifier representing your end-user, which can help OpenAI monitor and detect abuse.
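The request above can be sketched with Python's standard library. The model name text-embedding-3-small and the placeholder API key are assumptions for illustration, not values taken from this page:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: admins create API keys in the settings
REGION = "eu"             # or "us"

def build_payload(texts, model="text-embedding-3-small", dimensions=None):
    """Assemble the JSON body; `texts` may be one string or a list of strings."""
    payload = {"input": texts, "model": model}
    if dimensions is not None:
        payload["dimensions"] = dimensions  # text-embedding-3 and later only
    return payload

def create_embeddings(texts, **kwargs):
    """POST to the embeddings endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        f"https://api.langdock.com/openai/{REGION}/v1/embeddings",
        data=json.dumps(build_payload(texts, **kwargs)).encode("utf-8"),
        headers={"Authorization": f"Bearer {API_KEY}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:  # raises on 4xx/5xx responses
        return json.load(resp)
```

Passing a list such as ["text1", "text2"] as texts embeds multiple inputs in a single request.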

Rate limits

The rate limit for the Embeddings endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level (not at an API key level). If you exceed your rate limit, you will receive a 429 Too Many Requests response.

Please note that rate limits are subject to change. If you need a higher rate limit, contact [email protected].
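One common way to handle the 429 response described above is exponential backoff. A minimal sketch (the helper name and signature are illustrative, not part of any SDK):

```python
import time

def call_with_backoff(send, max_retries=5, initial_delay=1.0):
    """Call `send` (a zero-argument function returning (status, body)) and
    retry with exponential backoff whenever the API answers 429."""
    delay = initial_delay
    for _ in range(max_retries):
        status, body = send()
        if status != 429:
            return status, body
        time.sleep(delay)  # wait before retrying
        delay *= 2         # double the wait on each consecutive 429
    raise RuntimeError(f"still rate limited after {max_retries} attempts")
```

Because limits apply per workspace, not per API key, backoff should be coordinated across all clients sharing a workspace.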

Examples

Response

200 OK — application/json

The response follows the OpenAI embeddings format.

Response fields:

  • data (required) — List of embeddings generated by the model.

    • data[].index (required) — The index of the embedding in the list.

    • data[].embedding (required) — The embedding vector (list of floats). Length depends on the model.

    • data[].object (required) — The object type, always "embedding".

  • model (required) — Name of the model used.

  • object (required) — The object type, always "list".

  • usage (required) — Token usage details:

    • usage.prompt_tokens (required) — Number of tokens used for the prompt(s).

    • usage.total_tokens (required) — Total number of tokens used by the request.
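The fields above can be read back in plain Python. The response below is a hand-written stand-in matching the documented shape (model name and vector values are made up), followed by a typical use of the vectors:

```python
# A hand-written response in the documented shape (vectors truncated).
response = {
    "object": "list",
    "model": "text-embedding-3-small",
    "data": [
        {"object": "embedding", "index": 0, "embedding": [0.1, 0.2, 0.3]},
        {"object": "embedding", "index": 1, "embedding": [0.0, 0.1, 0.9]},
    ],
    "usage": {"prompt_tokens": 8, "total_tokens": 8},
}

# Sort by `index` so vectors line up with the order of the inputs.
vectors = [item["embedding"]
           for item in sorted(response["data"], key=lambda d: d["index"])]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

similarity = cosine(vectors[0], vectors[1])
```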

Using OpenAI-compatible libraries

Because the request and response formats are the same as the OpenAI API's, you can use libraries such as the OpenAI Python library or the Vercel AI SDK with the Langdock API (see examples above).
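A minimal sketch of pointing the official openai Python package at Langdock: the base URL follows the endpoint pattern documented above, while the helper names and model name are assumptions for illustration.

```python
def langdock_base_url(region="eu"):
    """Base URL of the OpenAI-compatible Langdock endpoint ("eu" or "us")."""
    return f"https://api.langdock.com/openai/{region}/v1"

def make_client(api_key, region="eu"):
    """Build an OpenAI client that talks to Langdock instead of OpenAI.
    Assumes `pip install openai`; imported lazily so this file loads without it."""
    from openai import OpenAI
    return OpenAI(api_key=api_key, base_url=langdock_base_url(region))

# Usage (needs a valid key and network access):
# client = make_client("YOUR_API_KEY")
# resp = client.embeddings.create(model="text-embedding-3-small",
#                                 input=["text1", "text2"])
```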
