Anthropic Messages
Creates a model response for the given chat conversation. This endpoint follows the Anthropic API specification; requests are forwarded to the AWS Bedrock Anthropic endpoint.
To use the API you need an API key. Admins can create API keys in the settings.
All parameters from the Anthropic “Create a message” endpoint are supported according to the Anthropic specifications, with the following exception:
model: The supported models are claude-sonnet-4-5-20250929, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219, and claude-3-5-sonnet-20240620. The list of available models might differ if you are using your own API keys in Langdock (“Bring-your-own-keys” / BYOK, see https://docs.langdock.com/settings/models/byok). In that case, please reach out to your admin to find out which models are available in the API.
Endpoint
POST https://api.langdock.com/anthropic/{region}/v1/messages
Path parameter:
region (required) — enum. Available options: eu, us
Example: cURL
curl --request POST \
--url https://api.langdock.com/anthropic/{region}/v1/messages \
--header 'Authorization: <authorization>' \
--header 'Content-Type: application/json' \
--data '
{
"max_tokens": 1024,
"messages": [
{
"content": "Write a haiku about cats.",
"role": "user"
}
],
"model": "claude-sonnet-4-20250514"
}
'

Example successful response (200):
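A successful response for the haiku request above has the following shape (the id, token counts, and generated text are illustrative):

```json
{
  "id": "msg_01XFDUDYJgAACzvnptvVoYEL",
  "type": "message",
  "role": "assistant",
  "content": [
    {"type": "text", "text": "Silent paws at dusk"}
  ],
  "model": "claude-sonnet-4-20250514",
  "stop_reason": "end_turn",
  "stop_sequence": null,
  "usage": {"input_tokens": 14, "output_tokens": 21}
}
```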
Rate limits
The rate limit for the Messages endpoint is 500 RPM (requests per minute) and 60,000 TPM (tokens per minute). Rate limits are defined at the workspace level — not at the API key level. Each model has its own rate limit. If you exceed your rate limit, you will receive a 429 Too Many Requests response. Rate limits are subject to change; refer to this documentation for the most up-to-date information.
If you need a higher rate limit, contact: [email protected]
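When a 429 is returned, clients typically retry with exponential backoff. A minimal sketch (the delay schedule and retry count are illustrative choices, not part of the Langdock API):

```python
import random
import time


def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Yield exponentially growing delays (seconds) with full jitter."""
    for attempt in range(max_retries):
        # Ceiling doubles each attempt (1s, 2s, 4s, ...) up to `cap`,
        # then jitter picks a random point below that ceiling.
        yield min(cap, base * (2 ** attempt)) * random.random()


def call_with_retry(send_request, delays=None):
    """Call `send_request` (any callable returning an object with a
    `status_code` attribute) and retry on HTTP 429."""
    if delays is None:
        delays = backoff_delays()
    response = send_request()
    for delay in delays:
        if response.status_code != 429:
            break
        time.sleep(delay)
        response = send_request()
    return response
```

`send_request` can wrap any HTTP client call to the Messages endpoint; keeping the retry logic separate makes it easy to unit-test without the network.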
Using Anthropic-compatible libraries
As the request and response format is the same as the Anthropic API, you can use libraries such as the Anthropic Python library (https://github.com/anthropics/anthropic-sdk-python) or the Vercel AI SDK (https://sdk.vercel.ai/docs/introduction) with the Langdock API.
Example using the Anthropic Python library
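A minimal sketch, assuming the anthropic package is installed and your Langdock API key is stored in the LANGDOCK_API_KEY environment variable (the eu region is used here; substitute us as needed — the SDK appends /v1/messages to the base URL itself):

```python
import os

from anthropic import Anthropic

# Point the Anthropic client at the Langdock endpoint instead of api.anthropic.com.
client = Anthropic(
    api_key=os.environ["LANGDOCK_API_KEY"],
    base_url="https://api.langdock.com/anthropic/eu",
)

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a haiku about cats."}],
)
print(message.content[0].text)
```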
Example using the Vercel AI SDK in Node.js
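A minimal sketch, assuming the ai and @ai-sdk/anthropic packages are installed and the API key is in the LANGDOCK_API_KEY environment variable (eu region shown; the provider's baseURL must include the /v1 segment):

```typescript
import { createAnthropic } from "@ai-sdk/anthropic";
import { generateText } from "ai";

// Point the Anthropic provider at the Langdock endpoint (eu region here).
const anthropic = createAnthropic({
  baseURL: "https://api.langdock.com/anthropic/eu/v1",
  apiKey: process.env.LANGDOCK_API_KEY,
});

const { text } = await generateText({
  model: anthropic("claude-sonnet-4-20250514"),
  prompt: "Write a haiku about cats.",
});

console.log(text);
```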
Request body (application/json)
All fields follow Anthropic's Messages API, with the supported models noted above.
model (required) — enum: the model that will complete your prompt.
Available options: claude-sonnet-4-5-20250929, claude-sonnet-4-20250514, claude-3-7-sonnet-20250219, claude-3-5-sonnet-20240620
messages (required) — array of InputMessage objects.
Each input message must be an object with role and content. Roles: user, assistant. The first message must always use the user role. content may be a string (shorthand for a single text block) or an array of content blocks with types (e.g., text, image). Starting with the Claude 3 models, image content blocks are supported using base64-encoded images.
Supported media types: image/jpeg, image/png, image/gif, image/webp.
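For example, a user message combining an image block and a text block (the base64 payload is truncated for brevity):

```json
{
  "role": "user",
  "content": [
    {
      "type": "image",
      "source": {
        "type": "base64",
        "media_type": "image/jpeg",
        "data": "/9j/4AAQSkZJRg..."
      }
    },
    {"type": "text", "text": "What is in this image?"}
  ]
}
```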
max_tokens (required) — integer: maximum number of tokens to generate. Must be >= 1.
stop_sequences — array of strings: custom text sequences that will cause the model to stop generating.
stream — boolean: whether to incrementally stream the response using server-sent events.
system — string or array: system prompt (see Anthropic system prompts guide https://docs.anthropic.com/en/docs/system-prompts).
temperature — number: randomness, default 1.0, range 0.0 to 1.0.
tool_choice — object: how the model should use the provided tools. Options: auto, any, tool.
tools — array of tool definitions. Each tool includes:
name (required), description (strongly recommended), input_schema (required) — a JSON Schema describing the tool's input shape. Tools allow the model to produce tool_use content blocks that you can execute, returning the results back as tool_result blocks.
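For instance, a minimal weather-lookup tool definition (the tool name and schema are illustrative):

```json
{
  "name": "get_weather",
  "description": "Get the current weather for a given location.",
  "input_schema": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City name, e.g. \"Berlin\""
      }
    },
    "required": ["location"]
  }
}
```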
top_k — integer: sample only from top K options for each token (advanced use).
top_p — number: nucleus sampling (0.0–1.0, advanced use).
(See Anthropic docs for additional examples and details: https://docs.anthropic.com/en/api/messages)
Response (200: application/json)
Returns a Message object. Key fields:
id — string: unique object identifier.
type — string: object type, always "message".
role — string: conversational role, always "assistant".
content — array of content blocks (e.g., {"type":"text","text":"Hi, I'm Claude."}).
model — string: the model that handled the request.
stop_reason — enum: "end_turn", "max_tokens", "stop_sequence", "tool_use".
stop_sequence — string: which custom stop sequence was generated, if any.
usage — object: input_tokens, output_tokens (token counts).
Example content block:
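A text block as it appears in the content array of a response:

```json
{
  "type": "text",
  "text": "Hi, I'm Claude."
}
```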
Tools and tool workflows
If you include tools in your request, the model may return tool_use content blocks representing the model's intended tool invocation. You can run those tools and optionally return results back to the model using tool_result content blocks.
Example tool definition and usage are available in the Anthropic docs (https://docs.anthropic.com/en/docs/tool-use).
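As a sketch of the round trip: the model's tool_use block carries an id, and the follow-up user message echoes that id in a tool_result block (the id and result value here are illustrative):

```json
{
  "role": "user",
  "content": [
    {
      "type": "tool_result",
      "tool_use_id": "toolu_01A09q90qw90lq917835lq9",
      "content": "15 degrees"
    }
  ]
}
```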
Additional resources
Anthropic API spec: https://docs.anthropic.com/en/api/messages
Anthropic models overview: https://docs.anthropic.com/en/docs/models-overview
Anthropic system prompts: https://docs.anthropic.com/en/docs/system-prompts
Langdock BYOK settings: https://docs.langdock.com/settings/models/byok

