Best Practices in REST API Design for Conversational Assistants

23-09-2025

By Cristian Manuel Suarez Vera

When developing platforms that integrate intelligent chatbots, one recurring challenge is defining a clear, extensible, and easy-to-consume API. Beyond the language model logic, the interface connecting the frontend and backend must be well-structured: it is responsible for managing chat sessions, storing messages, and coordinating assistant calls.

A frequent debate arises around the creation of the first chat: which HTTP verb should be used when you need to fetch the existing history and, if it does not exist, generate a new session with a welcome message? This seemingly simple decision has important implications for the API's consistency, maintainability, and usability.

The Initial Dilemma: GET or POST

In many prototypes, the logic starts with a single endpoint, something like:

  • GET /chat-start
  • POST /chat-start

The idea is that, when invoked, the API returns the conversation messages if a session already exists, or creates a new one with the welcome message if it does not.

Although simple in appearance, this approach has several problems:

  • Violation of HTTP semantics: a GET should not have side effects (such as creating a resource).
  • GET with body: some prototypes send a userId in the body of a GET, which is not supported by all libraries and is considered an anti-pattern.
  • Ambiguity in responsibilities: an endpoint that "reads or creates" can confuse consumers and complicate testing.

The natural conclusion is to clearly separate read and create operations, or to consider a controlled upsert pattern.

Principles for Cleaner Design

When analyzing different alternatives, some best practices emerge:

  1. Do not use verbs in paths: avoid routes like /chat-start. Instead, use nouns that represent resources (/chats, /messages).
  2. Separate reading from writing: GET should be idempotent and have no side effects; use POST to create.
  3. Use system messages for the welcome: the initial greeting can be represented as another message with role: system.
  4. Versioning and extensibility: always include /api/v1/ in routes to allow evolution without breaking compatibility.

A More Robust REST Model

Based on these ideas, a clearer API contract can be defined for support assistants.

Create or Retrieve an Active Chat

The frontend should be able to obtain an existing chat or start a new one with a single endpoint:

POST /api/v1/assistants/{assistantSlug}/chats
  • If the chat already exists for that user and assistant, it returns 200 OK with the session.
  • If it does not exist, it creates a new one and returns 201 Created with a Location header pointing to the new resource.
  • In both cases, the message history is returned (including the welcome message if just created).

This upsert pattern simplifies frontend consumption.
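The upsert behaviour can be sketched as a small in-memory handler. The storage dict, the welcome text, and the function name are illustrative; a real service would back this with a database and derive the user id from the authentication token:

```python
import uuid

chats = {}  # (user_id, assistant_slug) -> chat dict, stand-in for a database

def create_or_get_chat(user_id, assistant_slug):
    """POST /api/v1/assistants/{assistantSlug}/chats behaviour."""
    key = (user_id, assistant_slug)
    if key in chats:
        return 200, chats[key]  # existing session: 200 OK
    chat = {
        "id": str(uuid.uuid4()),
        "assistant": assistant_slug,
        # the welcome greeting travels as an ordinary system message
        "messages": [{"role": "system", "content": "Welcome! How can I help?"}],
    }
    chats[key] = chat
    return 201, chat  # new session: 201 Created, plus a Location header in a real API
```

Calling the function twice for the same user and assistant returns 201 the first time and 200 with the same chat afterwards, which is exactly what lets the frontend use a single call on page load.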

Get Chat Messages

GET /api/v1/chats/{chatId}/messages?limit=50&before=...&after=...

Returns the conversation messages, with pagination support and ETag headers for efficient caching.
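A minimal sketch of the cursor pagination and ETag generation, assuming each message carries a unique id usable as a cursor; the hashing scheme is one possible choice, not a prescribed one:

```python
import hashlib
import json

def list_messages(messages, limit=50, before=None, after=None):
    """GET /api/v1/chats/{chatId}/messages?limit=&before=&after= behaviour."""
    ids = [m["id"] for m in messages]
    start, end = 0, len(messages)
    if before is not None:
        end = ids.index(before)       # everything strictly before the cursor
    if after is not None:
        start = ids.index(after) + 1  # everything strictly after the cursor
    window = messages[start:end]
    # 'before' pages backwards through history, so keep the newest slice
    page = window[-limit:] if before is not None else window[:limit]
    etag = hashlib.sha256(json.dumps(page, sort_keys=True).encode()).hexdigest()
    return page, etag  # compare etag with If-None-Match to answer 304 Not Modified
```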

Send User Messages

POST /api/v1/chats/{chatId}/messages

Creates a new message with role: user and persists it in the history.
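A sketch of this endpoint, including a 422 Unprocessable Entity for empty input; the error payload shape follows the error conventions discussed later, and the field names are illustrative:

```python
def post_message(chat, text):
    """POST /api/v1/chats/{chatId}/messages behaviour."""
    if not text or not text.strip():
        # reject blank input before it reaches the history
        return 422, {"code": "empty_message", "message": "Message text is required"}
    msg = {"role": "user", "content": text}
    chat["messages"].append(msg)  # persist into the conversation history
    return 201, msg
```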

Complete Turns with the Assistant

In many cases it is not enough to store the message: a response from the language model is also required. It is therefore advisable to introduce an additional resource:

POST /api/v1/chats/{chatId}/turns

This endpoint:

  • Adds the user's message,
  • Calls the support assistant,
  • Adds the response (role: assistant) to the history,
  • Returns both messages in the response.
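The four steps above can be sketched as a single synchronous handler. The model call is stubbed out: call_assistant is a placeholder for whatever LLM client the backend actually uses.

```python
def call_assistant(history):
    # stand-in for the real language-model call
    return "Let me check that for you."

def post_turn(chat, user_text):
    """POST /api/v1/chats/{chatId}/turns behaviour: append, call model, append reply."""
    user_msg = {"role": "user", "content": user_text}
    chat["messages"].append(user_msg)                  # 1. add the user's message
    reply_text = call_assistant(chat["messages"])      # 2. call the assistant
    reply = {"role": "assistant", "content": reply_text}
    chat["messages"].append(reply)                     # 3. add the reply to the history
    return {"messages": [user_msg, reply]}             # 4. return both messages
```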

For scenarios where the response may take longer or requires streaming, an asynchronous model with work queues (/jobs/assistant-reply) or Server-Sent Events is also possible.

Scaling to Multiple Assistants

In real environments, there is not just one chatbot: a user may interact with several specialized assistants (for example, technical support, billing, or general inquiries).

To achieve this, it is useful to structure the API as follows:

  • POST /api/v1/assistants/{assistantSlug}/chats
  • GET /api/v1/assistants/{assistantSlug}/chats/{chatId}/messages
  • POST /api/v1/assistants/{assistantSlug}/chats/{chatId}/turns

This way, each assistant has its own conversation space, but all follow the same contract and usage pattern.
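One way to see that every assistant shares the same contract is to express the routes as templates. The regex router below is purely illustrative, not a framework; in practice the web framework's own routing would do this:

```python
import re

# one route table serves every assistant; only the slug varies
ROUTES = [
    ("POST", r"/api/v1/assistants/(?P<slug>[\w-]+)/chats"),
    ("GET",  r"/api/v1/assistants/(?P<slug>[\w-]+)/chats/(?P<chat_id>[\w-]+)/messages"),
    ("POST", r"/api/v1/assistants/(?P<slug>[\w-]+)/chats/(?P<chat_id>[\w-]+)/turns"),
]

def match(method, path):
    """Return the captured path parameters, or None if no route matches."""
    for route_method, pattern in ROUTES:
        found = re.fullmatch(pattern, path)
        if route_method == method and found:
            return found.groupdict()
    return None
```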

Additional Best Practices

  • Idempotency: for creation operations, support headers like Idempotency-Key to allow safe retries from the client.
  • Chat states: maintain a status field (active, archived, closed) and expose a PATCH /chats/{chatId} to manage it.
  • Security: ideally, the userId should not travel in the request body; it should be inferred from the authentication token.
  • Clear errors: use standard codes (404 Not Found, 409 Conflict, 422 Unprocessable Entity) and JSON responses with code, message, and request_id.
  • Documentation: publish an OpenAPI contract with examples for each endpoint, including 200/201/404 codes and error flows.
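The Idempotency-Key practice from the list above can be sketched as follows. The in-memory store is an assumption for illustration; a real service would use a shared cache with a TTL so retries replay the original response:

```python
import uuid

_idempotency_store = {}  # Idempotency-Key -> previously created chat

def create_chat_idempotent(idempotency_key, assistant_slug):
    """Chat creation that tolerates safe client retries."""
    if idempotency_key and idempotency_key in _idempotency_store:
        # retry of an already-processed request: replay, do not create again
        return 200, _idempotency_store[idempotency_key]
    chat = {"id": str(uuid.uuid4()), "assistant": assistant_slug, "messages": []}
    if idempotency_key:
        _idempotency_store[idempotency_key] = chat
    return 201, chat
```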

Example of a Complete Flow

  1. The user accesses the support website and the frontend calls:
POST /api/v1/assistants/support/chats

201 Created response with the chat and welcome message (role: system).

  2. The user asks:
POST /api/v1/chats/123/turns { "user_message": "How do I reinstall the software?" }

200 OK response with the user's and assistant's messages.

  3. The frontend refreshes the conversation, showing both messages.

Conclusion

Designing a REST API for conversational assistants is not just about choosing one verb or another: it involves thinking about consistency, extensibility, and developer experience. Avoiding ambiguous endpoints like /chat-start, separating responsibilities, and relying on REST conventions helps build more robust and maintainable systems.

In the case of an environment with multiple assistants—such as different technical support areas—a clear structure based on /assistants/{assistantSlug}/chats offers flexibility without sacrificing simplicity.

As a final reminder, the API is as important as the assistant itself: its design determines whether integration is smooth, scalable, and ready to grow with new capabilities in the future.