
MongoDB Knowledge Service API (1.0.0)


License: Apache 2.0

The Knowledge Service is a RESTful API.

The API is designed to be used by the MongoDB RAG UI frontend, but can also be used by other clients.

If you have a use case not currently supported by the API, please reach out to the MongoDB EAI team.

Search Content

Search MongoDB content

Search content

Authorizations:
CustomHeaderAuth
Request Body schema: application/json
required
query
string

The search query string.

dataSources
Array of objects

An array of data sources to search. If not provided, the latest version of all data sources is searched.

limit
integer [ 1 .. 100 ]
Default: 5

The maximum number of results to return.

Responses

Request samples

Content type
application/json
{
  "query": "string",
  "dataSources": [],
  "limit": 5
}

Response samples

Content type
application/json
{
  "results": []
}
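A minimal TypeScript sketch of the search call, assuming the endpoint is a POST at a /content/search path and that CustomHeaderAuth is a single request header. The base URL, path, and header name below are placeholders for illustration, not values documented on this page; the actual values come from the OpenAPI specification and your CustomHeaderAuth credentials.

// Hypothetical values: replace with those from the OpenAPI spec and your credentials.
const BASE_URL = "https://example.com/api/v1";
const AUTH_HEADER = "X-Custom-Auth";

async function searchContent(query: string, limit = 5): Promise<{ results: unknown[] }> {
  const res = await fetch(`${BASE_URL}/content/search`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      [AUTH_HEADER]: "<your-credential>",
    },
    // dataSources is omitted, so the latest version of all data sources is searched.
    body: JSON.stringify({ query, limit }),
  });
  if (!res.ok) throw new Error(`Search failed: ${res.status}`);
  return res.json();
}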

List available data sources

Returns metadata about all available data sources.

Authorizations:
CustomHeaderAuth

Responses

Response samples

Content type
application/json
{
  "dataSources": []
}
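Listing data sources is a simple authenticated GET. As in the search sketch above, the /content/sources path and the auth header name are assumptions for illustration only.

const BASE_URL = "https://example.com/api/v1", AUTH_HEADER = "X-Custom-Auth"; // placeholders

async function listDataSources(): Promise<{ dataSources: unknown[] }> {
  const res = await fetch(`${BASE_URL}/content/sources`, {
    headers: { [AUTH_HEADER]: "<your-credential>" },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json();
}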

Conversations

Interact with MongoDB Chatbot

Start new conversation Deprecated

Start a new conversation.

This endpoint is deprecated. Use the createResponse endpoint instead.

Authorizations:
CustomHeaderAuth

Responses

Response samples

Content type
application/json
{
  "_id": "string",
  "messages": [],
  "createdAt": 0
}

Add message to the conversation Deprecated

Add a message to the conversation and get a response back from chatbot.

This endpoint is deprecated. Use the createResponse endpoint instead.

You can configure your server to create new conversations when you set the conversation ID to null. If you do this, the server creates a new conversation and returns the conversation ID in the response's metadata.conversationId field.

Authorizations:
CustomHeaderAuth
path Parameters
conversationId
required
string

The unique identifier for a conversation.

query Parameters
stream
boolean
Default: false

If true, the response will be streamed to the client. This is useful for long-running conversations.

Request Body schema: application/json
message
required
string <markdown>

Markdown-formatted message from the user.

clientContext
object

Additional context provided by the client. This is useful for passing information that is not part of the user's message but may be relevant to the chatbot's response, such as the user's preferences, location, or preferred language.

Responses

Request samples

Content type
application/json
{
  "message": "string",
  "clientContext": { }
}

Response samples

Content type
application/json
{
  "id": "string",
  "conversationId": "string",
  "role": "user",
  "content": "string",
  "rating": true,
  "userComment": "string",
  "createdAt": 0,
  "references": []
}
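This endpoint is deprecated, but for completeness, a non-streaming call (stream=false) looks roughly like the sketch below. The /conversations/{conversationId}/messages path, base URL, and auth header name are assumptions; clientContext is optional and shown with a made-up preferredLanguage field.

const BASE_URL = "https://example.com/api/v1", AUTH_HEADER = "X-Custom-Auth"; // placeholders

async function addMessage(conversationId: string, message: string) {
  const res = await fetch(
    `${BASE_URL}/conversations/${conversationId}/messages?stream=false`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json", [AUTH_HEADER]: "<your-credential>" },
      body: JSON.stringify({
        message,
        clientContext: { preferredLanguage: "en" }, // arbitrary client-supplied context
      }),
    }
  );
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  return res.json(); // the assistant's reply message, as in the response sample above
}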

Rate message Deprecated

Rate a message.

This endpoint is deprecated. Use the rateStandaloneMessage endpoint instead.

Authorizations:
CustomHeaderAuth
path Parameters
conversationId
required
string

The unique identifier for a conversation.

messageId
required
string

The unique identifier for a message.

Request Body schema: application/json
rating
boolean

Set to true if the user liked the message, false if the user didn't like the message.

Responses

Request samples

Content type
application/json
{
  "rating": true
}

Response samples

Content type
application/json
{ }

Rate message

Authorizations:
CustomHeaderAuth
path Parameters
messageId
required
string

The unique identifier for a message.

Request Body schema: application/json
rating
boolean

Set to true if the user liked the message, false if the user didn't like the message.

Responses

Request samples

Content type
application/json
{
  "rating": true
}

Response samples

Content type
application/json
{ }
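A sketch of rating a message via the standalone endpoint. The /messages/{messageId}/rating path, base URL, and auth header name are assumptions for illustration; only the boolean rating is sent in the body.

const BASE_URL = "https://example.com/api/v1", AUTH_HEADER = "X-Custom-Auth"; // placeholders

async function rateMessage(messageId: string, liked: boolean): Promise<void> {
  const res = await fetch(`${BASE_URL}/messages/${messageId}/rating`, {
    method: "POST",
    headers: { "Content-Type": "application/json", [AUTH_HEADER]: "<your-credential>" },
    body: JSON.stringify({ rating: liked }), // true = liked, false = disliked
  });
  if (!res.ok) throw new Error(`Rating failed: ${res.status}`);
}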

Add comment to assistant message Deprecated

Add a comment to an assistant message that clarifies a thumbs up/down rating.

You can only comment on an assistant message that has a thumbs up/down rating, and you can only comment on a message once. The server returns a 400 error response if the message is not from the assistant, is not rated, or already has a comment.

Authorizations:
CustomHeaderAuth
path Parameters
conversationId
required
string

The unique identifier for a conversation.

messageId
required
string

The unique identifier for a message.

Request Body schema: application/json
comment
string

Comment from the user clarifying why they liked or disliked the message.

Responses

Request samples

Content type
application/json
{
  "comment": "string"
}

Response samples

Content type
application/json
{ }

Add comment to assistant message

Add a comment to an assistant message that clarifies a thumbs up/down rating.

You can only comment on an assistant message that has a thumbs up/down rating, and you can only comment on a message once. The server returns a 400 error response if the message is not from the assistant, is not rated, or already has a comment.

Authorizations:
CustomHeaderAuth
path Parameters
messageId
required
string

The unique identifier for a message.

Request Body schema: application/json
comment
string

Comment from the user clarifying why they liked or disliked the message.

Responses

Request samples

Content type
application/json
{
  "comment": "string"
}

Response samples

Content type
application/json
{ }
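A sketch of commenting on a rated assistant message. The /messages/{messageId}/comment path, base URL, and auth header name are assumptions; remember the server returns a 400 error if the message is not a rated assistant message or already has a comment.

const BASE_URL = "https://example.com/api/v1", AUTH_HEADER = "X-Custom-Auth"; // placeholders

async function commentOnMessage(messageId: string, comment: string): Promise<void> {
  const res = await fetch(`${BASE_URL}/messages/${messageId}/comment`, {
    method: "POST",
    headers: { "Content-Type": "application/json", [AUTH_HEADER]: "<your-credential>" },
    body: JSON.stringify({ comment }),
  });
  if (!res.ok) throw new Error(`Comment failed: ${res.status}`);
}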

Responses

Responses API

Create response

Create a response from an LLM. Follows the specification of the OpenAI Responses API.

You can rate or comment on a message using the rateMessage and commentMessage endpoints.

For more information on using the Responses API, refer to the Responses API documentation.

Authorizations:
CustomHeaderAuth
Request Body schema: application/json
model
required
string

The model to use for generating the response.

instructions
string [ 1 .. 50000 ] characters

System instructions for the model (1-50,000 characters, ~10,000 tokens).

input
required
StringInput (string) or Array of MessageArrayInput (MessageItem (object) or FunctionCallItem (object) or FunctionCallOutputItem (object))
max_output_tokens
integer >= 0
Default: 1000

Maximum number of tokens to generate

metadata
object <= 16 properties

Additional metadata, with a maximum of 16 fields and each value at most 512 characters. Metadata is stored in the database.

Note that the conversation_id key is reserved and is not allowed in metadata.

previous_response_id
string

The unique ID of the previous response to the model. Including previous_response_id in the request causes the model to use the previous response as context.

Note that conversations support a maximum of 50 user messages. If you want to append more messages, create a new conversation with the previous content consolidated.

Usage notes:

  • You can only use previous_response_id if store: true.
  • All requests on the same conversation must use the same user ID, or leave user undefined.
store
boolean
Default: true

Whether to store the response in the database.

If store is true:

  • The response is stored in the database.
  • You may use the previous_response_id to statefully append new messages to the conversation.
  • The response is traced and annotated with metadata.

If store is false:

  • Only metadata about the response is stored in the database.
  • The response is not traced.
  • You cannot statefully interact with the conversation via previous_response_id.
stream
required
boolean
Value: true

Whether to stream the response. Must always be true in the current implementation; non-streaming responses are not yet supported.

temperature
number
Default: 0
Value: 0

Temperature for the model. Must be 0.

tool_choice
string or object
Default: "auto"

Tool choice for the model.

Note that the OpenAI Responses API options none and required are not supported by our implementation. We do not support them because it would be complex to use them with the internal tool call system that our API uses for RAG. If you need these options supported, please contact the Education AI team.

tools
Array of objects <= 10 items

Tools for the model to use.

Limitations:

  • Maximum 10 tools
  • Total serialized content must be ≤25,000 characters (~5,000 tokens)
user
string

The ID of the user. Can be any arbitrary string.

If used in combination with previous_response_id, the value for user must be the same for all requests on the same conversation.

Responses

Request samples

Content type
application/json
{
  "model": "string",
  "instructions": "string",
  "input": "string",
  "max_output_tokens": 1000,
  "metadata": { },
  "previous_response_id": "string",
  "store": true,
  "stream": true,
  "temperature": 0,
  "tool_choice": "auto",
  "tools": [],
  "user": "string"
}

Response samples

Content type
application/json
{ }
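Because stream must be true, the client has to consume a streamed response body. The sketch below assumes a /responses path, the same placeholder base URL and auth header as the earlier sketches, and that the stream follows the OpenAI Responses API's server-sent-event framing (the endpoint is stated to follow that specification); event parsing is deliberately left out.

const BASE_URL = "https://example.com/api/v1", AUTH_HEADER = "X-Custom-Auth"; // placeholders

async function createResponse(input: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/responses`, {
    method: "POST",
    headers: { "Content-Type": "application/json", [AUTH_HEADER]: "<your-credential>" },
    body: JSON.stringify({
      model: "<model-name>", // required; depends on your deployment
      input,
      stream: true,          // must be true; non-streaming is not supported
      temperature: 0,        // must be 0
      store: true,           // allows previous_response_id on follow-up requests
      max_output_tokens: 1000,
    }),
  });
  if (!res.ok || !res.body) throw new Error(`Request failed: ${res.status}`);

  // Accumulate the raw streamed bytes; parsing individual events is out of scope here.
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let raw = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    raw += decoder.decode(value, { stream: true });
  }
  return raw;
}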