Interface IndexInformationOptions

interface IndexInformationOptions {
    awaitData?: boolean;
    batchSize?: number;
    bsonRegExp?: boolean;
    checkKeys?: boolean;
    comment?: unknown;
    enableUtf8Validation?: boolean;
    fieldsAsRaw?: Document;
    full?: boolean;
    ignoreUndefined?: boolean;
    maxAwaitTimeMS?: number;
    maxTimeMS?: number;
    noCursorTimeout?: boolean;
    promoteBuffers?: boolean;
    promoteLongs?: boolean;
    promoteValues?: boolean;
    raw?: boolean;
    readConcern?: ReadConcernLike;
    readPreference?: ReadPreferenceLike;
    serializeFunctions?: boolean;
    session?: ClientSession;
    tailable?: boolean;
    timeoutMode?: CursorTimeoutMode;
    timeoutMS?: number;
    useBigInt64?: boolean;
}


Properties

awaitData?: boolean

If awaitData is set to true, when the cursor reaches the end of the capped collection, MongoDB blocks the query thread for a period of time waiting for new data to arrive. When new data is inserted into the capped collection, the blocked thread is signaled to wake up and return the next batch to the client.

batchSize?: number

Specifies the number of documents to return in each response from MongoDB.

bsonRegExp?: boolean

Return BSON regular expressions as BSONRegExp instances.

Default: false

checkKeys?: boolean

The serializer will check if keys are valid.

Default: false

comment?: unknown

Comment to apply to the operation.

In server versions pre-4.4, 'comment' must be a string. A server error will be thrown if any other type is provided.

In server versions 4.4 and above, 'comment' can be any valid BSON type.
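
As a minimal sketch (the connection string, database, and collection names are assumptions, not part of this API's documentation), a comment can be attached to the operation so it is easier to identify in the database profiler or server logs:

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const collection = client.db('app').collection('users');

// On MongoDB 4.4+ any BSON value is accepted; on older servers the comment must be a string.
const info = await collection.indexInformation({ comment: 'index audit' });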

enableUtf8Validation?: boolean

Enable utf8 validation when deserializing BSON documents. Defaults to true.

fieldsAsRaw?: Document

Allows specifying which fields should be returned as unserialized raw buffers.

Default: null

full?: boolean

When true, an array of index descriptions is returned. When false, the driver returns an object with keys corresponding to index names and values corresponding to the entries of each index's key.

For example, given the following indexes:

[{ name: 'a_1', key: { a: 1 } }, { name: 'b_1_c_1', key: { b: 1, c: 1 } }]

When full is true, the above array is returned. When full is false, the following is returned:

{
  'a_1': [['a', 1]],
  'b_1_c_1': [['b', 1], ['c', 1]],
}
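
As a hedged illustration (connection, database, and collection details are assumptions), both shapes can be requested through Collection.indexInformation:

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const collection = client.db('app').collection('users');

// full: true -> an array of index description documents, as shown above.
const descriptions = await collection.indexInformation({ full: true });

// full: false -> the name-to-key mapping shown above.
const keyMap = await collection.indexInformation({ full: false });
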
ignoreUndefined?: boolean

Serialize will not emit undefined fields. Note that the driver sets this to false.

Default: true

maxAwaitTimeMS?: number

When applicable, maxAwaitTimeMS controls the amount of time that subsequent getMore operations, which a cursor issues to fetch more data, are allowed to take (e.g. cursor.next()).

maxTimeMS?: number

When applicable, maxTimeMS controls the amount of time that the initial command constructing a cursor is allowed to take (e.g. find, aggregate, listCollections).
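
For example, a hedged sketch (names and connection details are illustrative) of bounding how long the server may spend building the index listing:

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const collection = client.db('app').collection('users');

// Ask the server to abort the underlying listIndexes command if it takes longer than 500ms.
const info = await collection.indexInformation({ maxTimeMS: 500 });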

noCursorTimeout?: boolean
promoteBuffers?: boolean

When deserializing, a Binary will be returned as a Node.js Buffer instance.

Default: false

promoteLongs?: boolean

When deserializing, a Long will be fit into a Number if it is smaller than 53 bits.

Default: true

promoteValues?: boolean

When deserializing, BSON values will be promoted to their closest Node.js equivalent types.

Default: true

raw?: boolean

Enabling the raw option will return a Node.js Buffer which is allocated using the allocUnsafe API. See the Node.js documentation on Buffer.allocUnsafe for more detail about what "unsafe" refers to in this context. If you need to maintain your own editable clone of the returned bytes for an extended lifetime of the process, it is recommended that you allocate your own buffer and clone the contents:

const raw = await collection.findOne({}, { raw: true });
const myBuffer = Buffer.alloc(raw.byteLength);
myBuffer.set(raw, 0);
// Only save and use `myBuffer` beyond this point

Please note there is a known limitation where this option cannot be used at the MongoClient level (see NODE-3946). It does work correctly at the Db and Collection levels, and per operation, the same as other BSON options.

readConcern?: ReadConcernLike
readPreference?: ReadPreferenceLike
serializeFunctions?: boolean

Serialize JavaScript functions.

Default: false

session?: ClientSession
tailable?: boolean

By default, MongoDB will automatically close a cursor when the client has exhausted all results in the cursor. However, for capped collections you may use a Tailable Cursor that remains open after the client exhausts the results in the initial cursor.
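
tailable and awaitData are shared cursor options and are only meaningful for queries against capped collections; as a hedged sketch (the capped events collection and connection details are assumptions):

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const events = client.db('app').collection('events'); // assumed to be a capped collection

// The cursor stays open after the initial results are exhausted and each getMore
// waits up to maxAwaitTimeMS for new documents before returning.
const cursor = events.find({}, { tailable: true, awaitData: true, maxAwaitTimeMS: 2000 });
for await (const doc of cursor) {
  console.log(doc);
}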

timeoutMode?: CursorTimeoutMode

Specifies how timeoutMS is applied to the cursor. Can be either 'cursorLifetime' or 'iteration'. When set to 'iteration', the deadline specified by timeoutMS applies to each call of cursor.next(). When set to 'cursorLifetime', the deadline applies to the life of the entire cursor.

Depending on the type of cursor being used, this option has different default values. For non-tailable cursors, this value defaults to 'cursorLifetime'. For tailable cursors, this value defaults to 'iteration', since tailable cursors, by definition, can have an arbitrarily long lifetime.

const cursor = collection.find({}, { timeoutMS: 100, timeoutMode: 'iteration' });
for await (const doc of cursor) {
  // process doc
  // This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms, but
  // will continue to iterate successfully otherwise, regardless of the number of batches.
}

const cursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' });
const docs = await cursor.toArray(); // This entire line will throw a timeout error if all batches are not fetched and returned within 1000ms.

timeoutMS?: number

Specifies the amount of time an operation can run before it throws a timeout error. See AbstractCursorOptions.timeoutMode for more details on how this option applies to cursors.

useBigInt64?: boolean

When deserializing, Long values will be returned as a BigInt.

Default: false
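
As a hedged sketch of how this interacts with the promotion flags above (the collection name and connection details are assumptions), int64 values can be requested as native BigInt instead of being promoted to Number:

import { MongoClient } from 'mongodb';

const client = new MongoClient('mongodb://localhost:27017');
const metrics = client.db('app').collection('metrics');

// Default behaviour: int64 values that fit within 53 bits are promoted to Number.
const asNumber = await metrics.findOne({});

// With useBigInt64, int64 values are deserialized as BigInt.
const asBigInt = await metrics.findOne({}, { useBigInt64: true });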