Interface AggregationCursorOptions

interface AggregationCursorOptions {
    allowDiskUse?: boolean;
    authdb?: string;
    awaitData?: boolean;
    batchSize?: number;
    bsonRegExp?: boolean;
    bypassDocumentValidation?: boolean;
    checkKeys?: boolean;
    collation?: CollationOptions;
    comment?: unknown;
    cursor?: Document;
    dbName?: string;
    enableUtf8Validation?: boolean;
    explain?: ExplainVerbosityLike | ExplainCommandOptions;
    fieldsAsRaw?: Document;
    hint?: Hint;
    ignoreUndefined?: boolean;
    let?: Document;
    maxAwaitTimeMS?: number;
    maxTimeMS?: number;
    noCursorTimeout?: boolean;
    noResponse?: boolean;
    omitReadPreference?: boolean;
    out?: string;
    promoteBuffers?: boolean;
    promoteLongs?: boolean;
    promoteValues?: boolean;
    raw?: boolean;
    readConcern?: ReadConcernLike;
    readPreference?: ReadPreferenceLike;
    retryWrites?: boolean;
    serializeFunctions?: boolean;
    session?: ClientSession;
    tailable?: boolean;
    timeoutMode?: CursorTimeoutMode;
    timeoutMS?: number;
    useBigInt64?: boolean;
    willRetryWrite?: boolean;
    writeConcern?: WriteConcern | WriteConcernSettings;
}

Properties

allowDiskUse?: boolean

allowDiskUse lets the server know that it can use disk to store temporary results for the aggregation (requires MongoDB 2.6 or higher).

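For example, a pipeline with a memory-heavy sort stage can opt in to disk use (a minimal sketch; assumes a connected `collection` is in scope, and the field name is illustrative):

const cursor = collection.aggregate(
  [{ $sort: { createdAt: -1 } }],
  { allowDiskUse: true } // let the server spill temporary sort data to disk
);
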
authdb?: string
awaitData?: boolean

If awaitData is set to true, when the cursor reaches the end of the capped collection, MongoDB blocks the query thread for a period of time waiting for new data to arrive. When new data is inserted into the capped collection, the blocked thread is signaled to wake up and return the next batch to the client.

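A minimal sketch of a tailable, awaiting cursor (assumes `cappedCollection` refers to an existing capped collection):

const cursor = cappedCollection.find(
  {},
  { tailable: true, awaitData: true, maxAwaitTimeMS: 1000 }
);
for await (const doc of cursor) {
  // At the end of the collection the server blocks up to maxAwaitTimeMS
  // waiting for new documents instead of returning an empty batch.
  console.log(doc);
}
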
batchSize?: number

Specifies the number of documents to return in each response from MongoDB.

bsonRegExp?: boolean

Return BSON regular expressions as BSONRegExp instances.

Default: false

bypassDocumentValidation?: boolean

Allow driver to bypass schema validation.

checkKeys?: boolean

The serializer will check if keys are valid.

Default: false

collation?: CollationOptions

Specify collation.

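For example, a case-insensitive match can be expressed with a collation of strength 2 (a sketch; the field name is illustrative):

const cursor = collection.aggregate(
  [{ $match: { city: 'berlin' } }],
  { collation: { locale: 'en', strength: 2 } } // strength 2 compares base characters and diacritics, ignoring case
);
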
comment?: unknown

Comment to apply to the operation.

In server versions pre-4.4, 'comment' must be a string. A server error will be thrown if any other type is provided.

In server versions 4.4 and above, 'comment' can be any valid BSON type.

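For example, tagging an aggregation so it is easy to find in database logs and currentOp output (a sketch):

const cursor = collection.aggregate(
  [{ $count: 'total' }],
  { comment: 'nightly-report' } // a plain string works on every server version
);
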
cursor?: Document

Return the query as a cursor. On MongoDB 2.6 or higher this returns a real cursor; on earlier versions it returns an emulated cursor.

dbName?: string
enableUtf8Validation?: boolean

Enable utf8 validation when deserializing BSON documents. Defaults to true.

explain?: ExplainVerbosityLike | ExplainCommandOptions

Specifies the verbosity mode for the explain output.

This API is deprecated in favor of collection.aggregate().explain() or db.aggregate().explain().

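A sketch of the recommended replacement:

const explanation = await collection
  .aggregate([{ $match: { status: 'active' } }])
  .explain('queryPlanner'); // or 'executionStats' / 'allPlansExecution'
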
fieldsAsRaw?: Document

Allows specifying which fields to return as unserialized raw buffers.

Default: null

hint?: Hint

Add an index selection hint to an aggregation command.

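For example, forcing the aggregation to use a particular index (a sketch; the index name and spec are illustrative):

const cursor = collection.aggregate(
  [{ $match: { status: 'active' } }],
  { hint: 'status_1' } // an index name, or an index spec such as { status: 1 }
);
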
ignoreUndefined?: boolean

When true, serialization will not emit undefined fields. Note that the driver sets this to false.

Default: true

let?: Document

Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).

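Variables declared here are referenced inside the pipeline with $$ (a sketch; field and variable names are illustrative):

const cursor = collection.aggregate(
  [{ $match: { $expr: { $gt: ['$total', '$$minTotal'] } } }],
  { let: { minTotal: 100 } } // read in the pipeline as $$minTotal
);
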
maxAwaitTimeMS?: number

When applicable, maxAwaitTimeMS controls how long each subsequent getMore operation that a cursor uses to fetch more data (e.g. via cursor.next()) should take.

maxTimeMS?: number

When applicable, maxTimeMS controls how long the initial command that constructs a cursor (e.g. find, aggregate, listCollections) should take.

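A sketch showing both limits together:

const cursor = collection.aggregate([{ $match: {} }], {
  maxTimeMS: 5000,      // the initial aggregate command may take up to 5s
  maxAwaitTimeMS: 1000, // each getMore on an awaitData cursor may wait up to 1s
});
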
noCursorTimeout?: boolean
noResponse?: boolean
omitReadPreference?: boolean
out?: string
promoteBuffers?: boolean

When deserializing a Binary, return it as a Node.js Buffer instance.

Default: false

promoteLongs?: boolean

When deserializing a Long, fit it into a Number if it's smaller than 53 bits.

Default: true

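For example, disabling promotion keeps 64-bit integers as lossless Long instances (a sketch; the field name is illustrative):

const doc = await collection.findOne({}, { promoteLongs: false });
// 64-bit integer fields are now Long instances rather than Numbers;
// convert explicitly with doc.count.toNumber() or doc.count.toString()
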
promoteValues?: boolean

When deserializing, promote BSON values to their closest Node.js equivalent types.

Default: true

raw?: boolean

Enabling the raw option will return a Node.js Buffer which is allocated using the allocUnsafe API. See the Node.js documentation on Buffer.allocUnsafe for more detail about what "unsafe" refers to in this context. If you need to maintain your own editable clone of the bytes returned for an extended lifetime of the process, it is recommended you allocate your own buffer and clone the contents:

const raw = await collection.findOne({}, { raw: true });
const myBuffer = Buffer.alloc(raw.byteLength);
myBuffer.set(raw, 0);
// Only save and use `myBuffer` beyond this point

Please note there is a known limitation: this option cannot be used at the MongoClient level (see NODE-3946). It works correctly at the Db and Collection level, and per operation, the same way other BSON options do.

readConcern?: ReadConcernLike

Specify a read concern and level for the collection. (only MongoDB 3.2 or higher supported)

readPreference?: ReadPreferenceLike

The preferred read preference (ReadPreference.primary, ReadPreference.primary_preferred, ReadPreference.secondary, ReadPreference.secondary_preferred, ReadPreference.nearest).

retryWrites?: boolean

Should retry failed writes

serializeFunctions?: boolean

Serialize JavaScript functions.

Default: false

session?: ClientSession

Specify ClientSession for this command

tailable?: boolean

By default, MongoDB will automatically close a cursor when the client has exhausted all results in the cursor. However, for capped collections you may use a Tailable Cursor that remains open after the client exhausts the results in the initial cursor.

timeoutMode?: CursorTimeoutMode

Specifies how timeoutMS is applied to the cursor. Can be either 'cursorLifetime' or 'iteration'. When set to 'iteration', the deadline specified by timeoutMS applies to each call of cursor.next(). When set to 'cursorLifetime', the deadline applies to the life of the entire cursor.

Depending on the type of cursor being used, this option has different default values. For non-tailable cursors, this value defaults to 'cursorLifetime'. For tailable cursors, this value defaults to 'iteration', since tailable cursors, by definition, can have an arbitrarily long lifetime.

const iterationCursor = collection.find({}, { timeoutMS: 100, timeoutMode: 'iteration' });
for await (const doc of iterationCursor) {
  // process doc
  // This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms,
  // but will continue to iterate successfully otherwise, regardless of the number of batches.
}

const lifetimeCursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' });
// This will throw a timeout error if all batches are not fetched and returned within 1000ms.
const docs = await lifetimeCursor.toArray();

timeoutMS?: number

Specifies the time an operation will run until it throws a timeout error. See AbstractCursorOptions.timeoutMode for more details on how this option applies to cursors.

useBigInt64?: boolean

When deserializing a Long, return it as a BigInt.

Default: false

willRetryWrite?: boolean

writeConcern?: WriteConcern | WriteConcernSettings

Write Concern as an object.