allowDiskUse (Optional)
allowDiskUse lets the server know if it can use disk to store temporary results for the aggregation (requires MongoDB 2.6 or later).
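For example, a large sort stage may exceed the server's in-memory limit unless allowDiskUse is enabled. A minimal sketch, assuming a connected collection handle; the createdAt field is hypothetical:
// Allow the server to spill the sort to disk if it exceeds the memory limit.
const cursor = collection.aggregate(
  [{ $sort: { createdAt: -1 } }],
  { allowDiskUse: true }
);
const results = await cursor.toArray();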
authdb (Optional)

awaitData (Optional)
If awaitData is set to true, when the cursor reaches the end of the capped collection, MongoDB blocks the query thread for a period of time waiting for new data to arrive. When new data is inserted into the capped collection, the blocked thread is signaled to wake up and return the next batch to the client.
batchSize (Optional)
Specifies the number of documents to return in each response from MongoDB.
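A minimal sketch, assuming a connected collection handle; the filter shown is hypothetical:
// Ask the server to return at most 50 documents per response.
const cursor = collection.find({ status: 'active' }, { batchSize: 50 });
for await (const doc of cursor) {
  // Each iteration reads from the current batch; a new batch is fetched as needed.
}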
bsonRegExp (Optional)
Return BSON regular expressions as BSONRegExp instances.
bypassDocumentValidation (Optional)
Allow driver to bypass schema validation.
checkKeys (Optional)
The serializer will check if keys are valid.
collation (Optional)
Specify collation.
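For instance, a case-insensitive collation can be passed per operation. A minimal sketch, assuming a connected collection handle; the locale and strength values are illustrative:
// strength: 2 compares letters case-insensitively under the 'en' locale.
const docs = await collection
  .find({ name: 'alice' }, { collation: { locale: 'en', strength: 2 } })
  .toArray();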
comment (Optional)
Comment to apply to the operation.
In server versions pre-4.4, 'comment' must be string. A server error will be thrown if any other type is provided.
In server versions 4.4 and above, 'comment' can be any valid BSON type.
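A minimal sketch of attaching a comment so the operation can be identified in server logs and profiler output; the string shown is arbitrary:
// The comment travels with the command and appears in logs, currentOp, and the profiler.
const doc = await collection.findOne({}, { comment: 'nightly-report lookup' });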
cursor (Optional)
Return the query as a cursor; on MongoDB 2.6 and later it returns a real cursor, on pre-2.6 it returns an emulated cursor.
dbName (Optional)

enableUtf8Validation (Optional)
Enable utf8 validation when deserializing BSON documents. Defaults to true.
explain (Optional)
Specifies the verbosity mode for the explain output.
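The verbosity modes are the server's explain modes ('queryPlanner', 'executionStats', 'allPlansExecution'). A minimal sketch using the cursor's explain helper, which accepts the same verbosity value; assuming a connected collection handle and a hypothetical status field:
// Returns the explain document describing how the server would execute the query.
const plan = await collection.find({ status: 'active' }).explain('executionStats');
console.log(plan);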
fieldsAsRaw (Optional)
Allows specifying which fields should be returned as unserialized raw buffers.
hint (Optional)
Add an index selection hint to an aggregation command.
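A minimal sketch, assuming a connected collection handle and a hypothetical index on the status field:
// Force the server to use the { status: 1 } index for the $match stage.
const cursor = collection.aggregate(
  [{ $match: { status: 'active' } }],
  { hint: { status: 1 } }
);
const results = await cursor.toArray();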
ignoreUndefined (Optional)
Serialize will not emit undefined fields; note that the driver sets this to false.
let (Optional)
Map of parameter names and values that can be accessed using $$var (requires MongoDB 5.0).
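A minimal sketch, assuming a connected collection handle and a hypothetical numeric total field; the variable name minTotal is arbitrary:
// $$minTotal inside the pipeline refers to the value supplied via the let option.
const cursor = collection.aggregate(
  [{ $match: { $expr: { $gte: ['$total', '$$minTotal'] } } }],
  { let: { minTotal: 100 } }
);
const bigOrders = await cursor.toArray();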
maxAwaitTimeMS (Optional)
When applicable, maxAwaitTimeMS controls the amount of time subsequent getMores that a cursor uses to fetch more data should take. (ex. cursor.next())
maxTimeMS (Optional)
When applicable, maxTimeMS controls the amount of time the initial command that constructs a cursor should take. (ex. find, aggregate, listCollections)
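A minimal sketch, assuming a connected collection handle; the 5000 ms budget is arbitrary:
// Abort the aggregation on the server if building the cursor takes longer than 5 seconds.
const cursor = collection.aggregate(
  [{ $group: { _id: '$status', count: { $sum: 1 } } }],
  { maxTimeMS: 5000 }
);
const counts = await cursor.toArray();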
noCursorTimeout (Optional)

noResponse (Optional)

omitReadPreference (Optional)

out (Optional)
promoteBuffers (Optional)
When deserializing a Binary will return it as a node.js Buffer instance.

promoteLongs (Optional)
When deserializing a Long will fit it into a Number if it's smaller than 53 bits.

promoteValues (Optional)
When deserializing will promote BSON values to their Node.js closest equivalent types.
raw (Optional)
Enabling the raw option will return a Node.js Buffer which is allocated using allocUnsafe API. See this section from the Node.js Docs here for more detail about what "unsafe" refers to in this context. If you need to maintain your own editable clone of the bytes returned for an extended life time of the process, it is recommended you allocate your own buffer and clone the contents:
const raw = await collection.findOne({}, { raw: true });
const myBuffer = Buffer.alloc(raw.byteLength);
myBuffer.set(raw, 0);
// Only save and use `myBuffer` beyond this point
Please note there is a known limitation where this option cannot be used at the MongoClient level (see NODE-3946). It does correctly work at Db, Collection, and per operation the same as other BSON options work.
readConcern (Optional)
Specify a read concern and level for the collection. (only MongoDB 3.2 or higher supported)
readPreference (Optional)
The preferred read preference (ReadPreference.primary, ReadPreference.primary_preferred, ReadPreference.secondary, ReadPreference.secondary_preferred, ReadPreference.nearest).
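A minimal sketch, assuming a connected collection handle on a replica set; the values shown are one common pairing, not a recommendation:
// Read from a secondary if one is available, with a 'majority' read concern.
const docs = await collection
  .find({}, {
    readPreference: 'secondaryPreferred',
    readConcern: { level: 'majority' },
  })
  .toArray();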
retryWrites (Optional)
Should retry failed writes.
serializeFunctions (Optional)
Serialize the javascript functions.
session (Optional)
Specify ClientSession for this command.
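A minimal sketch, assuming a connected MongoClient instance named client and a collection handle obtained from it; the session lets the read participate in a transaction or causal-consistency context:
// Run the query inside an explicit session.
const session = client.startSession();
try {
  const docs = await collection.find({}, { session }).toArray();
} finally {
  await session.endSession();
}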
tailable (Optional)
By default, MongoDB will automatically close a cursor when the client has exhausted all results in the cursor. However, for capped collections you may use a Tailable Cursor that remains open after the client exhausts the results in the initial cursor.
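A minimal sketch of tailing a hypothetical capped collection named events, assuming a connected db handle; awaitData is paired with tailable so the cursor blocks briefly for new documents instead of busy-polling:
// The cursor stays open after the initial results and yields new documents as they arrive.
const cursor = db.collection('events').find({}, {
  tailable: true,
  awaitData: true,
  maxAwaitTimeMS: 1000,
});
for await (const doc of cursor) {
  // process each newly inserted document
}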
timeoutMode (Optional, Experimental)
Specifies how timeoutMS is applied to the cursor. Can be either 'cursorLifetime' or 'iteration'.
When set to 'iteration', the deadline specified by timeoutMS applies to each call of cursor.next().
When set to 'cursorLifetime', the deadline applies to the life of the entire cursor.
Depending on the type of cursor being used, this option has different default values.
For non-tailable cursors, this value defaults to 'cursorLifetime'.
For tailable cursors, this value defaults to 'iteration', since tailable cursors, by definition, can have an arbitrarily long lifetime.
For example:
const cursor = collection.find({}, {timeoutMS: 100, timeoutMode: 'iteration'});
for await (const doc of cursor) {
// process doc
// This will throw a timeout error if any of the iterator's `next()` calls takes more than 100ms, but
// will continue to iterate successfully otherwise, regardless of the number of batches.
}
const cursor = collection.find({}, { timeoutMS: 1000, timeoutMode: 'cursorLifetime' });
const docs = await cursor.toArray(); // This entire line will throw a timeout error if all batches are not fetched and returned within 1000ms.
timeoutMS (Optional)
Specifies the time an operation will run until it throws a timeout error. See AbstractCursorOptions.timeoutMode for more details on how this option applies to cursors.
useBigInt64 (Optional)
When deserializing a Long return as a BigInt.
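A minimal sketch, assuming a connected collection handle and a hypothetical 64-bit counter field stored as a BSON Long:
// With useBigInt64, stored Long values come back as native BigInt, preserving 64-bit precision.
const doc = await collection.findOne({}, { useBigInt64: true });
// doc.counter is a BigInt rather than a Long (or a possibly lossy Number).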
willRetryWrite (Optional)

writeConcern (Optional)
Write Concern as an object.
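For write-producing stages such as $out or $merge in an aggregation, a write concern can be supplied per operation. A minimal sketch, assuming a connected collection handle; the target collection name is hypothetical:
// Require acknowledgement from a majority of replica set members for the $out write.
await collection.aggregate(
  [{ $match: { archived: true } }, { $out: 'archivedDocs' }],
  { writeConcern: { w: 'majority' } }
).toArray();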