MongoDB Storage

To properly understand how schema design impacts performance, it’s important to understand how MongoDB works under the covers.

Memory Mapped Files

MongoDB uses memory-mapped files to store its data (a memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource).

Figure: Memory Mapped Files

Memory-mapped files let MongoDB delegate the handling of virtual memory to the operating system instead of explicitly managing memory itself. Since the virtual address space is much larger than any physical RAM (Random Access Memory) installed in a computer, there is contention over which parts of virtual memory are kept in RAM at any given point in time. When the operating system runs out of RAM and an application requests something that’s not currently in RAM, it swaps memory out to disk to make space for the newly requested data. Most operating systems do this using a Least Recently Used (LRU) strategy, where the least recently used data is swapped out to disk first.

When reading up on MongoDB you’ll most likely run into the term “Working Set”. This is the data that your application is constantly requesting. If your “Working Set” fits entirely in RAM, all access will be fast, as the operating system will not have to swap to and from disk as much. However, if your “Working Set” does not fit in RAM, you suffer performance penalties, as the operating system needs to swap one part of your “Working Set” out to disk in order to access another part of it.

Determine if the Working Set is too big

You can get an indication of whether your working set fits in memory by looking at the number of page faults over time. If the number of page faults is increasing rapidly, it might mean your Working Set does not fit in memory.

>   use mydb
>   db.serverStatus().extra_info.page_faults

This is usually a sign that it’s time to consider either increasing the amount of RAM in your machine or sharding your MongoDB system so that more of your “Working Set” can be kept in memory (sharding splits your “Working Set” across the RAM resources of multiple machines).
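
To see whether the page fault count is actually growing quickly, you can sample the counter twice and compare. Below is a minimal sketch in the mongo shell (note that the extra_info section is platform dependent, and the one-minute sampling window is an arbitrary choice for illustration):

>   var before = db.serverStatus().extra_info.page_faults
>   sleep(60 * 1000)
>   var after = db.serverStatus().extra_info.page_faults
>   print("page faults over the last minute: " + (after - before))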

Padding

Another important aspect to understand with MongoDB is how documents physically grow in the database. Let’s take the simple document example below.

{
  "hello": "world"
}

If we add a new field named “name” to the document:

{
  "hello": "world",
  "name": "Christian"
}

The document will grow in size. If MongoDB were naively implemented, it would now need to move the document to a new, bigger space, as the document would have outgrown its originally allocated space.
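
You can reproduce this kind of growth yourself in the mongo shell. A small sketch (the padding_test collection name is purely illustrative); the $set update makes the stored document larger than it was when first inserted:

>   db.padding_test.insert({ "hello": "world" })
>   db.padding_test.update({ "hello": "world" }, { "$set": { "name": "Christian" } })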

However, when MongoDB stored the original document, it added a bit of empty space at the end of the document, referred to as padding. The reason for this padding is that MongoDB expects the document to grow in size over time. As long as this document growth stays inside the additional padding space, MongoDB does not need to move the document to a new, bigger space, thus avoiding the cost of copying bytes around in memory and on disk.

Figure: Document With Padding

Over time, the padding factor that governs how much extra space is appended to a document inserted into MongoDB changes, as the database attempts to find a balance between the eventual size of documents and the unused space taken up by the padding. However, if the growth of individual documents is random, MongoDB will not be able to pre-allocate the right amount of padding, and the database might end up spending a lot of time copying documents around in memory and on disk instead of performing application-specific work, hurting write performance.

How to determine the padding factor

You can determine the padding factor for a specific collection in the following way:

>   use mydb
>   db.my_collection.stats()

The returned result contains a field called paddingFactor. The value tells you how much padding is added: a value of 1 means no padding is added, while a value of 2 means the padding is the same size as the document itself.

A padding factor of 1 is usually a sign that the database is spending most of its time writing new data to memory and disk instead of moving existing data around. Having said that, one has to take into account the scale of the write operations. If you only have 1,000 documents in a collection, it might not matter much if your padding factor is closer to 2. On the other hand, if you are writing massive amounts of time series data, the cost of moving documents around in memory and on disk might have a severe impact on your performance.
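
As a rough rule of thumb, you can check the padding factor from the shell and flag collections whose documents appear to be moving a lot. A small sketch (the 1.5 threshold is an arbitrary illustration, not an official recommendation):

>   var s = db.my_collection.stats()
>   if (s.paddingFactor > 1.5) print("documents in my_collection appear to be moving frequently")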

Fragmentation

When documents move around or are removed, they leave holes behind. MongoDB tries to reuse these holes for new documents whenever possible, but over time it will slowly and steadily accumulate holes that cannot be reused because no new documents fit in them. This effect is called fragmentation and is common in all systems that allocate memory, including your operating system.

Figure: Document With Padding

The effect of fragmentation is wasted space. Because MongoDB uses memory-mapped files, any fragmentation on disk will be reflected as fragmentation in RAM as well. This makes less of the “Working Set” fit in RAM and causes more swapping to disk.

How to determine the fragmentation

You can get a good indication of fragmentation in the following way:

>   use mydb
>   var s = db.my_collection.stats()
>   var frag = s.storageSize / (s.size + s.totalIndexSize)

A frag value larger than 1 indicates some level of fragmentation.

There are three main ways of avoiding or limiting fragmentation for your MongoDB data.

The first one is to use the compact command to rewrite the data and thus remove the fragmentation. Unfortunately, as of 2.6, compact is an off-line operation, meaning that the database has to be taken out of production for the duration of the compact operation.
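
For reference, compact is invoked per collection through runCommand. A minimal sketch:

>   use mydb
>   db.runCommand({ compact: "my_collection" })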

The second option is to use the usePowerOf2Sizes option to make MongoDB allocate record space in powers of 2. Instead of allocating space to fit a specific document exactly, MongoDB allocates only in powers of 2 (128 bytes, 256 bytes, 512 bytes, 1024 bytes, and so forth). This means there is less chance of a hole not being reused, as holes will always come in standard sizes. However, it does increase the likelihood of wasted space, as a document that is 257 bytes long will occupy a 512-byte allocation.
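
On versions before 2.6 this allocation strategy can be enabled per collection with the collMod command. A minimal sketch, reusing the my_collection name from earlier:

>   use mydb
>   db.runCommand({ collMod: "my_collection", usePowerOf2Sizes: true })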

As of 2.6 usePowerOf2Sizes is the default allocation strategy for collections.

The third and somewhat harder option is to consider fragmentation in your schema design. The application can model its documents to minimize fragmentation, doing such things as pre-allocating the maximum size of a document and ensuring document growth is managed correctly. Some of the patterns in this book will discuss aspects of this.
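
As a concrete example of the pre-allocation idea, a common sketch is to insert the document with a filler field that pads it out to its expected maximum size, and then immediately unset the filler, leaving the extra allocated space in place for future growth (the field names and the roughly 1 KB filler size here are purely illustrative):

>   // insert with a filler string to force a large initial allocation
>   db.my_collection.insert({ "_id": 1, "data": [], "filler": new Array(1024).join("x") })
>   // remove the filler; the record keeps its originally allocated size
>   db.my_collection.update({ "_id": 1 }, { "$unset": { "filler": "" } })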