Isolation in MongoDB
MongoDB features weaker forms of the ACID properties than what an RDBMS gives us; for example, the atomicity of an update is only guaranteed within a single document. All these properties are traded away in favor of availability, achieved via replication to multiple servers and via sharding.
One of the properties called into question is the isolation of operations from each other: while iterating over cursors, it is not a given. If you use MongoDB in production, you have to improve your knowledge of the database's properties, or you will end up writing migration scripts that corrupt your data.
The Collection Physical Model
MongoDB stores data in memory-mapped files: in short, areas of mass storage which are preallocated and mapped directly into virtual memory, as if they were areas of RAM. If I understood this model correctly, the operating system deals with the writes, but MongoDB can occasionally force synchronization to disk (with the fsync option on write operations, and by committing its journal periodically).
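For instance, in the 2.2-era shell both flush paths can be triggered explicitly (a minimal sketch; the collection name is just an example):
db.my_collection.insert({ name: "example" });
// wait until this connection's last write has been flushed to disk
db.runCommand({ getLastError: 1, fsync: true });
// force a server-wide flush of all dirty data files
db.adminCommand({ fsync: 1 });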
Documents are stored sequentially in each collection and, when iterating over a cursor with no sort specified, they are returned in insertion order. This means that during updates, documents may overflow their allotted area of storage:
d1[10], d2[9], d3[12], ...
If d2 is updated by adding a field, it may become larger and have to be moved forward in the collection:
d1[10], ..., d3[12], ..., d2[15]
So documents can be moved during an update, changing their insertion order. While this is usually hidden in relational databases, it is exposed by some of MongoDB's read operations.
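Here is a minimal shell sketch that makes the movement visible (collection and field names are my own; the behavior assumes the memory-mapped storage model described above and documents that are not padded):
db.moves.drop();
db.moves.insert({ _id: 1, payload: "a" });
db.moves.insert({ _id: 2, payload: "b" });
db.moves.insert({ _id: 3, payload: "c" });
// grow the second document well past its original record size...
db.moves.update({ _id: 2 }, { $set: { payload: new Array(1024).join("x") } });
// ...then inspect the natural (on-disk) order: _id 2 may now come last
db.moves.find().sort({ $natural: 1 }).forEach(function(d) { print(d._id); });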
Snapshotting
The Mongo cursor documentation explicitly says that "cursors may return the same document multiple times". The first reaction to this statement is usually disbelief.
But I guess we have to understand that we have traded off even the consistency of a read operation over multiple documents, getting in return the ability to operate over millions of documents and over multiple shards (whereas executing an ALTER TABLE over millions of rows may produce spectacular explosions).
In both relational and NoSQL databases, cursors return one result at a time through their API, guaranteeing that only a bounded number of rows or documents is ever reconstituted in memory at any given moment.
However, Mongo cursors may use the natural order of documents when the actual order of selection doesn't matter, for example during migrations. But since documents may move within the natural order, you may find that a migration script passes twice over each document in a collection:
db.my_collection.find().forEach(function(d) { d.new_field = newvalue(d); db.my_collection.save(d); });
It is natural for this code to move each d instance further down the collection into an empty bucket, if documents are not padded.
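A defensive variant (my own sketch, not from the original post) is to make the script idempotent by selecting only documents that still lack the new field, so that a document encountered a second time simply no longer matches:
db.my_collection.find({ new_field: { $exists: false } }).forEach(function(d) { d.new_field = newvalue(d); db.my_collection.save(d); });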
To avoid this effect, you can snapshot a cursor, so that each document is returned at most once no matter what happens to the collection after the cursor's creation:
db.my_collection.find().snapshot().forEach(function(d) { d.new_field = newvalue(d); db.my_collection.save(d); });
In this way, a multiple-document read can effectively be isolated from the writes interleaved between reads of the different documents. If you want to operate on one document at a time, to save memory and avoid crashing the current process, there is no option other than interleaving writes with reads.
Isolation in Other Read Operations
db.collection.count(filter), executed from another connection, is only eventually consistent with the updates: it shows an increased number of documents, which comes back to normal once all the updates have been performed. So other reads present the same problem as the cursor, even though in this case they come from another connection. db.collection.count() without a filter isn't affected, probably because in that case it is able to perform some optimizations.
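A rough sketch of the scenario (my own reduction, not the original .js script referenced below; the filter field is hypothetical): while the migration rewrites documents in one shell, a filtered count from a second connection can transiently report more matches than really exist.
// connection 1: a growing update that moves documents around
db.my_collection.find().forEach(function(d) { d.new_field = newvalue(d); db.my_collection.save(d); });
// connection 2, meanwhile: the filter matches documents both before and
// after the update, so a moved document can be counted twice
db.my_collection.count({ type: "user" });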
Here's the discussion we opened on the mongodb-user mailing list, along with the .js code necessary to reproduce this behavior.
The solution we found was to call db.collection.find(filter).snapshot().count(), which performs the count over another snapshot; with this option the results are almost consistent. All operations were performed on MongoDB 2.2.3, the newest available release.
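Written out against the hypothetical filter used above, the workaround becomes:
db.my_collection.find({ type: "user" }).snapshot().count();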