Greetings from Stockholm, where I spent a day attending 10gen's MongoDB conference. The conference succeeded in giving developers a general overview of MongoDB's most important aspects. We also got some insight into 10gen's future plans, and the talks occasionally delved a bit deeper into technical details. And, of course, everybody got free MongoDB coffee mugs.

Some general notes

  • 10gen's business model is currently based on support and services related to MongoDB. According to Max Schireson, President of 10gen (who, by the way, also gave a technical presentation on MongoDB indexing), they do not plan to add paid features unless it becomes a financial necessity. In other words, MongoDB is staying fully open source for the time being.
  • 10gen has a shortlist of features they would like to develop soon. Full-text search is at the top; the list also includes data compression and possibly schema validation as a related feature. You can probably find the full shortlist by watching the conference presentations.
  • 10gen is also working with various cloud hosting partners to offer MongoDB as a hosted NoSQL solution. This includes Amazon AWS, where it could become an RDS-like hosted service, but 10gen's talks with Amazon are still in a "whitepaper writing phase". 10gen doesn't seem to be a big fan of running MongoDB on AWS, because the EC2/EBS platform isn't the most efficient in terms of I/O capacity and reliability (although software RAID can help a bit with that). However, MongoHQ already provides a third-party hosted MongoDB service that runs on AWS as well as some other cloud platforms.

Notes about indexing

  • MongoDB's array indexing is interesting, because the database actually indexes the individual array values. For instance, a field value of [1, 2, 3] gets indexed as three separate values: 1, 2 and 3. This approach enables queries by any value in the array. It also adds the restriction that a MongoDB index can only contain a single array field: otherwise, all combinations of values across the arrays would have to be indexed, which would quickly become unmanageable (particularly when indexing more than two array fields). See the first sketch after this list.
  • A good way to model a tree is to store each node as a document that contains all its ancestors as an array of ObjectIds. The array can then be indexed, which provides an easy way to retrieve all children and grandchildren of any node. The parent ObjectId can be stored in a separate field to provide access to immediate children only (see the second sketch after this list).
  • MongoDB's index directions (ascending / descending) are only meaningful for compound indexes. For a single-field index, it makes no difference whether the index is ascending or descending; only one index needs to be created. Similarly, a compound index (X ASC, Y DESC) is logically equivalent to its inverse (X DESC, Y ASC), so only one of the two needs to be created. Note that MySQL, for example, stores all compound indexes in ascending order, so you generally need to invert your data if you want mixed sort orders; in MongoDB you can use mixed ordering directly in the index (see the third sketch after this list).
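
To make the array indexing behaviour concrete, here is a minimal shell sketch (the posts collection and its fields are made-up names):

    // a "multikey" index: each element of the tags array is indexed as a separate key
    db.posts.ensureIndex({ tags: 1 })
    db.posts.insert({ title: "hello", tags: [1, 2, 3] })
    // this query matches, because 2 is one of the individually indexed array values
    db.posts.find({ tags: 2 })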
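
And a sketch of the ancestor-array tree pattern (plain strings stand in for ObjectIds to keep it readable; the nodes collection is a made-up name):

    // each node stores its direct parent and its full ancestor chain
    db.nodes.insert({ _id: "a", parent: null, ancestors: [] })
    db.nodes.insert({ _id: "b", parent: "a", ancestors: ["a"] })
    db.nodes.insert({ _id: "c", parent: "b", ancestors: ["a", "b"] })
    db.nodes.ensureIndex({ ancestors: 1 })  // a multikey index over the ancestor arrays
    db.nodes.find({ ancestors: "a" })       // all descendants of "a", at any depth
    db.nodes.find({ parent: "a" })          // immediate children of "a" only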
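
Finally, a sketch of mixed index directions (collection and field names are made up):

    // one compound index with mixed directions...
    db.events.ensureIndex({ x: 1, y: -1 })
    // ...serves both this sort (walking the index forward)...
    db.events.find().sort({ x: 1, y: -1 })
    // ...and its exact inverse (walking the index backward)
    db.events.find().sort({ x: -1, y: 1 })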

Notes about replica sets

  • I learned that oplogSize is a really important parameter for replica sets, because the oplog must be big enough to allow a full re-replication from the primary to a secondary node. When a secondary starts a full replication, the new operations that occur while the snapshot is being copied are stored in the oplog; they are then replayed to bring the secondary up to date. The oplog is a capped collection, so it has a fixed size that cannot be exceeded (see the oplog sketch after this list).
  • A recap of MongoDB's write concern options (see the write concern sketch after this list):
    • Call getLastError to perform the following checks after a write (otherwise operations are 'fire-and-forget'):
      • w:N to require that the data has been written to N replicas (N can be a number or 'majority')
      • j:true to ensure the data has been journaled successfully
      • fsync:true to ensure the data has been fsync'ed to disk (only relevant when journaling is disabled)
    • Generally, you would use j:true and w:'majority' for critical data (say, financial transactions), and no options at all for throw-away data like statistics gathering. Everything else falls somewhere in between, depending on how acceptable it is for end-users to lose a blog comment or a 'like' click to an unexpected hardware failure, and how much performance you are willing to trade to avoid that.
  • About replica set primary elections: it's good to keep in mind that a primary steps down to read-only secondary state whenever it cannot see a majority of the votes. If a two-node replica set gets disconnected, each node votes for itself but only gets 50% of the total, so both end up as secondaries. But if there is a third, arbiter node, and either data node can still reach it, that node gets 2 out of 3 votes (66⅔%) and becomes the primary. So if you run a two-node replica set, keeping an arbiter around is a good idea, preferably in a third data center so it doesn't go down together with one of the nodes (see the configuration sketch after this list).
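
Here is the oplog sketch mentioned above (the 10 GB value is an arbitrary example, not a recommendation):

    # start mongod with enough oplog headroom for an initial sync (size in MB)
    mongod --replSet rs0 --oplogSize 10240

    // later, in the mongo shell: how much time does the oplog currently cover?
    db.printReplicationInfo()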
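
And the write concern sketch: an explicit safe write from the shell (the orders collection is a made-up name; drivers expose the same w/j options through their write concern or "safe mode" settings):

    db.orders.insert({ amount: 100 })
    // block until the write has reached a majority of the set and the journal
    db.runCommand({ getLastError: 1, w: "majority", j: true })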
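
Lastly, the configuration sketch for a two-node replica set plus an arbiter (hostnames are hypothetical):

    // two data-bearing nodes plus a lightweight arbiter that only votes
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "db1.example.com:27017" },
        { _id: 1, host: "db2.example.com:27017" },
        { _id: 2, host: "arbiter.example.com:27017", arbiterOnly: true }
      ]
    })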

Notes about sharding and performance

  • The most difficult problem in sharding is choosing the right shard key. A common solution is to use a computed hash field: the hash function should distribute values uniformly, so that new entries are spread evenly across chunks. The downside is that range queries are no longer possible, since hash order is essentially random, and the whole index must be kept in memory. However, it can be useful to prefix the random hash with e.g. the current month number, which makes it possible to segment the index (the older segments are rarely touched, so they don't have to stay in RAM). Apparently, MongoDB 2.2 will also get automatic shard key hashing, so you won't have to store the computed hash in a separate field (see the first sketch after this list).
  • A general rule is that indexSize + storageSize (as reported by db.stats() in the shell) should fit in RAM, so that MongoDB can keep all the indexes and data in memory. I assume this also works as a rule of thumb for when to start sharding: once the data no longer fits in RAM on one machine, you can spread it across multiple servers (see the second sketch after this list).
  • One more interesting note about queries on sharded collections: when you sort your results by something other than the shard key, MongoDB performs a distributed merge sort. It executes the same query on all shards simultaneously and streams the results back, merging them together for the client. It is slightly unclear what happens if you limit your query to e.g. 10 results, but presumably MongoDB is smart enough to continue merge-sorting only until it has gathered enough results for the "top 10" (the third sketch after this list shows such a query).
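
The first sketch shows the manual hash-key approach (database, collection, and field names are made up; hex_md5 is the shell's built-in MD5 helper):

    // assumes sharding has already been enabled for mydb (sh.enableSharding("mydb"))
    sh.shardCollection("mydb.events", { shardKey: 1 })
    // prefix the hash with the month number so older index segments can drop out of RAM
    var doc = { userId: 12345, createdAt: new Date() };
    doc.shardKey = (doc.createdAt.getMonth() + 1) + "_" + hex_md5("" + doc.userId);
    db.events.insert(doc);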
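
The second sketch turns the RAM rule of thumb into a quick shell check (a rough heuristic, not an official formula):

    var s = db.stats();
    // indexes plus data, in bytes; compare this against the RAM available to mongod
    print("indexSize + storageSize = " + (s.indexSize + s.storageSize));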
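
The third sketch is the kind of query that triggers the distributed merge sort, assuming createdAt is not part of the shard key (names are made up):

    // each shard runs this in parallel; mongos merge-sorts the result streams for the client
    db.events.find({ type: "click" }).sort({ createdAt: -1 }).limit(10)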

So those are my highlights and takeaways from the conference. Big thanks to 10gen for organizing it.