This is a very good question. Simply put, MongoDB does not guarantee whether the result contains new documents, because the query involves multiple documents, including documents that may be inserted in the future. In a traditional database it is also possible to read newly inserted values; this anomaly is called a phantom read. The only isolation level that prevents it is the highest one, serializable, meaning that in the example the two sets of operations, one reading and one writing, appear to be performed one after the other. The locking needed to implement this is expensive and performance suffers accordingly; see this paper. Back to MongoDB: it works at the granularity of a single document and guarantees document-level isolation, but it does not guarantee isolation (independence) between operations on multiple documents and does not support multi-document transactions, in exchange for higher performance.
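To make the scenario concrete, here is a minimal sketch in the mongo shell; the collection name "test" and the inserted document are made up for illustration:

    var cursor = db.test.find();      // cursor over the collection
    cursor.next();                    // start iterating; the first batch is fetched here
    db.test.insertOne({ note: "added while the cursor is open" });
    while (cursor.hasNext()) {
        printjson(cursor.next());     // the new document may or may not show up here
    }

Whether the remaining iteration ever returns the newly inserted document is exactly what MongoDB leaves unspecified.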
@huandu is right. When you test, use find().batchSize(2) to read 2 documents in each batch, and you will find that newly added documents can be read. The shell displays 20 results at a time by default, which makes this hard to observe. Don't use batchSize(1), which is equivalent to limit() for historical reasons.
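A rough version of that test in the mongo shell might look like the following; the collection name "phantomTest" and the documents are hypothetical, and the outcome depends on batching, so treat it as a sketch rather than a guaranteed reproduction:

    db.phantomTest.drop();
    for (var i = 0; i < 10; i++) {
        db.phantomTest.insertOne({ seq: i });         // seed some documents
    }
    var cursor = db.phantomTest.find().batchSize(2);  // fetch 2 documents per getMore
    cursor.next();                                    // consume part of the first batch
    db.phantomTest.insertOne({ seq: 999 });           // insert while the cursor is still open
    var seen = [];
    while (cursor.hasNext()) {
        seen.push(cursor.next().seq);
    }
    printjson(seen);                                  // later batches may (or may not) include seq 999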
Not necessarily. The MongoDB cursor has no isolation and may return updated data. However, in actual testing, no matter how the inserts were done, the cursor never returned the newly inserted documents. This may be an implementation detail of MongoDB, or the cursor may only see newly inserted documents under special circumstances. In short, the behavior is not documented and should not be relied on.