I know I should remove all documents instead of dropping the collection. But why does Mongoose not recreate the index after the collection is dropped?
I have a schema in which one of the fields has unique:true set.
The index is created when I start the server. But if I drop the collection and new records come in, the collection is recreated and the records are inserted. However, the index for the unique field is not created again.
Why is it so? How do I ask it to recreate the index?
It has nothing to do with unique. If you drop the index in MongoDB, you have to restart the Node process.
When your application starts up, Mongoose automatically calls
ensureIndex for each defined index in your schema. Mongoose will call
ensureIndex for each index sequentially, and emit an 'index' event on
the model when all the ensureIndex calls succeeded or when there was
an error. While nice for development, it is recommended this behavior
be disabled in production since index creation can cause a significant
performance impact.
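If you do want to disable the automatic builds in production, one way (a sketch only; the NODE_ENV check is just an example toggle, not something your app necessarily uses) is the autoIndex schema option:
var mongoose = require('mongoose');

// Illustrative schema: skip automatic index builds outside development
var UserSchema = new mongoose.Schema(
  { email: { type: String, unique: true } },
  { autoIndex: process.env.NODE_ENV !== 'production' }
);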
Nevertheless, in development mode you can recreate indexes from a method (for example, one tied to a route or an event) with:
(MongoDB 3.0 and later) db.collection.createIndex({ field: 1 })
(MongoDB before 3.0) db.collection.ensureIndex({ field: 1 })
With Mongoose it looks like this:
Model.ensureIndexes(function (err) {
  if (err) {
    return res.end(err);
  }
});
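You can also listen for the 'index' event mentioned above to see whether the build succeeded (a small sketch; MyModel stands in for your actual model):
MyModel.on('index', function (err) {
  if (err) {
    console.error('Index build failed:', err);
  } else {
    console.log('Indexes are ready');
  }
});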
I have a cluster of Node worker servers that handle hitting an API and inserting data into a Mongo DB. The problem I am having is that one of these functions appears to insert two copies of the same document every so often. It checks whether the document has already been created with a query like so:
gameDetails.findOne({ gameId: gameId }, function(err, gameCheck) {
  if (!gameCheck) {
    // insert the document
  }
});
How can I ensure that only one instance of this function runs at a time? Alternatively, if I have not deduced the actual root problem, what could cause a Mongo query like this to sometimes result in multiple documents containing the same gameId being inserted?
findOne is being called multiple times before the document has had time to be inserted, i.e. something like the following is happening:
findThenInsert()
findThenInsert()
findThenInsert()
// findOne returns null, insert called
// findOne returns null, insert called
// document gets inserted
// findOne returns a gameCheck
// document gets inserted
You should use a unique index to prevent duplicates. Then, your node instances could optimistically call insert straight away, and simply handle the error if they were too late, which is similar to your 'if found do nothing' logic.
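A sketch of that approach (gameDetailsSchema and handleError are placeholders for your own schema and error handling):
// In the schema: enforce uniqueness at the database level
gameDetailsSchema.index({ gameId: 1 }, { unique: true });

// Then insert optimistically and treat a duplicate-key error as "already exists"
new gameDetails({ gameId: gameId }).save(function (err) {
  if (err && err.code === 11000) return;   // another worker inserted this game first
  if (err) return handleError(err);        // placeholder for your error handling
});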
Alternatively if you don't mind the document being updated each time, you can use the upsert method, which is atomic:
db.collection.update(query, update, {upsert: true})
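With callback-style Mongoose (which your snippet seems to use), the same upsert might look like this (a sketch; $setOnInsert leaves an existing document untouched):
gameDetails.update(
  { gameId: gameId },
  { $setOnInsert: { gameId: gameId } },
  { upsert: true },
  function (err) {
    if (err) return handleError(err);   // placeholder error handling
  }
);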
Also see
MongoDB atomic "findOrCreate": findOne, insert if nonexistent, but do not update
My server application (using node.js, mongodb, mongoose) has a collection of documents for which it is important that two client applications cannot modify them at the same time without seeing each other's modification.
To prevent this I added a simple document versioning system: a pre-hook on the schema which checks if the version of the document is valid (i.e., not higher than the one the client last read). At first sight it works fine:
// Validate version number
UserSchema.pre("save", function(next) {
  var user = this
  user.constructor.findById(user._id, function(err, userCurrent) {
    // userCurrent is the user that is currently in the db
    if (err) return next(err)
    if (userCurrent == null) return next()
    if (userCurrent.docVersion > user.docVersion) {
      return next(new Error("document was modified by someone else"))
    } else {
      user.docVersion = user.docVersion + 1
      return next()
    }
  })
})
The problem is the following:
When one User document is saved at the same time by two client applications, is it possible that these interleave between the pre-hook and the actual save operations? What I mean is the following, imagine time going from left to right and v being the version number (which is persisted by save):
App1: findById(pre)[v:1] save[v->2]
App2: findById(pre)[v:1] save[v->2]
Resulting in App1 saving something that was modified in the meantime (by App2), with no way for App1 to notice the modification. App2's update is completely lost.
My question might boil down to: Do the Mongoose pre-hook and the save method happen in one atomic step?
If not, could you give me a suggestion on how to fix this problem so that no update ever gets lost?
Thank you!
MongoDB has findAndModify which, for a single matching document, is an atomic operation.
Mongoose has several methods that use this operation, and I think they will suit your use case:
Model.findOneAndUpdate()
Model.findByIdAndUpdate()
Model.findOneAndRemove()
Model.findByIdAndRemove()
Another solution (one that Mongoose itself uses as well for its own document versioning) is to use the Update Document if Current pattern.
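A rough sketch of that update-if-current idea with findOneAndUpdate, mirroring your docVersion field (User, userId, clientDocVersion, updatedFields and callback are placeholders, not a drop-in replacement for your hook):
User.findOneAndUpdate(
  { _id: userId, docVersion: clientDocVersion },    // match only if the version is unchanged
  { $set: updatedFields, $inc: { docVersion: 1 } }, // apply changes and bump the version atomically
  { new: true },
  function (err, updatedUser) {
    if (err) return callback(err)
    if (!updatedUser) {
      // nothing matched: someone else saved a newer version in the meantime
      return callback(new Error("document was modified by someone else"))
    }
    return callback(null, updatedUser)
  }
)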
Do Meteor Mongo and MongoDB execute queries the same way? I am using an external MongoDB, so I need to debug my query. Is there any way to find the last executed query in Mongo?
I don't know if this works in Meteor Mongo, but you seem to be using an external Mongo, so presumably you can set up profiling with a capped collection so that the collection never grows over a certain size. If you only need the last op, you can make the size much smaller than this.
db.createCollection( "system.profile", { capped: true, size:4000000 } )
The mongo doc is here: http://docs.mongodb.org/manual/tutorial/manage-the-database-profiler/
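Note that the profiler itself has to be enabled before anything is written to system.profile; in the mongo shell:
db.setProfilingLevel(2)   // 2 = profile all operations (1 = only slow ones)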
From the mongo docs:
To return the most recent 10 log entries in the system.profile
collection, run a query similar to the following:
db.system.profile.find().limit(10).sort( { ts : -1 } ).pretty()
Since it's sorted inversely by time, just take the first record from the result.
Otherwise you could roll your own with a temporary client-only mongo collection:
Queries = new Mongo.Collection(null);
Create an object containing your query, remove the last record and insert the new one.
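As a sketch (logQuery is a hypothetical helper you would call from wherever you issue your queries; it uses the client-only Queries collection declared above):
function logQuery(selector) {
  Queries.remove({});                                   // drop the previous entry
  Queries.insert({ query: selector, at: new Date() });  // keep only the latest query
}

// e.g. logQuery({ test: "success" }); then Queries.findOne() shows the last query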
I had a very weird issue with the way Mongoose interacted with my Node app and Mongo database.
I was using Express to create a basic GET API route to fetch some data from my MongoDB.
I had a database called test and it had a collection called "billings",
so the schema and route were pretty basic:
apiRouter.route('/billing/')
  .get(function(req, res) {
    Billing.find(function(err, billings) {
      if (err) res.send(err);
      // return the bills
      res.json(billings);
    });
  });
where "Billing" was my Mongoose model, whose schema simply had one field: { test: String }.
This worked fine; I got a response with all the items in my "billings" collection, which is only one item: { test: "success" }.
Next I created a collection called "historys" and set it up exactly the same way as my billings:
apiRouter.route('/historys/')
  // get all the history
  .get(function(req, res) {
    Historys.find(function(err, historys) {
      if (err) res.send(err);
      // return the history
      res.json(historys);
    });
  });
where again "Historys" was my Mongoose model. Its schema was identical in setup to my billings: since I didn't have any real data, the fields were the same; I just had a test field, so the JSON object returned from both billings and historys should have been
{ test: "success" }
However, this time I didn't get any data back, just an empty array:
[]
I went through my code multiple times to make sure a capital hadn't been lost, or a comma somewhere, etc., but the code was identical. The setup and formatting in my MongoDB were identical. I went into Robomongo and viewed the database, and everything was named correctly.
Except, I had 2 new collections now:
my original "Historys" AND a brand new collection "Histories".
Once I fixed my API route to look at Histories instead of Historys, I was able to get the test data successfully. I still, however, cannot pull data from Historys; it's like it doesn't exist, yet there it was in my Robomongo console when I refreshed.
I searched all my code for any mention of Histories and got 0 results. How did the system know to fix the grammar of my collection name?
From the docs:
When no collection argument is passed, Mongoose produces a collection name by passing the model name to the utils.toCollectionName method. This method pluralizes the name. If you don't like this behavior, either pass a collection name or set your schemas collection name option.
So, when you did this in your schema definition:
mongoose.model('Historys', YourSchema);
mongoose created the Histories collection.
When you do:
db.historys.insert({ test: "success" })
through the MongoDB console, the historys collection will be created if it doesn't exist. That's why you have the two collections in your db. As the docs say, if you don't want Mongoose to create a collection with a pluralized name based on your model, just specify the name you want.
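For example (a sketch; the schema body is just the test field from your question):
var mongoose = require('mongoose');

// Option 1: fix the collection name in the schema options
var HistorysSchema = new mongoose.Schema({ test: String }, { collection: 'historys' });

// Option 2: pass the collection name as the third argument to mongoose.model
var Historys = mongoose.model('Historys', HistorysSchema, 'historys');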
Before saving a new document to a mongodb collection via nodejs in my MongoLab database, I'm using model.count to check certain fields to prevent a duplicate entry:
MyModel.count({field1: criteria1, field2: criteria2}, function (err, count) {
  if (count == 0) {
    // Create new document and call .save()
  }
});
However, during testing I'm noticing many duplicates (inconsistent in number across test runs) in the collection after the process finishes, although not as many as if I did not do the .count() check.
Since the MyModel.count() statement is embedded in a callback being repeatedly called whenever the 'readable' event is emitted by one of several ReadStreams, I suspect there is an async issue caused by rapid writes to the collection. Specifically, two or more identical and nearly simultaneous calls to MyModel.count return a count of 0, and end up each creating and saving (identical) documents to the collection.
Does this sound probable? And if so, how can I enforce uniqueness of document writes without setting timeouts or using a synchronous pattern?
As Peter commented, the right way to enforce uniqueness is to create a unique index on the collection over those fields and then handle the code: 11000 insert error to recover from attempts at creating duplicates.
You can add the index via your schema before you create the model from it:
mySchema.index({field1: 1, field2: 1}, {unique: true});
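On the write side, a duplicate-key error (code 11000) can then be treated as "this document already exists" rather than as a failure; a rough sketch with callback-style save:
var doc = new MyModel({ field1: criteria1, field2: criteria2 });
doc.save(function (err) {
  if (err && err.code === 11000) {
    // duplicate key: a near-simultaneous write created this document first, so ignore it
    return;
  }
  if (err) {
    // some other error; handle as usual
    return console.error(err);
  }
  // saved successfully
});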