How can I atomically get the latest "rounds" record ObjectId and use that when inserting to the "deposits" collection?
This answer says it can't be done: Is there any way to atomically update two collections in MongoDB?
Is this still true?
In process A, I want to atomically FIND the latest round id (rid) and INSERT that into deposits. The race condition is that after A finds rid, another process B might insert into rounds, so now A has an rid that isn't the latest, but is 1 behind. How can A find the rid in rounds and insert this rid into deposits (act on these two collections) atomically?
// GET ROUND ID (RID) OF LATEST
var rid;
db.rounds.find().sort({$natural:-1}).limit(1).toArray(function(err, latestInsertedRound){
rid = latestInsertedRound[0]._id;
print(rid, 'rid'); // if another process B inserts now, this rid is 1 behind
// INSERT INTO DEPOSITS
db.deposits.insert({uid:uid, iid:iid, rid:rid}, function(err, insertedDeposit){
print(insertedDeposit, 'insertedDeposit');
});
});
Inserting a document in MongoDB takes a callback function. The callback's second parameter is the result of the insert, which includes the inserted document(s).
I tried printing the second parameter using console.log. It looks like:
{ result: { ok: 1, n: 1 },
ops:
[ { username: 'user1',
password: 'password1',
_id: 562099bae1872f58b3a22aed } ],
insertedCount: 1,
insertedIds: [ 562099bae1872f58b3a22aed ]
}
insertedIds is the array that holds the _ids of the inserted document or documents.
So you can insert your object into the second collection inside the callback of the insert into the first collection. A little confusing, perhaps.
In simple terms: insert the document into the first collection, and in its callback, insert the document into the second collection.
MongoClient.connect(MONGOLAB_URI, function (err, db) {
if (err)
console.log("We have some error : " + err);
else {
db.createCollection('rounds', function (err, rounds) {
if (err) {
console.log('Error while creating rounds collection');
throw err;
}
else {
rounds.insert({ 'username' : 'user1', 'password' : 'password1' }, function(err,docsInserted){
console.log('Last document inserted id :', docsInserted.insertedIds[0]);
//inserting the document in the function callback
db.createCollection('deposits', function (err, deposits) {
if (err) {
console.log('Error while creating deposits collection');
throw err;
}
else {
//change array index according to your need
//you may be inserting multiple objects simultaneously
deposits.insert({'last_inserted_object' : docsInserted.insertedIds[0]});
console.log('inserted into deposits collection');
}
});
});
}
});
}
});
It seems it's not possible to operate atomically on 2 collections in MongoDB, as explained in this answer:
Is there any way to atomically update two collections in MongoDB?
I leave the question up because it has a slightly different focus (not 2 updates, but find+insert).
Related
I am using .pull to remove a record from an array in MongoDB and it works fine, but a comment I read somewhere on Stack Overflow (I can't find it again to post the link) is bothering me: it said it was bad to use .save instead of .findByIdAndUpdate or .updateOne.
I just wanted to find out if this is accurate or subjective.
This is how I am doing it currently. I check if the product with that id actually exists, and if so I pull that record from the array.
exports.deleteImg = (req, res, next) => {
const productId = req.params.productId;
const imgId = req.params.imgId;
Product.findById(productId)
.then(product => {
if (!product) {
return res.status(500).json({ message: "Product not found" });
} else {
product.images.pull(imgId);
product.save()
.then(response => {
return res.status(200).json( { message: 'Image deleted'} );
})
}
})
.catch(err => {
console.log(err);
});
};
I think what they were saying, though, was that it should rather be done something like this (an example I found after a Google search):
users.findByIdAndUpdate(userID,
{$pull: {friends: friend}},
{safe: true, upsert: true},
function(err, doc) {
if(err){
console.log(err);
}else{
//do stuff
}
}
);
The main difference is that when you use findById and save, you first get the object from MongoDB and then update whatever you want to and then save. This is ok when you don't need to worry about parallelism or multiple queries to the same object.
findByIdAndUpdate is atomic. When you execute this multiple times, MongoDB will take care of the parallelism for you. Following your example, if two requests are made at the same time on the same object, passing { $pull: { friends: friendId } }, the result will be as expected: only one friend will be pulled from the array.
But let's say you have a counter on the object, like friendsTotal, with a starting value of 0. And you hit the endpoint that must increase the counter by one twice, for the same object.
If you use findById, then increase, then save, you'd have problems because you are setting the whole value. You first get the object, increase the counter to 1, and update. But the other request did the same, so you'll end up with friendsTotal = 1.
With findByIdAndUpdate you could use { $inc: { friendsTotal: 1 } }. So even if you execute this query twice, at the same time, on the same object, you would end up with friendsTotal = 2, because MongoDB uses these update operators to better handle parallelism, data locking and more.
See more about $inc here: https://docs.mongodb.com/manual/reference/operator/update/inc/
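To make the counter example concrete, here is a minimal sketch (the Mongoose User model and friendsTotal field are assumptions for illustration, not from the original post):
// Minimal sketch, assuming a Mongoose User model with a numeric friendsTotal field.
// Non-atomic read-modify-write: two concurrent requests can both read 0
// and both write back 1, losing one increment.
User.findById(userId)
  .then(user => {
    user.friendsTotal += 1;
    return user.save();
  });

// Atomic: the server applies $inc, so two concurrent requests end with friendsTotal = 2.
User.findByIdAndUpdate(userId, { $inc: { friendsTotal: 1 } }, { new: true })
  .then(updatedUser => console.log(updatedUser.friendsTotal));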
I want to know whether, when inserting a record into a MongoDB collection, MongoDB will refuse the insert and return a duplicate error if the unique key already contains the same value with different casing.
Example:
1. Adds { name: "wow" } // inserts
2. Adds { name: "wOW" } // error: duplicate record found.
I've tried this, but it doesn't work (sorry I am new to mongo and don't understand NoSQL that much).
let data = {
name: new RegExp('^' + params.input.name + '$', 'i')
};
db.collection(collectionName).insertOne(data, function(err, res) {
db.close();
if (err) return callback(err, false);
return callback(false, res);
});
I hope there is a solution to this problem, without me having to hit the database collection just to check if a duplicate exists.
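One way to get this behaviour without an extra lookup is a unique index with a case-insensitive collation, so the server itself raises the duplicate key error. A minimal sketch, assuming MongoDB 3.4+ and an illustrative names collection (not from the original post):
// Unique index with a case-insensitive collation (strength: 2): the server
// rejects { name: "wOW" } once { name: "wow" } exists.
db.collection('names').createIndex(
  { name: 1 },
  { unique: true, collation: { locale: 'en', strength: 2 } },
  function (indexErr) {
    if (indexErr) return callback(indexErr, false);
    db.collection('names').insertOne({ name: 'wOW' }, function (err, res) {
      // err.code === 11000 indicates a duplicate key (case-insensitive match)
      if (err) return callback(err, false);
      return callback(false, res);
    });
  }
);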
I am using Mongoskin + NodeJS to add new keywords to MongoDB. I want to notify the user that the entry was a duplicate, but I am not sure how to do this.
/*
* POST to addkeyword.
*/
router.post('/addkeyword', function(req, res) {
var db = req.db;
db.collection('users').update({email:"useremail#gmail.com"}, {'$addToSet': req.body }, function(err, result) {
if (err) throw err;
if (!err) console.log('addToSet Keyword.' );
});
});
The result does not seem to be of any use since it doesn't state if the keyword was added or not.
At least in the shell you can differentiate if the document was modified or not (see nModified).
> db.test4.update({_id:2}, {$addToSet: {tags: "xyz" }})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
> db.test4.update({_id:2}, {$addToSet: {tags: "xyz" }})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 0 })
Update for Node
When you use collection.update(criteria, update[[, options], callback]); you can retrieve the count of records that were modified.
From the node docs
callback is the callback to be run after the records are updated. Has
two parameters, the first is an error object (if error occured), the
second is the count of records that were modified.
Another Update
It seems that, at least in version 1.4.3, the native Mongo Node driver is not behaving as documented. It is possible to work around this using the bulk API (introduced in MongoDB 2.6):
var col = db.collection('test');
// Initialize the Ordered Batch
var batch = col.initializeOrderedBulkOp();
batch.find({a: 2}).upsert().updateOne({"$addToSet": {"tags": "newTag"}});
// Execute the operations
batch.execute(function(err, result) {
if (err) throw err;
console.log("nUpserted: ", result.nUpserted);
console.log("nInserted: ", result.nInserted);
console.log("nModified: ", result.nModified); // <- will tell if a value was added or not
db.close();
});
You could use db.users.findAndModify({email:"useremail#gmail.com"}, [], {'$addToSet': { bodies: req.body }}, {'new': false}). Pay attention to the new:false option: it returns the document as it was before the update, so you can check whether the array already contained the item. However, this could be a problematic approach if your documents are big, because you analyze them on the client side.
P.S. Your original query with $addToSet is wrong: the field name is missing.
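For illustration, a minimal sketch of that approach (the keywords field name and req.body.keyword shape are assumptions, and the 1.x/Mongoskin findAndModify callback is assumed to return the pre-update document directly):
// Assumes keywords are stored in a field named 'keywords' (illustrative).
db.collection('users').findAndModify(
  { email: 'useremail#gmail.com' },               // query
  [],                                             // sort
  { $addToSet: { keywords: req.body.keyword } },  // update
  { new: false },                                 // return the document BEFORE the update
  function (err, previousDoc) {
    if (err) throw err;
    var wasDuplicate = previousDoc.keywords &&
        previousDoc.keywords.indexOf(req.body.keyword) !== -1;
    console.log(wasDuplicate ? 'Keyword already existed.' : 'Keyword added.');
  }
);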
Edit: I tried to use the count returned by update, but it returns 1 for me in all cases. Here is the code I used to test with MongoDB 2.6:
var MongoClient = require('mongodb').MongoClient;
MongoClient.connect('mongodb://localhost:27017/mtest', function(err, db) {
if(err) throw err;
db.collection('test').insert({_id:1,bodies:["test"]},function(err,item){
db.collection('test').update({_id:1},{$addToSet:{bodies:"test"}}, function(err,affected){
if(err) throw err;
console.log(affected); //1 in console
});
});
});
I am updating an array in a collection with this JSON:
{
"<arrayname>":"<value>"
}
route.js
routes.post("/api/:id", Controller.addOne);
Controller.js
async addOne(req, res) {
// add the juryman id to the list
if (Object.keys(req.body).length === 1) {
console.log("Size 1");
}
await Session.findOneAndUpdate(
{ _id: req.params.id },
{ $addToSet: req.body }
)
.then(function(success) {
res.send("Successfully saved.");
})
.catch(function(error) {
res.status(404).send(error);
});
},
I have five arrays in my collection, and changing the array name and value in the JSON correctly updates the corresponding array in the collection. However, this only works for one item at a time.
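If the goal is to add several values to one array in a single request, $addToSet combined with $each might help. A minimal sketch, where the addMany name and the request-body shape { "<arrayname>": ["<value1>", "<value2>"] } are assumptions, not from the original post:
// Minimal sketch, assuming req.body looks like { jurymen: ["id1", "id2"] }.
async addMany(req, res) {
  const [field, values] = Object.entries(req.body)[0];
  try {
    await Session.findOneAndUpdate(
      { _id: req.params.id },
      { $addToSet: { [field]: { $each: values } } } // add every value, skipping duplicates
    );
    res.send("Successfully saved.");
  } catch (error) {
    res.status(404).send(error);
  }
},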
I'm getting a duplicate document when using the mongodb-native-driver to save an update to a document. My first call to save() correctly creates the document and adds a _id with an ObjectID value. A second call creates a new document with a text _id of the original ObjectID. For example I end up with:
> db.people.find()
{ "firstname" : "Fred", "lastname" : "Flintstone", "_id" : ObjectId("52e55737ae49620000fd894e") }
{ "firstname" : "Fred", "lastname" : "Flintstone with a change", "_id" : "52e55737ae49620000fd894e" }
My first call correctly created Fred Flintstone. A second call that added " with a change" to the lastname created a second document.
I'm using MongoDB 2.4.8 and mongo-native-driver 1.3.23.
Here is my NodeJS/Express endpoint:
app.post("/contacts", function (req, res) {
console.log("POST /contacts, req.body: " + JSON.stringify(req.body));
db.collection("people").save(req.body, function (err, inserted) {
if (err) {
throw err;
} else {
console.dir("Successfully inserted/updated: " + JSON.stringify(inserted));
res.send(inserted);
}
});
});
Here is the runtime log messages:
POST /contacts, req.body: {"firstname":"Fred","lastname":"Flintstone"}
'Successfully inserted/updated: {"firstname":"Fred","lastname":"Flintstone","_id":"52e55737ae49620000fd894e"}'
POST /contacts, req.body: {"firstname":"Fred","lastname":"Flintstone with a change","_id":"52e55737ae49620000fd894e"}
'Successfully inserted/updated: 1'
Why doesn't my second call update the existing record? Does the driver not cast the _id value to an ObjectID?
What you are posting back the second time contains a field named "_id", and it's a string. That is the problem.
Look at the documentation: the save method is a "Simple full document replacement function". I don't use this function very often, so here's my guess: it uses the _id field to find the document and then replaces the full document with what you provided. However, what you provided is a string _id, which doesn't equal the ObjectId, so no existing document matches. I think you should wrap it in an ObjectId before passing it to the function.
Besides, the save method is not recommended according to the documentation; you should use update (maybe with the upsert option) instead.
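A minimal sketch of that conversion, based on the endpoint from the question (assuming ObjectID is taken from the mongodb module already in use):
// Convert the string _id back into an ObjectID before saving, so the driver
// matches the existing document instead of inserting a new one.
var ObjectID = require('mongodb').ObjectID;

app.post("/contacts", function (req, res) {
  if (req.body._id) {
    req.body._id = new ObjectID(req.body._id); // string -> ObjectID
  }
  db.collection("people").save(req.body, function (err, result) {
    if (err) throw err;
    res.send(req.body);
  });
});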
I don't exactly know why a second document is created, but why don't you use the update function (maybe with the upsert operator)?
An example for the update operation:
var ObjectID = require('mongodb').ObjectID; // needed to match the stored ObjectId _id
var query = { '_id': new ObjectID('52e55737ae49620000fd894e') };
db.collection('people').findOne(query, function (err, doc) {
if (err) throw err;
if (!doc) {
return db.close();
}
doc['lastname'] = 'Flintstone with a change';
db.collection('people').update(query, doc, function (err, updated) {
if (err) throw err;
console.dir('Successfully updated ' + updated + ' document!');
return db.close();
});
});
And now with the upsert operator:
var query = { '_id': new ObjectID('52e55737ae49620000fd894e') };
var operator = { '$set': { 'lastname': 'Flintstone with a change' } };
var options = { 'upsert': true };
db.collection('people').update(query, operator, options, function (err, upserted) {
if (err) throw err;
console.dir('Successfully upserted ' + upserted + ' document!');
return db.close();
});
The difference is that the upsert operator will update the document if it exist, otherwise it will create a new one. When using the upsert operator you should keep in mind that this operation can be underspecified. That means if your query does not contain enough information to identify a single document, a new document will be inserted.
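To illustrate the underspecified case with values from this question (a sketch, not part of the original answer): if the query matches on a field whose value has changed, the upsert no longer finds the existing document and inserts a second one.
// The stored document has lastname "Flintstone", so this query matches nothing
// and the upsert inserts a brand-new document instead of updating the old one.
var query = { 'lastname': 'Flintstone with a change' };
var operator = { '$set': { 'firstname': 'Fred' } };
var options = { 'upsert': true };
db.collection('people').update(query, operator, options, function (err, result) {
  if (err) throw err;
  console.dir('Upsert finished; the collection may now contain two Fred documents.');
  return db.close();
});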