I am using .pull to remove a record from an array in MongoDB and it works fine, but a comment I read somewhere on Stack Overflow (I can't find it again to post the link) is bothering me: it said it was bad to use .save instead of .findByIdAndUpdate or .updateOne.
I just wanted to find out if this is accurate or subjective.
This is how I am doing it currently. I check if the product with that id actually exists, and if so I pull that record from the array.
exports.deleteImg = (req, res, next) => {
    const productId = req.params.productId;
    const imgId = req.params.imgId;
    Product.findById(productId)
        .then(product => {
            if (!product) {
                return res.status(500).json({ message: "Product not found" });
            } else {
                product.images.pull(imgId);
                product.save()
                    .then(response => {
                        return res.status(200).json({ message: 'Image deleted' });
                    });
            }
        })
        .catch(err => {
            console.log(err);
        });
};
I think what they were saying, though, was that it should rather be done something like this (an example I found after a Google search):
users.findByIdAndUpdate(userID,
    { $pull: { friends: friend } },
    { safe: true, upsert: true },
    function (err, doc) {
        if (err) {
            console.log(err);
        } else {
            // do stuff
        }
    }
);
The main difference is that when you use findById and save, you first get the object from MongoDB and then update whatever you want to and then save. This is ok when you don't need to worry about parallelism or multiple queries to the same object.
findByIdAndUpdate is atomic. When you execute this multiple times, MongoDB will take care of the parallelism for you. Following your example, if two requests are made at the same time on the same object, passing { $pull: { friends: friendId } }, the result will be the expected one: only one friend will be pulled from the array.
But let's say you have a counter on the object, like friendsTotal, with a starting value of 0, and you hit the endpoint that must increase the counter by one twice for the same object.
If you use findById, then increase, then save, you'd have problems because you are setting the whole value. You first get the object, increase the counter to 1, and update. But the other request did the same. You'll end up with friendsTotal = 1.
With findByIdAndUpdate you could use { $inc: { friendsTotal: 1 } }. So even if you execute this query twice, at the same time, on the same object, you would end up with friendsTotal = 2, because MongoDB uses these update operators to better handle parallelism, data locking and more.
See more about $inc here: https://docs.mongodb.com/manual/reference/operator/update/inc/
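Applied to the handler from the question, a minimal sketch (untested, and assuming images holds plain image ids; if they are sub-documents, the $pull condition would be { _id: imgId } instead) could look like this:
exports.deleteImg = (req, res, next) => {
    const productId = req.params.productId;
    const imgId = req.params.imgId;
    // One atomic round trip: match the product and pull the image in the same update
    Product.findByIdAndUpdate(productId, { $pull: { images: imgId } })
        .then(product => {
            if (!product) {
                // findByIdAndUpdate resolves with null when no document matched the id
                return res.status(404).json({ message: 'Product not found' });
            }
            return res.status(200).json({ message: 'Image deleted' });
        })
        .catch(err => {
            console.log(err);
            return res.status(500).json({ message: 'Deleting image failed' });
        });
};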
Am I doing this correctly in the backend API? How would you delete an object inside an array within a parent array in the backend? I first found the main parent array index and then I found the object from the tasks array using .tasks[index]. The question is how would I delete this in Node? Tutorials I found use req.params.id to delete an item, but mine is more complicated.
exports.deleteTaskItem = async (req, res) => {
    const taskindex = req.params.id;
    const index = req.params.index;
    try {
        const taskfound = await Task.findById(taskindex);
        const taskfounditem = await taskfound.tasks[index];
        // code to type here
        res.status(204).json({
            status: "success",
            data: null
        });
    } catch (err) {
        res.status(404).json({
            status: "fail",
            message: err
        });
    }
};
I believe this piece of documentation would interest you:
https://docs.mongodb.com/manual/reference/operator/update-array/
And to specify, I believe you want to use the $pull operator.
Something like this:
const { id, index } = req.params;
await Task.findByIdAndUpdate(id, {
    $pull: {
        tasks: { _id: index }
    }
});
(Disclaimer: I did not test this out this time, sorry. But it should be close.)
edit: Now that I reread the question, I notice that you want to use the index. Personally I think it'd be easier to just use ids, since you get them automatically if you use a sub-document. But if you insist on using the index, maybe this answer can help:
https://stackoverflow.com/a/4970050/1497533
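For context, here is a minimal sketch (the schema and field names are made up for the example, not taken from the question) of what "just use ids" looks like: when tasks is declared as an array of sub-documents, Mongoose gives each element its own _id automatically, which can then be used with $pull instead of an array index.
const mongoose = require('mongoose');

// Every element pushed into `tasks` is a sub-document and therefore
// receives an automatically generated _id.
const taskSchema = new mongoose.Schema({
    name: String,
    tasks: [
        {
            title: String,
            done: Boolean
        }
    ]
});

module.exports = mongoose.model('Task', taskSchema);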
edit again:
It seems this helped in getting a working solution, so I'll copy it in from the comments:
taskfound.tasks.splice(taskindex, 1);
taskfound.markModified('tasks');
await taskfound.save();
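Put back into the handler from the question, the index-based variant would look roughly like this (a sketch, untested; note that what gets spliced out is the array position from req.params.index):
exports.deleteTaskItem = async (req, res) => {
    const taskindex = req.params.id;  // id of the parent Task document
    const index = req.params.index;   // position of the item inside the tasks array
    try {
        const taskfound = await Task.findById(taskindex);
        // remove one element at the given position
        taskfound.tasks.splice(index, 1);
        // tell Mongoose the array was modified in place so the change is persisted
        taskfound.markModified('tasks');
        await taskfound.save();
        res.status(204).json({
            status: "success",
            data: null
        });
    } catch (err) {
        res.status(404).json({
            status: "fail",
            message: err
        });
    }
};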
I'd like to update some data in Mongoose by using an array value that I found before.
Company.findById(id_company, function (err, company) {
    if (err) {
        console.log(err);
        return res.status(500).send({ message: "Error, check the console. (Update Company)" });
    }
    const Students = company.students;
    User.find({ '_id': { "$in": Students } }, function (err, users) {
        console.log(Students);
        // WANTED QUERY: Update company = null from Users where _id = Students[];
    });
});
Students returns the users' _id values in an array, and I use that to find the user objects. I then want to set a field named "company" inside those user objects to null. How can I do that? Thank you.
From what you posted (I took the liberty of using Promises, but you can roughly achieve the same thing with callbacks), you can do something like:
User.find({ '_id': { "$in": Students } })
    .then(users => {
        return Promise.all(users.map(user => {
            user.company = null;
            return user.save();
        }));
    })
    .then(() => {
        console.log("yay");
    })
    .catch(e => {
        console.log("failed");
    });
Basically, what I'm doing here is making sure all the user models returned by the .find() call are saved properly, by checking the Promise returned from .save() for each of them.
If one of these fails for some reason, Promise.all() returns a rejection you can catch afterwards.
However, in this case each item is mapped to its own query to your database, which is not good. A better strategy would be to use Model.update(), which achieves the same thing with, intrinsically, fewer database queries.
User.update({
    '_id': { "$in": Students }
}, {
    'company': <Whatever you want>
})
.then()
Use .update but make sure you pass the option {multi: true}, something like:
User.update(query, { company: null }, { multi: true }, function (err, result) { ... });
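As a side note, Model.update() is deprecated in recent Mongoose releases. A minimal sketch of the same operation with updateMany() (assuming a current Mongoose version) would be:
// updateMany applies the change to every user whose _id is in Students,
// so no { multi: true } option is needed
User.updateMany(
    { _id: { $in: Students } },
    { $set: { company: null } }
)
    .then(result => console.log(result))
    .catch(err => console.log(err));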
When I run the code
var collection = db.get('categories');
console.log(collection.find().limit(1).sort( { _id : -1 } ));
in Node.js using MongoDB, I am getting the error "Object # has no method 'limit'". I am a beginner to Node and really stuck on this section.
Here is the full code for getting the last inserted document.
router.post('/addcategory', function (req, res) {
    // Set our internal DB variable
    var db = req.db;
    // Get our form values. These rely on the "name" attributes
    var name = req.body.name;
    var description = req.body.description;
    // Set our collection
    var collection = db.get('categories');
    // Submit to the DB
    collection.insert({
        "name": name,
        "description": description,
    }, function (err, doc) {
        if (err) {
            // If it failed, return error
            res.send("There was a problem adding the information to the database.");
        }
        else {
            // And forward to success page
            /******************/
            console.log(collection.find().limit(1).sort({ _id: -1 }));
            /*************/
        }
    });
});
The key piece of missing information here was that you are using Monk, not the native MongoDB Node.js driver. The command you have for find() is how you would use the native driver (with the changes suggested by @BlakesSeven above for asynchrony), but Monk works a little bit differently.
Try this instead:
collection.find({}, { limit: 1, sort: { _id: -1 } }, function (err, res) {
    console.log(res);
});
The method is still asynchronous, so you still need to invoke it either as a promise with .then() or with a callback. No methods are synchronous and return results in-line.
Also, the result returned from the driver is a "Cursor" and not the object(s) you expect. You either iterate the returned cursor or just use .toArray() or similar to convert it:
collection.find().limit(1).sort({ "_id": -1 }).toArray().then(function(docs) {
console.log(docs[0]);
});
Or:
collection.find().limit(1).sort({ "_id": -1 }).toArray(function(err,docs) {
console.log(docs[0]);
});
But really the whole premise is not correct. You seem to basically want to return what you just inserted. Even with the correction in your code, the returned document is not necessarily the one you just inserted, but rather the last one inserted into the collection, which could have come from another operation or another call to this route from another source.
If you want back what you inserted, then rather call the .insertOne() method and inspect the result:
collection.insertOne({ "name": name, "description": description }, function (err, result) {
    if (err) {
        res.send("There was an error");
    } else {
        console.log(result.ops);
    }
});
The .insert() method is considered deprecated, but basically returns the same thing. The consideration is that it returns an insertWriteOpResult object where the ops property contains the document(s) inserted and their _id value(s).
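For completeness, here is a rough sketch (untested, assuming the 2.x/3.x driver-style result described above; the '/categories' redirect path is made up for the example) of the route responding with the document it just wrote instead of querying for it again:
collection.insertOne({ "name": name, "description": description }, function (err, result) {
    if (err) {
        return res.send("There was a problem adding the information to the database.");
    }
    // result.ops[0] is the document that was just written, including its generated _id
    console.log(result.ops[0]);
    res.redirect('/categories');
});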
How can I atomically get the latest "rounds" record ObjectId and use that when inserting to the "deposits" collection?
This post answer says it can't be done: Is there any way to atomically update two collections in MongoDB?
Is this still true?
In process A, I want to atomically FIND the latest round id (rid) and INSERT that into deposits. The race condition is that after A finds rid, another process B might insert into rounds, so now A has an rid that isn't the latest, but is 1 behind. How can A find rid in rounds and insert this rid into deposits (act on these 2 collections) atomically?
// GET ROUND ID (RID) OF LATEST
var rid;
db.rounds.find().limit(1).sort({ $natural: -1 }, function (err, latestInsertedRound) {
    rid = latestInsertedRound[0]._id;
    print(rid, 'rid'); // if another process B inserts now, this rid is 1 behind
    // INSERT INTO DEPOSITS
    db.deposits.insert({ uid: uid, iid: iid, rid: rid }, function (err, insertedDeposit) {
        print(insertedDeposit, 'insertedDeposit');
    });
});
Inserting a document in Mongodb has a callback function that can be used. This callback function has a second parameter which returns the document inserted.
I tried printing the second parameter using console.log. It looks like:
{ result: { ok: 1, n: 1 },
  ops:
   [ { username: 'user1',
       password: 'password1',
       _id: 562099bae1872f58b3a22aed } ],
  insertedCount: 1,
  insertedIds: [ 562099bae1872f58b3a22aed ] }
insertedIds is the array that holds the _ids of the inserted document or documents.
So you can insert your object into the second collection in the callback of the insert into the first collection. A little confusing.
In simple terms: insert the document into the first collection; in its callback, insert the document into the second collection.
MongoClient.connect(MONGOLAB_URI, function (err, db) {
    if (err)
        console.log("We have some error : " + err);
    else {
        db.createCollection('rounds', function (err, rounds) {
            if (err) {
                console.log('Error while creating rounds collection');
                throw err;
            }
            else {
                rounds.insert({ 'username': 'user1', 'password': 'password1' }, function (err, docsInserted) {
                    console.log('Last document inserted id :', docsInserted.insertedIds[0]);
                    // inserting the document in the function callback
                    db.createCollection('deposits', function (err, deposits) {
                        if (err) {
                            console.log('Error while creating deposits collection');
                            throw err;
                        }
                        else {
                            // change array index according to your need
                            // you may be inserting multiple objects simultaneously
                            deposits.insert({ 'last_inserted_object': docsInserted.insertedIds[0] });
                            console.log('inserted into deposits collection');
                        }
                    });
                });
            }
        });
    }
});
It seems it's not possible to operate atomically on 2 collections in MongoDB, as explained in this answer:
Is there any way to atomically update two collections in MongoDB?
I leave the question up because it has a slightly different focus (not 2 updates, but find+insert).