In my application, I have a MongoDB document that will be updated with an $inc operation to increase/decrease the numbers in the appliesTo object. Here is a sample document:
{
name: "test-document",
appliesTo: {
profiles: {
Profile1: 3,
Profile2: 1
},
tags: {
Tag1: 7,
Tag2: 1
}
}
}
After running the following command
await db.items.updateOne({name: "test-document"}, {$inc: {'appliesTo.profiles.Profile2': -1}})
my document will be changed to
{
name: "test-document",
appliesTo: {
profiles: {
Profile1: 3,
Profile2: 0
},
tags: {
Tag1: 7,
Tag2: 1
}
}
}
I'm struggling to write a query that will remove all keys whose values are 0. The only solution I currently have is to iterate over each key and update it with an $unset command. But this is not an atomic operation.
Is there a smarter way to handle it in one query?
There is no way to do both operations in a single regular update query, but you can try an update with an aggregation pipeline, available starting from MongoDB 4.2:
Use $cond to check whether the field's value is greater than 1; if it is, decrement it with $add, otherwise remove it with the $$REMOVE variable.
await db.items.updateOne(
{ name: "test-document" },
[{
$set: {
"appliesTo.profiles.Profile2": {
$cond: [
{ $gt: ["$appliesTo.profiles.Profile2", 1] },
{ $add: ["$appliesTo.profiles.Profile2", -1] },
"$$REMOVE"
]
}
}
}]
)
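If the goal is to sweep out every zero-valued key under appliesTo.profiles rather than just the one you decremented, a pipeline stage built from $objectToArray, $filter and $arrayToObject can do that too. This is only a sketch, under the assumption that all counters live directly under appliesTo.profiles:
// Sketch: rebuild appliesTo.profiles without its zero-valued keys.
await db.items.updateOne(
  { name: "test-document" },
  [{
    $set: {
      "appliesTo.profiles": {
        $arrayToObject: {
          $filter: {
            input: { $objectToArray: "$appliesTo.profiles" }, // [{k, v}, ...]
            cond: { $ne: ["$$this.v", 0] }                    // keep non-zero counters only
          }
        }
      }
    }
  }]
)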
I'm new to MongoDB and getting to grips with its syntax and capabilities. To achieve the functionality described in the title, I believe I can create a promise that runs two simultaneous queries on the document: one to get the full content of one item in the array (or at least the data that is omitted in the other query, to re-add afterwards), searched for by most recent date, and the other to return the array minus specific properties. I have the following document:
{
_id : ObjectId('5rtgwr6gsrtbsr6hsfbsr6bdrfyb'),
uuid : 'something',
mainArray : [
{
id : 1,
title: 'A',
date: 05/06/2020,
array: ['lots','off','stuff']
},
{
id : 2,
title: 'B',
date: 28/05/2020,
array: ['even','more','stuff']
},
{
id : 3,
title: 'C',
date: 27/05/2020,
array: ['mountains','of','knowledge']
}
]
}
and I would like to return
{
uuid : 'something',
mainArray : [
{
id : 1,
title: 'A',
date: 05/06/2020,
array: ['lots','off','stuff']
},
{
id : 2,
title: 'B'
},
{
id : 3,
title: 'C'
}
]
}
How valid and performant is the promise approach versus constructing one query that would achieve this? I have no idea how to perform such 'combined-rule' conditions in MongoDB; could anyone give an example?
If the subdocument array field you want to omit is not very large, I would just remove it on the application side. Doing the processing in MongoDB means you choose to use MongoDB's compute resources instead of your application's. Generally your application is easier and cheaper to scale, so implementing this at the application layer is preferable.
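For illustration, a rough sketch of that application-side variant (field names follow the sample document; it assumes the element to keep in full is already first in the array, as in the aggregation below):
// Fetch the document, then strip `array` from every element except the first.
const doc = await db.collection('collection').findOne({ uuid: 'something' });
doc.mainArray = doc.mainArray.map((item, index) =>
  index === 0 ? item : { id: item.id, title: item.title }
);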
But in this exact case it's not too complex to implement it in MongoDB:
db.collection.aggregate([
{
$addFields: { // keep the first element somewhere
first: { $arrayElemAt: [ "$mainArray", 0] }
}
},
{
$project: { // remove the subdocument field
"mainArray.array": false
}
},
{
$addFields: { // join the first element with the rest of the transformed array
mainArray: {
$concatArrays: [
[ // first element
"$first"
],
{ // select elements from the transformed array except the first
$slice: ["$mainArray", 1, { $size: "$mainArray" }]
}
]
}
}
},
{
$project: { // remove the temporary first element
"first": false
}
}
])
I am learning MongoDB to use with Node.js. Below is my code:
let configMap = {
'1': 10,
'2': 12,
'3': 13
}
let myData = await mongo_groups_collection.find({
_id: {
$in: [1,2,3]
},
active: true
}).project({
'_id': 1,
'name': 1,
'members': 1,
'messages': {
$slice: [-(configMap[_id]), (DEFAULT_PAGE_SIZE_FILL * PAGE_SIZE) + (PAGE_SIZE - ((configMap[_id]) % PAGE_SIZE))] //Need to use _id to get configMap value
}
}).toArray();
I am using the "configMap" Node.js variable in the Mongo $slice projection to retrieve a certain number of elements from the array. To do the lookup I need the document's "_id" field value, which I don't have access to there, so I get an _id "undefined" error.
That's because (configMap[_id]) is evaluated in your Node.js code, not on the database as part of the query. So when the query is built, _id can't be resolved, since you haven't declared it in your JavaScript scope the way you did the configMap variable.
You're expecting _id to be the actual _id of each document. Writing (configMap[$_id]) doesn't work either, because you can't combine a JavaScript object with a document's fields.
You can still do this with an aggregate query like the one below, where we add the needed v value as a field to each document for further use:
/** Make `configMap` an array of objects */
var configMap = [
{ k: 1, v: 10 },
{ k: 2, v: 12 },
{ k: 3, v: 13 },
];
let myData = await mongo_groups_collection.aggregate([
{
$match: { _id: { $in: [1, 2, 3] }, active: true }
},
/** For each matched doc we'll iterate on input `configMap` array &
* find respective `v` value from matched object where `k == _id` */
{
$addFields: {
valueToBeSliced: {
$let: {
vars: { matchedK: { $arrayElemAt: [ { $filter: { input: configMap, cond: { $eq: ["$$this.k", "$_id"] } } }, 0 ] } },
in: "$$matchedK.v",
}
}
}
},
{
$project: {
messages: { $slice: ["$messages", { $subtract: [0, "$valueToBeSliced"] }, someInputValue ] },
name: 1, members: 1
}
}
]).toArray();
Ref: $let
Note:
In the projection you don't need to mention '_id': 1, as it's included by default.
Since your query uses .find(), you might be using the $slice projection operator, which can't be used in aggregation; instead we need the $slice aggregation operator. Both do the same thing but the syntax differs, so refer to the docs.
Also, for someInputValue you need to pass in the value of (DEFAULT_PAGE_SIZE_FILL * PAGE_SIZE) + (PAGE_SIZE - ((configMap[_id]) % PAGE_SIZE)), so build that expression with aggregation operators such as $multiply, $add, $subtract and $mod (see the sketch below). We do { $subtract: [0, "$valueToBeSliced"] } to turn the positive valueToBeSliced negative, since we can't just write -valueToBeSliced as we would in JavaScript.
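For illustration only, a sketch of how someInputValue might be expanded with aggregation operators; DEFAULT_PAGE_SIZE_FILL and PAGE_SIZE are assumed to be plain JavaScript constants, so the constant part can be computed in code and inlined:
// Hypothetical expansion of someInputValue from the note above.
// Only the _id-dependent part needs aggregation operators, via the
// valueToBeSliced field added by the $addFields stage.
const someInputValue = {
  $add: [
    DEFAULT_PAGE_SIZE_FILL * PAGE_SIZE,                                   // plain JS arithmetic
    { $subtract: [PAGE_SIZE, { $mod: ["$valueToBeSliced", PAGE_SIZE] }] } // per-document part
  ]
};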
The documents in the collection have an array field of sub-documents, each with a counter that should be increased up to three, but if the array doesn't have a sub-document with a given key it should create it with the default values.
The documentation for $addToSet says:
Behavior
$addToSet only ensures that there are no duplicate items added to the set and does not affect existing duplicate elements. $addToSet does not guarantee a particular ordering of elements in the modified set.
Missing Field
If you use $addToSet on a field absent in the document to update, $addToSet creates the array field with the specified value as its element.
The problem is that the array field is not created in the document if it doesn't exist, as stated in the documentation.
This is what I'm currently using to accomplish the operation:
// increase counter in element of array if element exist and counter is less than 3
collection.updateOne({
key_1,
"array.key_2": key_2,
"array.counter": {$lt: 3}
}, {
$inc: {"array.$.counter": 1}
})
.then(res => {
console.log("!!!!1:", res.modifiedCount, res.upsertedId, res.upsertedCount, res.matchedCount);
if (res.matchedCount) return res.matchedCount;
// create element in array with the default values,
// also create the array field if it doesn't exist
collection.updateOne({
key_1
}, {
$addToSet: {
array: {key_2, counter: 1}
}
})
.then(res => {
console.log("!!!!2:", res.modifiedCount, res.upsertedId, res.upsertedCount, res.matchedCount);
return res.matchedCount;
})
.catch(e => console.log(e))
})
.catch(e => console.log(e))
Using upsert in the second query creates the array field if it doesn't exist, but once array.counter reaches 3, subsequent calls to increase its value create a new sub-document in the array with the same values for array.key_2 and array.date, effectively duplicating the entry, although with array.counter set to 1 instead of 3.
I'm using mongo version 4.2.1
Update:
Below is a sample document before running the operation on the second subdocument:
{
  "key_1": 1,
  "array": [
    {
      "key_2": 1,
      "counter": 1
    }, {
      "key_2": 2,
      "counter": 3
    }
  ]
}
This is what I'm getting as a result when using upsert:
{
  "key_1": 1,
  "array": [
    {
      "key_2": 1,
      "counter": 1
    }, {
      "key_2": 2,
      "counter": 3
    }, {
      "key_2": 2,
      "counter": 1
    }
  ]
}
The operation duplicates the second subdocument in array, but if upsert is not used then the array field is not created when it's not already in the parent document, which is the opposite of the expected behavior for $addToSet according to the documentation.
Update 2
These are the steps to reproduce the issue:
Run the operation with key_1 set to 1, and upsert disabled. None of the queries modifies the document. The array field is not created.
{
"key_1": 1
}
Enable upsert and run the operation again. The array field is created in the second query:
{
  "key_1": 1,
  "array": [
    {
      "key_2": 1,
      "counter": 1
    }
  ]
}
Run the operation twice more. The first query modifies the document both times:
{
  "key_1": 1,
  "array": [
    {
      "key_2": 1,
      "counter": 3
    }
  ]
}
Run the operation once more. The first query doesn't modify the document. The second query creates a duplicate:
{
  "key_1": 1,
  "array": [
    {
      "key_2": 1,
      "counter": 3
    }, {
      "key_2": 1,
      "counter": 1
    }
  ]
}
Please try this:
var key_2Value = 2;
var firstFilterQuery = {
key_1: 1,
array: {
$elemMatch: {
"key_2": key_2Value,
"date": 'someDate',
"conter": { $lte: 3 }
}
}
}
var secondFilterQuery = {
key_1: 1,
"array.key_2": {$ne: key_2Value}
}
var defaultDoc = {key_2 : key_2Value, "date": 'someDefaultDate',counter: 1}
Query:
collection.bulkWrite([
  {
    updateOne: {
      "filter": firstFilterQuery,
      "update": { $inc: { "array.$.counter": 1 } }
    }
  },
  {
    updateOne: {
      "filter": secondFilterQuery,
      "update": { $push: { array: defaultDoc } }
    }
  }
])
With the above query you can achieve what you want in one DB call (in any given case only one updateOne should actually update the document). The output should look something like:
{
"acknowledged" : true,
"deletedCount" : 0.0,
"insertedCount" : 0.0,
"matchedCount" : 1.0,
"upsertedCount" : 0.0,
"insertedIds" : {},
"upsertedIds" : {}
}
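If you want to verify in code which branch fired, something like the following sketch could work with the Node.js driver (the counter field names assume a reasonably recent driver version):
// Sketch: inspect the BulkWriteResult to see whether anything was updated.
const res = await collection.bulkWrite([
  { updateOne: { filter: firstFilterQuery, update: { $inc: { "array.$.counter": 1 } } } },
  { updateOne: { filter: secondFilterQuery, update: { $push: { array: defaultDoc } } } }
]);
if (res.modifiedCount === 0) {
  // Neither branch matched: the counter is already at its limit
  // and a sub-document with that key_2 already exists.
}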
I have documents that look something like this, with a unique index on bars.name:
{ name: 'foo', bars: [ { name: 'qux', somefield: 1 } ] }
I want to either update the sub-document where { name: 'foo', 'bars.name': 'qux' } and $set: { 'bars.$.somefield': 2 }, or create a new sub-document with { name: 'qux', somefield: 2 } under { name: 'foo' }.
Is it possible to do this using a single query with upsert, or will I have to issue two separate ones?
Related: 'upsert' in an embedded document (suggests changing the schema to have the sub-document identifier as the key, but this is from two years ago and I'm wondering if there are better solutions now).
No, there isn't really a better solution to this, so perhaps an explanation will help.
Suppose you have a document in place that has the structure as you show:
{
"name": "foo",
"bars": [{
"name": "qux",
"somefield": 1
}]
}
If you do an update like this
db.foo.update(
{ "name": "foo", "bars.name": "qux" },
{ "$set": { "bars.$.somefield": 2 } },
{ "upsert": true }
)
Then all is fine because matching document was found. But if you change the value of "bars.name":
db.foo.update(
{ "name": "foo", "bars.name": "xyz" },
{ "$set": { "bars.$.somefield": 2 } },
{ "upsert": true }
)
Then you will get a failure. The only thing that has really changed here is that in MongoDB 2.6 and above the error is a little more succinct:
WriteResult({
"nMatched" : 0,
"nUpserted" : 0,
"nModified" : 0,
"writeError" : {
"code" : 16836,
"errmsg" : "The positional operator did not find the match needed from the query. Unexpanded update: bars.$.somefield"
}
})
That is better in some ways, but you really do not want to "upsert" anyway. What you want to do is add the element to the array where the "name" does not currently exist.
So what you really want is the "result" from the update attempt without the "upsert" flag to see if any documents were affected:
db.foo.update(
{ "name": "foo", "bars.name": "xyz" },
{ "$set": { "bars.$.somefield": 2 } }
)
Yielding in response:
WriteResult({ "nMatched" : 0, "nUpserted" : 0, "nModified" : 0 })
So when the count of modified documents is 0, you know you need to issue the following update:
db.foo.update(
    { "name": "foo" },
    { "$push": { "bars": {
        "name": "xyz",
        "somefield": 2
    }}}
)
There really is no other way to do exactly what you want. Since additions to the array are not strictly a "set" type of operation, you cannot use $addToSet combined with the "bulk update" functionality to "cascade" your update requests.
In this case it seems like you need to check the result, as sketched below, or otherwise accept reading the whole document and deciding in code whether to update or insert a new array element.
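A minimal Node.js sketch of that check-then-push flow, using the same field names as the example above (note this is two round trips and is not atomic):
// Try to update the existing array element first.
const res = await db.collection('foo').updateOne(
  { name: 'foo', 'bars.name': 'xyz' },
  { $set: { 'bars.$.somefield': 2 } }
);
// If nothing matched, push a new element instead.
if (res.matchedCount === 0) {
  await db.collection('foo').updateOne(
    { name: 'foo' },
    { $push: { bars: { name: 'xyz', somefield: 2 } } }
  );
}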
If you don't mind changing the schema a bit and having a structure like so:
{ "name": "foo", "bars": { "qux": { "somefield": 1 },
"xyz": { "somefield": 2 },
}
}
You can perform your operations in one go.
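With that shape, a single upsert using dot notation is enough; a sketch (with "xyz" standing in for the sub-document key):
// One statement: creates the document, the bars object, or just the nested
// field, whichever is missing.
db.foo.update(
  { "name": "foo" },
  { "$set": { "bars.xyz.somefield": 2 } },
  { "upsert": true }
)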
Reiterating 'upsert' in an embedded document for completeness
I was digging for the same feature, and found that in version 4.2 or above, MongoDB provides a new feature called Update with aggregation pipeline.
This feature, used together with some other techniques, makes it possible to achieve an upsert of a subdocument with a single query.
It's a very verbose query, but I believe it's viable if you know you won't have too many records in the subcollection. Here's an example of how to achieve it:
const documentQuery = { _id: '123' }
const subDocumentToUpsert = { name: 'xyz', id: '1' }
collection.update(documentQuery, [
{
$set: {
sub_documents: {
$cond: {
if: { $not: ['$sub_documents'] },
then: [subDocumentToUpsert],
else: {
$cond: {
if: { $in: [subDocumentToUpsert.id, '$sub_documents.id'] },
then: {
$map: {
input: '$sub_documents',
as: 'sub_document',
in: {
$cond: {
if: { $eq: ['$$sub_document.id', subDocumentToUpsert.id] },
then: subDocumentToUpsert,
else: '$$sub_document',
},
},
},
},
else: { $concatArrays: ['$sub_documents', [subDocumentToUpsert]] },
},
},
},
},
},
},
])
There's a way to do it in two queries - but it will still work in a bulkWrite.
This is relevant because in my case not being able to batch it is the biggest hangup. With this solution, you don't need to collect the result of the first query, which allows you to do bulk operations if you need to.
Here are the two successive queries to run for your example:
// Update subdocument if existing
collection.updateMany({
name: 'foo', 'bars.name': 'qux'
}, {
$set: {
'bars.$.somefield': 2
}
})
// Insert subdocument otherwise
collection.updateMany({
name: 'foo', 'bars.name': { $ne: 'qux' }
}, {
$push: {
bars: {
somefield: 2, name: 'qux'
}
}
})
This also has the added benefit of not having corrupted data / race conditions if multiple applications are writing to the database concurrently. You won't risk ending up with two bars: {somefield: 2, name: 'qux'} subdocuments in your document if two applications run the same queries at the same time.
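For completeness, a sketch of how the two updates above could be batched into a single bulkWrite call, so only one round trip is needed:
// Ordered bulk write: the "update existing" operation runs before the "insert" one.
await collection.bulkWrite([
  {
    updateMany: {
      filter: { name: 'foo', 'bars.name': 'qux' },
      update: { $set: { 'bars.$.somefield': 2 } }
    }
  },
  {
    updateMany: {
      filter: { name: 'foo', 'bars.name': { $ne: 'qux' } },
      update: { $push: { bars: { somefield: 2, name: 'qux' } } }
    }
  }
]);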
Here is the array structure:
contact: {
phone: [
{
number: "+1786543589455",
place: "New Jersey",
createdAt: ""
},
{
number: "+1986543589455",
place: "Houston",
createdAt: ""
}
]
}
Here I only know the Mongo id (_id) and the phone number (+1786543589455), and I need to remove the whole corresponding array element from the document, i.e. the element of the phone array (index zero here) whose number matches must be removed.
contact: {
phone: [
{
number: "+1986543589455",
place: "Houston",
createdAt: ""
}
]
}
I tried the following update method:
collection.update(
{ _id: id, 'contact.phone': '+1786543589455' },
{ $unset: { 'contact.phone.$.number': '+1786543589455'} }
);
But it removes number: +1786543589455 from the inner array object, not the zero-indexed element of the phone array. I tried $pull as well, without success.
How do I remove the array element in MongoDB?
Try the following query:
collection.update(
{ _id: id },
{ $pull: { 'contact.phone': { number: '+1786543589455' } } }
);
It will find the document with the given _id and remove the phone +1786543589455 from its contact.phone array.
You can use $unset to unset the value in the array (set it to null), but not to remove it completely.
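For reference, the usual two-step workaround when you only know the element's position rather than a matching condition (a sketch; it assumes the element to drop is at index 0):
// $unset by position leaves a null hole in the array...
await collection.updateOne(
  { _id: id },
  { $unset: { 'contact.phone.0': 1 } }
);
// ...and a second update pulls the null out.
await collection.updateOne(
  { _id: id },
  { $pull: { 'contact.phone': null } }
);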
You can simply use $pull to remove a sub-document.
The $pull operator removes from an existing array all instances of a value or values that match a specified condition.
Collection.update({
_id: parentDocumentId
}, {
$pull: {
subDocument: {
_id: SubDocumentId
}
}
});
This will find your parent document by the given ID and then remove the element from subDocument that matches the given criteria.
Read more about pull here.
In Mongoose:
From the documentation:
To remove a document from a subdocument array we may pass an object
with a matching _id.
contact.phone.pull({ _id: itemId }) // remove
contact.phone.pull(itemId); // this also works
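End to end that might look like the sketch below (the Contact model name is an assumption; pull() only changes the in-memory document, so the parent still has to be saved):
// Hypothetical Mongoose usage: load, pull the subdocument, persist.
const doc = await Contact.findById(id);
doc.contact.phone.pull({ _id: itemId });
await doc.save();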
See Leonid Beschastny's answer for the correct solution.
To remove all array elements irrespective of any given id, use this:
collection.update(
{ },
{ $pull: { 'contact.phone': { number: '+1786543589455' } } }
);
To remove all matching array elements from a specific document:
collection.update(
{ _id: id },
{ $pull: { 'contact.phone': { number: '+1786543589455' } } }
);
To remove all matching array elements from all documents:
collection.updateMany(
{ },
{ $pull: { 'contact.phone': { number: '+1786543589455' } } }
);
Given the following document in the profiles collection:
{ _id: 1, votes: [ 3, 5, 6, 7, 7, 8 ] }
The following operation will remove all items from the votes array that are greater than or equal to ($gte) 6:
db.profiles.update( { _id: 1 }, { $pull: { votes: { $gte: 6 } } } )
After the update operation, the document only has values less than 6:
{ _id: 1, votes: [ 3, 5 ] }
If multiple items have the same value, you can use $pullAll instead of $pull. For the question's case, where the same contact number could appear more than once, use this ($pullAll takes an array of exact values to remove):
collection.update(
  { _id: id },
  { $pullAll: { 'contact.phone': [ { number: '+1786543589455', place: 'New Jersey', createdAt: '' } ] } }
);
It will delete every element in contact.phone that exactly matches that value.
Try reading the manual.