How can I update one document in a nested array - node.js

{
  "_id": "5e28b029a0c8263a8a56980a",
  "name": "Recruiter",
  "data": [
    {
      "_id": "5e28b0980f89ba3c0782828f",
      "targetLink": "https://www.linkedin.com/in/dan-kelsall-7aa0926b/",
      "name": "Dan Kelsall",
      "headline": "Content Marketing & Copywriting",
      "actions": [
        {
          "result": 1,
          "name": "VISIT"
        },
        {
          "result": 1,
          "name": "FOLLOW"
        }
      ]
    },
    {
      "_id": "5e28b0980f89ba3c078283426f",
      "targetLink": "https://www.linkedin.com/in/56wergwer/",
      "name": "56wergwer",
      "headline": "asdgawehethre",
      "actions": [
        {
          "result": 1,
          "name": "VISIT"
        }
      ]
    }
  ]
}
Here is one of my MongoDB documents. I'd like to update data -> actions -> result.
So this is what I've done:
Campaign.updateOne({
  'data.targetLink': "https://www.linkedin.com/in/dan-kelsall-7aa0926b/",
  'data.actions.name': "Follow"
}, { $set: { 'data.$.actions.result': 0 } })
But it doesn't update anything, and it can't even find the document by 'data.actions.name'.

You need the filtered positional operator ($[<identifier>]), since the regular positional operator ($) can only be used for one level of nested arrays:
Campaign.updateOne(
  { "_id": "5e28b029a0c8263a8a56980a", "data.targetLink": "https://www.linkedin.com/in/dan-kelsall-7aa0926b/" },
  { $set: { "data.$.actions.$[action].result": 0 } },
  { arrayFilters: [ { "action.name": "Follow" } ] }
)
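As a minimal usage sketch (assuming Campaign is an existing Mongoose model), you can await the call and inspect the write result to see whether anything was matched. Note that both the query and the array filter compare strings case-sensitively, so the value has to match the stored "FOLLOW"/"VISIT" strings exactly:

// Sketch only: Campaign is assumed to be an already-defined Mongoose model.
const res = await Campaign.updateOne(
  { "_id": "5e28b029a0c8263a8a56980a", "data.targetLink": "https://www.linkedin.com/in/dan-kelsall-7aa0926b/" },
  { $set: { "data.$.actions.$[action].result": 0 } },
  { arrayFilters: [{ "action.name": "FOLLOW" }] } // the sample document stores "FOLLOW" in upper case
);
// Depending on the Mongoose/driver version the result exposes matchedCount/modifiedCount (or n/nModified);
// zero there means the query or the array filter did not match anything.
console.log(res);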

Related

How to make a query return all values in the array that match the criteria in Mongoose?

I want to return all values in the array that match the criteria, but when I query, it returns only one result.
This is the data in my DB:
[
  {
    "channel": "a",
    "video": [
      { "name": 1, "status": '' },
      { "name": 2, "status": 'err' },
      { "name": 3, "status": 'err' }
    ]
  },
  {
    "channel": "b",
    "video": [
      { "name": 4, "status": 'err' },
      { "name": 5, "status": 'err' },
      { "name": 6, "status": '' }
    ]
  }
]
I want to get a result like this:
[
  {
    "channel": "a",
    "video": [
      { "name": 2, "status": 'err' },
      { "name": 3, "status": 'err' }
    ]
  },
  {
    "channel": "b",
    "video": [
      { "name": 4, "status": 'err' },
      { "name": 5, "status": 'err' }
    ]
  }
]
But when I use my code:
var errData = await DB.find({
  'video.status': { $in: 'err' }
},
{
  'channel': true,
  'video': {
    $elemMatch: {
      'status': { $in: 'err' }
    }
  }
})
it returns this:
[
  {
    "channel": "a",
    "video": [
      { "name": 2, "status": 'err' }
    ]
  },
  {
    "channel": "b",
    "video": [
      { "name": 4, "status": 'err' }
    ]
  }
]
How can I fix this in Mongoose (without using aggregate)?
If it is impossible without an aggregation, how can I do it with one?
Would you please help me?
As the docs explain:
The $elemMatch operator limits the contents of a field from the query results to contain only the first element matching the $elemMatch condition.
That's why you get only one element for each document.
Also, it seems (look here) that this is not possible without aggregation.
Using aggregation, you can use $filter in this way:
db.collection.aggregate([
  {
    "$set": {
      "video": {
        "$filter": {
          "input": "$video",
          "as": "v",
          "cond": { "$eq": ["$$v.status", "err"] }
        }
      }
    }
  }
])
Example here
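Since the question asks for Mongoose specifically, here is a minimal sketch of running the same pipeline through the model from the question (the $set stage needs MongoDB 4.2+; on older servers $addFields behaves the same way):

var errData = await DB.aggregate([
  // keep only channels that have at least one failing video
  { $match: { "video.status": "err" } },
  {
    $set: {
      video: {
        $filter: {
          input: "$video",
          as: "v",
          cond: { $eq: ["$$v.status", "err"] }   // keep only the videos whose status is 'err'
        }
      }
    }
  }
]);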

Mongoose aggregate group items by tag

I'm trying to group items by tag, but I don't quite get what I want.
There is a schema Item (_id, name, tags) where tags is an array of tag ids (referencing the Tag schema):
Item = new Schema({
  id: Schema.ObjectId,
  name: String,
  tags: [{ type: Schema.ObjectId, ref: 'Tag' }]
});

Tag = new Schema({
  id: Schema.ObjectId,
  name: String
});
This is what I have done so far:
Item.aggregate([
  {
    "$lookup": {
      "from": Tag.collection.name,
      "let": { "tags": "$tags" },
      "pipeline": [
        {
          "$match": {
            "$expr": { "$in": ['$_id', '$$tags'] }
          }
        },
        {
          "$project": {
            "_id": 1,
            "name": 1
          }
        }
      ],
      "as": 'tags'
    }
  },
  {
    "$group": {
      "_id": "$tags._id",
      "items": {
        "$push": {
          "id": "$_id",
          "name": "$name"
        }
      }
    }
  }
]);
This is what I get
{
  "data": [
    {
      "_id": [
        "5eb95e8dcae79713f1de0a27"
      ],
      "items": [
        {
          "id": "5eb95e9fcae79713f1de0a28",
          "name": "My Item 1"
        }
      ]
    },
    {
      "_id": [
        "5eb9564dc4317411fe79e1bf"
      ],
      "items": [
        {
          "id": "5eb95b1430f138131ed90f4f",
          "name": "My Item 2"
        },
        {
          "id": "5eb95ed0cae79713f1de0a29",
          "name": "My Item 3"
        }
      ]
    }
  ]
}
I would like to get the name of each tag, not only the _id, and preferably not within an array. Below is the result I would like to receive:
{
  "data": [
    {
      "_id": "5eb95e8dcae79713f1de0a27",
      "name": "My Tag name 1",
      "items": [
        {
          "id": "5eb95e9fcae79713f1de0a28",
          "name": "My Item 1"
        }
      ]
    },
    {
      "_id": "5eb9564dc4317411fe79e1bf",
      "name": "My Tag name 2",
      "items": [
        {
          "id": "5eb95b1430f138131ed90f4f",
          "name": "My Item 2"
        },
        {
          "id": "5eb95ed0cae79713f1de0a29",
          "name": "My Item 3"
        }
      ]
    }
  ]
}
What am I doing wrong?
One way to do this is to add the tag name to your $group stage:
...
{
  "$group": {
    "_id": "$tags._id",
    "tagName": { $first: "$tags.name" },
    "items": {
      "$push": {
        "id": "$_id",
        "name": "$name"
      }
    }
  }
}
...
Note that this will take the first matching entry for each group. Another option would be to use $replaceRoot and the $mergeObjects operator, like it was done here.
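If an item can carry several tags and you want one group per tag, with _id and name as plain values instead of arrays, a sketch that adds an $unwind between the $lookup and the $group (reusing the stages from the question) could look like this:

Item.aggregate([
  {
    "$lookup": {
      "from": Tag.collection.name,
      "let": { "tags": "$tags" },
      "pipeline": [
        { "$match": { "$expr": { "$in": ["$_id", "$$tags"] } } },
        { "$project": { "_id": 1, "name": 1 } }
      ],
      "as": "tags"
    }
  },
  // flatten the joined tags so every item/tag pair becomes its own document
  { "$unwind": "$tags" },
  {
    "$group": {
      "_id": "$tags._id",                       // a single id, no longer wrapped in an array
      "name": { "$first": "$tags.name" },
      "items": { "$push": { "id": "$_id", "name": "$name" } }
    }
  }
]);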

Speeding up Cloudant query for type text index

We have a table with this type of structure:
{ _id: 15_0, createdAt: 1/1/1, task_id: [16_0, 17_0, 18_0], table: "details", a: b, c: d, more }
We created indexes using
{
  "index": {},
  "name": "paginationQueryIndex",
  "type": "text"
}
It auto-created:
{
  "ddoc": "_design/28e8db44a5a0862xxx",
  "name": "paginationQueryIndex",
  "type": "text",
  "def": {
    "default_analyzer": "keyword",
    "default_field": {},
    "selector": {},
    "fields": [],
    "index_array_lengths": true
  }
}
We are using the following query
{
  "selector": {
    "createdAt": { "$gt": 0 },
    "task_id": { "$in": [ "18_0" ] },
    "table": "details"
  },
  "sort": [ { "createdAt": "desc" } ],
  "limit": 20
}
It takes 700-800 ms the first time; after that it decreases to 500-600 ms.
Why does it take longer the first time?
Is there any way to speed up the query?
Is there any way to add indexes for specific fields when the type is "text" (instead of indexing all the fields in these records)?
You could try creating the index more explicitly, defining the type of each field you wish to index, e.g.:
{
  "index": {
    "fields": [
      { "name": "createdAt", "type": "string" },
      { "name": "task_id", "type": "string" },
      { "name": "table", "type": "string" }
    ]
  },
  "name": "myindex",
  "type": "text"
}
Then your query becomes:
{
  "selector": {
    "createdAt": { "$gt": "1970/01/01" },
    "task_id": { "$in": [ "18_0" ] },
    "table": "details"
  },
  "sort": [ { "createdAt": "desc" } ],
  "limit": 20
}
Notice that I used strings where the data type is a string.
If you're interested in performance, try removing clauses from your query one at a time to see if one of them is causing the performance problem. You can also look at the explanation of your query to see if it is using your index correctly.
Documentation on creating an explicit text query index is here
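As a sketch (the account and database names are placeholders), the same selector can be posted to the _explain endpoint, which reports which index the query planner would pick without actually running the query:

POST https://xyz.cloudant.com/db/_explain
request body:
{
  "selector": {
    "createdAt": { "$gt": "1970/01/01" },
    "task_id": { "$in": [ "18_0" ] },
    "table": "details"
  },
  "sort": [ { "createdAt": "desc" } ],
  "limit": 20
}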

Keeping nested arrays but pulling out all its doubly nested arrays in MongoDB [duplicate]

This question already has answers here:
How to Update Multiple Array Elements in mongodb
(16 answers)
Updating a Nested Array with MongoDB
(2 answers)
Closed 5 years ago.
Building a Node.js app, I'm trying to pull all doubly nested records from a Mongo database. The attempts I've made only removed one doubly nested record, or all nested records. As in the example data below, I've been trying to remove all tickets that have the same keyId. I've reduced the example, but within the tickets arrays there might be other elements with the same structure and different keyIds that shouldn't be removed. I've looked at this question, but it only refers to removing one record of a doubly nested array, not all of them at once.
[
  {
    "_id": "59fe54098448d822f89a7e62",
    "ownerId": "59b23449b20b7c1838eee1a3",
    "name": "Home",
    "keys": [
      {
        "id": "6d7435625564594f4a563554796c6a77",
        "name": "Front Door"
      }
    ],
    "grants": [
      {
        "id": "307658775634774a556b677650347072",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": [
          {
            "keyId": "6d7435625564594f4a563554796c6a77",
            "iv": "b7090268bdaf9ab55270e133b5629e28"
          }
        ]
      },
      {
        "id": "37703369365765485763484a4358774d",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": [
          {
            "keyId": "6d7435625564594f4a563554796c6a77",
            "iv": "d2e2de0f9387c5d9b16424e8ac66a3c1"
          }
        ]
      },
      {
        "id": "3451483977564d755278397939593633",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": [
          {
            "keyId": "6d7435625564594f4a563554796c6a77",
            "iv": "582ff50ac3d337c62eb53094470e3161"
          }
        ]
      },
      {
        "id": "7059684f4e42754d55456e726b35664e",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": [
          {
            "keyId": "6d7435625564594f4a563554796c6a77",
            "iv": "b110ee5cb5da8941cc8ad6e1c3fe501c"
          }
        ]
      }
    ]
  }
]
After removing all tickets with keyId=6d7435625564594f4a563554796c6a77 the intended data should look like this:
[
  {
    "_id": "59fe54098448d822f89a7e62",
    "ownerId": "59b23449b20b7c1838eee1a3",
    "name": "Home",
    "keys": [
      {
        "id": "6d7435625564594f4a563554796c6a77",
        "name": "Front Door"
      }
    ],
    "grants": [
      {
        "id": "307658775634774a556b677650347072",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": []
      },
      {
        "id": "37703369365765485763484a4358774d",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": []
      },
      {
        "id": "3451483977564d755278397939593633",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": []
      },
      {
        "id": "7059684f4e42754d55456e726b35664e",
        "userId": "59b23449b20b7c1838eee1a3",
        "tickets": []
      }
    ]
  }
]
This code removes all grants at once:
db.places.update({}, {
  $pull: {
    "grants": {
      "tickets": {
        $elemMatch: { "keyId": keyID }
      }
    }
  }
}, { multi: true });
This pulls out just the first ticket, and with "$pullAll" it doesn't do anything:
db.places.findAndModify(
  {
    ownerId: ownerID, "grants.tickets.keyId": keyID
  },
  [ ],
  { $pull: { "grants.$.tickets": { keyId: keyID } } },
  { multi: true },
  next
);
And this throws an error saying: cannot use the part (grants of grants.tickets.$*.keyId) to traverse the element
db.places.update({ "grants.tickets.keyId": keyID }, {
$pull: {
"grants.tickets.$*.keyId": keyID
}
}, { multi: true });
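For reference, the linked duplicates cover the all-positional operator $[] (MongoDB 3.6+), which applies the $pull to every element of grants in a single update; a sketch, assuming ownerID and keyID are defined as in the snippets above:

// Sketch: $[] targets every grants element, pulling the matching tickets from each of them.
db.places.update(
  { ownerId: ownerID, "grants.tickets.keyId": keyID },
  { $pull: { "grants.$[].tickets": { keyId: keyID } } },
  { multi: true }
);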

cloudant searching index by multiple values

Cloudant is returning the error message:
{"error":"invalid_key","reason":"Invalid key use-index for this request."}
whenever I try to query against an index with the combination operator, "$or".
A sample of what my documents look like is:
{
  "_id": "28f240f1bcc2fbd9e1e5174af6905349",
  "_rev": "1-fb9a9150acbecd105f1616aff88c26a8",
  "type": "Feature",
  "properties": {
    "PageName": "A8",
    "PageNumber": 1,
    "Lat": 43.051523,
    "Long": -71.498852
  },
  "geometry": {
    "type": "Polygon",
    "coordinates": [
      [
        [ -71.49978935969642, 43.0508382914137 ],
        [ -71.49978564033566, 43.052210148524 ],
        [ -71.49791499857444, 43.05220740550381 ],
        [ -71.49791875962663, 43.05083554852429 ],
        [ -71.49978935969642, 43.0508382914137 ]
      ]
    ]
  }
}
The index that I created is for the field "properties.PageName", which works fine when I'm just querying for one document, but as soon as I try for multiple ones, I receive the error response quoted at the beginning.
If it helps any, here is the call:
POST https://xyz.cloudant.com/db/_find
request body:
{
  "selector": {
    "$or": [
      { "properties.PageName": "A8" },
      { "properties.PageName": "M30" },
      { "properties.PageName": "AH30" }
    ]
  },
  "use-index": "pagename-index"
}
In order to perform an $or query, you need to create a text (full text) index rather than a json index. For example, I just created the following index:
{
  "index": {
    "fields": [
      { "name": "properties.PageName", "type": "string" }
    ]
  },
  "type": "text"
}
I was then able to perform the following query:
{
  "selector": {
    "$or": [
      { "properties.PageName": "A8" },
      { "properties.PageName": "M30" },
      { "properties.PageName": "AH30" }
    ]
  }
}
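For completeness, a sketch of how such a text index can be created (the database URL is a placeholder and the index name "pagename-text-index" is arbitrary): post the definition to the _index endpoint, just as the query is posted to _find:

POST https://xyz.cloudant.com/db/_index
request body:
{
  "index": {
    "fields": [
      { "name": "properties.PageName", "type": "string" }
    ]
  },
  "name": "pagename-text-index",
  "type": "text"
}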
