mongodb won't take $ as index in update query - node.js

I updated my Mongo instance from version 2.4 to 3.4, and all of my update queries that pass $ as the index stopped working.
If I pass a static 0 or 1 in the query it works fine, but the earlier $ syntax no longer works at all.
Below is my query:
db.collection('users').update(
  { "email": "u1#u1.com", "companies": { "$elemMatch": { "id": "1487006991927" } } },
  {
    $set: {
      "companies.$.details": { "company_name": "hey updated" }
    }
  }
);
Response that I get:
{ result: { _t: 'UpdateResponse', ok: 1, n: 1, nModified: 1 } }
This worked perfectly while I was on Mongo version 2.4, but not anymore. I can't always pass a static 0/1 index, so what is the right way to do it?
Also to note: the response says that 1 record was modified, but nothing was actually modified.
{
"_id": "589aa3509a248a3d7a01b784",
"businessAndPersonal": "true",
"companies": [
{
"details": {
"company_name": "afsfhey updated"
},
"locations": [],
"websites": [],
"id": "1487006991927"
},
{
"details": {
"company_name": "hey updated"
},
"locations": [],
"websites": [],
"id": "1487007435955"
}
]
}
Thanks in advance

Answer to my own question
I am using the Cosmos DB MongoDB service, which so far does not support MongoDB's positional operator.
Here is a link to the discussion on positional array update ('$') query support.
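Since the positional operator is unavailable there, one common fallback (a sketch, assuming Node.js and a document that has already been fetched, e.g. via findOne) is to locate the array index in application code and interpolate it into the update path:

```javascript
// Workaround sketch for when the positional $ operator is unsupported:
// find the matching array index client-side and build a concrete path.
const companyId = "1487006991927";

// Document fetched beforehand, e.g. via collection.findOne({ email: ... })
const user = {
  companies: [{ id: "1487006991927" }, { id: "1487007435955" }]
};

const idx = user.companies.findIndex(c => c.id === companyId);
const update = {
  $set: { [`companies.${idx}.details`]: { company_name: "hey updated" } }
};
// update.$set now uses the concrete path "companies.0.details"
```

Note that this read-then-update sequence is not atomic, so it can race with concurrent writers; it is only a fallback where $ is unavailable.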

Related

How can I sort data by an array element in MongoDB without using unwind?

This is my sample data. In it I have a userId and a "watchHistory" array; "watchHistory" contains the list of videos watched by the user:
{
"_id": "62821344445c30b35b441f11",
"userId": 579,
"__v": 0,
"watchHistory": [
{
"seenTime": "2022-05-23T08:29:19.781Z",
"videoId": 789456,
"uploadTime": "2022-03-29T12:33:35.312Z",
"description": "Biography of Indira Gandhi",
"speaker": "andrews",
"title": "Indira Gandhi",
"_id": "628b45df775e3973f3a670ec"
},
{
"seenTime": "2022-05-23T08:29:39.867Z",
"videoId": 789455,
"uploadTime": "2022-03-31T07:37:39.712Z",
"description": "What are some healthy food habits to stay healthy",
"speaker": "morris",
"title": "Healthy Food Habits",
"_id": "628b45f3775e3973f3a670"
}
]
}
I need to match the userId and after that sort by "watchHistory.seenTime"; the seenTime field indicates when the user saw the video. I need to sort so that the last watched video comes first in the list.
I don't have permission to use unwind, so can anyone help me with this? Thank you.
If you are using MongoDB version 5.2 or above, you can use the $sortArray operator in an aggregation pipeline. Your pipeline should look something like this:
db.collection.aggregate([
  { "$match": { _id: "62821344445c30b35b441f11" } },
  {
    "$project": {
      _id: 1,
      "userId": 1,
      "__v": 1,
      "watchHistory": {
        "$sortArray": { input: "$watchHistory", sortBy: { seenTime: -1 } }
      }
    }
  }
]);
Please modify the filter in the "$match" stage according to the key and value you need to filter on. Here's the link to the documentation.
Without using unwind it's not possible to do this in an aggregation pipeline, but as a workaround you can use the update method with the $push operator, like this:
db.collection.update(
  { _id: "62821344445c30b35b441f11" },
  {
    $push: {
      watchHistory: {
        "$each": [],
        "$sort": { seenTime: -1 }
      }
    }
  }
);
Please see the working example here
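For completeness, the effect of both server-side approaches above corresponds to a simple client-side sort after fetching the document — a minimal sketch, using two entries from the sample data:

```javascript
// Client-side equivalent of the server-side sorting above: order the
// watchHistory array descending by seenTime (most recently watched first).
const watchHistory = [
  { videoId: 789456, seenTime: "2022-05-23T08:29:19.781Z" },
  { videoId: 789455, seenTime: "2022-05-23T08:29:39.867Z" }
];

const sorted = [...watchHistory].sort(
  (a, b) => new Date(b.seenTime) - new Date(a.seenTime)
);
// sorted[0] is now the last-watched video (videoId 789455)
```

This is only practical when the array is small enough to fetch whole; the aggregation approaches keep the work on the server.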

Can't use aggregation operator $add to update dates while using Array Filters (MongoDB)

Below is an example of a document in the User collection.
{
"_id" : 1,
"username" : "bob",
"pause" : true,
"pause_date" : ISODate("2021-07-16T07:13:48.680Z"),
"learnt_item" : [
{
"memorized" : false,
"character" : "一",
"next_review" : ISODate("2021-07-20T11:02:44.979Z")
},
{
"memorized" : false,
"character" : "二",
"next_review" : ISODate("2021-07-20T11:02:44.979Z")
},
...
]
}
I need to update all the nested documents in "learnt_item" whose "memorized" field is false.
The updates are:
1. "pause_date" to null
2. "pause" to false
3. Update the ISODate in "next_review" based on the duration that has passed between "pause_date" and the current time.
E.g. if pause_date is 4 hours ago, then I want to add 4 hours to "next_review".
I was able to achieve 1 & 2 using findOneAndUpdate with arrayFilters, and I also tested no. 3 by updating the "next_review" field with the current date to make sure it updates correctly.
User.findOneAndUpdate(
  { "_id": req.user._id },
  { $set: { "learnt_item.$[elem].next_review": DateTime.local(), "pause_date": null, "pause": value } },
  { new: true, arrayFilters: [{ "elem.memorized": false }] }
).exec((err, doc) => {
  if (err) { res.send(err); } else { res.send(doc); }
});
I was thinking of using the $add aggregation operator to increase the date:
"learnt_item.$[elem].next_review": {$add: ["$learnt_item.$[elem].next_review","$pause_date"]}
However, according to the documentation, arrayFilters is not available for updates that use an aggregation pipeline.
Is there another, more efficient way that I can update the ISODate?
If you are running MongoDB 4.2 or later, you can pass a pipeline as the second parameter of the update function. This way you can use the $map operator with $cond to find the entries where the memorized property equals false, and then add 4 days (in milliseconds) to the next_review date:
db.collection.update({
"_id": 1
},
[
{
$set: {
"pause_date": null,
"pause": false,
"learnt_item": {
$map: {
input: "$learnt_item",
as: "item",
in: {
$cond: [
{
$eq: [
"$$item.memorized",
false
]
},
{
memorized: "$$item.memorized",
character: "$$item.character",
next_review: {
$add: [
"$$item.next_review",
345600000
]
}
},
"$$item"
]
}
}
},
}
}
],
{
new: true,
});
You can check a running example here: https://mongoplayground.net/p/oHh1JWiP8vs
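The 345600000 in the pipeline above is a hardcoded 4 days in milliseconds. The question actually asks for the time elapsed since pause_date, which could be computed client-side before building the pipeline — a sketch with illustrative dates (both are assumptions):

```javascript
// Compute the elapsed time since pause_date so it can replace the
// hardcoded 345600000 ms in the $add expression.
const pauseDate = new Date("2021-07-16T07:13:48.680Z");
const now = new Date("2021-07-16T11:13:48.680Z"); // pretend "now" is 4 hours later

const offsetMs = now.getTime() - pauseDate.getTime();
// offsetMs (14400000 here, i.e. 4 hours) goes into the $add expression
```

Alternatively, MongoDB 4.2+ pipeline updates can compute the same difference server-side with { $subtract: ["$$NOW", "$pause_date"] }.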

MongoDB is returning all Results in the find function [duplicate]

This question already has answers here:
Retrieve only the queried element in an object array in MongoDB collection
(18 answers)
Closed 3 years ago.
I'm trying to perform a simple find using MongoDB. I'm new to it, so I don't know what is wrong, but it seems that it always brings back all the results without filtering them.
You can run the code here and see what is happening:
https://mongoplayground.net/
Collection - Paste it here https://mongoplayground.net/ - FIRST Text Area
[
{
"collection": "collection",
"count": 10,
"content": [
{
"_id": "apples",
"qty": 5
},
{
"_id": "bananas",
"qty": 7
},
{
"_id": "oranges",
"qty": {
"in stock": 8,
"ordered": 12
}
},
{
"_id": "avocados",
"qty": "fourteen"
}
]
}
]
Find - Paste it here https://mongoplayground.net/ - SECOND Text Area
db.collection.find({
"content.qty": 5
})
Check the results and you will see the entire JSON as a result.
What am I doing wrong? Thanks!
You can use $filter with $project after your $match in order to get just one item from the array:
db.collection.aggregate([
{ $match: { "content.qty": 5 }},
{
$project: {
collection: 1,
count: 1,
content: {
$filter: {
input: "$content",
as: "item",
cond: { $eq: [ "$$item.qty", 5 ]
}
}
}
}
}
])
This works without having to unwind, etc. You are getting everything because find returns whole matching documents, and in your case the match is against the main doc, so the entire document comes back.
See it working here
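The $filter stage above behaves like JavaScript's Array.prototype.filter applied to the array field — a minimal client-side sketch of the same selection over the sample data:

```javascript
// Client-side analogue of the $filter stage: keep only the array
// entries whose qty is exactly 5.
const content = [
  { _id: "apples", qty: 5 },
  { _id: "bananas", qty: 7 },
  { _id: "oranges", qty: { "in stock": 8, ordered: 12 } },
  { _id: "avocados", qty: "fourteen" }
];

const matching = content.filter(item => item.qty === 5);
// matching contains only the apples entry
```

The aggregation version does this on the server, which matters once documents get large.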
First, the query is returning what it should: it brings back the documents that satisfy your query. Try adding one or more elements to the array you search in to see the difference.
Secondly, what you want to achieve - getting only the specific elements in the nested array - can be done with aggregation. You can read more about it here:
https://docs.mongodb.com/manual/aggregation/

Search and Push Array to Nested Object Array in MongoDB [duplicate]

This question already has answers here:
Mongodb $push in nested array
(4 answers)
Closed 4 years ago.
"date_added": {
"$date": "2018-02-27T21:34:31.144Z"
},
"malls": [
{
"name": "DFM",
"geocoordinates": "-6.7726935,39.2196418",
"region": "Kentucky",
"show_times": [],
"_id": {
"$oid": "5a95d3ed053cc1444eadaeae"
}
},
{
"name": "MkHouse",
"geocoordinates": "-6.8295944,39.2738459",
"region": "Kenon",
"show_times": [],
"_id": {
"$oid": "5a95d429053cc1444eadaeaf"
}
}
],
"title": "Black Panther",
I need to find/query malls with name == "DFM" and push data to the show_times array. Can anybody help? What is the best way to handle this? I already queried using _id, and it worked, giving this document. Now how can I push to show_times? I'm using mongoose v5.5.1.
Try this. It basically inserts the show time where the name field in the malls array equals "DFM"; the $ operator is used for this:
model.update(
{ _id: "givenObjectId",
"malls.name" : "DFM"
},
{
$push : {"malls.$.show_times" : data }
}
)
More details on the $ operator

Query all unique values of a field with Elasticsearch

How do I search for all unique values of a given field with Elasticsearch?
I want the equivalent of a query like select full_name from authors, so I can display the list to the users on a form.
You could make a terms facet on your full_name field. But in order to do that properly, you need to make sure you're not tokenizing it while indexing; otherwise every entry in the facet will be a different term that is part of the field content. You most likely need to configure it as not_analyzed in your mapping. If you are also searching on it and still want to tokenize it, you can index it in two different ways using a multi-field.
You also need to take into account that depending on the number of unique terms that are part of the full_name field, this operation can be expensive and require quite some memory.
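As a sketch of the multi-field idea (assuming a modern Elasticsearch version, where the old not_analyzed setting is expressed as a keyword sub-field), the mapping could look like:

```json
{
  "mappings": {
    "properties": {
      "full_name": {
        "type": "text",
        "fields": {
          "keyword": { "type": "keyword" }
        }
      }
    }
  }
}
```

Aggregations would then target full_name.keyword, while full-text search keeps using full_name.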
For Elasticsearch 1.0 and later, you can leverage the terms aggregation to do this.
query DSL:
{
"aggs": {
"NAME": {
"terms": {
"field": "",
"size": 10
}
}
}
}
A real example:
{
"aggs": {
"full_name": {
"terms": {
"field": "authors",
"size": 0
}
}
}
}
Then you can get all unique values of authors field.
"size": 0 means no limit on the number of terms (this requires ES 1.1.0 or later).
Response:
{
...
"aggregations" : {
"full_name" : {
"buckets" : [
{
"key" : "Ken",
"doc_count" : 10
},
{
"key" : "Jim Gray",
"doc_count" : 10
}
]
}
}
}
see Elasticsearch terms aggregations.
Intuition:
In SQL parlance:
Select distinct full_name from authors;
is equivalent to
Select full_name from authors group by full_name;
So, we can use the grouping/aggregate syntax in ElasticSearch to find distinct entries.
Assume the following is the structure stored in elastic search :
[{
"author": "Brian Kernighan"
},
{
"author": "Charles Dickens"
}]
What did not work: Plain aggregation
{
"aggs": {
"full_name": {
"terms": {
"field": "author"
}
}
}
}
I got the following error:
{
"error": {
"root_cause": [
{
"reason": "Fielddata is disabled on text fields by default...",
"type": "illegal_argument_exception"
}
]
}
}
What worked like a charm: Appending .keyword with the field
{
"aggs": {
"full_name": {
"terms": {
"field": "author.keyword"
}
}
}
}
And the sample output could be:
{
"aggregations": {
"full_name": {
"buckets": [
{
"doc_count": 372,
"key": "Charles Dickens"
},
{
"doc_count": 283,
"key": "Brian Kernighan"
}
],
"doc_count": 1000
}
}
}
Bonus tip:
Let us assume the field in question is nested as follows:
[{
"authors": [{
"details": [{
"name": "Brian Kernighan"
}]
}]
},
{
"authors": [{
"details": [{
"name": "Charles Dickens"
}]
}]
}
]
Now the correct query becomes:
{
"aggregations": {
"full_name": {
"aggregations": {
"author_details": {
"terms": {
"field": "authors.details.name"
}
}
},
"nested": {
"path": "authors.details"
}
}
},
"size": 0
}
Working for Elasticsearch 5.2.2
curl -XGET http://localhost:9200/articles/_search?pretty -d '
{
"aggs" : {
"whatever" : {
"terms" : { "field" : "yourfield", "size":10000 }
}
},
"size" : 0
}'
The "size":10000 means get (at most) 10000 unique values. Without this, if you have more than 10 unique values, only 10 values are returned.
The "size":0 means that in result, "hits" will contain no documents. By default, 10 documents are returned, which we don't need.
Reference: bucket terms aggregation
Also note, according to this page, facets have been replaced by aggregations in Elasticsearch 1.0, which are a superset of facets.
The existing answers did not work for me in Elasticsearch 5.X, for the following reasons:
I needed to tokenize my input while indexing.
"size": 0 failed to parse because "[size] must be greater than 0."
"Fielddata is disabled on text fields by default." This means by default you cannot search on the full_name field. However, an unanalyzed keyword field can be used for aggregations.
Solution 1: use the Scroll API. It works by keeping a search context and making multiple requests, each time returning subsequent batches of results. If you are using Python, the elasticsearch module has the scan() helper function to handle scrolling for you and return all results.
Solution 2: use the Search After API. It is similar to Scroll, but provides a live cursor instead of keeping a search context. Thus it is more efficient for real-time requests.
