MongoDB - Mongoose find command to get only values from an embedded object - node.js

My Schema looks something like this.
{
  _id: '1',
  items: {
    'id1': 'item1',
    'id2': 'item2',
    'id3': 'item3'
  }
}
Following is the query
ItemModel.find({}, {
items: 1,
_id: 0
});
And the result of the find query is:
{ "items" : { "21" : "item21", "22" : "item22", "23" : "item23" } }
{ "items" : { "31" : "item31", "32" : "item32", "33" : "item33" } }
{ "items" : { "11" : "item11", "12" : "item32", "13" : "item13" } }
What I want is:
["item21", "item22", "item23",
"item31", "item32", "item33",
"item11", "item12", "item13"]
Currently, I do this processing on the node.js side. I want to reduce the payload size coming from MongoDB: the "items" key is redundant, and the IDs are not needed when I fetch the data. In this example the IDs are short, like 21, 22, 13, etc., but in reality they are 50 characters long.
If not the above, any other efficient alternatives are also welcome.

One example of how to achieve that is the following aggregation:
[
  {
    $project: {
      items: {
        $objectToArray: '$items',
      },
    },
  },
  { $unwind: '$items' },
  {
    $project: {
      items: '$items.v',
    },
  },
  {
    $group: {
      _id: null,
      items: {
        $push: '$items',
      },
    },
  }
];
First, the $project with $objectToArray converts the items field into an array so that we can $unwind it; after the $unwind, each item lives in its own document. The second $project replaces each { k: <key>, v: <value> } entry with just its value string. Finally, $group pushes all the values into a single array.
To get exactly the list you want, access the items field in your code, e.g. result[0].items ([0] because the aggregation returns an array of documents).
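For example, with Mongoose it could look roughly like this (a sketch; ItemModel is the model from the question):
// Sketch: run the pipeline and pull out the flat list of values.
const docs = await ItemModel.aggregate([
  { $project: { items: { $objectToArray: '$items' } } },
  { $unwind: '$items' },
  { $project: { items: '$items.v' } },
  { $group: { _id: null, items: { $push: '$items' } } },
]);

// docs looks like: [ { _id: null, items: ['item21', 'item22', ...] } ]
const values = docs.length ? docs[0].items : [];
Since only that single grouped document crosses the wire, the per-item keys never leave MongoDB, which addresses the payload concern.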

Related

Update all the key values of a dynamic object in MongoDB

I have an object with dynamic keys; all of its values are integers, and I would like to update every value in that object.
{
  "_id" : ObjectId("6395fc7b1c5a0c4a5fc9bd8e"),
  "users" : [
    ObjectId("638da89d0066308efe081709"),
    ObjectId("63844feadf507942caaf90e3"),
    ObjectId("638455e5fa983e9cf84c0f3f")
  ],
  "type" : "GROUP",
  "unReadCount" : {
    "638da89d0066308efe081709" : 0,
    "63844feadf507942caaf90e3" : 0,
    "638455e5fa983e9cf84c0f3f" : 0
  },
  "createdAt" : ISODate("2022-12-11T21:21:23.815+05:30"),
  "updatedAt" : ISODate("2022-12-11T22:48:33.953+05:30"),
  "__v" : 0
}
I want to increment every value inside the unReadCount object. Note that the unReadCount keys are not static; they vary from document to document. I tried the normal $inc operator, but it threw an error saying the field 'unReadCount' is of non-numeric type object, and the positional $ operator won't work since it's not an array.
Please note that I am trying to achieve this in MongoDB itself. I could do it in JS code by fetching the records and looping through them, but I would like to do it in MongoDB/Mongoose. Any clue/help is appreciated.
I think this is what you need:
db.tests.updateMany({}, [
  {
    $addFields: {
      unReadCountArray: { $objectToArray: "$unReadCount" }
    }
  },
  {
    $addFields: {
      unReadCountArray: {
        $map: {
          input: "$unReadCountArray",
          as: "unReadCount",
          in: {
            $mergeObjects: [
              { k: "$$unReadCount.k", v: { $add: ["$$unReadCount.v", 1] } },
              null
            ]
          }
        }
      }
    }
  },
  {
    $addFields: {
      unReadCount: { $arrayToObject: "$unReadCountArray" }
    }
  },
  { $unset: 'unReadCountArray' },
  { $set: { modified: "$$NOW" } }
])
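For reference, the same idea can be condensed into a single $set stage (a sketch; pipeline-style updates like this need MongoDB 4.2 or newer, and the tests collection name is taken from the snippet above):
// Sketch: increment every value of the dynamic unReadCount object in one stage.
db.tests.updateMany({}, [
  {
    $set: {
      unReadCount: {
        $arrayToObject: {
          $map: {
            input: { $objectToArray: "$unReadCount" },   // [{ k, v }, ...]
            as: "kv",
            in: { k: "$$kv.k", v: { $add: ["$$kv.v", 1] } }
          }
        }
      },
      modified: "$$NOW"
    }
  }
])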

MongoDB aggregation, take an array of values and get their amount

this is my first time asking in StackOverflow and I hope I can explain what I'm aiming for.
I've got documents that look like this:
"_id" : ObjectId("5fd76b67a7e0fa652a297a9f"),
"type" : "play",
"session" : "5b0b5d57-c3ca-415f-8ef6-49bbd5805a23",
"episode" : 1,
"show" : 1,
"user" : 1,
"platform" : "spotify",
"currentTime" : 0,
"date" : ISODate("2020-12-14T13:40:51.906Z"),
"__v" : 0
}
I'd like to fetch them for a show and group them by episode. I've got this far with my aggregation:
const filter = { user, show, type: { $regex: /^(play|stop|close)$/ } }
const requiredFields = { "episode": 1, "session": 1, "date": 1, "currentTime": 1 }
// Get sessions grouped by episode
const it0 = {
  _id: '$episode',
  session: {
    $addToSet: {
      _id: "$session",
      date: { $dateToString: { format: "%Y-%m-%d", date: "$date" } },
      averageOfSession: { $cond: [{ $gte: ["$currentTime", 0.1] }, "$currentTime", null] }
    }
  },
  count: { $sum: 1 }
}
// Filter unique sessions by session id and add them to a sessions field
const reduceSessions = {
  $addFields: {
    sessions: {
      $reduce: {
        input: "$session",
        initialValue: [],
        in: {
          $concatArrays: [
            "$$value",
            { $cond: [{ $in: ["$$this._id", "$$value._id"] }, [], ["$$this"]] }
          ]
        }
      }
    }
  }
}
const projection = {
  $project: {
    _id: 0,
    episode: "$_id",
    plays: { $size: '$sessions' },
    dropoff: { $avg: "$sessions.averageOfSession" },
    sessions: '$session.date',
    events: "$count"
  }
}
const arr = await Play.aggregate([
{ $match: filter }, {$project: requiredFields}, {$group: it0}, reduceSessions,
projection,{ $sort : { _id : 1 } }
])
and this is what my result looks like so far:
{
"episode": 5,
"plays": 4,
"dropoff": 3737.25,
"sessions": [
"2020-11-15",
"2020-11-15",
"2020-11-16",
"2020-11-15"
],
"events": 4
}...
What I'd like is for the 'sessions' array to be an object with one key for each distinct date which would contain the count, so something like this:
{
"episode": 5,
"plays": 4,
"dropoff": 3737.25,
"sessions": {
"2020-11-15": 3,
"2020-11-16": 1
},
"events": 4
}...
Hope that makes sense, thank you!!
You can first map sessions into key-value pairs. Then $group them to add up the sum. Then use $arrayToObject to convert to the format you want.
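A rough sketch of that idea as one extra stage appended to the pipeline above (this version uses $setUnion, $filter and $size inside a $map rather than an extra $group, so treat it as one illustration of the approach; sessions is assumed to be the array of "YYYY-MM-DD" strings produced by the projection stage):
// Sketch: turn the array of date strings into { "YYYY-MM-DD": count, ... }.
const sessionsToCounts = {
  $addFields: {
    sessions: {
      $arrayToObject: {
        $map: {
          input: { $setUnion: ["$sessions", []] },   // distinct dates
          as: "d",
          in: {
            k: "$$d",
            v: { $size: { $filter: { input: "$sessions", cond: { $eq: ["$$this", "$$d"] } } } }
          }
        }
      }
    }
  }
}

// Usage: append it after the existing stages, e.g.
// Play.aggregate([{ $match: filter }, { $project: requiredFields }, { $group: it0 },
//                 reduceSessions, projection, sessionsToCounts])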

mongodb aggregation with list of ids

I got the following aggregation:
It scans all the messages and groups them by a docId and returns only the last updated message in each group.
db.getCollection('Messages').aggregate([
  { '$match': { docType: 'order' } },
  { '$sort': { updatedAt: -1 } },
  { '$group': { _id: '$docId', content: { '$first': '$content' } } }
])
which returns -
[
  {
    "_id" : "some id1",
    "content" : "some msg1"
  },
  {
    "_id" : "some id2",
    "content" : "some msg2"
  },
  ...
]
It is working as intended (not sure about optimization).
But now I need to add another thing on top of that.
In the UI I have a list of documents and I need to show only the latest message for each. But I also have paging, so I don't need to bring back the last message for XXXXXX documents, only for the documents on the current page.
So basically something like this -
.find({'docId':{$in:['doc1', 'doc2', 'doc3'...]}}) - if the page had 3 items
But I am not sure how to combine all of that together.
Message sample:
{
"_id": "11111"
"docType": "order",
"docId": "12345", - this is not unique there can be many messages for 1 docId
"content": "my message",
"updatedAt" "01/01/2020..."
}
Adding
{ '$match': { _id: { '$in': ["docId1", "docId2"]} }}
at the end did the trick!
Edit: actually I think it might be better to add it as the first pipeline stage:
db.getCollection('Messages').aggregate([
  { '$match': { docId: { '$in': ["5d79cba1-925b-416b-9408-6f4429d7c107", "8e31c748-c86d-407e-8d83-9810c8e23e3e"] } } },
  { '$match': { docType: 'order' } },
  { '$sort': { cAt: -1 } },
  { '$group': { _id: '$docId', content: { '$first': '$content' } } }
])
Since I am adding those conditions dynamically, I ended up with two $match stages. I'm actually not sure what difference it makes to use a single $match with $and versus two separate $match stages (optimization-wise).
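For what it's worth, the aggregation pipeline optimizer coalesces two adjacent $match stages into a single $match with an implicit $and, so both forms should end up equivalent; writing it as one stage just makes that explicit. A sketch (the docId list is a placeholder for whatever the current page contains):
// Sketch: one $match combining both filters, then the latest message per docId.
const pageDocIds = ["doc1", "doc2", "doc3"];   // docIds visible on the current page

db.getCollection('Messages').aggregate([
  { $match: { docType: 'order', docId: { $in: pageDocIds } } },
  { $sort: { updatedAt: -1 } },
  { $group: { _id: '$docId', content: { $first: '$content' } } }
])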

Use $addToSet (aggregate) to add objects to an array with one or more distinct properties

function findProfilesBySteamIds(ids, res) {
  var match = {
    $match: {
      $and: [
        { steamid: { $in: ids } }
      ]
    }
  }
  var sort = {
    $sort: {
      createdAt: -1
    }
  }
  var group = {
    $group: {
      _id: "$steamid",
      profile: { $first: "$$ROOT" },
      personahistory: { $push: "$$ROOT" }
    }
  }
  SteamUser
    .aggregate([match, sort, group])
}
Okay so here is my issue. I have a collection of profiles with a unique identifier steamid. I am grouping by $steamid and that functionality is working as expected. I also want to add a new field (not in the Schema) called personahistory that has an array of objects that is DISTINCT based on another property of the document, called personaname.
I have tried to use $addToSet following the mongoose reference docs, but so far I can only use it to create an array of that single property:
$group: {
  _id: "$steamid",
  profile: { $first: "$$ROOT" },
  personahistory: { $addToSet: "$personaname" }
}
This outputs:
{
"_id": "1234567890",
"profile": { ... },
"personahistory" : [
"personaname1",
"personaname2",
"personaname3"
]
}
Instead I'd like the output to be along the lines of:
{
"_id": "1234567890",
"profile": { ... },
"personahistory" : [
{
"_id": "1234567890",
"personaname": "personaname1",
...
},
{
"_id": "1234567891",
"personaname": "personaname2",
...
},
{
"_id": "1234567892",
"personaname": "personaname3",
...
}
]
}
I've tried something like:
{
$addToSet : {
"_id": "$steamid",
"personaname": "$personaname"
}
}
but to no avail.
Furthermore, if this behavior is even possible, I'd like to be able to use two or more DISTINCT fields. So, get all unique combinations of personaname and avatarurl and add those to the set.
Basically, I have a record each time the profile is queried but I only want to return records that are unique based on those two fields.
I'll be happy to provide more information if I haven't been clear enough.
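One hedged sketch of what that group stage could look like, assuming distinctness over the combination of personaname and avatarurl is what's wanted (both field names are taken from the question; anything else here is illustrative):
// Sketch: $addToSet with a sub-document deduplicates on the whole
// { personaname, avatarurl } pair, i.e. it keeps the distinct combinations.
var group = {
  $group: {
    _id: "$steamid",
    profile: { $first: "$$ROOT" },
    personahistory: {
      $addToSet: {
        personaname: "$personaname",
        avatarurl: "$avatarurl"
      }
    }
  }
}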

MongoDB find projection, matching elements with $ne

I have a query that returns objects containing an array of objects. Within that array of objects, there are some that should not be processed. Here is an object similar to what I have:
{
_id: 12345,
data: [
{
state: 1,
info: "abc"
},{
state: 2,
info: "cde"
},{
state: 2,
info: "efg"
}
]
}
I want to show only the objects where state does not equal to 1. So I want to get back something like this:
{
_id: 12345,
data: [
{
state: 2,
info: "cde"
},{
state: 2,
info: "efg"
}
]
}
There can be hundreds of "main" objects with tens of "sub" objects. I tried using the query:
col.find({'data.info': {$in: [] }}, {_id: 1, data: { $elemMatch: { state: {$ne: 1 } } } }, {}, callback);
But that only gives me this:
{
_id: 12345,
data: [
{
state: 2,
info: "cde"
}
]
}
In other words, $elemMatch does what it is supposed to do, but I need to get a different result. So is there a way to do that in one query or without pre-processing results (removing entries before any further code reads the data)?
The $elemMatch projection operator only returns the first matching element in an array.
To filter the whole array, the best approach in MongoDB 2.2+ would be using the Aggregation Framework. Alternatively, you could do this using Map/Reduce or in your application code.
Example aggregation:
db.data.aggregate(
// Initial match to limit number of documents
{ $match : {
data: { $elemMatch: { state: {$ne: 1 } } }
}},
// Convert the data array into a stream of documents
{ $unwind: "$data" },
// Limit to matching elements of the data array
{ $match : {
"data.state": {$ne: 1 }
}},
// Re-group by original document _id
{ $group: {
_id: "$_id",
data: { $push: "$data" }
}}
)
Sample output:
{
"_id" : 12345,
"data" : [
{
"state" : 2,
"info" : "cde"
},
{
"state" : 2,
"info" : "efg"
}
]
}
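On MongoDB 3.2 and newer, the $filter operator is another option: it keeps the array in place and avoids the $unwind/$group round trip. A sketch against the same sample data:
// Sketch: keep only the array elements whose state is not 1.
db.data.aggregate([
  // Initial match to limit the number of documents scanned
  { $match: { data: { $elemMatch: { state: { $ne: 1 } } } } },
  // Filter the data array in place
  { $project: {
      data: {
        $filter: {
          input: "$data",
          as: "d",
          cond: { $ne: ["$$d.state", 1] }
        }
      }
  }}
])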
